Why the Z-80's Data Pins Are Scrambled

An anonymous reader writes "The Z-80 microprocessor has been around since 1976, and it was used in many computers at the beginning of the PC revolution. (For example, the TRS-80, Commodore 128, and ZX Spectrum.) Ken Shirriff has been working on reverse engineering the Z-80, and one of the things he noticed is that the data pins coming out of the chip are in seemingly random order: 4, 3, 5, 6, 2, 7, 0, 1. (And a +5V pin is stuck in the middle.) After careful study, he's come up with an explanation for this seemingly odd design. "The motivation behind splitting the data bus is to allow the chip to perform activities in parallel. For instance, an instruction can be read from the data pins into the instruction logic at the same time that data is being copied between the ALU and registers.

[B]ecause the Z-80 splits the data bus into multiple segments, only four data lines run to the lower right corner of the chip. And because the Z-80 was very tight for space, running additional lines would be undesirable. Next, the BIT instructions use instruction bits 3, 4, and 5 to select a particular bit. This was motivated by the instruction structure the Z-80 inherited from the 8080. Finally, the Z-80's ALU requires direct access to instruction bits 3, 4, and 5 to select the particular data bit. Putting these factors together, data pins 3, 4, and 5 are constrained to be in the lower right corner of the chip next to the ALU. This forces the data pins to be out of sequence, and that's why the Z-80 has out-of-order data pins."
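To make the layout concrete, here is a small C sketch (mine, not from Shirriff's article) decoding the second byte of a CB-prefixed BIT instruction; in that group the bit number really does live in instruction bits 3-5, with the register in bits 0-2:

    #include <stdio.h>

    /* Decode the second byte of a Z-80 CB-prefixed BIT opcode.
       Layout: 01 bbb rrr, where bbb (bits 5-3) selects the data bit
       and rrr (bits 2-0) selects the register (111 = A). */
    int main(void) {
        unsigned char opcode = 0x6F;          /* CB 6F = BIT 5,A */
        unsigned bit = (opcode >> 3) & 0x07;  /* bits 5-3: bit number */
        unsigned reg = opcode & 0x07;         /* bits 2-0: register code */
        printf("BIT %u, register code %u\n", bit, reg);
        return 0;
    }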
  • C=128 (Score:5, Informative)

    by rossdee ( 243626 ) on Saturday September 27, 2014 @01:57PM (#48010177)

    There was a Z-80 in the C=128, but it wasn't used.
    There was virtually no CP/M software adapted to the C=128.

    Typically, 128s were mostly used in C=64 mode.

    • Re:C=128 (Score:5, Interesting)

      by Ihlosi ( 895663 ) on Saturday September 27, 2014 @02:04PM (#48010215)
      There was a Z-80 in the C=128, but it wasn't used.

      Yes, I found this part of the article amusing too.

      C128s were cobbled together from too many different parts. And they appeared when the 8-bit generation was already on its way out.

      However, the C128 mode had its uses. The BASIC had lots of additional features (commands for music, graphics, sprites), and it had a built-in sprite editor. If you didn't know the C64 inside out and couldn't do these things in assembly (blindfolded), the C128 mode gave you much more access to the machine's capabilities. Too bad no company ever came up with a killer 8-bit machine. Z80 CPU, more than 64 kB RAM, sound and graphics like SID and VIC-II.

      • Comment removed (Score:5, Informative)

        by account_deleted ( 4530225 ) on Saturday September 27, 2014 @02:21PM (#48010271)
        Comment removed based on user account deletion
        • Re:C=128 (Score:5, Informative)

          by MindPrison ( 864299 ) on Saturday September 27, 2014 @02:41PM (#48010343) Journal

          Too bad no company ever came up with a killer 8-bit machine. Z80 CPU, more than 64 kB RAM, sound and graphics like SID and VIC-II.

          Really? Ever heard of MSX? See: http://en.wikipedia.org/wiki/M... [wikipedia.org] It came with graphics, sprites (TMS9918/9929) and was a standard design carried by several manufacturers.

          Ah, MSX... the weirdest computers in history. The Yamaha MSX was an awesome music computer with built-in FM synthesis, and then you had the vastly different Spectravideo MSX, which was fully compliant with the MSX standard... but that's just it: not everyone was compliant. Every MSX computer seemed to be a special variant of itself, something that confused me something fierce back in the day. I even had a Memotech MSX. Weird, WEIRD computer.

          The games on the MSX computers weren't mind-blowing, nowhere near the Commodore 64 games; the platform simply lacked the awesome sound capabilities of the 64. They had a wider color range, though.

          I remember the war between us Commodore users (65xx-type processors) and the Z80 crowd. Yes, the Z80 was a far superior processor in many ways, and sometimes I wished we had that processor just for the extended registers alone, not to mention that the speed was 4 MHz instead of our meager 0.97 MHz (which could be doubled if you turned off the screen). But the hardware sprites & scrolling are what beat the living bejeezus outta the other competing products.

          And I nearly cried snot when the Commodore 65 didn't make it. It was a super-cool Commodore 64 with beefed-up hardware, higher resolution, and stereo SID sound (6 channels!) of pure ring-modulated goodness.

          Ah, I'll go stare at my stash of Z0840004PSC, 27xxxx's and the rest of the Chip Pron in my vast land of NOS components...aaaahh.. :)

          • Re: C=128 (Score:2, Informative)

            by Anonymous Coward

            Actually, Z-80 vs. 65xx is still a valid debate. The 65xx was much more capable at lower clocks. A good example is the "Woz Machine" floppy controller, compared with the many Z-80 boxes that needed a second 6 MHz Z-80 just to run their floppy drives.

          • Re:C=128 (Score:4, Informative)

            by oldhack ( 1037484 ) on Saturday September 27, 2014 @04:38PM (#48010779)
            The speed of a Z80 at 4 MHz was comparable to that of a 65xx at 1 MHz; many Z80 instructions took more cycles to execute than those of the 65xx. Sort of CISC vs. RISC before the phrases/concepts were invented.
            • The speed of a Z80 at 4 MHz was comparable to that of a 65xx at 1 MHz

              I thought the work per clock ratio between 6502 and Z80 was closer to 2:1, not 4:1. This would put, say, a 1.8 MHz Atari 800 and a 3.5 MHz Spectrum together, or a 1.8 MHz NES and a 3.6 MHz Master System. Where do you get 4:1?

              • Re: (Score:2, Informative)

                by Anonymous Coward

                I don't know about the 6502, but the Z80 takes 4 clock ticks per (basic) instruction. IOW, it's a bit below 1 MIPS at 3.57 MHz. The later R800, used in the MSX turbo R, ran at 7 MHz and took 1 clock tick per instruction, which gave it the "28 MHz Z80 speed" at which it was sometimes quoted.
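                A back-of-the-envelope C version of that arithmetic, using the cycle counts quoted above (not measurements):

                  #include <stdio.h>

                  /* Effective throughput ~= clock / cycles-per-instruction. */
                  int main(void) {
                      double z80  = 3.57e6 / 4.0;  /* ~0.9 M instructions/s */
                      double r800 = 7.00e6 / 1.0;  /* 7 M instructions/s    */
                      printf("Z80: %.2f MIPS, R800: %.1f MIPS\n",
                             z80 / 1e6, r800 / 1e6);
                      printf("Z80 clock for R800 speed: %.0f MHz\n", 4 * 7.0);
                      return 0;
                  }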

                • by tepples ( 727027 ) <.tepples. .at. .gmail.com.> on Saturday September 27, 2014 @07:41PM (#48011491) Homepage Journal
                  The 6502 takes 2 to 6 clocks for most instructions: 2 to fetch the instruction itself and the first byte of the operand, then additional cycles to fetch the rest of the operand, perform address generation if necessary, and fetch a value from memory. With fewer register-to-register opcodes, more operations have to use memory. But there are probably plenty of studies from the C64 vs. Speccy flamewars of how this plays out in practice.
                  • by Spit ( 23158 )

                    The 6502 memory access style with the unified memory map worked well for tickling the registers on coprocessors and poking at bitmaps; very fast and predictable for display timing. The Z80 bus method was somewhat more abstracted, although still quite useful.

                    Regardless, they are such simple devices there's no excuse not to master them both for maximum pleasure.

                  • by jeremyp ( 130771 )

                    The 6502 had special addressing modes for accessing the bottom 256 bytes of memory. Addresses in both the 6502 and Z80 were 16-bit, thus taking two read cycles to get a whole address into the CPU so that you could then get the content at that address. However, with the 6502, "zero page" addresses could be read in one read cycle. Not only that, but pairs of zero-page locations could be used for indirect addressing. They could be treated as a set of (slow) address registers.

                    When I first came across the Z
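                    A rough C emulation of that indirect trick (illustrative, not cycle-exact, and the names are mine): LDA (zp),Y fetches a 16-bit little-endian pointer from two zero-page bytes, adds Y, and loads from the result, so the zero-page pair behaves like a slow address register.

                      #include <stdint.h>
                      #include <stdio.h>

                      static uint8_t mem[65536];

                      /* 6502 "indirect indexed": LDA (zp),Y */
                      uint8_t lda_ind_y(uint8_t zp, uint8_t y) {
                          /* pointer low/high; the +1 wraps within page zero */
                          uint16_t p = mem[zp] | (mem[(uint8_t)(zp + 1)] << 8);
                          return mem[(uint16_t)(p + y)];
                      }

                      int main(void) {
                          mem[0x20] = 0x00;          /* $20/$21 point at $8000 */
                          mem[0x21] = 0x80;
                          mem[0x8005] = 42;
                          printf("%d\n", lda_ind_y(0x20, 5));  /* prints 42 */
                          return 0;
                      }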

                    • by tepples ( 727027 )
                      Yeah, I'm aware of what the 6502 can do, having programmed a few games for a 6502-based platform [pineight.com]. I just never got into the Z80, so I'm not sure how much the plethora of extra registers (AF BC DE HL IX IY) made up for the lack of indirect addressing modes like (d),Y.
          • by Anonymous Coward

            Also cost was a huge factor - the 6502 was significantly less expensive than the Z80s at the time.

            And the 6502 gracefully entered the 16-bit field with 100% backward compatibility in the 65C816. Sadly, it was too late to make a big dent, as the 68000 was just around the corner, although Nintendo took great advantage of it in their SNES. Franklin Electronic Publishers used the 6502 exclusively in their early hand-held spell checkers.

          • Re:C=128 (Score:4, Informative)

            by AmiMoJo ( 196126 ) * on Saturday September 27, 2014 @05:39PM (#48011033) Homepage Journal

            The 6502 may have run at a little under 1 MHz, but it was efficient. Most instructions took only two or three cycles, and they did a lot. It's actually a really fun processor to write for, because it's both a nice architecture and very challenging to get the most from.

            • The problem with the 6502 was that if you were writing code for someone else's environment then your use of Page 0 (which many of the index-based instructions used intensively) was restricted because the OS often took up most of that space.

              If you were writing code that was totally stand-alone (i.e., no BIOS or OS to worry about), then the 6502 environment was *very* nice and could perform incredibly well. However, if you were writing code that sat atop a BIOS/OS layer, then the Z80 was just so much simpler and

              • The problem with the 6502 was that if you were writing code for someone else's environment then your use of Page 0 (which many of the index-based instructions used intensively) was restricted because the OS often took up most of that space.

                Yes, there was a mindset problem when programming the 6502. In hindsight, page 0 RAM should have been treated as a working space register set, whereas at the time it was treated mostly as a fast RAM littered with persistent variables.

                • by jeremyp ( 130771 )

                  In later models, i.e. well after the 6502 was obsolete for general-purpose computers, there was an 8-bit register that you could set to change which page was regarded as zero page. If that had been available from the start, it would have saved me a lot of time looking for locations that didn't zap the MS BASIC interpreter on our Commodore PET. I seem to remember that the floating-point accumulators were considered the best bet.

              • Re:C=128 (Score:4, Informative)

                by ChrisMaple ( 607946 ) on Saturday September 27, 2014 @07:27PM (#48011443)

                Most Z80 code was written to be compatible with the 8080. As a result, the second register set wasn't used. Floating point math using the second register set for temporary variables made possible a substantial speedup.

                If the 6502 and Z80 waveforms for various instructions are examined, it quickly becomes apparent that the Z80 effectively divided its clock by 2 before using it. This is why, for the technology available in any particular year, they had comparable performance but the Z80 used twice as many clock cycles.

                The 6502 was a tremendously clever design for making effective use of a small number of transistors. The Z80, striving to be a superset of the 8080, was also a clever and powerful design for its time.

                • by tlhIngan ( 30335 )

                  If the 6502 and Z80 waveforms for various instructions are examined, it quickly becomes apparent that the Z80 effectively divided its clock by 2 before using it. This is why, for the technology available in any particular year, they had comparable performance but the Z80 used twice as many clock cycles.

                  Actually, the problem was that the ALU of the Z-80 was only 4 bits wide [righto.com]. So processing an 8-bit operand required two trips through the ALU, thus incurring twice the number of clocks or half the effective clock rat

        • by Megane ( 129182 )
          A lot of people never heard of MSX because by the time it came out, the US was already going 16-bit with PC, Mac, Amiga, and Atari ST, with the C64 firmly entrenched in what was left of the cheapie 8-bit market. And the TMS9918 was pretty weak, being the same graphics chip used in the TI99/4A and Colecovision. (Coleco used the 9928, which had a different video output.) Later versions of MSX video chips were better, but by then it was even more outclassed by 16-bit systems. (The Sega Genesis graphics were al
        • by Ihlosi ( 895663 )
          Yes, MSX and possibly the CPC6128 came close, but only just, and they came too late. The killer 8-bit machine wouldn't have required any technology that wasn't available when the C64 came out.

          In 1983, the 68k had been out for a few years already and new 8-bit computer designs were doomed.

      • Re:C=128 (Score:5, Informative)

        by Sun ( 104778 ) on Saturday September 27, 2014 @02:48PM (#48010369) Homepage

        And then Commodore went on to (half inherit, half design) the Amiga. Maybe "cobbled together" is too harsh for it, but still: a floppy controller that can decide, per track, whether to work in MFM or RLL (but not read a single sector, mind you), more DMA channels than the CPU can handle, and a display processor with a built-in three-command machine language (one of which was only ever used by one application, ever) to change display resolution mid-screen.

        I loved it, but the Amiga gave the impression that it was designed by engineers that couldn't make up their mind on what choice to make, so they created hardware that would offload all decisions to software.

        One last anecdote. Many have heard of the famous "Guru Meditation". What only Amiga users know is that you knew one was coming because the power LED would blink three times. Yes, the power LED was software controlled, making the Amiga the first ever computer that could play dead.

        Shachar

        • Re:C=128 (Score:4, Informative)

          by Anonymous Coward on Saturday September 27, 2014 @05:01PM (#48010879)

          What only Amiga users know is that the only way the power LED can be controlled is by enabling/disabling the low-pass filter on the audio output, since the status of the enable signal is indicated by dimming the LED. It's not possible to turn it off completely to simulate the computer being dead.
          It's controlled by bit 1 of ciaa->pra. (Address $BFE001)
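          A minimal C sketch of that poke, assuming the register address given above; the bit polarity follows the common documentation (bit clear = filter on, LED bright), so treat it as illustrative:

            #include <stdint.h>

            /* CIAA port register A on the Amiga; bit 1 drives the audio
               low-pass filter, and the power LED just shows its state. */
            #define CIAA_PRA (*(volatile uint8_t *)0xBFE001)
            #define LED_BIT  (1u << 1)

            void led_dim(void)    { CIAA_PRA |= LED_BIT; }            /* filter off, LED dim */
            void led_bright(void) { CIAA_PRA &= (uint8_t)~LED_BIT; }  /* filter on, LED bright */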

          It's also possible to read a single sector, but that would require starting the DMA on a timer, so it's more cumbersome than reading the entire track, and it's not guaranteed to be faster since it's spinning media. The memory is there, so reading a single sector is just pointless. As for MFM/RLL encoding, the floppy controller does neither; it reads the raw bits. The order of the bits is interleaved on Amiga-formatted disks to allow for blitter-accelerated MFM (de)coding.

          Don't trust anecdotes, the developer guides are available online.

          • by Sun ( 104778 )

            What only Amiga users know is that the only way the power LED can be controlled is by enabling/disabling the low-pass filter on the audio output, since the status of the enable signal is indicated by dimming the LED. It's not possible to turn it off completely to simulate the computer being dead.

            The original Amiga 500, including the early Kickstart 1.3 ones, would actually completely turn off the LED. If you don't believe me, you are welcome to visit me. I still have my original machine in (more or less) working order.

            You are correct that later models would not turn it off completely, but rather only dim it. I only remembered that fact after I hit Send, and thought no one would be anal enough to demand a clarification.

            It's also possible to read a single sector, but that would require starting the DMA on a timer so it's more cumbersome than reading the entire track and it's not guaranteed to be faster since it's a spinning media.

            In other words, the hardware does not support it.

            As for MFM/RLL encoding, the floppy controller does neither; it reads the raw bits. The order of the bits is interleaved on Amiga-formatted disks to allow for blitter-accelerated MFM (de)coding.

            That is one point I am not as sur

        • built-in three-command machine language (one of which was only ever used by one application, ever) to change display resolution mid-screen

          I'm curious... any more detail on this?

          • by matfud ( 464184 )

            If I remember correctly, changing display modes mid-scan was often done so that Workbench could do things like display HAM images in windows.
            I think the OP may be talking about the "copper", which was an FSM with three instructions (MOVE, WAIT and SKIP) and a list of them to be processed as screen timing events occur. All three were used, though, and it was used for many things such as sprites and the above-mentioned screen mode changes.

            MOVE (put data in pretty much any register such as changing the location

            • by Sun ( 104778 )

              If I remember correctly, changing display modes mid-scan was often done so that Workbench could do things like display HAM images in windows.

              You misremember. The Copper code to change display modes took several scan lines to run. Having a window with a different display mode was impossible. You could, and did, have a UI construct called a "screen", which had its own display mode. You could drag a screen down and see another screen behind it.

              All three were used, though, and it was used for many things such as sprites and the above-mentioned screen mode changes.

              AFAIK, none of those utilized SKIP. They were all based on WAIT and MOVE. If you know differently, please provide further details.

              Maybe I misunderstood the OP?

              I don't think so. I am, however, fairly sure you misremember.

              Shachar

              • by matfud ( 464184 )

                SKIP does seem a bit limited if it only skips one entry.
                I think you either overran the time to execute, so events had already occurred, or you finished early and waited for the next frame/whatever.

                So how did Workbench provide windows containing HAM8 images, or were they only full-screen windows with a menu bar? (Too long ago, and I don't remember.) I was using PCs back then, but I was most impressed with a flatmate's Amiga. Shame it struggled to run Elite :)

                • by Sun ( 104778 )

                  Menus on the Amiga were bound to the screen, not the window. You are probably remembering a full-screen window.

                  Shachar

          • by Sun ( 104778 )

            You got a partial answer. If "SKIP" was ever used, I'm interested to know by whom. I only know of a single example, which was an EXTREMELY special case.

            The commands:
            MOVE: move an immediate value into a copper register. These registers included such things as where in memory the video memory was (i.e., where to fetch the picture from), what the palette was, the video resolution, where on screen the video fetch starts and where it ends, and many others.
            WAIT: wait for the screen update to reach a certain point. There was a mask arg
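            For the curious, a hedged sketch of what such a list looks like in memory, as a C array (the encoding follows the usual two-words-per-instruction layout, with $180 being the COLOR00 register; an illustration, not code from this thread):

              #include <stdint.h>

              /* Each Copper instruction is two 16-bit words. MOVE has bit 0 of
                 the first word clear (word 1 = register offset, word 2 = data);
                 WAIT has it set (word 1 = beam position, word 2 = compare mask). */
              static uint16_t copperlist[] = {
                  0x0180, 0x0F00,  /* MOVE $0F00 (red) -> COLOR00 ($180)          */
                  0x6401, 0xFF00,  /* WAIT for vertical beam position $64         */
                  0x0180, 0x000F,  /* MOVE $000F (blue): colour flips mid-frame   */
                  0xFFFF, 0xFFFE   /* WAIT for an impossible position = end list  */
              };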

      • The Amstrad CPC6128 was very successful, and especially popular in Europe: the price was cheap since a dedicated monitor was always bundled and a floppy drive was integrated, whereas other machines like the C64, Spectrum, etc. tended to be used with a cassette drive and a television.

        It's not a killer machine, though. Rudimentary sound and no sprites, but it was colorful, with 16 colors out of a 27-color palette (much better than 8 fixed colors on some of the 8-bit crap :p)
        Then it did have an upgrade that made it a "killer 8-bit

      • It wasn't an all-in-one-minus-monitor design like a C64 or C128; it was an S-100/IEEE-696 bus system, but Morrow Designs produced a Z80 processor board that had memory management hardware on it for more than 64K of system memory. Of course, there was no operating system that supported it, so it wasn't very useful. Personally, I wrote a RAM disk driver around it that worked in CP/M 2.2 and had a 128 kB RAM drive.
    • Re:C=128 (Score:5, Interesting)

      by Anonymous Coward on Saturday September 27, 2014 @05:27PM (#48010985)
      The Z80 was used every time the C128 was turned on. It was added specifically to work around a compatibility issue between the C64 and one single, solitary cartridge for the C64 (Magic Voice). To make it work, Bill Herd added the Z80. It would start (at address 0x0000), run a handful of instructions that initialised the C128 hardware, and then start the 8502 proper. (The Spectacular Rise and Fall of Commodore, pg. 368)
  • by Anonymous Coward on Saturday September 27, 2014 @02:02PM (#48010207)

    Why didn't they just ask Federico Faggin? According to Wikipedia, he's still alive.

    • by Lord Apathy ( 584315 ) on Saturday September 27, 2014 @02:17PM (#48010263)

      This is what I came in here to say. The guy who designed the chip is still alive. Just shoot him an email, a phone call, or smoke signals and ask him.

      I'm going to go out on a limb here and say there is probably a logical reason behind the arrangement and not some conspiracy or something.

      • I'm going to go out on a limb and guess that, due to how primitive the tools were back then, when they got a signal to a pin they celebrated without worrying which pin it came out on.
    • by Alioth ( 221270 )

      The Z80 is still manufactured in classic form. It may be considered a trade secret and Faggin might not be at liberty to divulge anything about the inner workings of the chip without you signing an NDA.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        Not only is it still manufactured, but it is still used. There is a good probability that your DVD burner has a Z80 in it running the show.

        The Z80 is a great example of a 'limited' CISC design (not enough transistors to be a complete CISC like, say, a Motorola 68000 or a DEC NVAX).

        The 6502 is a great example of a 'limited' RISC design, and quite an efficient one. http://www.visual6502.org is still out there, and you can enjoy that yourself. I'm looking forward to the Z80 version, myself.

        And you can build you

      • With an optical microscope you could actually look at a Z80 die, see the transistors (all 8500 of them) and conductors, and write up a schematic of the chip. Considered a trade secret or not, the Z80 is known and completely defined. Nonetheless, it's possible that you're right and contracts may prevent him from talking about stuff that's no longer secret.
        • by matfud ( 464184 )

          If you had read the article, you would know that it is part of an effort to deconstruct the Z80 using optical information, and yes, they can see all the transistors, although identifying them and their function can be tricky, as they were laid out by hand to minimise the space needed.

          I have no connection to the project, but some of the pages on the site are quite interesting reading if you are electronics-minded.

    • Why didn't they just ask Federico Faggin? According to Wikipedia, he's still alive.

      Maybe it's time for:

      Interviews: Federico Faggin [wikipedia.org] Answers Your Questions
      Last week you had the chance to pose questions to Federico Faggin .... father of Intel's 4004 [slashdot.org] 43 years ago...

      The man is very accomplished and has an amazing history.

  • by Anonymous Coward

    When I grow up I also want to be an archaeologist.

  • by Anonymous Coward on Saturday September 27, 2014 @02:16PM (#48010257)

    The Z-80 was a great chip and overall system design, supported by very capable peripheral chips in that family. The best part of working with it is Zilog's documentation, which was very well written and demonstrated the consistency of the entire product line (in terms of its functional programming interfaces). There really isn't any need to 'reverse engineer' the chip; everything you need to know is freely available already. I think this article and author mean to say "Here are some plausible possible reasons behind some of these physical implementation decisions...".

    I think all first-year computer science / programming / engineering students should be introduced to this and learn how to write programs for this environment first before moving on to modern systems. True power is being able to write useful stuff with only 64 kB of RAM and 1 MHz of processor, have it run in an acceptable time frame, and then take those skills and scale them up to today's multi-core/multi-gigahertz/multi-gigabyte address spaces.

    • I think all first-year computer science / programming / engineering students should be introduced to this and learn how to write programs for this environment first before moving on to modern systems. True power is being able to write useful stuff with only 64 kB of RAM and 1 MHz of processor, have it run in an acceptable time frame, and then take those skills and scale them up to today's multi-core/multi-gigahertz/multi-gigabyte address spaces.

      While I agree, I wonder if this is actually true. To what extent does

      • by Euler ( 31942 )

        I think it may be somewhat daunting for a first-year student to go through the learning curve of what would effectively be debugging and optimizing code for an embedded platform... But it needs to be threaded through every CS course how to write code that is appropriate for the given application: speed/code/memory trade-offs, how to avoid code that blocks inappropriately, and how to write programs that correctly work with the OS in terms of yielding, using timers, and such. Also, just calling the math library r

      • For all others it might not matter so much how the compiler and the OS handle memory allocations and the like, and it may be more useful to focus on the program structure instead of the implementation on the CPU.

        With a cache miss costing 200 cycles (more or less) and a trip to L2 being more expensive than computing a square root, our choices with regard to memory layout are possibly MORE crucial now to getting decent economy and performance out of the machines we have today.

        • by matfud ( 464184 )

          With programs generally being much, much larger, many such optimisations can just be lost in the noise. In some domains it is still useful, but even more difficult than it was in times gone by:
          Pipelining/OOE/register renaming, and the multitude of processors with the same IS but remarkably different implementations, cache sizes/associativity/memory, memory size and connectivity.
          How many cores per CPU, and how do they share those caches and memory buses?
          How many independent memory banks per processor/chip (can cha

          • I'm not talking about hand-tuned assembly; I'm talking about being aware of your data layout and what data is hot, accessed together, and so on. Those things ARE a big deal and are commonly doable, even if they are commonly ignored. The famous case where Bjarne S had an associate demo simple random insertion/random deletion using a vector vs. a list is simple but instructive.

            Older systems and complexity theory might favor the list, but the vector beat it badly on modern hardware simply because the h
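            A toy C sketch of the effect being described: the same O(n) traversal over contiguous storage versus heap-scattered list nodes (timing code omitted; results vary by machine):

              #include <stdio.h>
              #include <stdlib.h>

              struct node { int value; struct node *next; };

              int main(void) {
                  enum { N = 1000000 };
                  int *arr = malloc(N * sizeof *arr);
                  struct node *head = NULL;
                  for (int i = 0; i < N; i++) {
                      arr[i] = i;
                      struct node *n = malloc(sizeof *n);  /* scattered on the heap */
                      n->value = i;
                      n->next = head;
                      head = n;
                  }
                  long a = 0, l = 0;
                  for (int i = 0; i < N; i++)
                      a += arr[i];                   /* sequential, prefetch-friendly */
                  for (struct node *p = head; p; p = p->next)
                      l += p->value;                 /* pointer chasing, cache-hostile */
                  printf("%ld %ld\n", a, l);
                  return 0;
              }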
            • by matfud ( 464184 )

              I am not disagreeing with you, but some of the things I pointed out above are variables with respect to hardware, and most of them DO affect memory locality. If you make an assumption that x parts of data are locally cached, you can be truly wrong when run on a compatible chip with a different cache layout, or even the same chip but with a different offset that is only partially aligned with the cache.

              Some algorithms in general tend to keep the data used close together and they may benefit. Some algorithms use dat

              • Depends on the breadth of systems being targeted. If we exclude exotics (which have their own very vertical software and optimizations), it's really remarkable how uniform current platforms are. Everything from the ARM in a tablet to the 16-core desktop will have a prefetcher, multilevel cache, and a huge cost for missing. Simple things like grouping the frequently modified data together in a single allocation (like std::vector) and then unifying those into more complex data structures by indexing into tho
                • by matfud ( 464184 )

                  When your data and/or instructions are not aligned on cache boundaries and/or not limited by them (4k seems pretty standard for x86 and x64 3rd-level caches, but the processor may have different ideas about how that is segmented for L2 and L1), then you can cause cache thrashing by trying to tune for those specific variables. (You can cause the same by not tuning for them.)

                  You say that it is remarkable how uniform current platforms are? I am interested as to why you think that, as even on just x86 and x64 ther

                  • On most current architectures a cache line is 64 bytes or a small multiple of 64 bytes, not 4K, and the CPU will almost always have a prefetcher that will try to optimize sequential accesses. Here is a pretty accessible rundown [msdn.com] on some of the things a person can do very easily.
    • by cnettel ( 836611 ) on Saturday September 27, 2014 @03:38PM (#48010543)

      I think all first-year computer science / programming / engineering students should be introduced to this and learn how to write programs for this environment first before moving on to modern systems. True power is being able to write useful stuff with only 64 kB of RAM and 1 MHz of processor, have it run in an acceptable time frame, and then take those skills and scale them up to today's multi-core/multi-gigahertz/multi-gigabyte address spaces.

      Cheap memory accesses compared to instruction latency, over your whole memory space. Memory locality basically doesn't matter. Branching is also almost free, since you are not pipelined at all. If you extrapolated a Z80 machine to multi-gigahertz and multi-gigabyte, you would get a much simpler low-level structure than you actually have. Some of the lessons learned regarding being frugal make sense, but you will also learn a lot of tricks that are either directly counterproductive, or at the very least steal your attention from what you should be concerned with, even in those very cases where you are really focusing on performance and power efficiency (in addition to the myriad of valid cases where developer productivity is simply more important than optimality).

      It used to be that you couldn't pre-calculate everything, since you didn't have enough storage. These days, you shouldn't pre-calculate everything, since it will actually be cheaper to do it again rather than risk hitting main memory with a tremendous number of random accesses, or even worse, swap. Up to some quite serious data set sizes, a flat data structure with fully sequential accesses can turn out to be preferable to something like a binary tree with multi-level indirections. (Now, the insane number of references even for what should be a flat array in anything from Java to Python is one reason why even very good JITs fall short of well-written C, Fortran, or C++.)

    • by Z00L00K ( 682162 )

      Add to it the great book Programming the Z80 [freecomputerbooks.com] by Rodnay Zaks.

      That book is one of the best books I have encountered when it comes to how to utilize a device.

      Personally I think that it should be in the collection of books even if you don't aim to program specifically for the Z80 because it explains a lot of general CPU architecture and logic as well.

    • "Back on Earth, I used to go to a little tavern called the Z-80 Club. Programmers from the nearby industrial park usually gathered there after work. A few students like myself were tolerated, but expected to keep out of the way. One guy always came in at 4:30. He looks about 40 years old with long hair and clean shaven. He'd walk in and be treated like he owned the place and always sit the same little table in the corner. 'Who's he?' I asked after noticing all the deference people showed him. 'Assembly prog
    • While we're at it, all undergraduates should only be allowed to submit programs on punch cards, and have to wait 3 days to see if their program compiled and ran. And walk to class barefoot in the snow.
    • by khallow ( 566160 )
      How about from the currently esoteric point of view of rebuilding human society from scratch?

      The Z-80 is one of a few chips that has the unusual feature that it is both simple enough to be built out of vacuum tubes or crude handmade integrated circuits (for example, you would need somewhere in the neighborhood of 5k to 10k tubes, complex but not impossible for a small machine shop) yet barely complex enough that you can run a simple version of Linux on it (I gather Linux hasn't yet been ported, but there
  • i remember (Score:3, Interesting)

    by acdc_rules ( 519822 ) on Saturday September 27, 2014 @02:19PM (#48010267)
    Ahhh, those were the days. A whole CPU in a 40-pin DIP. You could actually do useful things with this mounted on an experimental breadboard. Thanks for bringing back memories.
    • by Euler ( 31942 )

      While you can still buy a lot of embedded processors in DIP format, I haven't considered it much. But yes, those were the days. Now you can just buy reasonable micros on affordable eval boards or other very simple boards with plenty of wire-wrap headers or solder points. You cut out a lot of the mundane work like breadboarding a voltage regulator, memory, oscillator, serial transceiver, etc.

    • by Megane ( 129182 )
      Even though most of them aren't DIP anymore, you can do a lot more useful things with modern microcontrollers because they don't waste most of their pins on an address and data bus.
    • by Nethead ( 1563 )

      Agree! Though I was a sixer, I miss those days. A wire-wrap board, some '138s, and JEDEC RAM. Put in a 6522 (or an 8255 in your case), maybe a 6551, and you have a computer. I had a hacked image of CBM BASIC that would take I/O from a serial chip. Did wonderful things with that.

      I miss the 8052AH-BASIC very much; I made a payphone controller board out of that one. That was one hell of a chip. So quick to code on. Self-sensing serial speed, had the software to burn an EPROM. Give it +5, a colorburst xtal, and ha

  • by MillionthMonkey ( 240664 ) on Saturday September 27, 2014 @03:14PM (#48010473)

    Back in 1980 my parents got me a British ZX81 kit to assemble, with 1024 bytes of RAM. (I still have it buried in the closet along with my other antiques; AFAIK it still works.) It ran BASIC so slowly that you could actually read the code about as fast as it executed, so I was "forced" to learn assembly language. I was amazed by how fast it was: it ran a million operations in just a few seconds! (Wow.) You had to start by writing a BASIC program:

    10 REM AAAAAAAAAAAAAAAA
    20 PRINT USR(16514)

    Then you had to POKE each assembly instruction into the comment, starting at 16514 for the first "A". The comment line would slowly turn into "10 REM x&$bL;,$_)[vU7z#AAAAAAAA", and line 20 printed out the return value from the BC register.
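    For illustration, a reconstruction of the kind of routine involved (not the poster's actual bytes), as a C array with the Z80 mnemonics alongside: USR(16514) jumps to the first byte of the REM, and PRINT shows whatever the routine leaves in BC.

      /* Four bytes POKEd at 16514-16517; PRINT USR(16514) then prints 12345. */
      unsigned char usr_routine[] = {
          0x01, 0x39, 0x30,  /* LD BC,12345  (0x3039, low byte first)      */
          0xC9               /* RET          (back to BASIC, result in BC) */
      };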

    Saving any ZX81 program onto a cassette tape was excruciating; they recorded as several minutes of loud, high-pitched screeching. Usually you needed to save them twice, because it failed half the time. Then, to load the program, you had to cue the tape: find exactly where the start of the screeching was, rewind several seconds, play the tape, and only then could you hit enter on LOAD. (Otherwise LOAD got confused by the *click* when you pushed the play button on the tape player.)

    You young people don't realize what an easy life you have.

    • by pubwvj ( 1045960 )

      What you describe has nothing to do with the Z80 processor but rather the ZX81 kit you had.

      I had an Exidy Sorcerer, which had a Z80 in it also. A great computer. It was my third, but not my final, Z80-based machine. All of them were tops for their time and a lot better than what you're describing. You merely had a poorly done implementation.

      It is important to differentiate the OS (what you're really complaining about) from the processor.

      • by david.given ( 6740 ) <dg@cowlark.com> on Saturday September 27, 2014 @04:18PM (#48010685) Homepage Journal
        If you're interested in Z80 operating systems, go look at CP/M (seriously: get an emulator, some tools, and write programs for it). It's a fascinating look into just how minimal you can make an operating system and still have something that's not just functional but which spawned, back in the day, a vast ecosystem of tools and software. You suddenly realise just how much fat there is in a modern system (and also why modern systems have all this fat).
        • One thing I remember from CP/M was that when a program terminated, you could save the memory it had been running in to disk as an executable program. A lot of programs (e.g., WordStar) used this to avoid having to read any kind of configuration file: instead you just changed settings within the program, exited, and saved the memory; when you ran the saved version you had your saved defaults. I also always kept a 0-length file around in case I accidentally exited a program such as a text editor without saving:
          • Also in the WordStar executable was the ASCII text: "Nosy, aren't you?", a message to those disassembling the program.
            • Hah, I never knew that, but that's brought back a memory of disassembling "Halls of the Things" on the ZX Spectrum.

              Was running through the code with my monitor/disassembler (DevPac, for those of you with long memories!) and I found the standard keyboard mapping table that pretty much every program had, but this time, immediately following it, was the ASCII text: "Yes cunt, a keyboard table". I nearly fell off my chair laughing... that someone had such hostility to spend the bytes at a time when mem

        • I fell in love with CP/M when I started working with it, after being familiar with several different OSes (mostly on mainframes). Here was an OS that got in my way no more than any other I'd tried, and it used few resources in doing it. (Later, MacOS became my favorite, since it actually seemed to help me, and then I encountered Unix. I haven't found anything I like better than Unix/Linux.)

      • Yes, it was obviously a very shitty system, since they were selling them thirty years ago for about $99. It was like a 1980s version of a Raspberry Pi. But it did have a Z80 in there, and that's how I learned assembly when I was a kid; I just dug down to it through all the crap it was soldered to.
  • Now if we can only figure out how to create 96 by 64 bit displays.
  • It looks like the firing order for an 8 cylinder engine. I thought maybe the engineer tasked with that pin out was moonlighting in a garage somewhere.
  • by MikeTheGreat ( 34142 ) on Saturday September 27, 2014 @04:11PM (#48010657)

    It's always refreshing to see stuff like this waft across the front page of /. every now and then. I wish there were a way to re-apply the "News For Nerds... Stuff That Matters" logo to the top of the page, but only on stories like this.

    (Pro-Tip: Please mod this "+1 Nostalgic" :) )

  • 5v lines (Score:5, Informative)

    by Alioth ( 221270 ) <no@spam> on Saturday September 27, 2014 @04:19PM (#48010695) Journal

    It's not at all unusual for the 5V and 0V (Vcc and GND) lines to be in the middle of a DIP package (the Slashdot summary sort of implies it's an odd thing). It means the leads within the package are shorter for those lines, lowering parasitic inductance and capacitance for the power supply to the chip. Generally you want the decoupling capacitors to be as close to the actual chip as possible so they can be as effective as possible as the power demands change. Putting the supply pins at opposite corners (as is done on things like 14-pin 74-series standard logic) would significantly lengthen the distance between the chip's actual supply rails and the decoupling capacitors.

    • by Megane ( 129182 )
      In fact, the earliest surface-mount TTL (the 54J/74J series) had the power and ground lines in the middle of the chip. Also, a few random regular TTL chips have middle power pins just to keep you on your toes.
      • by matfud ( 464184 )

        Many modern chips that have analog components have the analog supply and ground close to the pins used for the analog functions (for pretty obvious reasons).

        You may no longer be worried about running a few extra digital data paths to make the outside data bus be in order, but you still need to separate analog and digital functions, and that limits the output pins they can use.

  • by Squidlips ( 1206004 ) on Saturday September 27, 2014 @04:54PM (#48010851)
    Thank you!
  • by MindPrison ( 864299 ) on Saturday September 27, 2014 @05:34PM (#48011015) Journal
    Articles like this make me warm and fuzzy all over, probably because I'm an old geezer compared to the kids of today, but I think it's very important for anyone serious about hardware and/or software development to dive into the past once in a while. It's a great way to learn simplicity and how the hardware inside our relatively complicated devices of today really works.

    I'm a moderator of a major international electronics forum, and I've lost count of how many times the young generation feel completely lost when they're fresh out of school, trying to understand very complex structures. They either lack understanding of general electronics, or of how the microprocessor works with its different layers, RAM, and ROM (especially in embedded systems, where they are working with complex IDEs and a maze of classes & libraries); they simply forget how the hardware works and focus too much on programming.

    I understand that frustration exactly, especially since this old geezer was lucky enough to grow up with basic home computers like the Commodore 64, ZX81 (Z80 CPU), Spectrum, Oric, Dragon 32, BBC, etc. We often did our own hardware modifications and made fast I/O-port load & save systems ourselves, because we had a basic understanding of how the innards worked, and it really wasn't rocket science.

    Sometimes it is relevant to take a step back in time (like this article does, explaining some of the oddities of the Z80 processor) and spark interest in these old CPUs and their systems and possible uses even today. As an example, I have a HUGE stash of microcontrollers in my workshop, and these are an absolute GEM to me. Why? Because they are very simple to work with. Like the good old Commodore 64 or ZX81, they don't have advanced hardware layers where you have to do special addressing to access certain memory areas, or have to be kind to the operating system in order to write something to control your hardware (homemade or otherwise); it's as simple as writing a few pokes into memory... and you can turn on/off external units such as relays and lights, or read on/off states from your sensors... maybe build your own satellite tracker the easy way, or control your homemade lawnmower unit.

    And we still have VAST amounts of these MCUs unused all over the world, and they are SUPER USEFUL (if you didn't get the above, think standalone apps... as if each MCU were an app for a specific task). An MCU usually comes with internal memory (RAM/ROM/flash) and, most importantly, I/O, ready to use: just program it... and watch it go. If the kids of today understood this, they'd have a BLAST programming these (just watch the maker scene with their modern versions... Arduino etc.), and the sky's the limit.

    More articles like these, thanks; they bring /. back to its roots.
  • I had always wondered why the refresh register only counted 7 bits wide, which made that feature mostly useless when 64K DRAMs came out. (A few 64K DRAMs were made with 7-bit refresh, probably because of the Z80.) Turns out that the increment/decrement circuit used in the Z80 had carry lookahead for groups of bits: 7, 5, 3, and 1. The I and R registers were implemented as a single 16-bit register, and to keep the I register from incrementing all the time, only the first group of the increment circuit was used,
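    The 7-bit behaviour is easy to state in C; a sketch of the documented R-register semantics (bit 7, loadable with LD R,A, is left alone, while bits 0-6 bump on each opcode fetch):

      #include <stdint.h>

      /* Z80 refresh counter step: bit 7 preserved, bits 0-6 increment,
         so the refresh address repeats every 128 fetches. */
      uint8_t r_step(uint8_t r) {
          return (uint8_t)((r & 0x80) | ((r + 1) & 0x7F));
      }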

  • I know Fred Weeks personally. He retired from his Zilog days a multi-billionaire and landed in Prince Edward County, Ontario.

  • I worked at Motorola in the late '80s in the Cellular Infrastructure Group. Moto's cellular switch was Z80-based, but it was a helluva hack. The thing had six Z-80s arranged in three nodes, each with an active processor and a hot standby. We had a custom MMU that extended the address space to 24 bits and could be mapped in 4096-byte blocks. Of the 16 MB address space, 4 MB was shared and simultaneously accessible by the active and standby processors.

    It was mostly programmed in assembly, but we did have a "
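    A hypothetical sketch of that style of banked translation (the table name and layout are invented for illustration; the comment doesn't give the real hardware's details): a 16-bit Z80 address is split into a 4 KB page and an offset, and the table supplies a frame in the 24-bit physical space.

      #include <stdint.h>

      /* 16 logical 4 KB pages -> 12-bit physical frame numbers. */
      static uint16_t map_table[16];

      uint32_t translate(uint16_t logical) {
          uint32_t page  = logical >> 12;             /* which 4 KB logical page */
          uint32_t frame = map_table[page] & 0x0FFF;  /* 24-bit space / 4 KB     */
          return (frame << 12) | (logical & 0x0FFF);  /* physical address        */
      }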

  • It was so the CPU was forced to slow down so the typebars didn't jam.
  • Anyone who designed circuitry regularly enough with the Z-80 (I must have designed over 40 boards using the Z-80 during my career) used to think they did it that way so you could put the ROM chip next to the processor while only using a few through-board connections. A 16K ROM could easily be connected to the Z-80 on a single-sided PCB with just 6 jumpers that fit neatly beneath the Z-80 chip itself.

    Maybe that's not the reason it was built that way, but working with other designers at the ti
