Ternary Computing Revisited

Black Acid writes: "American Scientist's Third Base was a nice introduction to the advantages of base 3, but didn't really explain ternary computing. Since 1995, Steve Grubb has maintained trinary.cc, which covers many aspects of computing with base 3. Not only are the basic unary and binary gates enumerated, which I independently verified as being basic building blocks, but real-world circuits are described as well. Half and full adders, multiplexers and demultiplexers, counters, shift registers, and even the legendary flip-flap-flop are all covered with ternary algebra equations and schematics. Steve Grubb elegantly touches on the problems of interfacing to binary computers, although no schematics are given. Perhaps most impressive are the Transistor Models - schematics of the basic gates which can be built from cheap parts available at your local electronic component store."
This discussion has been archived. No new comments can be posted.

Ternary Computing Revisited

Comments Filter:
  • on, off, ? (Score:4, Funny)

    by eric6 ( 126341 ) on Monday November 19, 2001 @10:03AM (#2584297) Journal
    does ternary processing mean you will be using on, off, and some third state of electricity? "dim"?
    • No, it uses the direction of the current, or something like that... so one way, the other way, and off.
    • As stated before, it's on / off and maybe ;)
      Is the bit dead? You'll never know till you open the hard drive!
    • Re:on, off, ? (Score:2, Informative)

      by drightler ( 233032 )
      I believe a previous article said it was "Less Than", "Equal To", "Greater Than" as opposed to "Equal", "Not Equal"
    • by popeyethesailor ( 325796 ) on Monday November 19, 2001 @10:07AM (#2584320)
      Close. It's like On,Off and CowboyNeal.
    • third state of electricity? "dim"?

      No no, it's not a state of electricity, it's the rest state of all the Anonymous Cowards from around here.
    • Re:on, off, ? (Score:3, Informative)

      by Novus ( 182265 )
      Instead of two voltage levels (traditionally zero or +5 V or something like that), you use three voltage levels. It is usually easiest to use zero, +x V and -x V, but in theory, you could use whatever voltages you like.
    • by alienmole ( 15522 ) on Monday November 19, 2001 @10:13AM (#2584347)
      Disclaimer: I haven't read any of the articles.

      However, in stodgy old binary, the levels are typically something like 0 Volts (i.e. "off") and 5 Volts (or 3.5 Volts). A "typical" ternary system would add a negative voltage, like -5V (or -3.5V), since that's easier to detect reliably than an intermediate positive voltage value.

      So to answer your question, yes a "third state of electricity" is used, one which was previously being ignored in binary circuits. Instead of on, off, and dim, think of positive, off, and negative.

    • Or it could be like the infamous three position "Lucas switch" of British Car fame: Off, Dim, Flicker.


      My interpretation of the Lucas switch: Dim, Flicker, Short (after which you let all of the smoke out of the wire, which causes it to quit working)


      For those of you that have tinkered with British or Italian cars, you have dealt with this before (or will soon). :-)


      For those of you who haven't, Lucas is the company that supplied/supplies the British automotive industry with electrical components (switches, relays, etc.) The early design of their electrical connectors left much to be desired. Ironically, this same company produces some of the best brakes and other hydraulic components in the industry (Lucas-Girling).

    • More like +1, 0, -1, or from the article: 0, 1, 2. Using the signed-digit (+1,0, -1) representation has some handy properties, like adders without carry-chains.

      The site is interesting; Mr. Grubb has definitely put some thought into it.

      The one issue that I find problematic is the requirement for bipolar transistors to get a +V and a -V. CMOS uses little power compared to bipolar. A trinary chip would have to be based on bipolar technology which would require more power and be less dense. Also, when laying out a chip one now has to worry about two power supplies instead of one: scratch one layer of metal. Now it's harder to route and even less dense.

      So there might be a use for trinary logic in chips that are already bipolar for some reason. A smidgen of trinary logic latched onto a mostly analog chip of some sort? I don't think they have a chance against current CMOS binary chips.
  • If a... (Score:2, Funny)

    If a Bit is short for Binary Digit...

    Does that mean that a Ternary digit is a 'Tart'? Do 8 Tarts make a 'Tight'?

    We could be having MegaTights of Ram, and GigaTights (or even TeraTights) of disk!

    Tony

    • Opps.... Spelling...

      Tight should be spelled 'Tyte'!

    • If a Bit is short for Binary Digit, then why can't Tit be used for ternary digit?

      That would allow for Megatits and Teratits, and let the nerds guffaw around the water cooler more often.

    • Actually, if Bit is short for BInary Digit than
      a Ternary Digit would be a

      Terd. (Turd?)

      I knew it! Ternary computing is full of shit.
      • You didn't really make too much sense there. You went from bit coming from the first two letters of binary and the last letter of digit, and then you said that terd comes from the first three letters of ternary and the first letter of digit. Basically, your conclusion was not actually based on any known fact remaining true.
    • If a Bit is short for Binary Digit...

      Actually, 'bit' is short for binary digit.

      That would make the shortform for ternary digits TITS.

      Which would then lead to TYTES, KILOTYTES, MEGATYTES, etc ...

      Of course, it would be better to have something that doesn't sound so alike ... bit-tit, byte-tyte ...

      how about TET and TEET? TEET being the single ternary digit and TET being a logical grouping 3^3=27 TEETS?? The only logic there being that it's easier to say MEGATET than MEGATEET ... and MEGATEETS sound silly. ;)
    • If a Bit is short for Binary Digit...

      Does that mean that a Ternary digit is a 'Tart'?

      I was always partial to 'trit' or 'trin'.

      Just out of curiosity, what do any Mage players think of this? A dream come true?

    • Does that mean that a Ternary digit is a 'Tart'? Do 8 Tarts make a 'Tight'?

      No, I think it's better called Trinary. Tertiary would be a more appropriate name if base-2 were instead called Secondary. If trinary is used, then you've got Trinary Digits, or trits.

      However, the real question is when they get to base-4 computing, will they call it quits?

    • Hmm.. at my fictional computer company (Zapitron -- an in-joke among my friends), we always called 'em trits, trytes (9 trits), and words (27 trits). But now that everybody's moving from those old 27-trit computers to the new-fangled 81-trit machines, "word" might get redefined.

      BTW, one of the neat things about Zapitron computers is that unlike conventional computers with their software emulated null devices, Zapitron machines have hardware null devices, so that you can burst-read EOFs at billions of EOFs per second. Ain't innovation great?

      • I learned "trit" myself in my fundamentals of computing class last fall. This semester I'm in a class where we're doing a lot of assembly programming, so when we have to do bit masking and such, we're referring to the individual hex digits or half-bytes as nibbles.

        So in trinary, you couldn't go in hex halves, you'd have to go by base-9 thirds. Would a third of a tryte perhaps then be a "katz" ? No, we want a diminutive, not a superlative... Hackneyed is too long... a "stale"? well, that does it for my thesaurus. Suggestions?
    • Binary Digit == Bit

      So maybe a Ternary Bit would be a Trit.

      and a Ternary Byte would be a Trite.

      And a Nibble, of course...

      Hobbes
  • This may be a dumb question, but if ternary logic states are represented by -ve, zero and +ve voltages on the gates, why are the states enumerated as '0', '1' and '2' rather than '-1', '0' and '1'?

    Just wondering.

    • Because you want the states to represent numbers
      in base 3 which has the digits 0,1 and 2.
      • Actually, as the previous article (Third Base) pointed out, it really is -1, 0, and 1. This system works even better than 0, 1, 2 since it is symmetrically balanced. That article used a symbol similar to a 1 with a hyphen through it, for which I'll use the "t" key.

        The example given is 19, which in 0, 1, 2 is expressed as 201 [ (2* 3^2) + (0* 3^1) + (1* 3^0) ] = 18 + 0 + 1 = 19.

        In -1, 0, 1 this is represented as 1t01 [ (1* 3^3) + (-1* 3^2) + (0* 3^1) + (1* 3^0) ] = 27 - 9 + 0 + 1 = 19.

        Although it does often take more digits to represent the number via the second method, the balanced system does have many advantages. Without getting too deeply involved, the important one is that you have one system for both positive and negative numbers, thus eliminating the need for a separate sign.

        I hope this basic summary of that previous article helps.
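The conversion sketched in that summary can be done mechanically. A minimal Python sketch (the function name is my own, and 'T' stands in for the hyphenated-1 symbol the article uses):

```python
def to_balanced_ternary(n):
    """Convert an integer to balanced ternary; 'T' stands for the -1 digit."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3
        if r == 0:
            digits.append("0")
        elif r == 1:
            digits.append("1")
            n -= 1
        else:  # a remainder of 2 is written as a -1 digit plus a carry of 1
            digits.append("T")
            n += 1
        n //= 3
    return "".join(reversed(digits))

# 19 comes out as 1T01, matching the worked example above
```

With this digit set, every integer, positive or negative, gets a representation without a separate sign, which is the advantage the comment describes.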
    • In the binary number system there are two digits: zero and one. The ternary number system has three digits: 0, 1, 2. Imagine a hypothetical "decinary" system: How many digits would it have?

      The way these digits are represented in the computer is unimportant, and shouldn't be confused with the way we represent them on paper.
      • Imagine a hypothetical "decinary" system: How many digits would it have?

        Duh! let me guess now...

        Well, thanks. Actually, I'm quite familiar with number bases.

        I was only wondering whether anyone had any alternative thoughts on the subject - which one of the posters above did.

      • Imagine a hypothetical "decinary" system..

        Didn't the ENIAC use denary to represent numerical values internally? Or was it another computer of that era?

    • There are two notations to represent trinary:
      • Standard, which is the enumeration 0, 1, 2 and produces numbers such as 122101 and 202210
      • Balanced, which uses -1, 0, 1 as its notation, and is much easier for humans to work with. It produces numbers such as 10101 (note: the BOLD digit should actually carry an overline, marking it as negative) and 11001


      For example: 1010 (with the leading 1 being the negative digit)
      = [(-1) * 27] + [0 * 9] + [1 * 3] + [0 * 1] OR
      = [(-1) * 3^3] + [0 * 3^2] + [1 * 3^1] + [0 * 3^0]
      = [-27] + [0] + [3] + [0]
      = -24
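Evaluating such a string back to an integer is just a base-3 positional sum; a small Python sketch (writing the overlined digit as 'T', a notation assumed here):

```python
def from_balanced_ternary(s):
    """Evaluate a balanced-ternary string, with 'T' as the -1 digit."""
    value = 0
    for ch in s:
        value = value * 3 + {"T": -1, "0": 0, "1": 1}[ch]
    return value

# The example above: T010 evaluates to -24
```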
    • "What an awful dream. Ones and zeroes everywhere. [shudder] And I thought I saw a two." -Bender


      "It was just a dream, Bender. There's no such thing as two." -Fry

  • by CDWert ( 450988 ) on Monday November 19, 2001 @10:19AM (#2584366) Homepage
    This is all nice, but if we have to go to all the effort of reinventing the wheel, why not go all the way? I mean, if we have to come up with all-new components and software, why not go the analog route?
    Digital computing gained popularity for many reasons: cost-effective to build, easy to program. With the state of current electronics this is no longer necessarily the case, but here we are.
    Analog computing has many advantages over digital computing, especially in the AI arena, since there can never be a digital concept of infinity.
    Rockets in the beginning were put into orbit using ANALOG computers; there is a reason: accuracy to the nth factor.

    I played around with analog computing in the 70s to early 80s. Cool stuff, if more had been available; fact was, everyone was happy with their 8-bit PC.

    Trinary computing sounds a little like taking something that was settled on in the first place and resettling it again.
    I mean, come on, isn't the goal of computing to have a supercomputer take control of our national defense grid when it becomes sentient?
    • by yellowstone ( 62484 ) on Monday November 19, 2001 @10:59AM (#2584569) Homepage Journal
      Disclaimer: if this is a troll, then you got me. Ha, ha.
      Trinary computing sounds a little like taking something that was settled on in the first place and resettling again
      Apparently, you can't be bothered to read the Third Base [sigmaxi.org] link referenced in the body of the story. To summarize:
      1. The cost of representing a particular number in a given base depends on a) how many digits there are in the number in that base, and b) how many digits there are in the base
      2. Analysis of said formula gives a minimal value at e=2.718281828...
      3. Dealing with numbers in irrational bases is problematic, but the same formula also suggests that base 3 comes closer to optimal than base 2.
      4. In the end, none of this matters, since AYBABTU.
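The radix-economy argument summarized above can be checked numerically. A small Python sketch, assuming the continuous cost r * log(N)/log(r) (radix times width) for a fixed capacity N:

```python
import math

def radix_cost(r, capacity=10**6):
    """Radix times width: log(capacity)/log(r) digit positions are needed
    to count up to `capacity`, and each position has r possible states."""
    return r * math.log(capacity) / math.log(r)

costs = {r: radix_cost(r) for r in (2, 3, 4, 10)}
# The cost is proportional to r/ln(r), minimized at r = e ~ 2.718,
# so base 3 is the cheapest integer radix; bases 2 and 4 tie exactly.
```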
      • To quote the core argument of the article:

        Evidently we need to optimize some joint measure of a number's width (how many digits it has) and its depth (how many different symbols can occupy each digit position). An obvious strategy is to minimize the product of these two quantities. In other words, if r is the radix and w is the width in digits, we want to minimize rw while holding r^w constant.


        This may be an "obvious" strategy, but is it a useful one? A modern computer typically contains hundreds of millions of digits in base two. According to this theory, the cost of a computer (i.e., the value we are trying to minimize) is equal to the radix times the width. If this is true, we can swap the radix and the width to get a system that has precisely the same cost: thus, a machine that stores one hundred million digits in base two costs the same as a machine that stores two digits in base one hundred million, because two times one hundred million equals one hundred million times two.

        In practice, building an electronic computer capable of distinguishing between one hundred million distinct voltage levels is a practical impossibility. Early attempts to build machines that had just ten distinct voltage levels were abandoned, not because of any theoretical arguments about data density, but because these devices turned out to be extremely difficult to manufacture and notoriously unreliable in operation. A computer with one hundred million distinct voltage levels, if it could be built at all, would certainly cost several million dollars to construct, and it would probably require a special power supply and several pounds of electromagnetic shielding. It would certainly not "cost the same" as a typical desktop computer.

        Even if we were to ignore the absurdity of the basic premise of the theory, and take for granted that the trinary computer is better than binary in some abstract way, there is still no compelling reason to switch. We have already invested billions of dollars into binary technology, and the benefits of that investment are undeniable. If you think companies like Sun and Apple have a hard time selling theoretically superior hardware in a market dominated by cheap PC clones, imagine how much harder it would be to introduce a computer that is so fundamentally incompatible that it does not even work with binary data. The dominance of the Windows platform proves that people don't want theoretical perfection: they want something that gets the job done, they want it to be cheap, and they want it now.
      • 2. Analysis of said formula gives a minimal value at e=2.718281828...

        My understanding of this is that e is optimal only if the cost of supporting more signal levels grows linearly with the number of signal levels represented. Somehow, I really don't think that this is the case. I'm more inclined to believe that there is a step increase between 2 and 3.
      • The cost of representing a particular number in a given base depends on a) how many digits there are in the number in that base, and b) how many digits there are in the base

        This assumes that the cost of increasing the base is the same as the cost of increasing the number of digits. There is no particular reason to expect this to be true. So far as I can tell, it definitely is not true when comparing the transistor count of binary, trinary, and higher bases. The step from binary to trinary is a big one -- either your gates are like analog circuits, or they are essentially a double binary gate (100% more circuitry for 50% more states/bit). The step from 3 to 4 is not nearly as big...

        I dissected their inverter circuit in a different post -- in short, it won't work for the intermediate level, and in fact closely resembles a primitive ancestor of TTL binary.

        This is not to say that higher bases are always and everywhere a bad idea in electronics, just that you need to be cautious when taking designs from someone who hopes someone else will build them... Transistors are becoming much cheaper than wires, and higher bases really save on wires. So does time-division multiplexing (e.g., sending the bits twice as fast on half as many wires), and at this point we better know how to do this, and can make it work more reliably at lower cost as compared to trinary. Eventually, multiplexing will hit some sort of practical speed limit, and then sending multi-level signals may be cost-effective. I just don't see any particular reason to stop at 3.
        • I dissected their inverter circuit in a different post -- in short, it won't work for the intermediate level, and in fact closely resembles a primitive ancestor of TTL binary.


          Their inverter seems more related to current tristate outputs (i.e. zero, one, off) than trinary, but you might want to check the RSFQ tristate logic [sunysb.edu]. Seems the links for more info are down at the moment, though.
          • No, their inverter doesn't have an off state. It's either low (Q1 output transistor on to output almost -3V), or high (Q1 off, R2 pulls up to +3V). You need two output transistors for tristate (one in place of R2), one to turn on for high, one to turn on for low, both off for the no output current (tristated).
            • Sorry, I skipped some mental steps in my comment. I looked at their inverter circuit, came to the conclusion that as a 3 level voltage inverter it was seriously lacking (saturated emitter follower??) and looked at the output stages of their other gates to come up with that comment.

              (Circuit tricks that depend on transistor gain, but NOT equal gains, to set the switching thresholds and approximate a connection to ground are not, IMO, useable as a real circuit, even though a 'simulator' will work fine.)

              The best I could come up with out of their circuits was 'rotate down' - voltage switched positive and negative output, and although it's theoretically a complementary emitter follower, zero is open (to a resistor) zero output - add a saturating transistor between the output bases, and you've got a tristate. But you are correct, this is not the inverter that I was looking at.

              Meanwhile, the link I suggested is still applicable for naturally tristate circuitry - josephson junctions and magnetic flux work equally well for switching between positive, negative, and zero with no additional components.
    • Try putting your analog computer next to a microwave, TV, speaker, heater, source of vibration, etc. Try comparing the error on an analog calculation under near-perfect circumstances to that of a digital calculation. How will you store the data? If the data is stored digitally, how do you deal with A/D converter errors?

      In short, analog computers are fast and accurate for small calculations under controlled conditions. Digital computers are better for difficult calculations (Despite what you'd think, calculating orbits is not a difficult calculation by modern standards) and under arbitrary conditions, a digital signal is less prone to error.
      • Analog computers were used for orbital calculations for a simple reason:

        It's *very* easy to do differential and integral equations in analog circuits. It's much harder to do them digitally.

        Orbital and aerospace/ballistic calculations involve lots of integro-differential equations.

        FWIW, there's a (relatively) cheap source of analog computers available still - analog synthesizers. They can be interlinked and interchanged with analog computers, and the demands of music make sure they're fairly accurate. Wouldn't trust the space shuttle to an MOTM synth, but then again, I'd just use a binary computer.
    • The original ternary computer that the Russians built, "The Russian Refrigerator," was analog. They spent 10 years writing software for it, and by the time they had gotten guidance modules written for it, they could buy them from Westinghouse for 100x less in digital binary.

      Lordbyron
      www.wylywade.com
    • Analog computing has many advantages over digital computing, especially in the AI arena, since there can never be a digital concept of infinity

      How on earth do you create equipment to handle or even create this value of infinity?

      Binary computer, touch CMOS chip, it dies
      Analog computer, touch chip, you die

      personally I would prefer a computer that I could kill and not the other way around.

      Seriously though.
      IANAEE (I am not an Electrical Engineer), but I don't see any way in which you could generate or store infinite voltages, however infinite currents could be manipulated if the computer was made out of superconductive materials.

      With the current state of superconductor technology, the cooling rig for a machine like that would truly be a beast.
  • by Kiwi ( 5214 ) on Monday November 19, 2001 @10:31AM (#2584409) Homepage Journal
    One of the engineering problems w.r.t. ternary computing is how to have a crypto algorithm for ternary computing, since all of the modern crypto schemes assume binary computing.

    One of the nice things about the Rijndael crypto algorithm is that, because of its "wide trail strategy" design, it is easy to adapt to different environments, including ternary computing.

    I am sure that a variant of Rijndael which does everything in "trits" instead of "bits" would have the same security features as the current Rijndael algorithm. The only thing that would have to be re-invented is the S-box. The rest (changing the Galois field to a base-3 instead of a base-2 Galois field, and changing the MDS matrix used) could simply be adapted.

    - Sam
    • Any trinary machine can mimic a binary machine simply by getting rid of one of its states. So if we switch to ternary, the first ports of crypto libraries will just be binary libraries running on ternary machines.

      Second, separate theory from implementation, please. Very few areas of information theory and cryptography are dependent on base-2. That'd be counter to the entire point of math, which is to think abstractly enough that the principles apply to any base. Your statement is sort of like saying "moving from base 10 to base 16 is hard, since we learn arithmetic in base 10 and we'd have to relearn all our arithmetic". It's just not the case. A + B = B + A, no matter what base you use. And likewise, the integer factorization problem and the discrete logarithm problem are damn hard no matter what base you use.

      Implementations are highly dependent on binary systems, yeah--but that's only because we only have binary computers right now. As soon as someone comes up with a ternary computer, rest assured, Blowfish and 3DES and RSA and El Gamal and AES and all sorts of crypto goodness will be running on it in no time flat.

      Think about this one for a moment. Computers are Turing machines. We write in Turing-complete languages.

      But there's nothing in the definition of a Turing machine which requires that it be binary, trinary, or base radical two. The Turing machine doesn't care.
  • i guess every average geek can now make it to 3rd base with their girl. all they need to do is to visit radioshack and purchase their on-and-off switches and gates - sounds like someone's semi-automatic gun's fully loaded already! ;-)

  • Easy answer: signal-to-noise ratio renders ternary logic useless. Either it comes at a slower speed than binary logic or at higher power consumption.

    In addition, the site design doesn't make it look very credible.
    • Signal to Noise ratio renders ternary logic useless.

      The signal-to-noise ratio is similar for both binary and this type of trinary system, if they are using similar hardware. The three states of the trinary system given here are 0 (-x V), 1 (0 V), 2 (+x V). In other words, the circuitry in both cases still uses the same voltages, and thus the same resolution. The only addition is current direction, which is just a modification of normal binary logic circuits.

      the site design doesn't make it look very credible

      So if someone's site doesn't look fancy and professional, then their ideas are no good? Perhaps the guy who made the site was too busy coming up with good ideas to have time to make a fancy frames-and-flash site. I think Steve Grubb did a great job with the site. It's simple and direct; he has tons of examples and tables to show you how the concept works. He also has a tutorial which brings you through the ideas gradually, culminating with the actual circuit designs so you can build your own versions of his ideas. What's not to like about it other than the fact that it is plain?

        The signal-to-noise ratio is similar for both binary and this type of trinary system, if they are using similar hardware. The three states of the trinary system given here are 0 (-x V), 1 (0 V), 2 (+x V).

        You named it. Binary logic requires 0V and +Vcc; ternary requires -Vcc, 0V and +Vcc. That's twice the voltage range. Hence the power usage (almost, due to the different switching scheme) quadruples for the same SNR and speed.

        In addition, ternary logic will be much more sensitive to process variations. The logic will be A LOT more unreliable than binary logic, not to speak of initial production yield.

        So if someone's site doesn't look fancy and professional then their ideas are no good?

        No - he has too much graphics. Using HTML in a proper way (e.g. as _markup_ and not a _layout_ language) would have been fine. Even better would have been some TeXed PDF/PS paper.

    • Not necessarily.

      We are already dealing with circuits that have a voltage swing of 250mV. If I understand this site (which you have pointed out already as less than credible), then we could say 0 = 0V, 1 = 250mV and 2 = 500mV.

      I don't think that ternary logic is useless. But its usefulness is limited. I think it will be most useful in high-speed serial systems. InfiniBand comes to mind. (Though it wouldn't be implemented there now.)

      I don't agree with your assessment that it would be slower than binary; however, it will cost more power, especially if the first generation of designs relied on more comparators.
      • We are already dealing with circuits that have a voltage swing of 250mV. If I understand this site (which you have pointed out already as less than credible), then we could say 0 = 0v, 1 = 250mV and 2 = 500mV.

        Exactly, that's four times the power requirement. (See above.) Not to speak of the problems of building proper comparators for this voltage range.

        • err?

          Isn't that exactly what I said... "...however, it will cost more power, especially if the first generation of designs relied on more comparators"?
          • Isn't that exactly what I said...

            Isn't this great? I agree with you! ;) (OK, sorry, I was too quick while reading.)

            About the speed issue: the higher power requirements are generally dynamic power - hence it scales with clock speed. Therefore you would have to use a slower clock to achieve the same logic/power density as with binary logic. Since the power/area ratio is limited, you would have to go with a lower clock speed for the same manufacturing process.

            An addition: I presume that implementing a ternary logic gate in CMOS logic will eat almost as many transistors as an equivalent gate for two binary bits. The only advantage I can see for ternary logic is that of having ~30% fewer interconnections. That's not a big one...

  • The whois record for trinary.cc [trinary.cc] says the registration date was 2000-04-14. Did Steve Grubb maintain the content elsewhere, and then move it over to the trinary.cc domain upon registration?
    • Re:Since 1995? (Score:2, Informative)

      by CloneRanger ( 122623 )
      Actually, I used base3.org and changed to trinary.cc when the cc domain opened up. Before base3.org, it was available from my personal homepage at gate.net.

      -Steve Grubb
  • was always half the fun!

    Okay god that was bad
  • by glowingspleen ( 180814 ) on Monday November 19, 2001 @11:24AM (#2584692) Homepage
    "Dude, check this out. So I'm reading this news site for nerds, right, and I see this article on computers that work on some "base 3" kinda deal. So I'm like Damn! Those PCs can get to third base! I wonder if I can learn some tips from em?"

    "So I call one up and we agree to meet out at Woody's on 4th. And this PC never shows up. So I just keep drinking, waiting for it. I musta had too many because all I know is that I woke up in this apartment on 84th with these three midgets screaming at me in Portuguese."

    "Damn unreliable computers."
  • by hamjudo ( 64140 ) on Monday November 19, 2001 @11:40AM (#2584787) Homepage Journal
    It's no big deal. Some communications lines use 3 states: -v, 0v, +v. The 3 levels represent 0, 1, and "same as the last bit." They use the third level so that they change the voltage every clock cycle, and thus only one frequency travels down the wire.

    Naturally, such systems get enhanced so they can send more data at the cost of a little harmonic purity. For example, they could get 50% more data through by using pairs of trits to send 3 binary bits. The 9th state would be used to prevent leaving the line at one voltage level too long. The real encodings are better behaved in the analog domain, and therefore more complex, but lookup tables for the trit-to-binary conversions take very little silicon.

    For those who haven't memorized powers of three, if trinary logic, memory or signalling works better in some situation, 1 trit holds 1 bit, 2 trits hold 3 bits, 12 trits hold 19 bits, 31 trits hold 49 bits, etc...

    Going the reverse is also very simple. If you have an algorithm that works better in trinary, store 1 trit in 2 bits, 3 trits in 5 bits, 5 trits in 8 bits, etc... You don't need special hardware.
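The packing described above just treats a group of trits as one base-3 number and stores it in enough bits; a minimal Python sketch (function names are my own):

```python
def pack_trits(trits):
    """Treat a sequence of trits (each 0, 1, or 2) as one base-3 integer."""
    value = 0
    for t in trits:
        value = value * 3 + t
    return value

def unpack_trits(value, n_trits):
    """Recover the original trits from a packed integer."""
    trits = []
    for _ in range(n_trits):
        trits.append(value % 3)
        value //= 3
    return trits[::-1]

# 3 trits take 27 values, which fit in the 32 values of 5 bits, as noted above
```

The capacity figures in the comment amount to checking 2^b <= 3^t (for packing bits into trits) or 3^t <= 2^b (for the reverse).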

  • The multiplicative group of integers modulo n is cyclic when n is 2, 4, p^m, or 2*p^m, where p is an ODD prime. This means that (mod 2^m) does not give a cyclic group while (mod 3^m) does. Cyclic groups have manifested themselves in numerous cryptographic and number-theory applications, and ternary computers can implement a (mod 3^m) group as quickly as a binary computer can implement a (mod 2^m) ring: all you have to do is discard the most significant digits.
    I have been trying to implement a Number Theoretic Transform based multiplication system, and not being able to choose a power of two as my modulus is causing a major speed loss. Even though switching to a ternary base (in hardware) would not lead to more than a linear speed difference, having low-level functions run two to eight times faster (and doing away with the ugly code used to modulo a prime) is nice.
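The cyclicity claim is easy to brute-force for small moduli: look for a unit whose multiplicative order equals the group size. A Python sketch (the helper is my own, not from the post):

```python
from math import gcd

def is_cyclic_mod(n):
    """True if the multiplicative group of units mod n has a generator."""
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    size = len(units)  # Euler's totient of n
    for g in units:
        x, order = g % n, 1
        while x != 1:
            x = x * g % n
            order += 1
        if order == size:
            return True
    return False

# (Z/3^m)* is cyclic for all m; (Z/2^m)* is not for m >= 3
```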
  • One of his claims is that trinary logic is "better" than binary because it uses another "natural" state of electricity: flow forwards, flow off, and flow backwards. He uses example power supplies of +3v, 0, and -3v.

    However, this is *no different* than power supplies of 0, +3v, and +6v. Shifting your voltage reference does *not* change the power consumed.

    Trinary also does not change your noise margins. Noise margins are a function of the actual circuit, and in an ideal world the noise margins on the high and low sides would be at half the representation voltage: 2.5 volts for 5-volt TTL logic, for example, but only 1.5 volts for 6-volt trinary logic. See the noise margins decrease? You would have to increase the voltage range to 10 volts to maintain the noise margins. And therefore you would increase the power consumed. And power rises as the *square* of the voltage.

    So let's see. Assuming your gates are feeding 1k resistors, binary logic would consume either 0mA (for a 0) or 25mA (for a 1). Assuming uniformly random distribution of bits, each bit would consume an average of 12.5mA.

    For trinary, and maintaining the noise margins, a 0 would consume 0mA, a 1 would consume 25mA, and a 2 would consume 100mA. Average of 41.7mA. Since each "trit" conveys 3/2 as much information as a bit, the equivalent power consumption per bit for trinary logic is 27.8mA.

    So trinary logic consumes more power per bit than binary logic.

    Bummer.
    • Sorry, but your numbers are way off. Let's ignore that it's mW, not mA, for power. Trinary logic doesn't need to use 0/5/10 V; it can use -5/0/5 V, leading to 25mW, 0mW and 25mW, resulting in an average per trit of 16.7mW, with an equivalent of 11.1mW per bit.
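    Both sets of figures follow from P = V²/R into the assumed 1 kΩ load; a short sketch makes the comparison explicit (the 3/2 information ratio is the thread's approximation; the exact value is log2(3) ≈ 1.585):

```python
R = 1000.0           # assumed 1 kOhm load from the parent comment
INFO_PER_TRIT = 1.5  # the 3/2 approximation used above

def avg_mw(levels):
    """Average dissipation in mW, assuming equally likely voltage levels."""
    return sum(v * v / R for v in levels) / len(levels) * 1000

binary_per_bit = avg_mw([0, 5])                       # 12.5 mW
stacked_per_bit = avg_mw([0, 5, 10]) / INFO_PER_TRIT  # ~27.8 mW per bit
split_per_bit = avg_mw([-5, 0, 5]) / INFO_PER_TRIT    # ~11.1 mW per bit
```

    So the stacked 0/5/10 V scheme loses to binary while the split -5/0/5 V scheme wins, which is exactly the disagreement above.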
  • by rdmiller3 ( 29465 ) on Monday November 19, 2001 @11:48AM (#2584839) Journal
    The big disadvantage of using any logic system with more than two states, electrically, is that sometimes in switching from one state to another you must go through a third state which is electrically "valid" but not the correct output for the function you're implementing.


    Electrically, implementation is inevitably binary, at its core... electrical comparisons of boundary conditions. "Trinary" is just a minimal case of "analog", with all of the same disadvantages.


    You want the same noise margins? You'll have to double your voltage. That means you're cutting your speed in half. So overall you're taking a loss, because at half speed you could have gotten two whole bits for your money instead of one lousy trit.


    Not to mention the fact that you're using more power, switching between these trinary states due to the longer transition and detection times. Oh boy! Hotter chips! Bleah!

    • The big disadvantage of using any logic system with more than two states, electrically, is that sometimes in switching from one state to another you must go through a third state which is electrically "valid" but not the correct output for the function you're implementing.

      True, but that's what clocks or asynchronous handshakes are for - because even with only two levels, there's a lot of 'Wrong' state in between.

      Admittedly, it might not be possible to define a 'gray code' signalling scheme using trinary digits, but a trinary clock signal (-1, 0, +1, 0, -1, etc.) could have definite benefits: a quad-speed data rate, and with data in trits it would be effectively a 6X interface...

      Electrically, implementation is inevitably binary, at its core... electrical comparisons of boundary conditions. "Trinary" is just a minimal case of "analog", with all of the same disadvantages.

      Say what?? Where'd this come from? Binary itself is still a minimal case of analog - just with a single breakpoint instead of two with trinary.

      You want the same noise margins? You'll have to double your voltage. That means you're cutting your speed in half. So overall you're taking a loss because at half speed you could have gotten two whole bits for your money instead of one lousey trit.

      This depends very much on the implementation, but it is NOT necessarily cutting the speed in half. Assuming that the circuit has normal RC time constants, double the swing is NOT double the lag. (It's about 1.4 times the lag.) This might not be significant in clocked circuits, and at worst, the loss is about equal to the gain achieved by trits over bits.

      Not to mention the fact that you're using more power, switching between these trinary states due to the longer transition and detection times.

      Here you may be partly correct.

      Assuming that there is a separate power supply connection for each signal state, the leakage in the drive transistors is probably going to be about double that of a normal binary output. The active power using a positive-zero-negative type signalling scheme should work out similar, since the voltage relative to ground won't be increased. The dynamic power would probably be comparable despite the larger swing, since only 1/3 of the transitions involve the full possible swing while carrying 50% more information. And as I noted in the previous point, the transition (and detection) times are also comparable on an 'amount of data transferred' basis.

      Oh boy! Hotter chips! Bleah!

      All things considered, it looks like the heat to computing power ratio is going to be similar for both. But if there truly are algorithms or applications that are more easily rendered using trits (and there may well be some), then the advantage for them may go to the trinary logic.

      There may also be some uses for trinary base computing where the storage of additional logic states is NOT an overhead. Quantum flux gates - which unfortunately can't amplify or fan out yet - can store digits as flux quanta - gates can be designed such that there is no overhead to such a device holding 2 quanta instead of one - and these chips will definitely NOT run hot. (Of course, cooling to superconducting temperatures may have its own problems... for those interested, this is a link to the RSFQ lab [sunysb.edu] pages, and a link to an item on superconducting trinary circuits [sunysb.edu]. 100+GHz on 3.5um technology.)
      • Sigh... All "logic" is essentially binary, Monseur 'Liquor'.

        The whole point of what I was saying was that if you represent logic states with voltage levels you would be throwing away a quarter of your bandwidth using three states. Here's why:

        To determine what state a 'trit' is in, TWO BINARY COMPARISONS must be made, one against each dead zone. Why, that sure looks like two whole bits-worth of information! But your trinary system doesn't allow one of the four states, and thus throws away what could have been a useful bit.

        If you make those two comparisons simultaneously to keep the same bandwidth, then you'll need more voltage spread and higher currents (and an oven mitt to handle your circuit).

        It just doesn't pay to throw out a perfectly good bit. There's ALWAYS a trade-off. If you think you're getting something for nothing, you're probably wrong.

        -Rick
  • I assume that the three states will be similar to TRUE/FALSE/NULL in SQL, and we all know what fun and twisted logic NULL can involve.

  • Just a few thoughts. (Score:3, Interesting)

    by Anonymous Coward on Monday November 19, 2001 @12:09PM (#2584962)


    Looking at the schematics, I see that it is based on an analog design style. The transistors are bipolar and there are plenty of resistors used for biasing. All in all, it looks more like an amplifier than a digital gate.

    A previous poster commented on returning to analog computing. While there are several major problems with analog computing, I want to just mention a few.

    High Implementation Cost

    Currently, resistors are considered a somewhat "expensive" item in VLSI designs, since they use a lot of area and lead to static power dissipation. Using bipolars instead of MOSFETs is probably a mistake from a fabrication standpoint, but I don't see why the schematic couldn't be modified to take this into account.

    In other words, any design using these gates would be big and power hungry. This isn't to say that a base-3 system is infeasible, only that this implementation doesn't map very well to existing technology.

    I think a MOSFET-only implementation would be required before we can take base-3 really seriously. Maybe something using depletion-mode MOSFETs would work better.

    No Component Architecture

    Analog components are difficult to interconnect. Without going into too much detail, they don't just snap together like Legos; rather, each brick must be modified slightly depending on what it connects to.

    The schematics shown also exhibit this problem; the author freely admits it. While I feel that this problem could be partially solved by automated tools, it is still a big hassle, and not just because I'm lazy. Many tools use O(n ln n) or even O(n^2) algorithms. Increasing the constant factor or adding unnecessary coupling means the tools take a very long time to finish. A Xilinx FPGA synthesis run, which is comparatively simple, can already take several hours to complete. This hurts the design cycle time, since even small changes can require a full recompile totaling hours. No telling how long it would take to make a full microprocessor - many days, I am sure.

    A true digital design, by contrast, does not exhibit this problem. Again, I don't feel that this problem is insurmountable. The problem here is all those resistors whose values must be changed. Remove them and remove the problem as well.

    Problem with interconnect

    One more problem is that in most modern designs the design area is dominated by interconnect. Active areas (made of real transistors) are connected by routing channels, and the channels are getting to be quite large. Ternary logic doesn't exactly help with this, since we now have three power rails. This is at worst a second-order problem, though, since it doesn't really increase the interconnection between any two components, just the interconnection among all the components.

    Underdeveloped logic family

    While on the topic of power rails, I can't help but wonder about the clock. It seems underutilized. What should a negative (-1) pulse on the clock do? Or the control lines on a flip-flop? Take a D-type flip-flop, for example: if load=1 loads a new value and load=0 holds the old value, what does load=-1 do? Load an inverted value, perhaps? Until flip-flop clocking behavior is defined, I don't see any complete designs coming out. These ideas probably just need some more "brain time".

    Error Correction and Aesthetics

    Lastly, let me say what I do like. The fundamental advantage of digital gates is that signals can be regenerated. In a five-volt system, if you have a 4.89 volt signal, it is probably supposed to be 5.00, so the gate boosts it up and passes it on. This means that errors do not propagate. This is the essence of digital design.

    The ternary design style we see is not incompatible with this notion. In binary, the decision is made around the transistor's meta-stability point, typically 2.5V. This means that the fundamental decision is to determine if a signal is [ s>2.5V ] | [ 0V ] | [ s<-2.5V ].
    • See Emitter-Coupled Logic. ECL operates in the linear region. That's why it needs huge power supplies and throws lots of heat. So, there are potential problems, but there is also more experience than you may have thought.
  • Machines are good at jumping through hoops and are just as happy counting in base 2 as base 22! I do think people would be better off not using base ten. For example, base 12 has the advantage of being evenly divisible by 2, 3 and 4, so you can evenly divide things into halves, thirds or quarters, which are the most common divisions. Base ten chokes on dividing things into thirds. Using base 12 or base 16 would also allow a more compact representation, which would be a big help dealing with screens which are always too small. Of course, base 16 has the major advantage of easy translation to base 2. So everyone would be comfortable with hex numbers, since those would be normal numbers. The US seems to be willing to ignore the base 10 metric system forever; maybe we can introduce a base 12 or base 16 measurement system. :-)
  • I just took a quick glance at the schematics for some simple gates in ternary logic, and it seems to me that they are quite a bit more complex than their counterparts using binary logic (i.e. they use a lot more parts to do the same thing). Therefore they are more expensive. It seems to me that that could hold up moving to ternary logic, as everything about design is geared towards making things cheaper.

    If a bit stands for "binary digit", then by analogy shouldn't a "ternary bit" be called a tit?

    "I am busily ignoring some thousand of implications I have determined to be irrelevant."
  • Hm, this seems to mean that one is never late learning Intercal. B-)

    PLEASE DO. Argh.
  • 13*3/20*2 (Score:3, Informative)

    by ahde ( 95143 ) on Monday November 19, 2001 @01:57PM (#2585567) Homepage
    Consider again the task of representing all numbers from 0 through decimal 999,999. In base 10 this obviously requires a width of six digits, so that rw=60. Binary does better: 20 binary digits suffice to cover the same range of numbers, for rw=40. But ternary is better still: the ternary representation has a width of 13 digits, so that rw=39. (If base e were a practical choice, the width would be 14 digits, yielding rw=38.056.)

    So the theoretical gain is less than a 2% efficiency increase. But what they don't say is that it takes two operations to distinguish a ternary value: v < 1 ? 0 : v < 2 ? 1 : 2, as opposed to binary's v < 1 ? 0 : 1.

    In the real world you can't do "equal to," because there is the potential for infinite precision: you can always measure more closely. And you can't measure on/off, because then your circuit is dead and no further calculations can be done. You can't use positive/negative, because you want three states. So you need something that can detect voltage levels. The problem is that if such a device could distinguish 3 different voltage levels, and do so more efficiently than two binary operations (see the ternary notation above :), it could also be used to perform 2 binary operations, which doubles the efficiency of the binary system: rw = 40 / 2 = 20, almost twice as efficient as the ternary circuit.
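    The radix-economy numbers quoted from the article check out (sketch with illustrative names; the width is the smallest w with radix^w ≥ 10^6):

```python
import math

def digits_needed(radix, target=10**6):
    """Smallest width w such that radix**w >= target."""
    w, capacity = 0, 1
    while capacity < target:
        capacity *= radix
        w += 1
    return w

def rw(radix):
    """The article's cost metric: radix times width."""
    return radix * digits_needed(radix)

assert rw(10) == 60  # 6 digits
assert rw(2) == 40   # 20 bits
assert rw(3) == 39   # 13 trits
assert abs(rw(math.e) - 38.056) < 0.001  # 14 "digits" of base e
```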
      it takes two operations to distinguish a ternary value: v < 1 ? 0 : v < 2 ? 1 : 2, as opposed to binary: v < 1 ? 0 : 1

      You're applying binary logic to ternary digits. I would foresee that the logic would be something like this: cmp(v) ? -1 : 0 : 1;

      C's '?' operator inherently assumes a binary decision. So I think that if we were going to implement a trit-based computer, we would have to slightly change programming language structures.

      Anyway, it won't happen, because people don't like radical changes. Besides, a lot of investments already take place in bits...

  • Base e (Score:3, Interesting)

    by Animats ( 122034 ) on Monday November 19, 2001 @02:18PM (#2585667) Homepage
    There's something to be said for a base e representation, where numeric values are represented as logarithms. Audio data should be represented logarithmically, because 16 bits used linearly doesn't offer much dynamic range. With a linear representation, on soft passages most of the high bits are 0. It's possible to end up with 6-bit or 4-bit audio on some quiet sections of classical music. And the big peaks in rock music have to be scaled down during mixing. Much of CD mastering revolves around cramming the dynamic range into a limited space. With logarithmic audio, that's not a problem.

    A friend of mine who's into digital pro audio looked into building logarithmic audio gear, but the recording industry went to 24-bit linear instead, which provides more headroom.
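    The standard worked example of a logarithmic audio scale is μ-law companding from 8-bit telephony (G.711); a sketch shows how it rescues quiet signals that would waste most of a linear word:

```python
import math

MU = 255.0  # the mu-law constant used in 8-bit telephony (G.711)

def mu_law(x):
    """Compand a linear sample in [-1, 1] onto a logarithmic [-1, 1] scale."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

# A signal at 1/1000 of full scale leaves the top ten bits of a 16-bit
# linear word unused, but companding moves it to about 4% of full scale:
quiet = mu_law(0.001)  # ~0.041
loud = mu_law(1.0)     # full scale maps to 1.0
```

    This only illustrates the principle the comment describes, not the 24-bit linear gear the industry actually adopted.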

  • by brer_rabbit ( 195413 ) on Monday November 19, 2001 @02:20PM (#2585672) Journal
    One big disadvantage of trinary is the number of transistors involved. I don't know if the author's schematics were minimal or not, but his inverter required 2 transistors and 5 resistors. A standard CMOS inverter requires 2 transistors and *zero* resistors. On top of that, the transistors were BJTs (Bipolar Junction Transistors), not the CMOS devices that are most common today.

    The other functions will take a lot more real estate if realized in trinary too. The Full Adder he had listed has 20 gates of varying complexity, that would take at least 2 transistors per gate, probably resistors as well considering his schematics. A binary/CMOS implementation can be done in about 30 transistors.
  • ...my gallium arsenide quantum ternary clockless computer.

    It's going to kick some wicked ass, especially with those 12-gigabyte multilayer CD-ROMS and fungus-based hard-drives.
  • The economic claim for trinary computation is bogus, because we do not know that the cost of building a width-w, base-r device is proportional to r*w. The cost is likely to increase more than linearly with the base. For example, if it is r^k*w, the "optimal" base would be r=exp(1/k). For k=2, r=1.65, and for k=1.5, r=1.95. In these cases binary or a _smaller_ base is better.

    However, there would be a real benefit in using trinary logic, as it can be used to represent true, false and unknown. In many real situations a lot of extra logic is devoted to dealing with unknowns, because the binary system forces a decision. This would be cleaned up quite a bit in a trinary system. In fact, the ease of rounding using truncation is a nice little demonstration of this property.
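    The optimum r=exp(1/k) for the cost model r^k*w can be confirmed numerically (sketch with names of my own choosing; the width is treated as a continuous logarithm):

```python
import math

def cost(r, k, target=10**6):
    """Cost model r**k * w, with continuous width w = log(target)/log(r)."""
    return r ** k * math.log(target) / math.log(r)

def best_radix(k):
    """Scan radices 1.1..5.0 for the minimum; the closed form is exp(1/k)."""
    candidates = [r / 1000 for r in range(1100, 5001)]
    return min(candidates, key=lambda r: cost(r, k))

assert abs(best_radix(2) - math.exp(1 / 2)) < 0.01      # ~1.65
assert abs(best_radix(1.5) - math.exp(1 / 1.5)) < 0.01  # ~1.95
assert abs(best_radix(1) - math.e) < 0.01               # linear cost gives base e
```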
  • For some really, really interesting stuff, look into using two bases for computation.

    One of the professors in our department has been doing some heavy research into computations using more than one base. The idea goes like this:

    1. Select two bases that will be able to represent your expected number space best. He started out using 2 and 3, but you can easily use 2 and 7, 2 and 13, 2 and 9973, whatever floats your boat. In fact, you can use any real number as your second base.
    2. Map your real numbers to the DBNS number space thusly:
      • 6 = 2^1 * 3^1
      • 18 = 2^1 * 3^2
      • 231.67 ~= 2^3 * 3^3.0637
      Keep in mind that there are many possibilities for your mapping (he's developed optimizations for finding good mappings)
    3. Exploit the exponential nature of the system! A multiplication is now a simple addition of exponents. Division is a subtraction.
    4. Keep in mind that huge numbers (or huge precision) can be maintained with a small number of bits in the exponents. For example, the number 105413504 (2^7*7^7) requires only two bytes. In binary it requires 27 bits to be represented.

    Obviously this isn't a universal solution, but think about DSP hardware, where multiplications are expensive and needed all the time. Not to mention exponentiation for cryptography. Also, this brief explanation doesn't do justice to the full potential/applications of DBNS. A lot of work has gone into it.
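    The arithmetic in the mapping above can be sketched for the single-term case (a toy only; real DBNS uses sums of such terms plus the optimization machinery mentioned, and the helper names are mine):

```python
def dbns_value(rep):
    """Value of a single-term double-base pair (a, b) = 2**a * 3**b."""
    a, b = rep
    return 2 ** a * 3 ** b

def dbns_mul(x, y):
    """Multiplication is component-wise addition of exponents."""
    return (x[0] + y[0], x[1] + y[1])

six = (1, 1)       # 6  = 2^1 * 3^1
eighteen = (1, 2)  # 18 = 2^1 * 3^2
assert dbns_value(dbns_mul(six, eighteen)) == 6 * 18  # 108 = 2^2 * 3^3

# The storage claim, with bases 2 and 7: the exponent pair (7, 7) fits in
# two bytes, while plain binary needs 27 bits.
assert 2 ** 7 * 7 ** 7 == 105413504
assert (105413504).bit_length() == 27
```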

    If you want to find out more about DBNS, there is a primer at www.rcim.ca/Research/Video_Rate/DBNS/ [www.rcim.ca], miscellaneous papers at people.atips.ca/~eskritt [atips.ca] and a collection of a few published papers at www.atips.ca/research [atips.ca]. Also, some older presentations are archived at wooster.hut.fi/geta/courses/graham/Applications/ [wooster.hut.fi].

    Disclaimer: I'm the web guy for our research group at the U of Calgary. The guy who came up with DBNS is a professor here (Dr. V. Dimitrov [atips.ca]).

  • Impractical circuits (Score:3, Informative)

    by markmoss ( 301064 ) on Monday November 19, 2001 @04:26PM (#2586168)
    The last time ternary logic came up, I was disappointed to see no proposed schematics. Now there are schematics [trinary.cc], but I'm still disappointed. One thing is that they designed with bipolar transistors rather than CMOS -- you cannot put more than a few thousand bipolar transistors on one chip without serious heatsinking... Beyond that, these designs lack quite a lot in speed, power consumption, and reliability as compared to even the 7400-series of TTL bipolar logic chips of the late 60's. And the first one I looked at doesn't even work.

    Their ternary inverter is simply a two-transistor inverting _analog_ amplifier running on +/-3V supplies. If the input is -, Q2 turns on, bringing the base of Q1 low, turning Q1 off, so R2 pulls the output (which isn't explicitly shown) to the + rail. If the input is +, Q2 is off, and apparently this circuit depends on leakage to then bias Q1 on. This brings the output almost to the - rail. So it would work as a binary inverter. It's not nearly as good as a
    TI 7404 [ti.com] (see page 2). The major difference is that R2 was replaced by a transistor, which turns on for high. This speeds up the low-to-high transition, since you get the full output current of the transistor until the output node is charged up. It also saves power, because one
    output transistor is always off and the other always on, so when not switching only leakage currents flow at the output. (This two transistor output is called a "totem pole", and CMOS similarly depends on transistor pairs, one always off so little current flows.) Two more intermediate transistors are added, to control the top transistor on the totem pole and to reduce the resistor count. (On-chip, resistors are not cheaper than transistors.) But if you used it as a binary circuit, trinary.cc's inverter is basically the stripped-down ancestor of the 7404 circuit.
    As a trinary circuit, it also has to take a 0V input and output 0V. This inverter does not do this reliably. It probably could be made to work by adjusting the resistor values until 0.0V in gave 0.0V out, but warm or cool the transistors a few degrees, and the amplifier bias will shift so that the output swings to the + or - rail. When you are trying to put the mid-level through it, you are running it like an analog amplifier, and analog amplifiers are unstable without negative feedback.

    Nor would adding a few transistors and a negative feedback loop to stabilize it make it work well enough. A trinary inverter should take an input that is not right at any logic level, decide which level is closest, and output the corresponding nominal voltage. For highs and lows (2 and 0), it does that, since it pins the output to the opposite rail. But even if you can be sure that 0.0V in = 0.0V out, with a circuit that is basically an analog amp, -0.1V in will give more than +0.1V out. So a chain of gates would allow the logic levels to get worse at each gate, until the mid-level became misinterpreted as + or -. To restore the mid-level would take a much more complicated circuit. I lay no claims to being a good designer at the transistor level, but I can't see any possibilities that are not nearly twice as complex as the corresponding binary circuits.
    • Before everyone picks apart what is presented on the website, I wanted to explain a little about what's there...

      Don't focus on whether the transistors are BJTs or how many resistors there are. When you put this on a chip, everything changes. It gets changed to CMOS, NMOS, or something. Resistors are gone. All the familiar 2NXXXX transistors are basically gone. You use a transistor library customized for the feature size and process and other odds and ends.

      What is important is that the circuits are simple enough to put into a SPICE simulation, and that they use parts that are already in your library, so they can be independently confirmed or improved. It's much more fun to play with things without having to drop all the way down to the foundations if all you want to do is study one aspect.

      The transistor models are not ideal... I'll be the first to admit it. What my goal is really about is getting people thinking and sharing knowledge.

      I would be more than happy to publish any improved transistor circuits that uses parts that would be in a common spice library. Anyone that is sincerely interested, please contact me through the address listed at the website.

      Cheers,
      -Steve Grubb
      • Steve, my one issue with that is until the circuits are considerably more real-world, there is no basis for cost comparisons between binary and trinary circuits. And yet last time trinary was on /., there were those who didn't have any circuits to point at, yet claimed that the circuits wouldn't be significantly more complex. Since these people obviously lacked the experience to tell a practical circuit from a starting point, I figured I'd better cut them off in advance...

        Just remember, in your spice modeling, try varying the transistor characteristics. It's not a real circuit until it can shrug off immense variations in the silicon. 10 degrees C temperature change can change beta by a factor of 2, for instance. Variation from wafer to wafer is even greater...
      • Unfortunately, RSFQ (Rapid Single Flux Quantum) [rochester.edu] circuitry is beyond the scope of SPICE simulations, but it appears to me to be a natural fit to the trinary logic paradigm.

        Some circuits have already been physically built and tested - and at least one person feels that they lend themselves to tristate logic gates [sunysb.edu].

        The basic principles are already in the category of proven technology - ever heard of a SQUID sensor?

        Josephson junctions work equally well for either positive or negative currents - and so do magnetic flux quanta. (But this circuitry has to be the ultimate in low-power computing - you can't get much lower discrete amounts of energy than a single quantum of magnetic flux.)
