Beyond Binary Computing? (412 comments)

daksis writes "Non-base-two computing is nothing new, but it is an idea that, for various reasons, never really caught on. Embedded.com is running an op-ed piece that asks if hardware and software engineers are ready to move to ternary or quaternary logic. A move to multi-valued logic provides more computational capability without the standard increase in die size or transistor count. Is the need to make do with current fabrication technology enough to drive the move to multi-valued logic? Or will Moore's Law continue without the need for doing more with less silicon-based real estate?"
This discussion has been archived. No new comments can be posted.

  • Truth Tables * n? (Score:5, Interesting)

    by RobertB-DC ( 622190 ) * on Tuesday August 26, 2003 @05:49PM (#6799244) Homepage Journal
    I learned truth tables when I was a kid, and it was pretty simple:


    a and b = ?
    -----------
    0 and 0 = 0
    0 and 1 = 0
    1 and 0 = 0
    1 and 1 = 1


    But how would you make an AND gate for a trinary system? Would it be like multiplication with signs?


    a and b = ?
    -----------
    - and - = +
    - and 0 = 0
    - and + = -
    0 and - = 0
    0 and 0 = 0
    0 and + = 0
    + and - = -
    + and 0 = 0
    + and + = +
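    That "multiplication with signs" table is exactly the product over {-1, 0, +1}. A quick sketch (illustrative only; this is one of several possible ternary ANDs, and min() is the other common choice):

```python
# Ternary "AND" as sign multiplication over {-1, 0, +1}:
# (-)(-) = +, (-)(+) = -, and 0 absorbs everything,
# which reproduces the parent's nine-row table.

def tand(a, b):
    """Ternary 'AND' as the product of two signs."""
    return a * b

states = (-1, 0, 1)
for a in states:
    for b in states:
        print(f"{a:+d} and {b:+d} = {tand(a, b):+d}")
```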


    And then quaternary... if it's just pairs of Boolean digits, no problem. It's just a four-input AND:


    a and b = ?
    -----------
    0x and 0x = 0
    0x and 1x = 0
    1x and 0x = 0
    x0 and x0 = 0
    x0 and x1 = 0
    x1 and x0 = 0
    11 and 11 = 1


    Or is the whole concept of an AND (OR, NAND, NOR, XOR) gate a relic of my Boolean thinking?
    • Re:Truth Tables * n? (Score:5, Interesting)

      by 26199 ( 577806 ) * on Tuesday August 26, 2003 @05:55PM (#6799319) Homepage

      You're a relic, I'm afraid ;-)

      Binary operations can be carried out by considering whatever values you have to be binary numbers, and working from there. Binary operations would probably have to be implemented like that somewhere, because they're quite useful...

      Implementing binary operations using any base which isn't a power of two would, I suspect, be extremely painful...

      But arithmetic and other operations wouldn't have to be based around binary logic; it seems like the circuits might get horribly difficult to reason about, but with decent computerised tools that's hardly a problem...

      • Re:Truth Tables * n? (Score:3, Interesting)

        by HiThere ( 15173 ) *
        O, I don't know, what if the three values were:
        Yes, No, and Maybe?

        And for four values:
        Yes, No, Maybe, and error?

        Much beyond that and you start thinking in coarse probabilities. Go very far in that direction, and I will argue that you should eliminate the yes and no values. (I.e., nothing is either certainly true or certainly false.)

        There are probably other ways to parse the bit patterns. (I.e., I know there are several, but I don't remember them.) But that one makes sense to me without (much) translation.
      • Re:Truth Tables * n? (Score:2, Interesting)

        by RLW ( 662014 )
        The trinary algebra would work very similarly to binary or Boolean algebra. Rank the values as true, ambiguous, and false. Then a TAND (trinary AND) would take on the value of the lowest operand. The TOR would take on the highest value, etc.

        However, the article was really talking about gaining extra memory capacity by using greater-than-bivalent-state memory. So then you would have some sort of mapping function - as described in the article - in which case, from the user/developer standpoint, nothing would change.
      • Fuzzy Logic (Score:4, Interesting)

        by Corpus_Callosum ( 617295 ) on Wednesday August 27, 2003 @05:21AM (#6803057) Homepage
        You're a relic, I'm afraid ;-) ... Binary operations can be carried out by considering whatever values you have to be binary numbers...

        Heh.. I hate to break this to you, but your thinking is a bit behind the times as well...

        Multivalued logic = Fuzzy logic

        The most common AND and OR operations in fuzzy logic are min() and max(), which together form the basis of a De Morgan algebra (only the law of the excluded middle [A AND NOT A = 0, A OR NOT A = 1] must be thrown out):

        AND(A,B) = MIN(A,B)
        OR(A,B) = MAX(A,B)
        NOT(A) = 1-A
        Generally, a ternary logic is composed of { 0, 0.5, 1 }, where each value is the "degree" or "belief" in TRUE.

        0 = FALSE
        0.5 = UNKNOWN
        1 = TRUE

        Some of you may recognize this from SQL (yes, SQL does actually have a simple ternary fuzzy logic base).

        The truth table ends up looking like this:

        0 AND 0 = 0
        0 AND 0.5 = 0
        0.5 AND 0.5 = 0.5
        0.5 AND 1 = 0.5
        1 AND 1 = 1

        0 OR 0 = 0
        0 OR 0.5 = 0.5
        0.5 OR 0.5 = 0.5
        0.5 OR 1 = 1
        1 OR 1 = 1

        NOT 0 =1
        NOT 0.5 = 0.5
        NOT 1 = 0

        If we move from ternary to any other precision, the rules stay the same and the table is easily derived (min, max, 1-). Generally, it is preferred to always have a 0.5 value, because UNKNOWN is actually a useful truth indicator. The next set after ternary that makes sense is not 4-valued logic (because it would exclude UNKNOWN), but 5. For instance:

        0 = FALSE
        1/4 = UNLIKELY
        1/2 = UNKNOWN
        3/4 = LIKELY
        1 = TRUE

        At this point, some truly interesting approximate reasoning is possible, although going to 15 values or (ideally) handling multivalue logic as analog until storage/retrieval would be much better. Approximate reasoning is one of the many things that fuzzy logic makes possible. Essentially it is the application of fuzzy logic to determining beliefs where certainty is not important (and in fact the lack of certainty is where the power of such a system comes from - being able to continue computing without full knowledge, only belief)...
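        The min/max/complement connectives above are easy to play with; a minimal sketch over the ternary truth set (names are illustrative):

```python
# Fuzzy connectives over {0, 0.5, 1}: AND = min, OR = max, NOT = 1 - x.

def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1 - a

values = (0, 0.5, 1)   # FALSE, UNKNOWN, TRUE
for a in values:
    for b in values:
        print(f"{a} AND {b} = {f_and(a, b)}  |  {a} OR {b} = {f_or(a, b)}")

# The law of the excluded middle indeed fails at 0.5:
print(f_or(0.5, f_not(0.5)))   # 0.5, not 1
```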

        The idea of analog signals flying around on a semiconductor instead of digital ones, yet time-discrete in the same way as digital signals, is quite interesting and could probably be done quite easily. Anyone have any ideas on how a min(A,B), max(A,B) and (1-A) operation might look on silicon?
    • Re:Truth Tables * n? (Score:5, Informative)

      by stratjakt ( 596332 ) on Tuesday August 26, 2003 @05:57PM (#6799331) Journal
      The whole concept of AND/OR/NAND is a Boolean construct. The gates define the 16 functions that can be expressed over two Boolean variables. Ternary or quaternary logic would have more basic functions, and different ones, but it would be easy to implement Boolean logic as well (like your quaternary example).

      Try reading this [att.net] for a quick primer.

      It won't happen all at once; it's a different paradigm with a definite learning curve, like the difference between imperative, functional and object-oriented programming. But it has definite advantages, beyond the Moore's Law tripe.
      • Re:Truth Tables * n? (Score:5, Interesting)

        by Saeger ( 456549 ) <farrellj@nosPAM.gmail.com> on Tuesday August 26, 2003 @06:42PM (#6799792) Homepage
        But it has definite advantages, beyond the Moore's Law tripe.

        Tripe? Where do you get that from? Moore's observation about the exponential growth of transistor count is just a specific case of the more general Law of Accelerating Returns [kurzweilai.net].

        Exponential growth isn't tripe-- it's a historical trend that hasn't been broken in thousands of years.

        --

        • by quinkin ( 601839 ) on Tuesday August 26, 2003 @11:26PM (#6801625)
          (Moore's Law = A Law) => Tripe

          (Exponential Growth = Unbreakable) => Tripe

          I hate to add fuel to this sort of fire, but is Moore's "Law" a law, or an "observation"? They are not equivalent.

          "...historical trend that hasn't been broken in thousands of years." - What codswallop. In a theoretically infinite universe this may be the case, but real life is never that simple. Exponential growth of velocity - diminishing returns as you approach the speed of light. Exponential population growth - always a ceiling....

          I could go on and on - but I won't.

          Q.

          • is Moore's "Law" a law, or an "observation"? They are not equivalent.

            It's closer to an observation than a law, since scientific laws are usually mathematical descriptions of our worldly observations. E.g., I can observe that an apple falls "faster and faster", but Newton's laws describe that motion more precisely (and Einstein's theories go a step further by attempting to explain, rather than just describe, what he observed).

            Exponential growth of velocity - diminishing returns as you approach the speed of light

      • by quinkin ( 601839 ) on Tuesday August 26, 2003 @07:59PM (#6800385)
        I have always felt (though my models have failed miserably so far) that combining binary with uncertainty to create an indeterminate ternary logic would be extremely useful for many rule-of-thumb applications (i.e. pattern matching, fuzzy logic).

        Picture a system with:
        1/3 power = 0
        2/3 power = alpha
        3/3 power = 1

        Now consider the case of recursion where each iteration must be deferred until the one above returns - by using uncertain values instead, you may be able to perform a range of forward-possibility operations upon the as-yet-indeterminate numbers.

        When the higher-order recursion eventually (let's assume) returns a value that determines the alpha value, all that is required is to create a specific instance of the generalised results.

        I like the concept - and it seems it could easily be integrated on the same die as a standard ternary chip.

        Q.

        The problem with this is that fuzzy logic is no more ternary than it is binary. Fuzzy logic is about effectively weighing options. With ternary logic, you've only increased your options by 50%. Fuzzy logic is generally probabilistic... which means it's nicest when utilized with sufficiently large integers or, more importantly, huge floating point numbers. Like a nice 64-bit quad.

          Consider a typical "fuzzy" logic problem of when to stop a car. You want to weigh variables like the speed of the car, the amount of force to apply, the distance to the stopping point and other factors like the existence of pedestrians (generally, if there are people within 20 feet of your stopping point, like say at a bus stop near a stop sign, you'll want to slow down quicker at first but roll longer, to decrease the effect of a fluke accident).

          Lots of variables. Lots of choice. Lots of probability to weigh. Having an extra option out of 3 does not help you. Having 64 bits to work with does.

          I have a hard time coming up with problems in my line of software dev in which ternary or quaternary logic is any more useful than nested binary logic or some fun probability and calculus. Mostly because it's rare that I care about anything other than STOP or GO, ON or OFF. About the only time I do care is when I'm dealing with a database (YES, NO or NULL [no data])...all the rest of the time, alternative options are best handled with an enumerated type or a nice exception.

          Anyway, it's all well and good to talk about ternary computing being 'faster' with less overhead, but it's never really going to take off. It will take at least an extra year to train engineers to use the new logic effectively and to learn the tricks... and in that year, binary computing will have doubled. And when you live in a world where most software isn't optimized anyway... waiting for a slightly faster logic system that 9/10 of programmers will merely treat as binary, because it's easier to understand and more comfortable, is a waste of everyone's resources.

          Even the terminology is bad. True, false, alpha. Ugh. If P is sort of true, then kind of do q.
    • Base-10 math? Not bits, but actual digits? All the benefits of BCD with none of the overhead.

      On the other hand, 7-bit ASCII would now need three digits -- 1,000 values, wasting 872 of them, far more than the entire value-space of a byte ("bydte"?).

      Don't even begin to ask UNICODE to retool for this.

    As the Doc said to Marty McFly, "You're not thinking fourth-dimensionally!"
    • Binary logic (Score:3, Interesting)

      Actually yes, Boolean functions such as AND, OR, etc., typically accept binary input, but logic tables can be created for functions with ternary (or quaternary, etc.) input.

      It's hard to break out of binary thought since the traditional AND/OR in computer science mimic the English-language usage of these terms, but in reality one could create any logic table and assign it a name. The fact that AND/OR have clear English meanings confuses the issue when we try to apply them to ternary input; we might as well invent entirely new names.

      • Re:Binary logic (Score:5, Informative)

        by maraist ( 68387 ) * <michael.maraistN ... m ['AMg' in gap]> on Tuesday August 26, 2003 @08:11PM (#6800465) Homepage
        Hence, making such transistors would allow chip makers to make huge strides in speed without having to handle the engineering problem of packing in more transistors.

        No, they'd just trade the engineering problem of packing more bits into one space for that of finding ways to unambiguously interpret a value.

        See, the whole power of binary (pardon the pun) has always been its wonderful noise-suppression ability. Imagine a copper wire running 2 miles with either a 5V or a 0V signal on it. You can apply a simple analog device (say, a BJT transistor amplifier) that utilizes an exponential function switching at some precisely known voltage (we'll call it 2.5 volts). Voltages above and beneath this threshold are amplified to either 5V or zero exponentially, such that only voltages within a small delta of the threshold voltage have any ambiguity.

        Thus you can determine the likelihood of noise on a line, then put digital amplifiers every so often such that no amount of noise will be sufficient to raise or lower the voltage into the ambiguous region.

        The same is true even on microscopic wires; fanning transistor outputs out to too many transistor inputs introduces noise on the wire. CPUs, not surprisingly, utilize "buffers", which are trivial two-transistor logic gates that pass the output to the input. This cleans the signal just as the higher-powered digital amplifiers do.

        While I'm not sure which particular technologies are being considered in this ternary/quaternary logic system, if it is nothing more than a sub-division of voltages, then it's doomed to failure for general processing, and possibly even for simple memory storage. First of all, you introduce another whole region of voltage ambiguity. The only way to maintain your safety zone is to up the voltage or provide more amplification stages to guarantee a cleaner signal. But the trend has been to decrease, not increase, voltages (lower power consumption, smaller devices, whatever), and adding additional logical devices merely to interpret a signal means that your bit density is going to suffer... exactly impeding its whole point.
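        The safety-zone squeeze described above can be put in rough numbers. A back-of-the-envelope sketch (the 3.3 V swing and the half-spacing margin rule are illustrative assumptions, not from the article):

```python
# With a fixed supply swing, m signal levels sit swing/(m-1) apart,
# so the worst-case noise margin before a level is misread is
# roughly half that spacing. Doubling the states per wire more than
# halves the margin relative to binary.

def noise_margin(swing_volts, levels):
    """Approximate worst-case noise margin for m-level signaling."""
    return swing_volts / (2 * (levels - 1))

for m in (2, 3, 4, 8):
    print(f"{m} levels on a 3.3 V swing: ~{noise_margin(3.3, m):.2f} V margin")
```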

        Likewise for denser bit storage: since the probability of bit error necessarily increases (all else being equal), you're not as likely to achieve as small or as dense a physical digit. So unless you can at least achieve less than 1.5x logical-digit spatial expansion (due to error-compensating material), you haven't gained anything by going to a ternary system.

        Lastly, the concept of >2-digit computing already has a particular niche where its trade-offs are acceptable... think of 56k modems, which incorporate hundreds of "values" for a single digit. They utilize a full 256 voltages for each anticipated time-slice. Of course the analog modem can't anticipate the exact sampling point where the analog phone line gets digitized (happening to transition at that point can be bad), and there is usually a tremendous amount of line noise. But what modems wind up having to do is group several time-slices together and produce a macro-digit with a boatload of error-correcting pad values. And that's not even enough; the entire packet is still likely to have misrepresented digits, so CRCing and thereby retransmission is often necessary.

        All this effort is worth it because we physically realize extra bandwidth... but such a "probabilistic" solution (prone to bit error) is unacceptable at the lowest level of computation. You can't get any less error-prone than binary (given the above discussion), and mathematicians have shown that base e (2.718...) is the optimal number to balance the number of values per digit against the number of digits in a given number. (Analogously demonstrated by considering an automated phone system where you have to wait to hear 10 possible choices per menu (the base 10), and you have to go through k menu levels to get what you want. The metric is the average wait time using different bases, and mathematically the shortest wait time was achieved with the base closest to e.)
      • Re:Binary logic (Score:5, Interesting)

        by The_Laughing_God ( 253693 ) on Wednesday August 27, 2003 @12:52AM (#6802140)
        While higher-base number systems might have "special case" uses someday, it's important to understand that they are mere steps on the continuum to analog. This trivial seeming fact has some surprising consequences.

        Binary, being the lowest base that can represent any integer mathematics, is not a point on the continuum; it is a defining terminus of the continuum, and has many special properties. Termini (endpoints) often do, especially in one-ended ranges (e.g. base two is the lowest number of states, but in theory analog has an infinite number of states, and any real-world instantiation of an analog computer can only be an approximation). One example of an open-ended range where the sole endpoint has unique properties is the prime numbers (which, properly, must be positive integers): the lowest prime, 2, has so many unusual properties that it is often excluded or dealt with as a special case, while (as Euclid proved) there is no highest prime.

        This may sound trivial or like mealy-mouthed gibberish, so here's an example:
        In every higher-base computer, division is computationally 'harder' than multiplication... except in base two!

        Any algorithm for general division (by an arbitrary divisor) involves more multiplications (and then subtractions, according to the results of implicit trial-and-error subtraction [branch points]) than a corresponding extended ('long form') multiplication. The reason this does not occur in base two is that multiplication by the two binary digits, one and zero, is so trivial that it does not need to actually be performed - a compare and branch suffices, which corresponds to the compare and branch preceding the additions of a binary multiplication.
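        The "compare and branch suffices" point shows up directly in shift-and-add multiplication. A sketch (illustrative, not from the post):

```python
# Base-2 long multiplication: each step only tests the current bit of b.
# Multiplying by the digit 1 is just an add; by 0, a no-op. No per-digit
# multiply is ever performed, which is the special property of base two.

def binary_multiply(a, b):
    product = 0
    while b:
        if b & 1:          # current binary digit is 1: add the partial
            product += a   # (digit 0 contributes nothing)
        a <<= 1            # shift to the next binary place
        b >>= 1
    return product

print(binary_multiply(42, 17))   # 714
```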

        This is pretty special. While multiplication and division are inverse functions, fully generalized division is always 'harder' than generalized multiplication. This is quite unlike, say, subtraction, where a 'subtraction circuit' can be constructed to perform subtraction exactly as easily, and in roughly the same number of, say, transistors, as an adder.

        Binary math has many special properties in group and number theory. We'd lose those in higher base math, and we wouldn't gain new properties to make up for that loss. Two, the low bound, is special.

    • Re:Truth Tables * n? (Score:5, Informative)

      by Maimun ( 631984 ) on Tuesday August 26, 2003 @06:16PM (#6799540)
      I have studied a little multi-valued logic. In m-valued logic: AND is minimum, OR is maximum, XOR is complement modulo m. A friend of mine who was doing testing of multi-valued circuits (purely theoretical work, of course) said that some phenomena are seen "more clearly" when the base is bigger than 2. HTH.
      • XOR is complement modulo m
        Sorry, a busy day. Negation is complement mod m. I can't remember about XOR.

        Also, in the binary case, AND and NOT (say) are a basis; any other function can be expressed in those. I dunno if that holds for m-valued logic.
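        The m-valued connectives described above can be sketched over the digits {0, 1, ..., m-1}; negation here is the digit complement (m-1) - x, one reading of the parent's "complement mod m", and it reduces to ordinary NOT when m = 2 (all names illustrative):

```python
# m-valued connectives: AND = min, OR = max, NOT = (m-1) - x.

M = 3  # ternary

def mand(a, b): return min(a, b)
def mor(a, b):  return max(a, b)
def mnot(a):    return (M - 1) - a

for a in range(M):
    print(f"NOT {a} = {mnot(a)}")
print(mand(1, 2), mor(1, 2))   # 1 2
```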

    • Re:Truth Tables * n? (Score:5, Interesting)

      by mindriot ( 96208 ) on Tuesday August 26, 2003 @07:03PM (#6799974)

      I think the whole point is not about changing Boolean logic, but merely about changing the representation of numbers, such as considering a number as octal and thinking of the values 0..7 as different voltages. Building an adder of course requires new logic circuits, but no one will take away Boolean logic from you.

      Besides, there exist many non-binary logic ideas with AND/OR etc. operations (such as the ternary Lukasiewicz logic), even continuous logic (see, for instance, the lecture slides here [uni-karlsruhe.de] -- German, unfortunately), but they are /not/ Boolean, as they cannot satisfy the Boolean axioms.

      So, for you writing software, nothing changes really... but internally, numbers would be represented differently. (Of course, when switching a whole CPU to n-valued calculation, you still need a way to do simple Boolean calculations since that is needed for conditionals.)

    • And then quarternary... if it's just pairs of Boolean digits, no problem. It's just a four-input AND

      While drunk once in my youth, I came up with diaboolean algebra.
      Instead of the boolean true/false, you have true, false, maybe and maybe not. The latter two are equal in interpretation (uncertainty), although with different values.

      diaboolean 0 = boolean 00 = false
      diaboolean 1/3 = boolean 01 = maybe not
      diaboolean 2/3 = boolean 10 = maybe
      diaboolean 1 = boolean 11 = true

      Of course, NOT maybe gives maybe not.

  • by Anonymous Coward on Tuesday August 26, 2003 @05:52PM (#6799273)
    The Star Trek chronicles prove we will be using a quaternary system. How many gigaquads of hard drive storage do we need, anyway?
    • > The Star Trek chronicles prove we will be using a quaternary system. How many gigaquads of hard drive storage do we need, anyway?

      A "quad" does not refer to "quaternary", but is just a unit of storage space. It is not known how many bytes are in a quad.

      It is common knowledge among trekkies that the term was invented specifically to avoid describing the data capacity of Star Trek's computers in 20th-century terms. It was feared by technical consultant Mike Okuda that any such attempt would look foolish in just a few years.
      • A "quad" does not refer to "quaternary," but is just a unit of storage space. It it not known how many bytes are in a quad.

        <voice style="comic-book-guy">
        Of course "quad" is defined...look it up in the TNG tech manual [amazon.com].
        </voice>

        (I'd look it up, but mine's at home. The implications of these statements on my having a life are an exercise left for the reader.)

      • It was feared by technical consultant Mike Okuda that any such attempt would look foolish in just a few years

        And that was very wise of him. Remember Max Headroom [techtv.com]? Max only occupied about 64 MB of RAM. Of course when that show was made, typical home computers had 64K of RAM. Supercomputers [ucar.edu] of the time had 64 MB of RAM.

      Number of girls, times number of possible different poses, times average jpeg file size for a pr0n image...

      about that much storage we need ;)
  • Qubits (Score:2, Interesting)

    Aren't quantum computers built around quaternary logic using qubits instead of bits?
    • Re:Qubits (Score:3, Informative)

      by 26199 ( 577806 ) *

      Er. That's q for quantum, not q for quaternary.

      I hope that was a joke :-P

    • Re:Qubits (Score:3, Informative)

      by Anonymous Coward
      No.

      Qubits are bits which can be zero, one, or both zero AND one at the same point in time (although, in order to be measured, they must collapse down and become either zero or one).
    • I was wondering while reading this: doesn't it kind of ignore the fact that some of the biggest breakthroughs are more likely to come from things like quantum computing, or even DNA-based processing techniques?
      It strikes me that closing with "Or will Moore's law continue without the need for doing more with less silicon-based real estate?" just pushes aside those other (to me, more likely) possibilities in order to make this seem like an important article.
      I'm not saying it's not, a
  • Trinary Computing (Score:5, Informative)

    by Liselle ( 684663 ) <slashdot@NoSPAm.liselle.net> on Tuesday August 26, 2003 @05:54PM (#6799301) Journal
    Didn't the Soviets [icfcst.kiev.ua] already do this? I don't remember it catching on very splendidly, though I guess that can be chalked up to the limitations of the times.
    • Re:Trinary Computing (Score:5, Interesting)

      by DataPath ( 1111 ) on Tuesday August 26, 2003 @06:25PM (#6799630)
      The reason for doing work in trinary computing is that it is closest to the theoretically optimal computing base. The reasoning was something like this:

      Representations of numbers in a particular base have two defining characteristics - the number of values that can occupy a digit (m), and the number of digits it takes to represent that value (n).

      (Here's where the theory takes a leap, at least to me.) The most efficient (or simplest) base for performing computations is the one at which the m*n product is minimized. As an example, we'll take THE ANSWER, 42 (base 10).
      THE ANSWER in base 16 has a result of 16*2=32
      THE ANSWER in base 10 has a result of 10*2=20
      THE ANSWER in base 8 has a result of 8*2=16

      Here are the interesting cases, though:
      THE ANSWER in base 2 (101010) has a result of 2*6=12
      THE ANSWER in base 3 (1120 - four digits, not three) has a result of 3*4=12
      THE ANSWER in base 4 (222) has a result of 4*3=12
      (So for 42 the low bases actually tie; on average over many numbers, base 3 edges out 2 and 4.)

      IIRC, according to the article I was reading, the most efficient base is actually e (Euler's number, 2.718...).
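      The m*n metric above is easy to compute directly. A sketch (note that 42 in base 3 is 1120, four digits, so bases 2, 3 and 4 all tie at a cost of 12 for this particular number):

```python
# Cost of representing n in a given base: digits needed times the base.
# digits = floor(log_base(n)) + 1 for n > 0.

import math

def cost(n, base):
    digits = 1 if n == 0 else math.floor(math.log(n, base)) + 1
    return base * digits

for base in (2, 3, 4, 8, 10, 16):
    print(f"42 in base {base}: cost {cost(42, base)}")
```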
      • Re:Trinary Computing (Score:5, Informative)

        by isomeme ( 177414 ) <cdberry@gmail.com> on Tuesday August 26, 2003 @06:56PM (#6799913) Journal
        The most effective base being e is not coincidental. Consider that the number of digits required to represent a number is proportional to the logarithm, in the base in use, of that number. Since e is the base of the natural logarithms, with the property that the slope of the curve e^x equals e^x for all x, the product of a base and the logarithm of any number to that base will always reach a minimum at base = e.
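        The minimum at e falls out of one line of calculus (a sketch; the cost of representing N in base b is b times log_b N):

```latex
% digits ~ \log_b N = \ln N / \ln b, so \mathrm{cost}(b) = b \ln N / \ln b.
\frac{d}{db}\!\left(\frac{b\,\ln N}{\ln b}\right)
  = \ln N \cdot \frac{\ln b - 1}{(\ln b)^2} = 0
  \;\Longrightarrow\; \ln b = 1
  \;\Longrightarrow\; b = e \approx 2.718
```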

      • by Snorpus ( 566772 ) on Tuesday August 26, 2003 @07:23PM (#6800113)
        Here's why (I think) the minimum of m*n is considered optimal:

        Each additional "base" value takes more complex circuitry (base 2 being the simplest).

        But for small values of the base, we need more "bits" to represent a given value. A single hex value can represent the same number as four binary values.

        Those of us old enough to remember using octal notation probably remember wishing that 7, the largest digit value, got a bit closer to 9.

        Binary (base 2) was adopted in the early days of computers because (1) electronically it was very easy to design circuits that were either saturated (max current) or cut off (zero current), and (2) Boolean algebra had been around for a hundred or so years, making the transition straightforward (although not easy).

        It's been a long time since I took a semiconductor course, but I would think that a tri-state logic circuit (using -1.5V, 0V, and +1.5V, for example) should be fairly straightforward today.

        Yes, truth tables and such would become much more complicated, and De Morgan's theorem would be relegated to the scrap heap, but it would seem to be a way to continue to increase processing power once Moore's Law begins to poop out as feature sizes become sub-atomic.

        Moore's Law itself could continue, just taking advantage of better technology to move to quad-bit, penta-bit, and so forth, computing.

        In deference to those who might be easily offended, I have abstained from using the obvious acronym for a 3-state digit.
  • by digital bath ( 650895 ) on Tuesday August 26, 2003 @05:55PM (#6799311) Homepage
    Looks like systems working with more than ones and zeros would just need a way to encode the different values with different signal strengths (as opposed to off=0, on=1): something like no voltage=0, 1/3 voltage=1, 2/3 voltage=2 and full voltage=3. Seems like a very good way to pack more information into the same signal/clock, but how would the logic work? How would AND/OR/XOR work?

    My mind is too used to binary :) But I'd be willing to learn..

    Sounds like a good idea.
  • Ternary (Score:5, Informative)

    by Anonymous Coward on Tuesday August 26, 2003 @05:58PM (#6799338)
    For reference, Slashdot has done two other stories on ternary computing here [slashdot.org] and here [slashdot.org].

    In trinary, there'd be no more hex digits; you'd have to choose between base-9 and base-27 representations of numbers.

    After 20 years in the biz, my brain is hard-coded for hex, I'd have to retire.
  • by a_ghostwheel ( 699776 ) on Tuesday August 26, 2003 @05:58PM (#6799347)
    It's been a long time since I read an article about that, but AFAIK the ternary system is most efficient at storing information. Basically, if you want to store numbers 0..700, you need 28 states (8+10+10) for decimal, 20 states (10*2) for binary and 18 for ternary (6*3). This has something to do with 3 being closest to the value of e (2.718...), but I don't remember what exactly. Any /.-ers to fill in?
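    The state accounting above (and the 0..1000 counter-example in a sibling reply) can be checked with a short sketch. Note this version charges a full set of states to every digit, so it reports 30 rather than 28 for decimal 0..700 (it doesn't trim the restricted leading digit); the binary and ternary figures match:

```python
# States needed to cover 0..n_max: digits = ceil(log_base(n_max + 1)),
# each digit costing `base` states. Ternary wins for 0..700 but binary
# wins for 0..1000, so the "efficiency" claim is about the average case.

import math

def states_needed(n_max, base):
    digits = math.ceil(math.log(n_max + 1, base))
    return base * digits

for n in (700, 1000):
    for base in (2, 3, 10):
        print(f"0..{n} in base {base}: {states_needed(n, base)} states")
```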
    • by Andorion ( 526481 ) on Tuesday August 26, 2003 @06:07PM (#6799432)
      Here's a link to what you're talking about:

      Third Base [americanscientist.org]

      It's a good read, stuff I didn't know until I read your post and looked it up =)

      ~Berj
    • Here [americanscientist.org] is an article about why base-e is the most "efficient" continuous base, and thus base-3 becomes the most "efficient" integer base. It also explains a bit (har har) about ternary logic.
    • But I don't think this fact has any important practical consequences. If you store something on a CD, you would obviously be able to fit the same amount of data on the disk, regardless of what base you use. Of course, if you could use another technology (one dot on a CD being a value in a different base, not just on or off), then decimal would be much better than ternary. ;)

      Of course, if you are talking about processing information (CPU vs. RAM), the ternary system is a little bit better, but the advantage over binary is small.
    • Umm... But if you want to store numbers 0..1000, you need 32 states (2+10+10+10) for the decimal system, 20 states (10*2) for binary, and 21 for ternary (7*3).

      So I say that that reasoning does not prove that base 3 is 'more efficient'.

      Btw what does 'more efficient' mean in this context: less power usage? lower cost? more girls? ...

  • Quaternary or ... (Score:3, Interesting)

    by marcovje ( 205102 ) on Tuesday August 26, 2003 @05:58PM (#6799350)

    I don't see why software would have to deal with quaternary logic at all.

    Currently the assembler- hardware logic is already an abstraction (microcode).

    If only the main buses (address bus, data bus, and their modern counterparts) would simply use that, and elementary pieces like barrel shifters were quaternary, one could severely limit the number of lines (and thus transistors).

    However it could be that because of tolerance problems quaternary logic elements have to be larger, and thus don't yield the big benefit one would expect.
  • It's critical mass behind existing systems.

    Take a look at the Itanium. It's not taking off because not enough people 'get' EPIC, and moving to that platform is a lot of work. The speed benefits for a full migration to Itanium are quite large, but nobody wants to hand-tune their millions of lines of code.
  • by Grieveq ( 589084 ) on Tuesday August 26, 2003 @06:00PM (#6799364)
    Increasing the number of states requires you to increase the overall voltage required of the device to account for noise in the system. So in return for more states you are running at a higher voltage and thus at a higher power consumption level. You still have the same problem.
    • Well, for ternary computing, you could do -3.3 V, 0V and 3.3V. So you don't actually need higher absolute voltage levels.

  • Power (Score:5, Interesting)

    by overshoot ( 39700 ) on Tuesday August 26, 2003 @06:01PM (#6799371)
    The big limit on device complexity and speed now isn't transistor count, it's power. CMOS and related gates have relatively low power because when they're conducting they don't have (much) voltage across them and when they have voltage across them they're not conducting (much).

    If you go to multilevel logic (not just on/off) then you're necessarily going to have intermediate states which both conduct and have voltage across them, with the resulting dramatic increase in power. This is an acceptable tradeoff for charge-storage devices like memories but a non-starter for logic.

  • by civilengineer ( 669209 ) on Tuesday August 26, 2003 @06:01PM (#6799374) Homepage Journal
    you can get either heads, tails, abdomen or heart!
  • That's not good. We'd have to change all those T-shirts with binary messages printed in them.

    Not to mention changing the Slashdot Sigs.
    No. For the excellent aforementioned reasons, I vote we stick with binary.

  • The "base" of the logic is really a pointless distinction. If you have a computing task you want done, it's just a matter of how to "encode" the task such that the computer can accomplish it, and how efficiently (money- and time-wise) that machine can accomplish that task. Every base is isomorphic to (can be represented in) every other base. I mean, your computer has libraries to print out integers in base 10, even though the internal representation is binary. True and false, and even 0 and 1 are human
    • The point is not to look for benefits of information presentation. I agree, 1 byte (8 bits = 256 combinations) can be encoded either by 8 binary-stated wires or by 256 states of a single wire.

      The point is that after the density of wires comes closer to its limits, the number of states per wire can increase the informational density, making devices more compact. It will not come without a price: you'll have to have more precise "readers" that can read more states (levels?) per wire. I think it will certain

  • From a marketing standpoint, that is. Even if they cost the same to manufacture, the fact that they can advertise it as "twice as many bits" (or something dumb like that) will justify a 2x price increase simply because of dumb consumers. Seems like a great marketing gimmick, and maybe nothing more.
  • It has to do with the decision part of a logic gate. To take an OLD example, in 7400 logic, 3.8V and above is "1", 0.8V and below is "0", anything else is considered ambiguous.

    All that would have to happen is for a middle range to be established: 2-3V is "2".

    But why stop there? You could have a base10 system by making further divisions. It depends on your ability to differentiate between the various voltage levels.

    You could even have a system wherein a multi-volt circuit checked to see what kind of inp
  • and not binary. We have 10 fingers, 10 toes, etc. We can handle base-10 math easily, but not base-2 math.

    But consider this:

    - there is only one of you, or 2^0
    - you have two parents, or 2^1
    - you have four grandparents, 2^2
    etc, etc.
  • Survey ... (Score:5, Funny)

    by BabyDave ( 575083 ) on Tuesday August 26, 2003 @06:05PM (#6799409)

    Do you think three-valued logic is a good idea?

    1. Yes
    2. No
    3. Maybe
  • I say bring it on. (Score:2, Interesting)

    by torpor ( 458 )
    Then we will never have to check for "NULL" ever again. :)

    Also, I want to work on a computing system wherein *every* single datum has its own timestamp.

    I've been experimenting with 64-bit processors in this regard - using regular 32-bits for data, and the remaining 32-bits as a timestamp - and it has produced some interesting results. Treating *all* data as though it has a 'last modified' timestamp...

    If computing systems can be designed to take Time into account with every single operation, it can make f
  • by Dark Lord Seth ( 584963 ) on Tuesday August 26, 2003 @06:07PM (#6799435) Journal
    #define FALSE 0
    #define TRUE 1
    #define MAYBE 2
  • Yields an analog computer. Which is really a digital computer if you count individual electrons...

    Now I am confused.

  • Will someone please think of the dialogs we will have to implement for this extended logic.

    There will be no more

    Yes No Cancel

    Now it will be

    Yes No Maybe Cancel

    Yes No Maybe Dont_Know Cancel

    Yes No Yes(but I mean no) No (but I mean yes) Maybe Cancel......

  • Noise Margin (Score:4, Insightful)

    by reporter ( 666905 ) on Tuesday August 26, 2003 @06:14PM (#6799512) Homepage
    When the voltage for digital circuits back in 1970 ranged from 0 volt to 5 volts, there was talk about using, say, a base-3 number system. Imagine how this system might be implemented. Digit 0 would be [0, 1.67) volts. Digit 1 would be (1.67, 3.33) volts. Digit 2 would be (3.33, 5.0] volts.

    Now, for a binary number system, digit 0 is [0, 2.5) volts, and digit 1 is (2.5, 5] volts. Clearly, the noise margin of the binary number system is much better than the noise margin of the base-3 number system.

    Now consider the voltages of modern digital circuits. Consider a voltage range of [0, 1.5] volts. In a base-3 number system, digit 0 would be [0, 0.5) volts. Digit 1 would be (0.5, 1.0) volts, and digit 2 would be (1.0, 1.5] volts.

    For a binary number system, digit 0 is [0, 0.75) volts, and digit 1 is (0.75, 1.5] volts. Again, the noise margin of the binary number system is much better than the noise margin of the base-3 number system.

    In fact, the noise margin of the binary number system is consistently 50% better than the noise margin of the base-3 number system. The noise margin of the binary number system is always better than the noise margin of the base-n number system, where n > 2. For this reason, engineers have not built and will not build digital systems with any non-binary number system.

    • Re:Noise Margin (Score:4, Informative)

      by be-fan ( 61476 ) on Tuesday August 26, 2003 @07:16PM (#6800060)
      However, look at it this way. The voltage differentials were able to drop from 2.5 volts to 0.75 volts (actually, even less than that inside modern microprocessors) because circuits got that much better at overcoming noise and detecting precise voltages. If you can detect a differential of 0.5 volts, then you can go ternary without bothering about noise.

      Besides, you're wrong. People have built digital systems with non-binary number systems. There are flash memory chips that use a 4-level voltage scheme to increase data capacity.
  • In Moore's own words

    No exponential is forever, but we can delay 'Forever'

  • by flend ( 9133 ) on Tuesday August 26, 2003 @06:14PM (#6799517) Homepage
    The nice thing about binary systems is that they are either on or off. As gates and tracks get smaller, interference effects etc. become the limiting factor.

    As we add more states, intermediate voltages, to the system, the difference between states becomes smaller, ie. the difference between states 2 and 3 in a ternary system is less than states 1 and 2 in a binary system.

    Hence a binary system can be made smaller and denser than a ternary system and still work.

    We may gain in logic density but lose out in physical density.
  • >Non base two computing is nothing new. But it is an
    >idea that, for various reasons, never really caught on.

    The various reasons not being so various; a binary system can be constructed in a much more stable fashion than can a ternary or quaternary system. Everyone knows that 1 is not always exactly 5V (or 3V). Having several values confuses the picture even more.

    "but we have progressed enough that trinary systems are much more stable now!"

    No matter what level of stability someone can ge
    • You're still thinking binary. What about

      * Red, Green and Blue
      * Apples, lemons and bananas

      All unique, easily identifiable, only one step between them. No intermediate transition required. Now, we just have use an electronic equivalent.
  • by dwheeler ( 321049 ) on Tuesday August 26, 2003 @06:15PM (#6799534) Homepage Journal
    Base 2,3,4, and 10 are so easy. If you really want a challenge, build a computer using base pi, e, i, 1, or 0 [dwheeler.com] :-).
  • I may be wrong, but hasn't most of the effect of Moore's law been based on improvements in chip design and manufacturing alone - i.e. the doubling of transistors on a chip or the doubling of the clock speed of a die?

    If we were to double the clock speed or transistors of a ternary computer, would that still give just a 2x performance increase, 8x, or somewhere in between? I can't even get my head around this enough to begin ...

  • Back in the day, as I taught myself the basics of binary arithmetic and logic gates, it amused me to think about how most of the esoteric training for programming involved thinking in two digits.

    I imagined that if a stable 10-state device could exist, programming would no longer need a mathematical priesthood.

    I now think in my old age that someday a stable 10-state device will come. And it will be received with all the joy of wedding guests greeting a police raid. If you take the hex out of coding, how wi
  • Here, here! (Score:5, Funny)

    by jemenake ( 595948 ) on Tuesday August 26, 2003 @06:20PM (#6799583)
    Well, if the word "bit" is a contraction of "binary digit", then I'm all for a move to "ternary digits". We need a lot more of those in this field.
  • by Sparr0 ( 451780 ) <sparr0@gmail.com> on Tuesday August 26, 2003 @06:23PM (#6799605) Homepage Journal
    One of the best parts of Ternary (Trinary, base 3) is that you can use BALANCED Ternary, in which the digits are not 0, 1, and 2, but are -1, 0, and 1. This allows you to represent any integer without a sign bit. Letting N represent the -1 digit, you can represent -17 in balanced ternary as N101 (N*(3^3) + 1*(3^2) + 0*(3^1) + 1*(3^0) = -27 + 9 + 0 + 1).

    You can check out http://www.trinary.cc/Tutorial/Tutorial.htm [trinary.cc] for many examples of ternary circuits using ternary logic gates.
    • Actually, you'd probably still have a sign tit... -1 for negative, +1 for positive, and 0 for imaginary.

      Hehehehe... I said, 'tit'.

  • There's one small problem with this: anyone who knows basic physics and logic design knows that circuits operating on voltage levels cannot change state instantaneously, as that would violate the basic physics of capacitance (the energy stored across a capacitance is equal to one half the capacitance multiplied by the voltage squared, and the only way to get an instantaneous change across it would be an infinite power supply). The only way in order to create a sy
  • Date: Wed, 26 Dec 2001 10:38:34 -0600
    From: Jeff Epler
    To: steve@trinary.cc
    Subject: Trinary adder efficiency
    Do you know of any more efficient trinary adder designs? I've found an
    online abstract of a paper that may have some, but I don't have access
    to the paper itself:
    http://www.computer.org/proceedings/ats/7129/71290387abs.htm
    Also, do you know if a "balanced trinary" adder (-1, 0, 1 trit values)
    is any simpler than your trinary (0, 1, 2) adder?

    I also performed a simplistic comparison of the propo
  • Haven't we covered this before? [slashdot.org]

    I also read a paper once explaining how 3 was the optimal number base when you consider the number of different symbols needed and the width of a string of those symbols needed to represent numbers. I even solved the equations myself coming to this conclusion. You actually find "e" (2.718281828...) as your answer, but the closest whole number is 3, not 2.

    Unfortunately, I don't have a link and Google has failed me :-(
  • by marvin2k ( 685952 ) on Tuesday August 26, 2003 @06:35PM (#6799715)
    The quaternary system would be perfectly suited for women:

    0 = No
    1 = Yes
    2 = No (But I mean yes)
    3 = Yes (But I mean no)

  • Men = Binary Logic.
    Women = Fuzzy Logic.

    Me = Sleepin on the Couch.
  • Another poster provided trinary computing tutorial [trinary.cc]. On one of the pages for the introduction [trinary.cc], the author writes:

    The basis for understanding Trinary Algebra begins with the way that it represents its numbers. They are used to represent two things: Whole and Fractional Numbers. To start with... in Trinary systems, bits are really called trits. It's short for Trinary Digits.

    As if we didn't already have enough sexual jokes regarding current computer technology. Now we have to introduce "trits" into the fray. N

  • Toomuch heat? (Score:3, Interesting)

    by anethema ( 99553 ) on Tuesday August 26, 2003 @07:02PM (#6799961) Homepage
    The problem I see with this is:

    Already our processors are dissipating a serious amount of heat. This heat is developed only during the switching time.

    Picture a cpu clock:

    _|-|_|-|_|-|_

    Haha, something like that. Anyway, the heat is only developed during the vertical bars of that clock. (Because the vertical bars aren't perfectly vertical in the real world, and P(heat)=VI. During the horizontal bars, only V or I is present, so no power, i.e. no heat.)

    I don't exactly know how this ternary or quaternary computing works, but if it's forcing the transistor to work in stages between full off and full on (1 and 0), you will be increasing the heat output of your CPU dramatically.

    Correct me if I'm wrong on this, but maybe we'll really need those diamond semiconductors to make this feasible for high-computing-power applications.
  • by Eric Smith ( 4379 ) * on Tuesday August 26, 2003 @07:08PM (#6800009) Homepage Journal
    A move to multi-valued logic provides more computational capability without the standard increase in die size or transistor count.
    No, it doesn't. Let's see you design a 16-quat full adder that takes fewer transistors or less die area than a 32-bit full adder.

    Base 3 or higher are a lose for implementing logic. Base 4 is useful in some kinds of memory, and this has been done by Intel since around 1980-81. Intel used a quaternary ROM (two bits per cell) for the microcode store of the 43203 Interface Processor, and (IIRC) for the 8087. More recently this technique has been used in flash memory.

  • by geekee ( 591277 ) on Tuesday August 26, 2003 @08:49PM (#6800710)
    Most logic circuits, from an analog perspective, are amplifiers. Rather than operating in the linear region, however, these amplifiers are overdriven to force the output to rail at one extreme or the other, producing a high or low voltage level (0 or 1). CMOS works particularly well under these conditions because, in steady state, only a small leakage current flows through the circuit when it's railed. As the author indicates, you can design logic by comparing a voltage to a fixed threshold, such as in ECL, CML, SCFL, etc., but these circuits are based on differential amplifiers, which typically burn significant current at all times. Not to mention that it's difficult to imagine a circuit which can generate more than two voltage values that does not use significant current at all times. Therefore, it seems the price of non-binary logic in most cases is increased power, which is not a trade-off anyone's willing to make (Flash RAM is an exception because of its unique nature).
  • by wirelessbuzzers ( 552513 ) on Tuesday August 26, 2003 @09:34PM (#6801020)
    Ternary computing could be quite useful for asynchronous logic. The three states would be 1/yes, 0/no, and n/(no answer yet).

    Basically you want the truth table to be, in order of precedence:
    0 AND * = 0
    n AND 1 = n
    n AND n = n
    1 AND 1 = 1
    (OR is the reverse, swap 0 and 1)
    NOT 1 = 0, NOT 0 = 1, NOT n = n

    Gates can be actually made to follow the right truth tables by diddling your substrate voltages in an otherwise fairly standard CMOS design; this has the effect of making your circuit twice as slow or quadrupling its power consumption though, which sucks. You also have to watch your noise thresholds here, transients can be nasty although they are unlikely to propagate far through a network of n's. This can be mostly fixed by further tinkering with thresholds, but then the leakage current becomes prohibitively high.

    You can also just design extra logic with standard gates and watch your glitches very carefully.

    There may be cleverer ways to do this, or the savings of asynch might be enough to make it useful anyway.
  • Aymara (Score:4, Interesting)

    by fven ( 688358 ) on Tuesday August 26, 2003 @10:25PM (#6801311)
    The Aymara people have used trivalent logic for thousands of years. It allows precise definition of "maybe" or "possibly".

    There is a great writeup of these people and their logic at:

    http://www.aymara.org/biblio/igr/igr3.html

    The article mentions that it is very difficult to impossible to express the logic of one culture in the language of another. Thus to understand better the inferences in Aymara logic, we have to resort to mathematics, which is sufficiently general to be understandable and translatable.
  • by Sparr0 ( 451780 ) <sparr0@gmail.com> on Wednesday August 27, 2003 @01:36AM (#6802356) Homepage Journal
    One important thing to remember about truth tables is that the number of operands (the values you give to a function to get an output) for a given operator is NOT always the same as the base. For base two you have two-operand operations, which we all know as AND, OR, and XOR, but you also have operations that require only one operand: the common NOT (1->0, 0->1) and what I will call EQV (1->1, 0->0). There are 16 two-operand truth tables in all, and beyond the familiar ones the rest turn up in varying degrees of extreme rarity, e.g. the following arbitrary truth table that you will never see in practice:
    A B out
    0 0 1
    0 1 1
    1 0 0
    1 1 1

    Apply this to base 3 and you find that there are not just 3-operand operations but 1- and 2-operand ones as well. For one operand you can have rotate-down (0>2,1>0,2>1), shift-down (0>0,1>0,2>1), rotate-up (note that in binary, one-operand rotation happens to coincide with NOT), shift-up, and various arbitrary tables like the one above. For two operands you have NeitherBoth (00>0,01>2,02>1,10>2,11>1,12>0,20>1,21>0,22>2) and the arithmetic operators, plus a bunch of others with explanations I can't think of right now. For three operands there are trillions of possible truth tables, some with useful explanations, many many more arbitrary ones. Oh, and for both 2 and 3 operands you have multiple partial or complete counterparts to the traditional binary AND, OR, and XOR that apply the same kinds of rules to the operands.
  • by Salsaman ( 141471 ) on Wednesday August 27, 2003 @02:12AM (#6802519) Homepage
    Bender: what a strange dream...I thought I saw a '2'.
  • by andyr ( 78903 ) <andyr@wizzy.com> on Wednesday August 27, 2003 @03:15AM (#6802735) Homepage Journal
    When I was at Brunel University [brunel.ac.uk] on a post-grad course, we built chips for Associative Processing (pdf) [kent.edu] or Google HTML [216.239.33.104] that inherently used Ternary logic. The main chip that we built was an Associative memory [ic.ac.uk] chip that stored binary data, but was addressed by searching for data. There were no address lines. It was a wide field - 40 bits (this was the late 70's) - and you presented a search term as Ternary data on the input lines. Each bit was 1, 0, X - where X meant "don't care". You could add one field column to another, without any of the data exiting the chip.

    Say you wanted to add an 8 bit field - bits 0-7, to another, bits 8-15, and store the result in a 9 bit field, 16-24.

    Search as follows (CC Field is Carry):-

    Bits: C 1 1 1 1 1 1 1
    Bits: C 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
    Find: X X X X X X X X X X X X X X X X X X # All rows
    Writ: 0 0 X X X X X X X X X X X X X X X X # Clear output
    Find: X X X X X X X X X 0 X X X X X X X 1 # 0+1=1
    Writ: 0 1 X X X X X X X X X X X X X X X X # write 1
    Find: X X X X X X X X X 1 X X X X X X X 0 # 1+0=1
    Writ: 0 1 X X X X X X X X X X X X X X X X # write 1
    Find: X X X X X X X X X 1 X X X X X X X 1 # 1+1=0 carry 1
    Writ: 1 0 X X X X X X X X X X X X X X X X # write 0 carry 1
    Whew. You have added the LSBs of the fields together, in 6 operations. There are 8 more to go. However, you have done it for the entire array which might be thousands of records.

    So there is a fixed processing time for parallel operations on all the data.

    We still had to use two input lines to represent the Ternary value, but, remember, no address lines needed.

    Content Addressable memory chips are also used for lookaside Cache memory in CPUs today.

    Cheers, Andy!
