Technology

The Ultimate Limits Of Computers 180

Posted by timothy
from the density-and-entropy dept.
Qui-Gon writes: "Found an interesting article, 'The Ultimate Limits of Computers,' over at Ars Technica. This article is very heavy on the physics of computing; not for the faint-hearted." Somewhere between practical reality and sheer Gedankenexperiment, the exploration here keeps getting more relevant as shrinking die sizes and improving nanotech wear away at previously impassable barriers. The article is based on a paper discussing the absolute limits of computational power.
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward
    "When a scientist says something is possible, they're probably right;
    when a scientist says something is impossible, they're probably wrong."
    --Anonymous
  • by Anonymous Coward
    Computers will never reach a plateau of operating capacity. Even if we were to design a computer so fast that it was limited by the speed of light, we could then design a warp core to generate a warp field around the computer and enable FTL (faster than light) processing. At least, this is how they do it on Star Trek TNG.
  • Suppose you could have a large cluster of small computers to compute things which are highly parallelizable. This cluster should probably be arranged in a sphere to make the computers as close together as possible. The source of computation requests would probably be placed in the centre of the sphere. So the turnaround time of the computers on the surface of the sphere is then bounded by the time needed for light to travel the radius of the sphere twice. To keep the outer computers busy, you need a longer queue of processing requests. Any pointers to papers about this?

    I'd like to see an analysis for the breakeven point; for what problems would a sphere of a given radius containing computers of a given speed provide a speedup, such that any larger radius would diminish the return, despite adding to the aggregate computing power?
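    A toy model for that breakeven (every parameter value here is invented for illustration): total turnaround is compute time plus one light-speed round trip to the sphere's surface, so growing the radius adds nodes as r^3 but latency only as r, until the latency term wins.

    ```python
    import math

    C = 3.0e8  # speed of light in m/s

    def completion_time(radius_m, work_ops, nodes_per_m3, ops_per_sec):
        """Time for a perfectly parallelizable job: the work divided across
        all nodes in the sphere, plus one light-speed round trip to the surface."""
        n_nodes = nodes_per_m3 * (4.0 / 3.0) * math.pi * radius_m**3
        return work_ops / (n_nodes * ops_per_sec) + 2.0 * radius_m / C

    # crude logarithmic scan for the radius with the best turnaround
    radii = [10 ** (k / 10.0) for k in range(-10, 41)]  # 0.1 m .. 10 km
    best = min(radii, key=lambda r: completion_time(r, 1e18, 1e3, 1e9))
    ```

    With these made-up parameters the optimum lands at a few kilometres; past that, the extra nodes no longer pay for the added light-speed latency.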

  • Reversible computers don't drain the charge to ground. One major goal is minimal power use, which means that, whenever possible, you don't discharge anything. In particular, you want to move the charge from one capacitor to another instead of to ground; which capacitor holds the charge determines whether the bit is a one or a zero.

    Of course, there are only certain operations you can do without discharging a capacitor to ground or charging it to Vcc. And there's obviously some energy expenditure in the resistance of the wires, capacitor leakage, and the effort of getting the computation to go forward.
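    That capacitor-to-capacitor idea is the core of adiabatic charging. A standard back-of-the-envelope comparison (all component values invented for illustration): hard-switching a node to the rail dissipates about CV^2/2, while ramping the charge through a resistance R over a time T dissipates only about (RC/T) * CV^2, which shrinks as the ramp slows.

    ```python
    def hard_switch_loss(c, v):
        # energy dissipated charging a capacitor C straight to rail voltage V
        return 0.5 * c * v**2

    def adiabatic_loss(c, v, r, ramp_time):
        # energy dissipated ramping the same charge through resistance R
        # over a time T >> RC; goes to zero as the ramp slows down
        return (r * c / ramp_time) * c * v**2

    C_NODE, VDD, R_WIRE = 1e-15, 1.0, 1e3  # 1 fF node, 1 V rail, 1 kOhm path
    e_hard = hard_switch_loss(C_NODE, VDD)
    e_slow = adiabatic_loss(C_NODE, VDD, R_WIRE, 1e-9)  # a leisurely 1 ns ramp
    ```

    The trade is speed for heat: the slower you shuffle charge between capacitors, the less of it ends up as dissipation.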
  • Not exactly that kind of bottom.
  • Note that the given speed limit is on the state change rate. Those states could represent much more complex operations than simple arithmetic. All in all the article is nice, but I enjoyed the one on Hans Moravec's page more, where Feynman pondered the ultimate library. Regards, Marc
  • Sorry, I meant a link found from Ralph Merkle's page. Read here [zyvex.com].
  • It's all a matter of any physical system with bounded energy having a correspondingly bounded rate of state change.

    But as with writing fast or dog-slow programs in a given programming language, there might be very different approaches (with regard to computational power) to using the physics to build fast computers.

    Thus there is a physical limit, as well as a logical limit (theories of computation and complexity) to consider.

  • There are parallel machines out there. As far as I've heard, the attempts to build parallelizing compilers (compilers that take a conventional program, analyze its structure, and try to generate code that takes advantage of the parallel hardware) have not been very successful.

    And of course it depends entirely on the problem whether parallelism can be exploited.

  • Now before you say "BS", think about it. In physics, if you know the initial state (starting position, velocity, acceleration) of an object in an isolated system, you can easily compute where it was at any earlier time. This uses the same concept. For example, if you add 43 to a register, you can subtract 43 from that register and get your energy back.

    Search for "reversible" in this article [slashdot.org].

  • Your example is one that can't take advantage of parallelism. I would rather give a divide-and-conquer algorithm (like sorting) as an example of recursion and parallelism.
  • I noticed his paper addressed quantum state change limits, apparently (without saying it) limited to fermions.

    What equation are you referring to?

  • The only reason for quantum mechanics in this article is the fact that quantum mechanics gives a lower bound for miniaturisation (i.e. you can only keep making computer parts smaller until you run into problems with the Heisenberg Uncertainty Principle).

    Actually, short times allow for high energy fluctuations (dt * dE >= hbar); this is the basic reasoning behind the existence of short-lived (so-called virtual) particles in quantum field theory, with its complications that the number of particles is not constant in physical processes and that one cannot observe bare particles.

    This could mean that fast processes have to deal with disturbances from such energy fluctuations, providing further physical limits on fast computing.

  • No, the limitations that technology can overcome are engineering limitations. The limitations talked about in the article are basic fundamental physics limitations that don't depend on any particular form of technology.

    Considering the physical theory (here: non-relativistic quantum mechanics, which is still an approximation of nature) used to derive the limitations is sufficient.

  • mvw, what part of "This is how fast state changes can happen" don't you understand?

    You imply that I challenged this physical upper limit on state change rate in this discussion, but I did not.

    My point is different. It goes down to the question about the relationship between rate of state changes on one hand, and the ability to perform a fast computation on the other hand.

    The naive view is similar to the view many CPU buyers have regarding clock speed. It goes along the lines of "The more MHz, the faster the CPU". As we know, this is not a valid measure for comparing, for example, a Pentium 4 and a PPC CPU. They are not comparable by clock speed alone. E.g., one CPU might perform a floating point multiplication much faster than the other in terms of clock cycles.

    The classical theories of computation solve this problem by mapping programs onto a very simple model of a computer, the Turing machine or register machine, or real RAM, and then doing comparisons. The operations in these models are precisely defined, like incrementing a register, decrementing a register, or testing if it is zero.

    As we know, a computation needs a certain number of basic operations as well as a certain amount of memory. Classical complexity theory tells us exactly how memory and the number of needed operations are related for these simple computation models. In fact both depend on the "size" of the input data as well. Like I said, this is only well understood for these deliberately simple computation models (Turing machines etc).
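    To make the register-machine talk concrete, here is a minimal counter-machine interpreter (a three-instruction sketch of my own devising, not any standard formalism) that performs addition using only increments, decrements, and zero tests:

    ```python
    def run_counter_machine(program, regs, max_steps=10**6):
        # three instructions: ("INC", r), ("DEC", r), ("JZ", r, addr);
        # execution halts by running off the end of the program
        pc = 0
        for _ in range(max_steps):
            if pc >= len(program):
                return regs
            instr = program[pc]
            if instr[0] == "INC":
                regs[instr[1]] += 1; pc += 1
            elif instr[0] == "DEC":
                regs[instr[1]] -= 1; pc += 1
            elif instr[0] == "JZ":
                pc = instr[2] if regs[instr[1]] == 0 else pc + 1
        raise RuntimeError("step budget exhausted")

    # addition the slow, precise way: drain r1 into r0 one unit at a time
    add_program = [
        ("JZ", "r1", 4),    # 0: halt once r1 is empty
        ("DEC", "r1"),      # 1
        ("INC", "r0"),      # 2
        ("JZ", "zero", 0),  # 3: unconditional jump via an always-zero register
    ]
    result = run_counter_machine(add_program, {"r0": 3, "r1": 4, "zero": 0})
    # result["r0"] == 7: every step is one primitive state change
    ```

    Counting the loop iterations is exactly the bookkeeping classical complexity theory formalizes: adding n here costs about 3n primitive operations.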

    In this discussion, we were talking about the limitations of computing. It makes sense to think about the most powerful computing architecture envisioned so far. And that is the model of the quantum computer at present. In this setup your computer is a quantum mechanical system. Your input data corresponds to the initial states of the system, and your computation is a measurement of the system, meaning a physical process that kicks the system from its initial state into its final state, the final state encoding your result data.

    The trick with quantum computers is that one arranges that measurement to reduce a large number of possible solutions/states into one final state. This is sometimes described as doing all computations at once.

    You will immediately notice that computation in this setup is achieved by exactly one state change, the measurement that kicks your system into the final state.

    Now let's translate this abstract mumbo jumbo back into real things. The first quantum systems explored were a couple of nuclei in a big magnet, an NMR measurement device. The initial state corresponds to some initial states of these nuclei, typically their rotational and spin degrees of freedom. Measurement means bombarding these nuclei with some electromagnetic radiation and looking at the states of the nuclei afterwards.

    What must be achieved by the experiment is that we must be able to map the input data of a problem onto the initial states of these nuclei. The output states must also correspond to a solution of the problem under consideration. Further, the measurement process must correspond to the solution procedure of the problem.

    Just as we have TTL logic gates for simple operations like AND, OR, NOT, researchers have to think about building quantum logic gates: systems that can accept input data via their initial states, where the act of measurement corresponds to some useful operation on the input data, and whose output states can be translated back into a nice result on the input data.

    Question: What kind of operation can be encoded by such an experiment?

    Actually I don't know. I believe however that it might be a very complex one. Take for example a chain. It just hangs down and by doing so finds an optimal solution to a variational problem. Or fire some photons from air into water; they travel in a way that again corresponds to the solution of a hairy differential equation. Or a chemical reaction with many molecules can solve some crazy combinatorial problem in a couple of microseconds.

    The topic touched on here is the topic of representation. Everyone with a decent level of math knows that there are different representations of the same problem, one harder to solve, the other easier. In fact this is one of the basic problem solving strategies in applied mathematics:

    • find a representation where a problem is easy
    • transform problem into that context
    • solve problem in the easy representation
    • transform problem (and solution) back
    Examples are the diagonalization of matrices, or finding the inverse of an integral operator (e.g. Green's functions in partial differential equations) by Fourier transformation, algebraic inversion, and inverse Fourier transformation.
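    That four-step recipe can be sketched end to end with the Fourier example: convolving two sequences directly costs O(n^2) work, but in the Fourier representation it is mere pointwise multiplication. A stdlib-only sketch with a textbook radix-2 FFT:

    ```python
    import cmath

    def fft(a, invert=False):
        # radix-2 Cooley-Tukey; len(a) must be a power of two
        n = len(a)
        if n == 1:
            return a[:]
        sign = 1 if invert else -1
        even = fft(a[0::2], invert)
        odd = fft(a[1::2], invert)
        out = [0j] * n
        for k in range(n // 2):
            w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
            out[k] = even[k] + w
            out[k + n // 2] = even[k] - w
        return out

    def convolve(x, y):
        # transform, multiply pointwise in the easy representation,
        # then transform back (dividing by n for the inverse)
        n = 1
        while n < len(x) + len(y) - 1:
            n *= 2
        fx = fft([complex(v) for v in x] + [0j] * (n - len(x)))
        fy = fft([complex(v) for v in y] + [0j] * (n - len(y)))
        prod = fft([a * b for a, b in zip(fx, fy)], invert=True)
        return [round((v / n).real) for v in prod[: len(x) + len(y) - 1]]
    ```

    The transform, the easy pointwise solve, and the back-transform mirror the list above; the whole round trip costs O(n log n).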

    So in this regard I want to argue that we can choose different physical setups that might enable us to do a more or less complex computation in one state change.

    So it is not only about quantity (the number of state changes) but also about quality (what computation is performed with this change).

  • You misunderstand what it means for a problem to be complex. NO NP-HARD PROBLEM CAN BE SOLVED BY A DETERMINISTIC MACHINE IN POLYNOMIAL TIME. none, nada.

    You Sir, misunderstand what this kind of theoretical computer science is about.

    It is using a mathematical theory: using mathematical objects like sets and mappings, a simple mathematical model is formulated. Then one tries to analyze the implications of this model by application of the laws of mathematical logic.

    The mathematical model you are talking about is the register machine or Turing machine (or a model of equivalent computational strength). What you cited in upper case is a logical consequence, a theorem or lemma, thus a truth for just this model.

    What makes this computer science and not plain mathematics is, that we believe that these simple models are adequate representations of what we think computers and computations are.

    One of the fundamental assumptions of these models, and that is why I call them classical computational models, is that they are based on thinking that corresponds to the common sense of classical physics. Like you said, these models assume that only one fundamental operation can be performed in one time step (clock cycle, whatever unit of time). But, as Feynman never tired of preaching, nature is not classical! It is much more strange. Experiments verify that nature has some weirdnesses so far accurately described only by quantum mechanics or the theory of relativity. These are results that are not compatible with common sense, but are still compatible with the mathematical models of theoretical physics, thus compatible with mathematical logic. And that is where quantum computing comes into play. What happens if one makes use of these strange effects, like two photons that seem to interact with no time delay although they are far apart, or the phenomenon called the collapse of the wave function in quantum mechanics, to build a faster computer?

    Somehow quantum computers are supposed to make this nondeterministic which means that given a single state I can be in multiple states after the next step, and somehow the right one is picked out at the end.

    You surely heard of Schroedinger's cat. It is a truth from quantum mechanics that systems can be in an entangled/mixed state and that only the act of measurement/looking will put them into a decisive output state. The act of measurement itself is one of the conceptual weaknesses of quantum mechanics. Most physicists just take for granted that it happens and know how to deal with it in calculations and experimental setups. The theory behind it is hardly distinguishable from philosophy. (As far as I understood, the problem is that the observed system and the observer can't really be separated into two entities with two wavefunctions, but are connected, and thus would be described only by a common wavefunction. This way it is hard to get results. We would ultimately have to analyze the wavefunction of the whole universe to get results. Thank god, this is not necessary most of the time; the conceptual separation of the universe into separate physical subsystems is often useful.)

    As our minds evolved in an environment that is mostly governed by classical physics, we have real trouble grasping such strange truths. We know the formulas and are able to set up experiments and get results, but this is no intuitive understanding. To quote Feynman again, he was sure that nobody understood quantum mechanics.

    The radical idea behind quantum computing is to take advantage of these strange effects. (Strange to us classical minds; perhaps not strange to some hypothetical life form evolved in an environment where quantum mechanics would be dominant in everyday life.)

    I just don't buy into taking a deterministic model of computation (which the article seems to be using) and expecting it to crank out the general travelling salesman problem in any reasonable time.

    It is hard to recognize by common sense that while quantum physical processes are governed by probabilities, the laws that govern these probabilities are exact. For example, a given process might have a random outcome, but I can still deduce from the laws that it must have a probability of less than 1/3, and can make use of that bound in the construction of my machine. You see, randomness does not mean that no prediction is possible; although the predictive strength is much less than in classical physics, it is still much better than using a crystal ball.

    Due to their probabilistic nature, quantum computers were thought to be very unstable and unable to yield deterministic results.

    The first crucial idea of quantum computing was Feynman's (?) idea of recognizing classical computation/complexity theory as based on assumptions of classical physics, and thus exploiting the previously unused quantum mechanical behaviour computationally, as described above.

    The second crucial idea was Peter Shor's idea of using error correction strategies in the context of quantum computing. As I understand it, this was the theoretical breakthrough. That idea provides the needed stability.

    Shor, by the way, was also the guy who proposed the prime factorization algorithm that was qualitatively better than anything classical computational/complexity theory offered.

  • Your input data corresponds to the initial states of the system

    Actually, the preparation of the initial states is already part of the quantum computational solution procedure.

    For example, the idea behind a quantum computer doing a massively parallel database query would be to have all possible search results featured in the superposition of the initial states of the system.

    The query would be the measurement that picks as the final state the one that represents the match.
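    This picture matches Grover's database-search algorithm. A toy classical simulation of the statevector (illustrative only; no real quantum hardware is modelled here) shows how repeated query-plus-diffusion steps concentrate amplitude on the matching entry:

    ```python
    import math

    def grover_search(n_items, marked):
        # start in a uniform superposition over all database entries
        state = [1.0 / math.sqrt(n_items)] * n_items
        iterations = int(round(math.pi / 4.0 * math.sqrt(n_items)))
        for _ in range(iterations):
            state[marked] = -state[marked]            # oracle flips the marked phase
            mean = sum(state) / n_items
            state = [2.0 * mean - a for a in state]   # inversion about the mean
        # 'measurement': return the most probable outcome
        return max(range(n_items), key=lambda i: state[i] ** 2)
    ```

    After about (pi/4) * sqrt(N) iterations the marked state dominates, which is where the famous quadratic speedup over classical lookup comes from.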

  • The outcome of a travelling salesman problem is not a state change, unless you think you can solve the problem by flipping a bit.

    These problems can be mapped onto bits, and these can be mapped onto the qubits of a quantum computer.

    If we had to tour 2^n cities, the data set would consist of tuples (city1, .., city2^n).

    We could map that onto 2^n x 2^n = 2^2n bits.

    So we would need 2^2n qubits in a quantum computer, representing the 2^(2^2n) possible outcomes.

    This boils down to the question of whether it will be possible to get such a large number of qubits in superposition. I believe nobody knows if this is physically or technically possible.

    We also don't know if there is some experimental setup whose measurement would collapse that superposition into the optimal TSP solution.

    Perhaps a whole series of more fundamental computational steps/measurements needs to be done by a quantum computer.

    Like I wrote above, I don't know what kind of basic quantum computing operations are possible to build, but I believe that they could be more complex ones than simple increments. I am sure a look over the fence into the domain of analog computing would give some useful hints about what might be possible.

    Even if you use the physical analogue of creating lengths of strings tied together at the various city nodes and pull the "cities" in question until the strings are tight and selecting the tight path, every atom moving in the process is a state change

    It will possibly not be such a simple mechanical setup but some much crazier arrangement of basic quantum computing components that implement abstract operations. Perhaps it is not a single state change, but it might turn out to need qualitatively much less than an algorithm on a classical computer.

    And even if you prove that P=NP, you still need an algorithm that traverses the data, examines it, compares different links, etc, and every step in the algorithm is AT LEAST a state change

    The reasoning from classical complexity theory shows that the work of digesting the input data is part of the overall work an algorithm performs. But it is also true that the models assume that one symbol of input data is digested at a time. Perhaps the inherent parallelism of a quantum computer can save effort here too, by having some read-all-input-at-once feature. I simply don't know.

    Hm.. I should really try to get an expert to judge this discussion.

  • by mvw (2916) on Friday June 22, 2001 @05:49AM (#132869) Journal
    Different from what?

    Who says that a complex problem needs a high number of state changes?

    Each state change could be the result of a very high level operation, not something primitive like adding two numbers, but perhaps something like the outcome of the traveling salesman problem. Think of some clever physical setup here.

    It will be due to the cleverness of the computer builders, to make most use out of the limitations.

    Regards, Marc

  • You're thinking in the wrong discipline. Reversible computing happens already all the time.

    Think of it this way: a photon hits an electron in its ground state, raising it to a higher state. Then, it emits an identical photon and drops to its ground state again. Effectively, you added energy to store a bit (0 -> 1) of information in an electron. Then the operation reversed, and you gained the original energy back.

    The problem with reversible computing is getting the information out in a reversible way. In my example above, how do we know if the electron is a 0 or a 1? We have to extract the photon to do it, thus destroying the reversibility. Now, there are some quantum properties that can be measured without destroying the reversibility, but they are difficult to control (the system easily loses coherence).

    Reversible computers don't use the same 'modern electronics' that your current computer does, but they are theoretically possible (using only a small amount of energy to observe the state). Just not yet practical. Wait around a bit.
  • Equations 1 and 2, etc. do not require conversion to kinetic energy of any kind; they're just restatements of the fundamental Planck-de Broglie relation:

    E = h * nu

    from high school physics. This gives the light wavelength if the object is a photon (zero rest mass), otherwise this wave effect is called the "probability wave" and is reputed to have connections to alternate universes, etc.

    It doesn't matter if E is rest mass or kinetic energy; the wavelength effect is the same, and in fact originally derives from special relativity [davis-inc.com]. Heisenberg's principle is essentially a statement of the resolving power of these waves, for instance in electron microscopes, or in any measuring instrument. Equations 1 and 2 say that one can't measure much more often than delta t, which is fairly straightforward since one is working with waves whose period is at least delta t.

  • The upper bound is on the number of measurable physical states that any physical system with mass 1kg can attain in a certain time period.

    The verbiage about E=mc^2 is redundant, the author is simply restating the fact that E in equations 1 and 2 is, and has always been, relativistic mass-energy, not what appears on your PG&E bill.
  • The readers are getting confused by this paragraph:


    The maximum energy an ultimate laptop can contain is given by Einstein's famous formula relating mass and energy: E = mc^2. Plugging in the laptop's mass, the speed of light, and combining the result with Eq. 2 tells us that the maximum number of operations per second a 1 kg lump of matter can be made to perform is 5.4258 * 10^50. This means that the speed of a computer is ultimately limited by the energy that is available to it.


    You're misstating the definition of E in equations 1 and 2. E is not "maximum energy", it is defined as the exact mass-energy of the physical system, in this case 1kg. The speed of a computer is not limited by the energy available to it, it is limited by the mass-energy that is in it.

    You might rephrase it by something like "Since E is the total mass-energy of the laptop, a simple unit conversion shows that the maximum number of operations per second ...etc."
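    For what it's worth, the article's figure does drop out of the Margolus-Levitin bound (at most 2E/(pi * hbar) orthogonal state changes per second for mass-energy E); a quick check with standard constants:

    ```python
    import math

    C = 299_792_458.0          # speed of light, m/s
    HBAR = 1.054_571_817e-34   # reduced Planck constant, J*s

    def max_ops_per_second(mass_kg):
        # Margolus-Levitin: rate <= 2E / (pi * hbar), with E the total
        # mass-energy E = m c^2 of the system
        energy = mass_kg * C**2
        return 2.0 * energy / (math.pi * HBAR)

    ops_1kg = max_ops_per_second(1.0)  # ~5.4e50, matching the article's number
    ```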
  • I recall reading a short article about the limits of computing technology in gaming -- specifically in terms of AI and "solving" classic games such as chess.

    The argument went like this: if a computer system can map out every possible finite state (i.e. board position in checkers) of a game, then from any point in the game it can be determined what the winning moves are (if there are any, that is.) For a game like checkers there are relatively few board positions possible -- thus, all board positions could be explored and the game could conceivably be "solved" at some point by a computer (of course in a brute force sense only, but solved nonetheless.) Any human player would be hopeless against such an adversary.

    However, in a game such as chess, there are many, many more possible board positions (I think the estimate I recall was 10^69, could be incorrect though . . .) so that compiling a complete library of all board positions would take considerably longer than the projected lifespan of the universe (as estimated by the half-life of a proton). This would be true even if you used a ridiculously large computing system -- say, .1% of the available mass in the universe. So, the argument goes, a computer will never be able to brute-force beat a human at chess.
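    The brute-force sense of "solving" can be sketched on a game small enough to enumerate completely, such as Nim: label every reachable position a win or a loss for the player to move, which is exactly the library-of-positions idea scaled down.

    ```python
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def is_winning(heaps):
        # Nim: a move removes any positive number of stones from one heap;
        # a player with no move loses. A position is winning if some move
        # leaves the opponent in a losing position.
        for i, h in enumerate(heaps):
            for take in range(1, h + 1):
                child = tuple(sorted(heaps[:i] + (h - take,) + heaps[i + 1:]))
                if not is_winning(child):
                    return True
        return False
    ```

    Theory says a Nim position is losing exactly when the XOR of the heap sizes is zero, and the exhaustive table agrees; the catch in the argument above is purely the size of that table for a full-size board game.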

  • It was an old SciAm article that I'm thinking of . . . if I can dig up the original I'll paste it in.
  • irreversiable gah, spelcheck
  • If you add 43 to a register, you can subtract 43 from that register and get your energy back.

    bit shifting is inherently irreversible.

    Well, I guess that means multiplying and dividing by powers of two are out as well, since they turn out to be the same operation. But.. wait.. mult and div are really just a finite number of additions and subtractions.. *hmmm*

    Does it have something to do with the way the CPU handles overflow for certain instructions? Or did I just convert an irreversible algorithm? ..remember you heard it here first!.. or are you just talking out of your butt? :)

  • Hi pclminion.

    Thanks for the feedback.

    You said:

    There may, in truth, be a relationship between physical entropy and informational entropy but the relationship has not yet been discovered, as this article would lead us to believe. If it ever is discovered, the form of the relationship will certainly not be as simple as is stated here.
    Well, I entirely omitted Lloyd's calculation for this (as I said in the article). It's too technical to include in an article on a tech enthusiast site like Ars. It's quite followable, mind you, but technical enough that I didn't think it belonged in the article.

    I invite you to read Lloyd's paper [lanl.gov] for yourself, and examine his approach. If you find fault with his math and physics, well, congrats! :) I'd suggest that you let Lloyd know about this, and certainly announce the results of your analysis here. An extra pair of eyes looking at Lloyd's fascinating ideas is good.

    Cheers,
    -Geon

    geonSPAM@ISBADarstechnica.com

  • by geon (7807) on Friday June 22, 2001 @09:19AM (#132879)
    Hello krogoth.

    First, thanks for the feedback.

    Now, it is quite possible that I have made errors in my article. I've gone to some pains to avoid them, but things might have slipped through anyway. One never knows.

    However, I do not think the matter you raise is an error. Allow me to try to explain.

    First, you must keep in mind that the article is an exploration of theoretical limits, not practical ones. Practical considerations are well and good if you want to actually build such devices, but that isn't what I was intending to explore. What I wanted to talk about (and did talk about) are the absolute maximum speed limits for computers. These are almost guaranteed to be ridiculous and impractical, but as a limiting case, I think that they are still interesting.

    The calculation is based on the idea that 1kg of matter has a certain maximum energy associated with it, and that maximum energy is given by Einstein's formula. Because it turns out that the theoretical speed limit of a 'computer' (which as the term is used in the article is simply anything that processes information, basically - particular architectures aren't considered) can be related to the time-energy uncertainty from quantum mechanics, it is then necessary to find out how much energy a given lump of matter can contain. And that's given by the whole E = mc^2 business.

    Of course this limit is not practical. It's a theoretical upper bound. I haven't the faintest idea as to how you'd go about converting 1kg of matter into energy controllably (without, say, temporarily warming up the climate of the city you're working in), or how you'd control it enough to make it compute something you're interested in, and so on. The point isn't to look at the practical limits (those are better looked at from the perspective of current technology, i.e., Moore's law and whatnot, in my opinion), but rather the general theoretical limit.

    Just as a note, you may want to look at Lloyd's paper, as the ideas for the calculation are his, and I'm just summarizing and reporting them. (Lloyd's paper, by the way, is very well written, and it's recommended reading for just about anyone who isn't scared away by some equations).

    But if the above explanation doesn't satisfy you, please post why, and perhaps you can convince me that I (and Lloyd) are in error.

    Cheers, -Geon

    geonSPAM@arsISBORINGtechinca.com

  • Well... I meant that there is a physical limit to the thermodynamics that control computers, just as there is a physical limit to the size at which you can design a transistor.

    In a sense, we're both right. You're right, my analogy is a poor one, but it still serves its purpose. ;-)

    Thanks
  • One problem to note with reversible computing is that your computer has to have enough memory to store every bit of information ever entered into it. I'm pretty sure that it can compress the info, but eventually you'll have to have an energy-expensive and very hot 'information dumping' process. I say it's a problem, but of course normal computing requires the same thing, and doesn't let you choose when you do it.

    Actually, no. It seems that way (that's what I thought at first too). But that's the cool part about it... you simply reverse the process by reversing the direction that you increment the Program Counter. I'll give a short example in pseudo assembly:

    ; assume that $1 and $2 are both 0
    ADDI $1 32
    ADDI $2 24
    ADD $1 $2
    ; at this point $1 should hold 56.
    ; and at no point have we stored anything in memory
    ; ok.. now let's do these instructions in reverse.
    ; normally, we wouldn't manually do it this way
    ; because the CPU would switch it, but this works
    SUB $1 $2 ; ok.. $1 now contains 32
    SUBI $2 24 ; now $2 contains 0
    SUBI $1 32 ; now $1 contains 0

    This is a fairly primitive example, but it proves the point. The other thing is that there cannot be a MOVE instruction. Memory can be exchanged with a register value, but not copied. So, all you have to do is run the instructions backwards... I promise!!!
  • botched the assembly... sorry:
    ; assume that $1 and $2 are both 0
    ADDI $1 32
    ADDI $2 24
    ADD $1 $2
    ; at this point $1 should hold 56.
    ; and at no point have we stored anything in memory
    ; ok.. now let's do these instructions in reverse.
    ; normally, we wouldn't manually do it this way
    ; because the CPU would switch it, but this works
    SUB $1 $2 ; ok.. $1 now contains 32
    SUBI $2 24 ; now $2 contains 0
    SUBI $1 32; now $1 contains 0
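    The repost can be checked mechanically. A small simulator (mnemonics chosen to match the pseudo-assembly above; my own sketch, not any real ISA) runs the program forward, then derives the inverse by flipping each opcode and reversing the order:

    ```python
    MASK = (1 << 32) - 1  # model 32-bit registers with wraparound

    def step(regs, instr):
        op, a, b = instr
        if op == "ADDI": regs[a] = (regs[a] + b) & MASK
        elif op == "SUBI": regs[a] = (regs[a] - b) & MASK
        elif op == "ADD": regs[a] = (regs[a] + regs[b]) & MASK
        elif op == "SUB": regs[a] = (regs[a] - regs[b]) & MASK

    INVERSE = {"ADDI": "SUBI", "SUBI": "ADDI", "ADD": "SUB", "SUB": "ADD"}

    def run(program, regs):
        for instr in program:
            step(regs, instr)

    def unrun(program, regs):
        # inverse opcodes, opposite order: no history tape required
        for op, a, b in reversed(program):
            step(regs, (INVERSE[op], a, b))

    prog = [("ADDI", "$1", 32), ("ADDI", "$2", 24), ("ADD", "$1", "$2")]
    regs = {"$1": 0, "$2": 0}
    run(prog, regs)
    forward = dict(regs)   # $1 == 56, $2 == 24, as in the listing above
    unrun(prog, regs)      # both registers wound back to 0
    ```

    Because every instruction is a bijection on the register state, the inverse program alone rewinds the machine; nothing ever needs to be saved.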

  • Heh.. Thanks...

    but the research I'm doing /is/ with Dr. Frank. Actually, I have a meeting with him in half an hour to discuss this topic.

    Intermediate state data does /not/ need to be saved. While it is important that state data is not destroyed, it is only necessary to guarantee reversibility at the hardware level.

    State data is necessary if you're performing irreversible operations.
    Does it have something to do with the way the CPU handles overflow for certain instructions? Or did I just convert an irreversible algorithm? ..remember you heard it here first!.. or are you just talking out of your butt? :)

    You hit it right on.. the overflow would be bad. Anytime there is a loss of data, reversibility won't work (unless that data is saved somewhere).
  • by whydna (9312) <whydnaNO@SPAMhotmail.com> on Friday June 22, 2001 @05:22AM (#132885)
    I've just joined a research group at my University to study reversible computing [ufl.edu]. The professor in charge wrote his doctoral thesis on the subject at MIT.

    The concept is that a "normal" CPU erases information on every cycle (clearing registers, overwriting data, shifting data to nowhere, etc). When a CPU erases information, the energy is dissipated as heat. There are thermodynamic limits to this (kinda like Moore's law). So, if a computer could be designed not to erase data, you could reverse the CPU and get most of your energy back.

    Now before you say "BS", think about it. In physics, if you know the initial state (starting position, velocity, acceleration) of an object in an isolated system, you can easily compute where it was at any given time earlier. This uses the same concept. For example, If you add 43 to a register, you can subtract 43 from that register and get your energy back.

    Of course, certain instructions don't lend themselves to reversibility. For example, bit shifting is inherently irreversible. One option is to maintain a stack of "garbage data", but that's a poor solution. On the other hand, a number of instructions are reversible by default: XOR is always reversible, etc. So, a reversible CPU will probably have a more restrictive instruction set, but is still functional.

    Reversibility is not anything new, but it does take a shift in thinking. Algorithms can be designed to run very efficiently on reversible computers, but it takes a bit more effort. Hopefully, we (the community of people studying reversible/adiabatic computers) will develop means of either converting irreversible algorithms or making them less inefficient (double negative).

    -Andy
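
    The XOR-versus-shift point is easy to demonstrate (plain Python here, not any particular CPU):

```python
# XOR is its own inverse: applying the same mask twice restores the
# input, so no information is lost.
x, mask = 0b1011, 0b0110
assert (x ^ mask) ^ mask == x

# A left shift on a fixed-width register drops the top bit: two distinct
# inputs map to the same output, so the operation cannot be run backwards.
WIDTH = 8

def shl(v):
    return (v << 1) & ((1 << WIDTH) - 1)

assert shl(0b10000001) == shl(0b00000001) == 0b00000010
```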
  • by ch-chuck (9622) on Friday June 22, 2001 @06:48AM (#132886) Homepage
    "640K ought to be enough for anybody. "
    - Bill Gates (1955-), in 1981
  • These kind of articles remind me of the futile
    medieval debates on how many angels can dance
    on a head of a pin. Same sub-arguments too-
    whether angels are material (atoms) or immaterial
    (photons, quantum states), and so on.
  • The other thing is that there cannot be a MOVE instruction. Memory can be exchanged with a register value, but not copied


    One point; the issue with MOVEs is not the fact that you are reading a value in the source register, but rather that you are destroying the value stored in the destination register.


    Now, perhaps you intended that interpretation of the word "copied". But when I think of the word "copy", I normally associate it with the reading of the old stuff, not with destroying the new stuff.

  • You'll just need to use an encoded link to your base station. Or you could carry it in a fanny pack.

    That could add new meaning to: "I'm losing my mind!"

    Caution: Now approaching the (technological) singularity.
  • "Knowledge is unlimited"


    That's an interesting conjecture. Any idea how you would go about proving it? Personally I don't believe it. It's like saying "there are an unlimited number of fish in the sea" or "there's an unlimited number of trees".

    Well, there is a large number of fish in the sea, but we're taking them out faster than they grow back, so there won't continue to be a large number. There used to be a large number of trees. But we cut them down faster than they grew back, and now every tree is owned by some person or organization. And the number is still decreasing rapidly.

    Personally, I don't think that anything in the universe is unlimited. Stupidity has been suggested, but even there I have my doubts. Still, there's probably a better argument to be made for an unlimited amount of stupidity than for an unlimited amount of unknown things to be learned. In fact, I doubt that the total number of things in the universe that can be known is as large as the powerset of the cardinality of the number of non-virtual elementary particles in the universe. And probably not even as large as the power set of the cardinality of the number of electrons in the universe. Now that's a LARGE number, but it sure isn't unlimited.




    Caution: Now approaching the (technological) singularity.
  • The light speed limit is for light in a vacuum. Nothing moves faster than c. Plenty of things move through cesium vapour faster than light.
  • Imagine a Beowulf cluster of those things.

    It'll suck the paint off your house and give your family a permanent orange afro.

    -B
  • keep in mind that the fundamental physics limitations are based on several physical theories - not absolute, god-given truths. we have come a long way since newton.

    second, keep in mind that the current physical theories generally pose more questions than answers, i.e. we know that there are more things that we don't know than things that we know.

    we must remember that our theories are useful tools for many things, but we must also remember that they are just theories. if we don't, then all we have done is replace God with Science.

    my physics professor in College said to me "a good theory is one that can be proven wrong". i still don't quite understand what he meant by it, but maybe it was "i'd rather know something is wrong for sure than not know whether it's true or false - and theories, by their very nature, can never absolutely be proven true"
  • i like this idea: it's possible that Moore's Law actually is a law of nature, while quantum physics, general relativity and the speed of light are all theories that will crumble eventually.

    not likely. but possible.
  • See: Perpetual motion machine.

    If you know an object's initial state, you can compute where it's been. This is not the same as putting it back in its initial state.

    If you add 43 to a register, you've had to drain the capacitance from some circuits and charge up others. The capacitance charge is drained to ground, and you will have to expend energy to move those electrons (or some others) back to Vcc. Sorry, no way around that with modern electronics.

    Reversible algorithms may make some types of computing efficient, but not in the 'energy' efficient sense.
  • Performing 5 * 10^50 operations per second with 1 kg of computer is so far beyond anything we have today. What would we use that speed for?

    I was thinking about the same subject from the other direction recently. For my research [umich.edu] I am trying to simulate the growth of silicon germanium crystals. The usual way of approaching a physics problem like this is to make simplified approximations of what actually happens. For example, hard-to-model processes like diffusion are measured by experiment or found from standard relations. But these simplifications can yield a model which is subtly wrong and fails to predict unexpected behavior.

    What I really want to do is model the entire crystal growth from the most fundamental physics (quantum mechanics or something like string theory). That way the simulation is not just an approximation of reality but is indistinguishable from reality! How fast of a computer do I need for this full-detail simulation?

    Well, that depends on the size of the system I'm trying to simulate. How big of a computer do I need to model a 1g chunk of matter? Logically, it seems that at least 1g of computer would be required to model 1g of arbitrary material in real time. (Otherwise, I could simply model a 1g computer with a 0.5g one, which in turn is actually simulated on a 0.25g computer and so on.) So, perfectly simulating a 10g crystal would take 10g of "ultimate laptop" processing. But if, like the article mentions, the laptop uses ordinary matter rather than plasma then the computation rate is 10^40 ops/second rather than 10^51 ops/second. This implies that my 1kg super laptop would need 31.7 years to simulate 1 second of growth for a 10g crystal! How much computation would be required to mimic The Matrix? To perfectly simulate life on Earth would require a computer at least as large as the Earth itself!

    So, although 2 GHz processors sound fast and the ultimate laptop in this article seems unfathomable, we have many applications today that can take all the computation speed available. A near future application for ultracomputing is the modelling of protein folding for drug design. There, the amount of matter being simulated is small fractions of a gram and the leveraging of computer weight and time is worthwhile.

    AlpineR
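
    AlpineR's 31.7-year figure can be checked with rough arithmetic, taking the article's numbers at face value (~5.4 x 10^50 ops/s per kg for the ultimate laptop, ~10^40 ops/s for one made of ordinary matter):

```python
ULTIMATE_OPS_PER_KG = 5.4258e50  # ops/s per kg, from the article
LAPTOP_OPS = 1e40                # ops/s for an ordinary-matter laptop
crystal_kg = 0.010               # the 10 g crystal being simulated

# One simulated second of the crystal needs ~10 g worth of "ultimate"
# computation; divide by the laptop's actual rate for wall-clock time.
ops_per_sim_second = crystal_kg * ULTIMATE_OPS_PER_KG
wall_seconds = ops_per_sim_second / LAPTOP_OPS
years = wall_seconds / 3.156e7   # seconds per year
print(round(years, 1))           # → 17.2
```

    That lands within a factor of two of 31.7 years (which corresponds to rounding the required rate up to 10^49 ops per simulated second); either way, it's decades per simulated second.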

  • The problem I have with this discussion is that it involves computing at the sub-atomic level. We are getting better at this kind of thing -- for example measuring the spin state of a single atom (Nanodot discussion is here [nanodot.org]). But because we have lots of examples of what can be done using atomic-scale engineering (Nature provides many examples of this), and we have no examples of sub-atomic scale engineering, I deeply doubt we will have robust computers operating at sub-atomic size scales anytime soon.

    It is worth noting that Lloyd's thought experiments in these areas were preceded by similar speculations over 4 years ago in Anders Sandberg's [nada.kth.se] paper The Physics of Information Processing Superobjects: Daily Life Among the Jupiter Brains [transhumanist.com]. Lloyd has extended them a bit by bringing Black Holes into the picture.

    Now, what we will be able to engineer in this century, using diamondoid molecular nanotechnology, is solar system sized nested layer Dyson shell [aeiveos.com] supercomputers. This is a unique architecture that I have named a Matrioshka Brain [aeiveos.com]. It will allow us to most efficiently use the entire power output of the sun and compute somewhere in the range of 10^42 to 10^52 ops per second.

    Interestingly enough, Michael Frank has a paper "Reversibility in optimally scalable computer architectures" [ufl.edu] which postulates a solar system sized reversible architecture that would out-compute any non-reversible architecture. This too would be using atomic-scale engineering. Unfortunately it requires the power output of an A or B class star (~50,000 suns) and requires an amount of silicon equal to the mass of Saturn (our solar system doesn't even come close to having that unless we mine the sun for it). After we have developed machines of these architectures, our development comes to a slow halt unless our ability to do sub-atomic engineering can be developed. I'll be quite happy with what we can get out of atomic-scale engineering -- it supplies enough computronium for roughly a trillion-trillion human minds for those who choose to upload [minduploading.org].

  • You need to go study this topic in much more detail. Read some of the papers by Michael Frank [aeiveos.com], he is one of the world's experts in this area. Then go study some of the original work by Charles Bennett [aeiveos.com].

    You do need to save the intermediate state information. That is why Frank's design for the ultimate reversible computer is so big (see my other post on this topic). You should keep in mind that you can compute for "free" (if you do it slowly); it's erasing bits that costs you money (generates entropy as heat). Logical AND operations erase information such that you cannot run the calculation backwards. You have to design the hardware such that it has no such information-destroying operations.

  • The fact that we are accelerating towards the Vinge Singularity [aeiveos.com] is ignored. Kurzweil is extensively documenting this at KurzweilAI.net [kurzweilai.net] (unavailable today). Various estimates place this between 2020 and 2050. Once we have autonomous self-replicating systems (nanobots), we can dismantle the planets [aeiveos.com] and turn them into computronium. We will come very close to the limits of the computational capacity allowed within our solar system within this century. Even if this fast ramp is not realized, you underestimate the progress that will be made in extending the human lifespan. We have the genome now; we will rapidly decode much of it over the next decade and begin to design and implement anti-aging therapies. Unless you are over 50 years old or will reject the use of such technologies, the probability that you will make it to 200 is quite high. Sorry, you have to face the problem. :-)
  • The title is correct, Knowledge is unlimited, but the rest of the post is not; read the short essay Can the Universe be Known?, by Carl Sagan. (I think that's the essay name)
  • Now everyone knows that the NSA is after my barbecue sauce recipe stored in a pgp encrypted file. Of COURSE they'll create a black hole computer just to get it. After all, that barbecue sauce IS kinda red!
  • >For all you know, there are states "below absolute zero"

    Not possible by the definition

    That's like saying something can go slower than 0 mph. Not possible, since speed (not velocity) is always positive. Since temperature is just a measurement of how fast "stuff" is moving around, and absolute zero is when nothing is moving, you can't get colder. Saying otherwise just shows your ignorance.
  • I'm not an expert, but some rudimentary examination and common sense indicate to me...

    you don't need to actually reverse the operations to 'recover' energy in reversible operations. Reversible operations don't destroy information, and thus don't incur the minimum entropy increase caused by destroying information. This means that it is theoretically possible for them to dissipate arbitrarily small amounts of heat, and thus consume arbitrarily small amounts of energy.

    In other words, reversible operations don't allow recovering the energy, they make it possible to not have to pay the energy in the first place.

    One problem to note with reversible computing is that your computer has to have enough memory to store every bit of information ever entered into it. I'm pretty sure that it can compress the info, but eventually you'll have to have an energy expensive and very hot 'information dumping' process. I say it's a problem, but of course normal computing requires the same thing, and doesn't let you choose when you do it.

    Yaknow, maybe I should've blasted the guy and removed any indications of uncertainty in my reply. That's the only way I've ever gotten modded up...

    Regards,
    Bobby

    Bobby Martin aka Wurp
    Cosm Development Team

  • Hrm, I see now. The only requirement is that, if given the same data in reverse order, and you run the program in reverse, you get back the original data. I can see that doesn't require that all of the data put in be stored by the computer. All it says is that the state at any given time can be recovered if given all the input between then and now.

    Very nice! I really should investigate this more sometime, but there aren't enough hours in the day.

    Thanks for the correction.

    Bobby Martin aka Wurp
    Cosm Development Team

  • Landon Noll, who is probably best known to slashdotters for his work on LavaRand, has done some work calculating the limits of how many bits a cryptographic key has to have to be immune to brute-force searching in this universe. As I recall, he was going to publish it in Scientific American, but I can't seem to find it.

    At any rate, taking into account such issues as your computer crushing itself into a black hole if it gets too massive, IIRC, Landon concluded that a key of about 530 bits is Really Safe.

    - jcr
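
    A back-of-envelope version of that argument (the universe's mass and age here are assumed round numbers, not Landon's actual inputs):

```python
import math

OPS_PER_SEC_PER_KG = 5.4258e50  # ultimate-laptop bound from the article
UNIVERSE_KG = 1e53              # assumed mass of the observable universe
UNIVERSE_AGE_S = 4.3e17         # ~13.7 billion years, in seconds

# Total operations if the whole universe computed at the ultimate rate
# for its entire age, expressed as an equivalent key length in bits.
total_ops = OPS_PER_SEC_PER_KG * UNIVERSE_KG * UNIVERSE_AGE_S
bits = math.log2(total_ops)
print(round(bits))              # → 403; well short of 2^530 trials
```

    So a 530-bit key leaves well over a hundred bits of headroom even against a universe-sized ultimate computer.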
  • The physics limitations apply to massively parallel machines as well. Read the article. Multiple CPU's still work by changing state. If you've got a million CPU's, and together they weigh less than a kilogram, they all have a collective maximum number of state changes per second as computed in the article for an arbitrary device of that mass.
  • > there might be very different approaches
    > (regarding computational power) to use the
    > physics for building fast computers

    Different from what? Like I said, the discussion relied on no particular form of technology. The only assumption is that a computer system must "change state" in order to perform any computation. And a computer is basically a "finite state machine". And if you think a computer can compute something without changing state, then how will you know when it's produced the intended result?
  • I think you may have missed my point, though. Moore's law is sort of an "anti-limit". It's exactly unlike the physical "laws" which place limits on things because Moore's law implies that there is no limit (doubling every 1.5 years, yada yada yada).
  • Who says that a complex problem needs a high number of state changes?

    Well, we could argue about how complex is "complex", and how high is "high", but that's missing the point.

    The outcome of a travelling salesman problem is not a state change, unless you think you can solve the problem by flipping a bit. Even if you use the physical analogue of creating lengths of string tied together at the various city nodes and pulling the "cities" in question until the strings are tight and selecting the tight path, every atom moving in the process is a state change.

    And even if you prove that P=NP, you still need an algorithm that traverses the data, examines it, compares different links, etc., and every step in the algorithm is AT LEAST a state change.
  • by volpe (58112) on Friday June 22, 2001 @05:28AM (#132910)
    > There are thermodynamic limits to this (kinda
    > like Moore's law).

    I think you meant to say, "almost, but not quite, entirely UNlike Moore's Law".
  • by volpe (58112) on Friday June 22, 2001 @05:14AM (#132911)
    No, the limitations that technology can overcome are engineering limitations. The limitations talked about in the article are basic fundamental physics limitations that don't depend on any particular form of technology. Note that nowhere is it said that the problem is the size of the tracings on the microchip, or heat dissipation, or whatever. It's all a matter of any physical system having a bounded energy having a corresponding bounded rate of state change. Saying that there will be another technological revolution that surpasses this is like saying we'll be able to cool things below absolute zero when we figure out how to build better condensing coils for our refrigerators.
  • Uh...you are responding with a reference to the wrong Moore's Law. The one this guy is talking about is a law of Thermodynamics by Grimmey Moore, not Intel founder Gordon Moore.

    http://www.google.com/search?hl=en&safe=off&q=Moore%27s+Law+Thermodynamics

    Context is EVERYTHING sometimes people, think before you post.
  • Oops, wrong link...sorry, here is the right one...

    http://www.c3f.com/nty0129.html

    It is the first link on the Google page.

    Science at its best.

    Gimmey Moore forever!
  • Actually, 1 L^3 = 10^-9 m^9. So tarsi must have a 9-dimensional brain, but it's a bit small. Which could explain why his brain doesn't sound like it works any better than ours, despite having so many extra dimensions to work with.
    --
  • From the: Damn-that's-neat-but-what's-the-point dept.

    Wow! That's some neat physics (only a part of which I understand). But really do you think we'll need to get anywhere near these sizes and amounts?

    The time will come when the theory has advanced far enough that we'll drop the von Neumann style of doing computing and go with something a bit more, shall I say, better? The human brain certainly doesn't have anything near those figures of capacity, and it's about 1-2kg, occupies about 1 L^3 of space.

    And I don't know about you, but I LOVE the graphics. They are kicking some major ARSE. The refresh rate could be a bit higher, though, I still get blurry vision when stumbling home from the bar. :)
  • It's not. Remember that the 2nd law is only probabilistic. There's nothing stopping entropy from decreasing in a closed system; it's just that for unbound systems of large numbers of particles a decrease in entropy is stupendously unlikely to happen.

    In any case, the underlying definition of the entropy of a system at a given energy is Boltzmann's constant k times the natural log of the number of states the system can assume at the given energy. For the dipole system, the energy of the system is proportional to the number of anti-parallel dipoles times the strength of the imposed field, i.e. E = C*Nap*M, or Nap = E/CM. So if the system has energy E, you know how many dipoles are anti-parallel, but not which ones. If there are N total dipoles and Nap of them anti-parallel then there are N!/(Nap!(N-Nap)!) ways to arrange those Nap dipoles and hence there are that many states. If you examine the behavior of that function (or its natural log), you'll see it increases from Nap=0 to Nap=N/2 and then decreases from Nap=N/2 to Nap=N. This means that the entropy increases as total energy E goes from 0 to (N/2)(C*M) but then decreases as E goes from (N/2)(C*M) to N(C*M).

    As I said, see a statistical mechanics textbook.
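
    The binomial counting above is easy to check numerically (a minimal Python sketch):

```python
# S(E) = k * ln(Omega), where Omega for n anti-parallel dipoles out of N
# is the binomial coefficient C(N, n). ln(Omega) rises until n = N/2 and
# then falls, so dS/dE goes negative past the halfway point.
from math import comb, log

N = 100
ln_omega = [log(comb(N, n)) for n in range(N + 1)]

assert ln_omega.index(max(ln_omega)) == N // 2  # entropy peaks at n = N/2
assert ln_omega[10] < ln_omega[50]              # rising branch: T > 0
assert ln_omega[90] < ln_omega[50]              # falling branch: T < 0
```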
  • That's not true, actually.


    Temperature is defined thermodynamically as:

    1/T = dS/dE

    where S=entropy and E=energy.


    It is possible (and not merely theoretically) for entropy to decrease as temperature increases, in which case temperature does go negative. Weirdly, negative temperatures are considered hotter than any positive temperature. Thankfully (?) this sort of thing can only happen when you have very constrained systems -- the classic example is N magnetic dipoles constrained such that any dipole can only point parallel or anti-parallel to an applied magnetic field. If they are all parallel (lowest energy state) and you apply energy, they will begin to flip over. As they flip over, the entropy of the system rises with the increasing energy, so the system's temperature is positive. But once half the dipoles have flipped and the increasing energy drives more and more of them into the anti-parallel state, the entropy starts decreasing with increased energy, and temperature goes negative.


    Don't believe me? :) Pick up a statistical mechanics text and have a look...

  • I'm trying to get some figures, but I believe that is considerably more instructions in a single second than the aggregate computations, over their entire lifespans, of every microchip ever built and operated.
  • by selectspec (74651) on Friday June 22, 2001 @05:12AM (#132919)
    5.4258 * 10^50 maximum operations/sec in a 1kg chunk of matter.

    hmmm

    That is the equivalent of 542,580,000,000,000,000,000,000,000,000,000,000,000,000 (about 5.4 x 10^41) 1 GHz CPUs.

    I think we're covered for awhile.
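
    The arithmetic, for anyone who wants to double-check the zeros (assuming one operation per clock cycle for the 1 GHz CPU):

```python
ops_bound = 5.4258e50   # max ops/s for 1 kg of matter, from the article
ghz_cpu = 1e9           # ops/s for a 1 GHz CPU, one op per cycle assumed
print(f"{ops_bound / ghz_cpu:.4e}")  # → 5.4258e+41 one-gigahertz CPUs
```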

  • They are able to transfer information faster than the speed of light, apparently.

    About 300 times as fast at the moment, through cesium vapour. (Apparently it has a negative refractive index...)

    Also, what about non-clocked machines (massively parallel logic machines) such as the one NASA just bought?

    The limits are a bit near-sighted, methinks.

    D.
  • Nothing with mass, however.
  • That is what I also thought. (I only did basic relativity at University... and that was some time ago...)

    But check this out, and please comment on it:

    http://www.time-travel.com/gas.htm
  • Does it really matter since you only need the laptop to enter and display information? As long as the device can communicate information with any larger box (or distributed groups of boxes) it can ignore all of the "limits" mentioned. Treating the ultimate laptop like an enclosed object that cannot communicate limits the power of the laptop immensely and unfairly.
  • In his article he claims that "The maximum energy an ultimate laptop [1kg] can contain is given by Einstein's famous formula relating mass and energy: E = mc^2. Plugging in the laptop's mass, the speed of light, and combining the result with Eq. 2 tells us that the maximum number of operations per second a 1 kg lump of matter can be made to perform is 5.4258e50."

    I'm assuming this is at 9 x 10^16 J per second, which means that to make his "ultimate laptop", he would have to split the atoms of 1 kg of material per second... which means he would need to carry a large nuclear power plant around with him (even then, I don't think they go through 1 kg/s).

    What he fails to understand is that Einstein's formula is an equivalence, not a potential. Maybe that is the maximum energy a mass can have, but to get at that energy (in J/s) you would have to split enough atoms that that mass was lost (your 'laptop' would get 1kg lighter every second). Unfortunately, his whole article is based on this principle, so you can't use anything he says unless you plan to sustain a nuclear reaction which loses 1kg/s in fission to power this "ultimate laptop".

    He correctly used the values in the formula, but he didn't apply it correctly. Maybe he should have done a bit more research.
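
    For reference, the 5.4258e50 number itself appears to come from the Margolus-Levitin quantum speed limit, ops/s ≤ 2E/(pi*hbar), combined with E = mc^2 (the "Eq. 2" the quote refers to isn't reproduced here, so treat this as an inference):

```python
import math

m = 1.0                       # kg, the "ultimate laptop"
c = 2.99792458e8              # m/s, speed of light
hbar = 1.054571817e-34        # J*s, reduced Planck constant

E = m * c**2                  # total mass-energy, ~9e16 J
ops_per_sec = 2 * E / (math.pi * hbar)
print(f"{ops_per_sec:.3e}")   # → 5.426e+50, matching the article's figure
```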
  • by Chester K (145560) on Friday June 22, 2001 @08:24AM (#132942) Homepage
    If this picture is correct, then black holes could in principle be 'programmed': one forms a black hole whose initial conditions encode the information to be processed, lets that information be processed by the planckian dynamics at the hole's horizon, and extracts the answer to the computation by examining the correlations in the Hawking radiation emitted when the hole evaporates.

    Wow! Imagine if we could make a computer as large as Earth... I believe a computer that big could calculate the answer to the question of the meaning of life, the universe, and everything!

    And don't even get me started on what we could do with a Beowulf cluster of those things...
  • Every time I read an article about a limit in some area of computing (network speed, storage, CPU speed, stupidity of A. Grove), it seems as if it's the last sign that a new method/paradigm is on the horizon, with a significant breakthrough coming.

    Bring it on!
  • Actually, the article doesn't talk about quantum computing at all.

    The only reason for quantum mechanics in this article is the fact that quantum mechanics gives a lower bound for miniaturisation (i.e. you can only keep making computer parts smaller until you get problems with the Heisenberg Uncertainty Principle)

    The article even specifically states that it doesn't refer to a special type of architecture.

    --
    This signature has been deprecated

  • by cvd6262 (180823) on Friday June 22, 2001 @05:32AM (#132953)
    I interned at the Storage Systems Division of IBM in San Jose, CA. We had a brown bag seminar where somebody big (his name escapes me) spoke on the future of magnetic storage.

    He had a great graph of the last 30+ years of GB/square inch, which seemed to coincide with Moore's Law (which, just like this article, addressed processing issues, I know. Bear with me here.). There were red lines drawn every ten years or so representing what scientists had believed to be the superparamagnetic barrier - the point at which it would be physically impossible to cram any more data onto a disk.

    The guy had a great line every time one of these came up. "In 19XX Dr. XYZ at ABC University discovered the superparamagnetic barrier.... We broke it X years later." (X was usually a single digit.)

    My point is that it will be interesting to watch whether these "scientific" findings will require revision. True, this one may be based on sound scientific principles, but so were all the attempts to predict the superparamagnetic barrier.

  • This bit from towards the end of the original paper is interesting in this regard.

    No information can escape from a classical black hole: what goes in does not come out. [...] Recent work in string theory suggests that black holes do not actually destroy the information about how they were formed, but instead process it and emit the processed information as part of the Hawking radiation as they evaporate: what goes in does come out, but in an altered form.

    If the latter picture is correct, then black holes could in principle be `programmed': one forms a black hole whose initial conditions encode the information to be processed, lets that information be processed by the Planckian dynamics at the hole's horizon, and gets out the answer to the computation by examining the correlations in the Hawking radiation emitted when the hole evaporates. Despite our lack of knowledge of the precise details of what happens when a black hole forms and evaporates (a full account must await a more exact treatment using whatever theory of quantum gravity and matter turns out to be the correct one), we can still provide a rough estimate of how much information is processed during this computation.

    "Black Hole" computers sound like a little beyond the limits of the current technology. However, It sheds light on possibilities of the Galactic Structure.

    ;-)

    Check out the Vinny the Vampire [eplugz.com] comic strip

  • huh?

    The science fiction story angle on it is that eventually all black holes are computers.

    Also, it is considered that all galaxies have black holes in the center.

    So galactic cores are data centers, among other things.

    Bit of a stretch, but it would be a science fiction story any how.

    Check out the Vinny the Vampire [eplugz.com] comic strip

  • A 1 kg black hole computer is ~10^51 operations per second.

    1 MHz = 10^6 operations per second; 1 GHz = 10^9 operations per second. Moore's law seems to go for speed now too: about 10 times faster in 5 years. So we still have to go for about 200 years before we reach this speed.

    I do not think I'll live for another 200 years. No problem there.

    Damn! The sun does not go black hole for some 10^9 years. No luck here......
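
    The 200-year extrapolation, spelled out (using the post's own assumption of a tenfold speedup every 5 years):

```python
import math

current = 1e9        # ops/s: a 1 GHz CPU today
target = 1e51        # ops/s: the 1 kg black-hole computer
tenfold_every = 5    # years per 10x speedup (the post's assumption)

# Orders of magnitude to cover, times years per order of magnitude.
years = tenfold_every * math.log10(target / current)
print(round(years))  # → 210
```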

  • For example, If you add 43 to a register, you can subtract 43 from that register and get your energy back

    Can you imagine:

    "Oh man. I had a bug in my program, and accidentally substracted more from my registry that what I originally added. Now my computer is literally frozen. Does anyone have an icepick?"
  • Isn't dark matter stuff we can't observe by its emissions? Like the moon?

    Since no one else responded...

    Dark matter is part of an astronomical theory. When the astrophysicists calculate how fast the universe is expanding, and how much matter is in it, they come up short - they figure there should be about 10 times as much matter as they can account for. Dark matter is the additional stuff that the scientists haven't observed. Either the theory is wrong, or there's a lot of stuff out there that doesn't radiate.

    Then there are black body objects in physics, which (if I remember correctly) absorb all light that shines on them. They also emit energy in the form of light, and the light's spectrum matches quantum mechanical theory (in fact, one of the reasons for coming up with the quantum theory is to explain black-body radiation). The sun is the closest black-body object (I know, it seems a little confusing, but some definitions are counter-intuitive).

    But that may not be your question. Scientists can observe other stars directly, black holes and dust clouds somewhat indirectly, and anything else that emits energy or blocks light or bends it (anything of significant mass). We've even reached the point that we can detect massive planets orbiting other stars by the way they affect the star they orbit. But even if every star had 10 Jupiters, that would only be about 10% or less of the mass of the star. Dark matter, if the current calculations are correct, would outnumber stars 10 to 1 - and we should be able to see the effect.

    This either means we have to look elsewhere, or change the theory. If a bunch of aliens kept some massive, galaxy-sized black holes around, at a significant distance from other galaxies, then that may account for dark matter. And what else would the aliens store in those black holes?

  • ...we still have to go for about 200 years before we reach this speed.

    I find it interesting that, for the most part, Moore's law has been an accurate indicator of future computing speed, and the accompanying engineering and theoretical knowledge needed to reach that speed. It is amazing to think that, if the law holds up, in 200 years we would have the physics and engineering needed to build such a computer, and perhaps the knowledge to know what steps to take to make an even faster computer!

    The logical solution seems to be to tap into parallel universes (quantum calculations done in a massively parallel fashion?). It also seems that intense shielding of some sort would be needed, both to keep quantum influences out, and to keep the user from being incinerated. Unless you didn't care, because your computer was outputting to some insignificant parallel universe. Even line-of-sight lasers would be too low-bandwidth, since the output would be 3-D light.

    Just think, about 300 years from the first radio signals to black-hole computers - no wonder Seti@Home is failing - all the aliens are playing CounterStrike on their black-hole systems! Who would have guessed that the dark matter was simply off-site storage for 5-D pron?!?
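    The "about 200 years" figure above can be roughly sanity-checked; a minimal sketch, assuming a 2001-era processor does ~10^9 ops/s and Moore's law doubles that every 18 months (both figures are rough assumptions, not from the article):

```python
# Sanity check: how many Moore's-law doublings separate a ~10^9 ops/s
# circa-2001 CPU from Lloyd's ~5.4e50 ops/s "ultimate laptop" bound,
# and how long that takes at one doubling every 18 months.
import math

current_ops = 1e9            # assumed ops/s of a 2001-era processor
ultimate_ops = 5.4258e50     # Lloyd's bound for a 1 kg computer (from the article)
doubling_period_years = 1.5  # assumed Moore's-law doubling time

doublings = math.log2(ultimate_ops / current_ops)
years = doublings * doubling_period_years
print(f"{doublings:.0f} doublings, ~{years:.0f} years")  # ~139 doublings, ~208 years
```

    So "about 200 years" is roughly what a straight-line Moore's-law extrapolation gives.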

  • by gergi (220700) on Friday June 22, 2001 @05:08AM (#132969)
    The author indicates that computing is limited by quantum mechanics and that we have quite a while (many, many years) until we reach that limit. Well, I suspect that many, many years in the future, researchers will have found yet another way to perform 'computer processing', faster and more efficient than quantum processing.
  • Every year we seem to think we know everything there is to know about physics, biology and any other science. We are convinced that our current theories are laws of nature. And every year some discovery shatters that belief in a given discipline.
  • These kinds of articles remind me of the futile medieval debates on how many angels can dance on the head of a pin

    Of course, the real futility of the argument is that there is no way to demonstrate the existence of angels, let alone count their numbers. Subatomic particles, however, leave visible tracks in the cloud chambers of particle accelerators. Though it's impractical to count them, it's not impossible, and their behavior, while it is inherently uncertain, is not entirely random, and can be predicted to a degree.

  • by Bonker (243350) on Friday June 22, 2001 @05:14AM (#132984)
    "What was that?"

    "Ah, just another script kiddie trying to DOS the database."

    "I don't understand. He just upped and exploded."

    "Yeah, his quantum computer heated up to the temperature of a supernova and then collapsed in on itself like a black hole. Happens all the time."

    "Really?"

    "You should see it when they try to encode movies with DivX!"


  • While corporate computing needs are never satisfied by today's fastest machines, for the bulk of small businesses and homes, at what point have we reached overkill? I mean, look at Windows 3.1 compared to XP. Besides the web browser and related stuff (Media Player, etc.), what has really changed? I mean - what is it USED for? Most people use them for games, personal finance, letter writing, general record keeping, etc. Think about the programs most people use on a regular basis: web browser, office suite, finance package, MP3 player, etc.

    All Microsoft and other OS developers seem to be able to do is add lots of features that never REALLY get used and a few that do make high-impact improvements. But do smart tags, the start menu, right-click context menus, etc. really require massive improvements in processor speed?

    I can't help but think that Win2K on my Pentium III 700 laptop is using the bulk of the resources just to RUN vs the load placed on it by any apps I'm using. That seems to make no sense.

    So that begs the question: the whole idea of Linux from the start was a free Unix that ran well on OLDER (cheaper, widely available) PCs. Even today that is still true. So if Linux continues to be accepted and moves into the desktop mainstream someday - will that affect the push on PC technology?

    It's striking that for less than an Apple I cost in 1977, I built a 1GHz Athlon server with the latest gadgets (SCSI RAID, LCD-monitored drive sleds, PC133 SDRAM, etc). A PC with this much power is staggering - even compared to boxen from a year or two ago. But do I really NEED that much power? Not really, CPU-wise, but it didn't make sense to save $20 and get 200 fewer MHz when AMD, at the time, was selling the 1GHz Athlon as its SLOWEST CPU.

    We all know that no matter what Intel & AMD come up with, Micro$oft can overload it with a bloated OS upgrade that gains you squat. But in the world of real OSes that treat system resources as something to be used sparingly, when will enough PC power be enough for the bulk of the users (corporate flunkies, personal PCs, and small businesses)? When will we see a split between what is used for servers and what is used in desktop PCs? Today, the latest CPUs are showing up in desktops almost at the same time they go into servers (Xeon excluded, but even there it's getting more blurry).

    Just like always, it'll be amazing to see where we are 5 years from now, but I just can't imagine I'll be using a 3GHz desktop PC running RedHat 12.x that probably cost me $1000 :) It boggles the mind much more than the limits physics places on signal transmission on the dies.... :)

  • No, because I never said Gates' statement was stupid because it was generalized. My point was that the majority of users at home and in small businesses hardly tax their computers with what they USE; it's the bloated OS using most of the CPU time. The average person using AOL to surf the web and pay bills is like a 90-year-old man driving a Porsche 30 MPH on the freeway. Since when did AIM require a massive CPU that could design a nuclear missile?

    And I was shooting more from the point that if Micro$oft spent more time trying to streamline their OS vs bloating it with more Justice-Dept-attracting 'features', the user's experience might be better and they might be able to get cheaper PCs more suited to their NEEDS 5 years down the road.

  • From Geon's quote of Lloyd [arstechnica.com]:
    The amount of information that can be stored by the [1-liter] ultimate laptop, ~10^31 bits, is much higher than the ~10^10 bits stored on current laptops. This is because conventional laptops use many degrees of freedom to store a bit, whereas the ultimate laptop uses just one.
    Er, no, Einstein, it's mostly because in the year 2001 the volume reserved on computers for primary and secondary storage is closer to 10 cubic cm than 1000 cubic ones.

    --Blair
  • The idea of black hole computing is obviously heavy, but the requirements on a heat sink capable of handling the matter-energy conversion of one kg are staggering.

    Overclocking might of course not be strictly necessary, considering the effects of general relativity.

    Staggering might also describe the investment costs of setting up a new singularity for each calculation, given the obvious difficulty of interactivity once a Schwarzschild barrier is in place.

    One must, though, admire the article's authors, not only for their interesting essay, but also for the courage involved in imagining the presence of a disappearing black hole in one's lap.

  • His "calculation" is nothing more than a change of state in a quantum system. In real life, any calculation is likely to involve something more complex than this - the time taken for a single change of state is the theoretical minimum time for a single operation.

    No matter how the machine works, it must involve state changes in order to have calculation of any kind. Barring completely new physics involving something other than normal matter, his calculation is correct.

  • He's talking about the theoretical maximum limit of processing power, not what is actually achievable. Even in the article he says that there are good reasons for using less than this, and practical concerns like architecture don't come into it at all.

    It's not bad science at all, it's theoretical science.

  • Physics, however, is man made. In your own counter argument you said Moore's observation applies to technology and knowledge, two inherent ingredients to physics.

    To use a phrase: bollocks. Physics is inherent to the Universe, and is independent of what we know about the Universe and how we are able to manipulate it. Obviously, our knowledge of physics changes, but the underlying principles remain the same.

    As our technology and knowledge grow, so does our ability to penetrate to the "underlying truths of nature". Hence we no longer believe Newtonian physics to be accurate.

    But they are still accurate, we just now know they are only accurate within a certain domain (speed much less than the speed of light, low masses). What the author is talking about is how the fundamental physical laws of this Universe constrain processing power. Quantum mechanics (the basis of this article) is undoubtedly not the whole picture (which is why superstrings are the focus of such intense research), but in their domain it is correct, and so are the observations made in this article.

    To exceed the limitations described here we will have to do our processing in some other domain - perhaps if we recreate conditions at the very start of the Universe when it was still 10/11-dimensional then we can harness additional computing power, but that wasn't what the article was talking about.

  • by sharkticon (312992) on Friday June 22, 2001 @08:02AM (#132999)

    You're still talking about practical concerns here? This calculation is simply an upper bound on the amount of processing that can be done with 1 kg occupying 1 liter of volume. Nothing more. It's purely about information processing, not even about "computers" per se.

    Besides, even if it were concerned with how this might be achieved, how the hell is he going to work all that stuff out? Obviously, such technology would be way beyond what we have today, so there's no way for him to use realistic figures. So trying to add practical details is a waste of time, kind of like trying to calculate how fast we'll be able to travel in the future. We know the theoretical limit (the speed of light), but we can't say what technology will be available in the future and so cannot make any useful predictions.

  • by sharkticon (312992) on Friday June 22, 2001 @06:03AM (#133000)

    Every year we seem to think we know everything there is to know about physics, biology and any other science.

    You don't know many scientists do you? :)

    If your assertion is true, then why would they bother doing it? If there was nothing left to know, then there would be no point in being a scientist, and no new research projects coming up.

    We are convinced that our current theories are laws of nature.

    The term "law of nature" is pretty loaded, and I doubt it would apply in many cases. And even then, such laws aren't universal. Consider Newton's "laws". Although they're called such, they're only applicable in certain domains (speeds much less than that of light, relatively low masses) and are only approximations to relativity. Similarly, our current physical theories (general relativity and quantum field theory) are only approximations to some higher theory which contains both. No scientist is convinced what we have now is the final "law of nature".

    And every year some discovery shatters that belief in a given discipline.

    I'll admit there have been, and probably always will be, some pretty amazing new discoveries that do come as a big surprise, but shattering belief? I think not. If anything, they often serve to spur on research into the various fields.

    Whilst scientists can easily be as guilty of hubris as anyone else, you're portraying them in a far worse light than is deserved IMHO.

  • I had a real problem with the science behind the article. It states:

    The maximum energy an ultimate laptop can contain is given by Einstein's famous formula relating mass and energy: E = mc^2. Plugging in the laptop's mass, the speed of light, and combining the result with Eq. 2 tells us that the maximum number of operations per second a 1 kg lump of matter can be made to perform is 5.4258 × 10^50. This means that the speed of a computer is ultimately limited by the energy that is available to it.

    What he's actually saying is that you are converting the mass of the computer to energy in order to power it. So what part do you convert first? The screen? The RAM? The case? Not to mention that you have to have some way to funnel the energy into the computer without loss - it reminds me of the "massless ropes" and "frictionless pulleys" of a first-semester physics class.

    Sorry folks, this article is misleading. We're going to be stuck with batteries for some time to come.
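    Whatever one thinks of powering a laptop by annihilating it, the quoted number itself is easy to reproduce: it is just E = mc^2 for 1 kg fed into a quantum bound of roughly 2E/(πℏ) operations per second (the "Eq. 2" the quoted passage refers to). A minimal sketch; the constants below are standard values, and any last-digit difference from the article's 5.4258 × 10^50 comes down to constant precision:

```python
# Reproduce the "ultimate laptop" speed limit for a 1 kg computer:
# available energy E = m*c^2, maximum rate ~ 2E / (pi * hbar) ops/s.
import math

m = 1.0                  # mass of the computer, kg
c = 2.99792458e8         # speed of light, m/s
hbar = 1.054571817e-34   # reduced Planck constant, J*s

E = m * c ** 2                      # ~9e16 joules locked up in 1 kg of matter
ops_per_sec = 2 * E / (math.pi * hbar)
print(f"{ops_per_sec:.4e}")         # ~5.43e50 ops/s, matching the article
```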
  • His comments about the computer operating in a black hole were interesting. Similar to the Minds in the Iain M. Banks Culture series of books, which are basically supercomputers with the outer shell in real space and the "CPU" in hyperspace......


    It's also worth noting that our two main theories, relativity and quantum mechanics, don't work together, in that they cannot both be correct. Since a new theory to bring them together is being looked for, I personally don't believe quantum computers are the limit.

  • I've never met a scientist who claimed absolute knowledge about anything. Such a beast is more likely an invention of the media and popular culture. It is religious fanatics, on the other hand, who generally claim absolute knowledge about everything, and no experience is ever sufficient to "shatter their belief" in their knowledge or institutions. Contemporary theories are roadmaps to discovery, rather than absolute truths. If experimental knowledge is to advance, then theory is the best device we have for comparison. In other words, theories are meant to be revised or, even better, to be replaced. You seem to take a measure of satisfaction in the fact that theories get "shattered," and that those poor slobs who are "convinced that our current theories are laws of nature" get their comeuppance. How sad.

Thufir's a Harkonnen now.
