The Ultimate Limits Of Computers
Qui-Gon writes: "Found an interesting article about the 'The Ultimate Limits of Computers' over at Ars Technica. This article is very heavy on the physics of computing, not for the faint-hearted." Somewhere between practical reality and sheer Gedankenexperiment, the exploration here keeps getting more relevant as shrinking die sizes and improving nanotech wear away at previously impassable barriers. The article is based on a paper discussing the absolute limits of computational power.
Limits of Scientists... (Score:1)
"When a scientist says something is impossible, they're probably wrong."
--Anonymous
Is the Enterprise's computer slow? (Score:1)
What about clusters? Latency? (Score:2)
I'd like to see an analysis of the breakeven point: for what problems would a sphere of a given radius, containing computers of a given speed, provide a speedup such that any larger radius would diminish the return, despite adding to the aggregate computing power?
Re:Reversibility and Thermodynamics (Score:2)
Of course, there are only certain operations you can do without discharging a capacitor to ground or charging it to Vcc. And there's obviously some energy expenditure in the resistance of the wires, capacitor leakage, and the effort of getting the computation to go forward.
Re:THIS is the right link (Score:1)
Limit on Operations (Score:2)
Re:Limit on Operations (Score:2)
Re:*Theoretical* limit (Score:2)
But just as a given programming language lets you write either fast or dog-slow programs, there might be very different approaches (regarding computational power) to using the physics to build fast computers.
Thus there is a physical limit, as well as a logical limit (from the theories of computation and complexity), to consider.
Re:"massively parallel" machines will blow limit a (Score:2)
And of course it depends entirely on the problem whether parallelism can be exploited.
Re:Reversibility and Thermodynamics (Score:2)
Search for "reversible" in this article [slashdot.org].
Re:"massively parallel" machines will blow limit a (Score:2)
Re:Limit on Operations (Score:2)
What equation are you referring to?
Re:until the next computer revolution (Score:2)
Actually, short times allow for high energy fluctuations (dt * dE >= hbar). This is the very basic reasoning behind the existence of short-lived (so-called virtual) particles in quantum field theory, with its complications: the number of particles is not constant in a physical process, and one cannot observe bare particles.
This could mean that fast processes have to deal with disturbances from such energy fluctuations, providing physical limits for fast computing as well.
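A rough back-of-the-envelope sketch in Python of that fluctuation scale (my own illustration; the factor-of-2 conventions of the uncertainty relation are glossed over, and the comparison against kT at 300 K is just for orientation):

# Energy fluctuation scale dE ~ hbar/dt for a given switching time,
# compared to the thermal scale kT at room temperature.
HBAR = 1.0545718e-34  # reduced Planck constant, J*s
K_B = 1.380649e-23    # Boltzmann constant, J/K

for clock_hz in (1e9, 1e12, 1e15):   # GHz, THz, PHz switching rates
    dt = 1.0 / clock_hz
    dE = HBAR / dt                   # allowed fluctuation scale, J
    print(f"{clock_hz:.0e} Hz: dE ~ {dE:.2e} J "
          f"(~{dE / (K_B * 300):.1e} x kT at 300 K)")

At petahertz-scale switching the fluctuation scale already exceeds kT, which is the sense in which such disturbances could become a real concern for fast computing.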
Re:*Theoretical* limit (Score:2)
Considering the physical theory used to derive the limitations (here: non-relativistic quantum mechanics, which is itself an approximation of nature) is sufficient.
Re:*Theoretical* limit (Score:2)
You imply that I challenged this physical upper limit on state change rate in this discussion, but I did not.
My point is different. It goes down to the question about the relationship between rate of state changes on one hand, and the ability to perform a fast computation on the other hand.
The naive view is similar to the view many CPU buyers have with regard to clock speed. It goes along the lines of "the more MHz, the faster the CPU". As we know, this is not a valid measure for comparing, for example, a Pentium IV and a PPC CPU. They are not comparable by clock speed alone. E.g., one CPU might perform a floating point multiplication much faster than the other in terms of clock cycles.
The classical theories of computation solve this problem by mapping programs onto a very simple model of a computer, the Turing machine or register machine, or the real RAM, and then doing comparisons. The operations in these models are precisely defined, like incrementing a register, decrementing a register, or testing whether it is zero.
As we know, a computation in these models needs a certain number of basic operations as well as a certain amount of memory. Classical complexity theory tells us exactly how memory and the number of needed operations are related for these simple computation models. In fact, both depend on the "size" of the input data as well. Like I said, this is only well understood for these deliberately simple computation models (Turing machines etc.).
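To make those "precisely defined operations" concrete, here is a minimal register machine sketch in Python (my own illustration, not from any paper; an instruction set of increment, decrement, and jump-if-zero is one standard choice):

# Minimal register machine: the only primitives are INC, DEC and a
# conditional jump JZ (jump if a register is zero). Complexity theory
# counts how many of these steps, and how many registers, a
# computation needs.
def run(program, registers):
    pc = 0      # program counter
    steps = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "INC":
            registers[args[0]] += 1
        elif op == "DEC":
            registers[args[0]] -= 1
        elif op == "JZ":            # jump to args[1] if register is zero
            if registers[args[0]] == 0:
                pc = args[1]
                steps += 1
                continue
        pc += 1
        steps += 1
    return registers, steps

# Addition r0 := r0 + r1, by draining r1 one step at a time.
add = [
    ("JZ", 1, 4),   # if r1 == 0, halt
    ("DEC", 1),
    ("INC", 0),
    ("JZ", 2, 0),   # r2 stays 0, so this always jumps: loop back
]
print(run(add, {0: 3, 1: 4, 2: 0}))  # -> ({0: 7, 1: 0, 2: 0}, 17)

Complexity theory then asks how the step count and register usage grow with the size of the input.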
In this discussion, we were talking about limitations of computing. It makes sense to think about the most powerful computing architecture envisioned so far, and that is, at present, the model of the quantum computer. In this setup your computer is a quantum mechanical system. Your input data corresponds to the initial states of the system, and your computation is a measurement of the system, meaning a physical process that kicks the system from its initial state into its final state, the final state encoding your result data.
The trick with quantum computers is that one arranges that measurement to reduce a large number of possible solutions/states into one final state. This is sometimes described as doing all computations at once.
You will immediately notice that computation in this setup is achieved by exactly one state change: the measurement that kicks your system into its final state.
Now let's translate this abstract mumbo jumbo back into real things. The first quantum systems explored were a couple of nuclei in a big magnet, an NMR measurement device. The initial state corresponds to some initial states of these nuclei, typically their rotational and spin degrees of freedom. Measurement means bombarding these nuclei with some electromagnetic radiation and looking at the states of the nuclei afterwards.
What must be achieved by the experiment is that we must be able to map the input data of a problem onto the initial states of these nuclei. The output states must also correspond to a solution of the problem under consideration. Further, the measurement process must correspond to the solution procedure of the problem.
Just as we have TTL logic gates for simple operations like AND, OR, NOT, researchers have to think about building quantum logic gates: systems that can accept input data via their initial states, where the act of measurement corresponds to some useful operation on the input data, and the output states can be translated back into a nice result on the input data.
Question: What kind of operation can be encoded by such an experiment?
Actually I don't know. I believe, however, that it might be a very complex one. Take for example a chain: it just hangs down, and by doing so finds an optimal solution to a variational problem. Or fire some photons from air into water; they travel in a way that corresponds, again, to the solution of a hairy differential equation. Or a chemical reaction with many molecules can solve some crazy combinatorial problem in a couple of microseconds.
The topic touched on here is the topic of representation. Everyone with a decent level of math knows that there are different representations of the same problem, one harder to solve, the other easier. In fact, this is one of the basic problem-solving strategies in applied mathematics.
So in this regard I want to argue that we can choose different physical setups that might enable us to do a more or less complex computation in one state change.
So it is not only about quantity (number of state changes) but also about quality (what computation is performed with each change).
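For readers who want the "superposition plus one measurement" picture in concrete form, here is a toy single-qubit state-vector sketch in Python (purely illustrative; a classical simulation, obviously not a quantum computer):

# Toy state vector for one qubit: a Hadamard gate puts it into an
# equal superposition, and a measurement collapses it to one basis
# state with the Born-rule probabilities.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = np.array([1.0, 0.0])   # |0>
state = H @ state              # now (|0> + |1>)/sqrt(2)

probs = np.abs(state) ** 2     # Born rule: |amplitude|^2
probs = probs / probs.sum()    # guard against float round-off
outcome = np.random.choice([0, 1], p=probs)
print("amplitudes:", state, "-> measured", outcome)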
Re:*Theoretical* limit (Score:2)
You Sir, misunderstand what this kind of theoretical computer science is about.
It uses mathematical theory: from mathematical objects like sets and mappings, a simple mathematical model is formulated. Then one tries to analyze the implications of this model by applying the laws of mathematical logic.
The mathematical model you are talking about is the register machine or Turing machine (or a model of equivalent computational strength). What you cited in upper case is a logical consequence, a theorem or lemma, thus a truth for just this model.
What makes this computer science and not plain mathematics is, that we believe that these simple models are adequate representations of what we think computers and computations are.
One of the fundamental assumptions of these models, and that is why I call them classical computational models, is that they are based on thinking that corresponds to the common-sense thinking of classical physics. Like you said, these models assume that only one fundamental operation can be performed in one time step (clock cycle, whatever unit of time). But, as Feynman never stopped preaching, nature is not classical! It is much stranger. Experiments verify that nature has some weirdnesses so far accurately described only by quantum mechanics or the theory of relativity. These are results that are not compatible with common sense, but are still compatible with the mathematical models of theoretical physics, thus compatible with mathematical logic. And that is where quantum computing comes into play. What happens if one makes use of these strange effects, like two photons that seem to interact with no time delay although they are far apart, or the phenomenon called the collapse of the wave function in quantum mechanics, to build a faster computer?
Somehow quantum computers are supposed to make this nondeterministic, which means that given a single state I can be in multiple states after the next step, and somehow the right one is picked out at the end.
You surely have heard of Schroedinger's cat. It is a truth of quantum mechanics that systems can be in an entangled/mixed state and that only the act of measurement/looking will put them into a decisive output state. The act of measurement itself is one of the conceptual weaknesses of quantum mechanics. Most physicists just take for granted that it happens and know how to deal with it in calculations and experimental setups. The theory behind it is hardly distinguishable from philosophy. (As far as I understood, the problem is that the observed system and the observer can't really be separated into two entities with two wavefunctions; they are connected, and thus would be described only by a common wavefunction. This way it is hard to get results: we would ultimately have to analyze the wavefunction of the whole universe. Thank god, this is not necessary most of the time; the conceptual separation of the universe into separate physical subsystems is often useful.)
As our minds evolved in an environment that is mostly governed by classical physics, we have real trouble grasping such strange truths. We know the formulas and are able to set up experiments and get results, but this is no intuitive understanding. To quote Feynman again: he was sure that nobody understood quantum mechanics.
The radical idea behind quantum computing is to take advantage of these strange effects. (Strange to us classical minds; perhaps not strange to some hypothetical life form evolved in an environment where quantum mechanics dominated everyday life.)
I just don't buy into taking a deterministic model of computation (which the article seems to be using) and expecting it to crank out the general travelling salesman problem in any reasonable time.
It is hard to recognize by common sense that while quantum physical processes are governed by probabilities, the laws that govern these probabilities are exact. For example, a given process might have a random outcome, but I can still deduce from the laws that it must have a probability of less than 1/3, and can make use of that bound in the construction of my machine. You see, randomness does not mean that no prediction is possible; although the predictive strength is much less than in classical physics, it is still much better than using a crystal ball.
Due to their probabilistic nature, quantum computers were thought to be very unstable and unable to yield deterministic results.
The first crucial idea of quantum computing was Feynman's (?) idea of recognizing classical computation/complexity theory as based on the assumptions of classical physics, and thus exploiting the previously unused quantum mechanical behaviour computationally, as described above.
The second crucial idea was Peter Shor's idea of using error correction strategies in the context of quantum computing. As I understand it, this was the theoretical breakthrough. That idea provides the needed stability.
Shor, by the way, was also the guy who proposed the prime factorization algorithm that was qualitatively better than anything classical computational/complexity theory said was possible.
Re:*Theoretical* limit (Score:2)
Actually, the preparation of the initial states is already part of the quantum computational solution procedure.
For example, the idea behind a quantum computer doing a massively parallel database query would be to have all possible search results figure in the superposition of the initial states of the system.
The query would be the measurement that picks as the final state the one that represents the match.
Re:*Theoretical* limit (Score:2)
These problems can be mapped onto bits, and these can be mapped onto the qubits of a quantum computer.
If we had to tour 2^n cities, the data set would consist of tuples (city_1, ..., city_2^n).
We could map that onto 2^n x 2^n = 2^(2n) bits.
So we would need 2^(2n) quantum bits in a quantum computer, representing the 2^(2^(2n)) possible outcomes.
This boils down to the question of whether it will be possible to get such a large number of qubits into superposition. I believe nobody knows if this is physically or technically possible.
We also don't know if there is some experimental setup whose measurement would collapse that superposition into the optimal TSP solution.
Perhaps a whole series of more fundamental computational steps/measurements needs to be done by a quantum computer.
Like I wrote above, I don't know what kind of basic quantum computing operations are possible to build, but I believe that they could be more complex ones than simple increments. I am sure a look over the fence into the domain of analog computing would give some useful hints about what might be possible.
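As a bookkeeping sketch of such an encoding (my own illustration, using the common one-hot "permutation matrix" encoding for m cities; the 2^n figures above scale the same way):

# Encode a tour of m cities as an m x m one-hot matrix: bit [i][j] = 1
# means city j is visited at step i. That costs m^2 bits, i.e. m^2
# qubits, spanning 2^(m^2) basis states.
from math import factorial

for m in (5, 10, 20):
    qubits = m * m
    tours = factorial(m - 1) // 2   # distinct undirected tours
    print(f"{m:2d} cities: {qubits:3d} qubits, 2^{qubits} basis states, "
          f"{tours} tours")

The gap between the 2^(m^2) basis states and the merely factorial number of valid tours is part of why nobody knows how to make one measurement pick out the optimum.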
Even if you use the physical analogue of creating lengths of strings tied together at the various city nodes and pull the "cities" in question until the strings are tight and selecting the tight path, every atom moving in the process is a state change
It will possibly not be such a simple mechanical setup, but some much crazier arrangement of basic quantum computing components that implement abstract operations. Perhaps it is not a single state change, but it might turn out to need qualitatively much less than an algorithm on a classical computer.
And even if you prove that P=NP, you still need an algorithm that traverses the data, examines it, compares different links, etc, and every step in the algorithm is AT LEAST a state change
The reasoning from classical complexity theory shows that the work of digesting the input data is part of the overall work an algorithm performs. But it is also true that the models assume that one symbol of input data is digested at a time. Perhaps the inherent parallelism of a quantum computer can save effort here too, by having some read-all-input-at-once feature. I simply don't know.
Hm.. I should really try to get an expert to judge this discussion.
Re:*Theoretical* limit (Score:4)
Who says that a complex problem needs a high number of state changes?
Each state change could be the result of a very high level operation, not something primitive like adding two numbers, but perhaps something like the outcome of the traveling salesman problem. Think of some clever physical setup here.
It will be due to the cleverness of the computer builders, to make most use out of the limitations.
Regards, Marc
Re:Reversibility and Thermodynamics (Score:1)
Think of it this way - a photon hits an electron in its ground state, raising it to a higher state. Then the electron emits an identical photon and drops to its ground state again. Effectively, you added energy to store a bit (0 -> 1) of information in an electron. Then the operation reversed, and you gained the original energy back.
The problem with reversible computing is getting the information out in a reversible way. In my example above, how do we know if the electron is a 0 or a 1? We have to extract the photon to do it, thus destroying the reversibility. Now, there are some quantum properties that can be measured without destroying the reversibility, but they are difficult to control (the system easily loses coherence).
Reversible computers don't use the same 'modern electronics' that your current computer does, but they are theoretically possible (using only a small amount of energy to observe the state). Just not yet practical. Wait around a bit.
Re:Is it just me or did the author make a mistake? (Score:2)
E = h * nu
from high school physics. This gives the light wavelength if the object is a photon (zero rest mass), otherwise this wave effect is called the "probability wave" and is reputed to have connections to alternate universes, etc.
It doesn't matter if E is rest mass or kinetic energy; the wavelength effect is the same, and in fact originally derives from special relativity [davis-inc.com]. Heisenberg's principle is essentially a statement of the resolving power of these waves, for instance in electron microscopes, or in any measuring instrument. Equations 1 and 2 say that one can't measure much more often than delta t, which is fairly straightforward since one is working with waves whose period is at least delta t.
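For scale, here is a quick Python check of the associated wavelength, lambda = hc/E, at a few energies (my numbers, just to illustrate the resolving-power point):

# Quick scale check of E = h*nu: the wavelength h*c/E associated with
# a given energy. Smaller wavelength = finer resolving power, which is
# the resolving-power reading of Heisenberg's principle above.
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

for e_ev in (2.0, 1e3, 1e9):  # visible light, soft X-ray, gamma scale
    wavelength = H * C / (e_ev * EV)
    print(f"E = {e_ev:8.0e} eV -> lambda = {wavelength:.2e} m")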
Re:Not good science (Score:2)
The verbiage about E=mc^2 is redundant; the author is simply restating the fact that E in equations 1 and 2 is, and has always been, relativistic mass-energy, not what appears on your PG&E bill.
Wait, I see the problem (Score:2)
You're misstating the definition of E in equations 1 and 2. E is not "maximum energy", it is defined as the exact mass-energy of the physical system, in this case 1kg. The speed of a computer is not limited by the energy available to it, it is limited by the mass-energy that is in it.
You might rephrase it by something like "Since E is the total mass-energy of the laptop, a simple unit conversion shows that the maximum number of operations per second
limits of AI and computer gaming (Score:1)
The argument went like this: if a computer system can map out every possible finite state (i.e., board position in chess) of a game, then from any point in the game it can be determined what the winning moves are (if there are any, that is). For a game like chess there are relatively few board positions possible -- thus, all board positions could be explored and the game could conceivably be "solved" at some point by a computer (of course in a brute-force sense only, but solved nonetheless). Any human player would be hopeless against such an adversary.
However, in a game such as checkers, there are many, many more possible board positions (I think the estimate I recall was 10^69, could be incorrect though...) so that compiling a complete library of all board positions would take considerably longer than the projected life-span of the universe (as estimated by the half-life of a proton). This would be true even if you used a ridiculously large computing system -- say, 0.1% of the available mass in the universe. So, the argument goes, a computer will never be able to brute-force beat a human at checkers.
Re:Checkers vs Chess (Score:1)
Re:Reversibility and Thermodynamics (Score:1)
Re:Reversibility and Thermodynamics (Score:2)
bit shifting is inherently irreversible.
Well, I guess that means multiplying and dividing by powers of two are out as well, since they turn out to be the same operation. But.. wait.. mult and div are really just a finite number of additions and subtractions.. *hmmm*
Does it have something to do with the way the CPU handles overflow for certain instructions? Or did I just convert an irreversible algorithm? ..remember you heard it here first!.. or are you just talking out of your butt? :)
Re:Is it just me or did the author make a mistake? (Score:2)
Thanks for the feedback.
You said:
Well, I entirely omitted Lloyd's calculation for this (as I said in the article). It's too technical to include in an article on a tech enthusiast site like Ars. It's quite followable, mind you, but technical enough that I didn't think it belonged in the article. I invite you to read Lloyd's paper [lanl.gov] for yourself, and examine his approach. If you find fault with his math and physics, well, congrats! :) I'd suggest that you let Lloyd know about this, and certainly announce the results of your analysis here. An extra pair of eyes looking at Lloyd's fascinating ideas is good.
Cheers,
-Geon
geonSPAM@ISBADarstechnica.com
Re:Is it just me or did the author make a mistake? (Score:4)
First, thanks for the feedback.
Now, it is quite possible that I have made errors in my article. I've gone to some pains to avoid them, but things might have slipped through anyway. One never knows.
However, I do not think the matter you raise is an error. Allow me to try to explain.
First, you must keep in mind that the article is an exploration of theoretical limits, not practical ones. Practical considerations are well and good if you want to actually build such devices, but that isn't what I was intending to explore. What I wanted to talk about (and did talk about) are the absolute maximum speed limits for computers. These are almost guaranteed to be ridiculous and impractical, but as a limiting case, I think that they are still interesting.
The calculation is based on the idea that 1kg of matter has a certain maximum energy associated with it, and that maximum energy is given by Einstein's formula. Because it turns out that the theoretical speed limit of a 'computer' (which as the term is used in the article is simply anything that processes information, basically - particular architectures aren't considered) can be related to the time-energy uncertainty from quantum mechanics, it is then necessary to find out how much energy a given lump of matter can contain. And that's given by the whole E = mc^2 business.
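For the curious, the arithmetic is compact enough to check in a few lines. This sketch assumes the Margolus-Levitin form of the bound, ops/s <= 2E/(pi*hbar), which is what Lloyd's paper uses:

# The headline number: the Margolus-Levitin theorem caps the rate of
# elementary operations at 2E/(pi*hbar), and E = m*c^2 caps the energy
# in 1 kg of matter.
from math import pi

HBAR = 1.0545718e-34  # reduced Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s

E = 1.0 * C**2                     # mass-energy of 1 kg, ~9e16 J
ops_per_sec = 2 * E / (pi * HBAR)  # ~5.4e50, the article's figure
print(f"E = {E:.3e} J -> {ops_per_sec:.4e} ops/s")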
Of course this limit is not practical. It's a theoretical upper bound. I haven't the faintest idea as to how you'd go about converting 1kg of matter into energy controllably (without, say, temporarily warming up the climate of the city you're working in), or how you'd control it enough to make it compute something you're interested in, and so on. The point isn't to look at the practical limits (those are better looked at from the perspective of current technology, i.e., Moore's law and whatnot, in my opinion), but rather the general theoretical limit.
Just as a note, you may want to look at Lloyd's paper, as the ideas for the calculation are his, and I'm just summarizing and reporting them. (Lloyd's paper, by the way, is very well written, and it's recommended reading for just about anyone who isn't scared away by some equations).
But if the above explanation doesn't satisfy you, please post why, and perhaps you can convince me that I (and Lloyd) are in error.
Cheers, -Geon
geonSPAM@arsISBORINGtechinca.com
Re:Say what? (Score:1)
In a sense, we're both right. You're right, my analogy is a poor one, but it still serves its purpose.
Thanks
Re:Reversibility and Thermodynamics (Score:1)
Actually, no. It seems that way (that's what I thought at first too). But that's the cool part about it... you simply reverse the process by reversing the direction that you increment the Program Counter. I'll give a short example in pseudo assembly:
; assume that $1 and $2 are both 0
ADDI $1 32
ADDI $2 24
ADD $1 $2
; at this point $1 should hold 56,
; and at no point have we stored anything in memory.
; ok.. now let's do these instructions in reverse.
; normally, we wouldn't manually do it this way
; because the CPU would switch it, but this works
SUB $1 $2 ; ok.. $1 now contains 32
SUBI $2 24 ; now $2 contains 0
SUBI $1 32 ; now $1 contains 0
This is a fairly primitive example, but it proves the point. The other thing is that there cannot be a MOVE instruction. Memory can be exchanged with a register value, but not copied. So, all you have to do is run the instructions backwards... I promise!!!
Re:Reversibility and Thermodynamics (Score:1)
; assume that $1 and $2 are both 0
ADDI $1 32
ADDI $2 24
ADD $1 $2
; at this point $1 should hold 56.
; and at no point have we stored anything in memory
; ok.. now let's do these instructions in reverse.
; normally, we wouldn't manually do it this way
; because the CPU would switch it, but this works
SUB $1 $2 ; ok.. $1 now contains 32
SUBI $2 24 ; now $2 contains 0
SUBI $1 32; now $1 contains 0
Re:Reversibility and Thermodynamics - NOT! (Score:1)
but the research I'm doing
Intermediate state data does
State data is necessary if you're performing irreversible operations.
Re:Reversibility and Thermodynamics (Score:1)
You hit it right on.. the overflow would be bad. Anytime there is a loss of data, reversibility won't work (unless that data is saved somewhere).
Reversibility and Thermodynamics (Score:5)
The concept is that a "normal" CPU erases information on every cycle (clearing registers, overwriting data, shifting data to nowhere, etc). When a CPU erases information, it's dissipated as heat. There are thermodynamic limits to this (kinda like Moore's law). So, if a computer could be designed not to erase data, you could reverse the CPU and get most of your energy back.
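For scale, the thermodynamic floor in question is Landauer's bound of kT*ln(2) per erased bit; a rough Python sketch (an illustration only, with a made-up erasure rate):

# Landauer's principle: erasing one bit dissipates at least kT*ln(2)
# of heat. A rough look at what that floor implies for a CPU erasing
# bits at a modern clip.
from math import log

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

e_bit = K_B * T * log(2)        # ~2.9e-21 J per erased bit
erasures_per_sec = 1e9 * 64     # say, 64 bits erased per cycle at 1 GHz
print(f"{e_bit:.2e} J/bit -> {e_bit * erasures_per_sec:.2e} W minimum")

The resulting fraction of a nanowatt is the point: real CPUs dissipate many orders of magnitude more than the floor, so a reversible design has enormous headroom.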
Now before you say "BS", think about it. In physics, if you know the initial state (starting position, velocity, acceleration) of an object in an isolated system, you can easily compute where it was at any given earlier time. This uses the same concept. For example, if you add 43 to a register, you can subtract 43 from that register and get your energy back.
Of course, certain instructions don't lend themselves to reversibility. For example, bit shifting is inherently irreversible. One option is to maintain a stack of "garbage data", but that's a poor solution. On the other hand, a number of instructions are reversible by default.
Reversibility is not anything new, but it does take a shift in thinking. Algorithms can be designed to run very efficiently on reversible computers, but it takes a bit more effort. Hopefully, we (the community of people studying reversible/adiabatic computers) will develop means of either converting irreversible algorithms or develop ways to make them less inefficient (double negative).
-Andy
Ultimate Limits: 1981 (Score:3)
"640K ought to be enough for anybody."
- Bill Gates (1955-), in 1981
How many angels on a pinhead? (Score:2)
medieval debates on how many angels can dance on the head of a pin. Same sub-arguments too: whether angels are material (atoms) or immaterial (photons, quantum states), and so on.
Re:Reversibility and Thermodynamics (Score:1)
One point: the issue with MOVEs is not the fact that you are reading a value from the source register, but rather that you are destroying the value stored in the destination register.
Now, perhaps you intended that interpretation of the word "copied". But when I think of the word "copy", I normally associate it with reading the old stuff, not with destroying the new stuff.
Re:Ok -- but Moore's law gets us there fast enough (Score:2)
That could add new meaning to: "I'm losing my mind!"
Caution: Now approaching the (technological) singularity.
Re:Knowledge is unlimited (Score:2)
That's an interesting conjecture. Any idea how you would go about proving it? Personally I don't believe it. It's like saying "there are an unlimited number of fish in the sea" or "there's an unlimited number of trees".
Well, there is a large number of fish in the sea, but we're taking them out faster than they grow back, so there won't continue to be a large number. There used to be a large number of trees. But we cut them down faster than they grew back, and now every tree is owned by some person or organization. And the number is still decreasing rapidly.
Personally, I don't think that anything in the universe is unlimited. Stupidity has been suggested, but even there I have my doubts. Still, there's probably a better argument to be made for an unlimited amount of stupidity than for an unlimited amount of unknown things to be learned. In fact, I doubt that the total number of things in the universe that can be known is as large as the cardinality of the power set of the set of non-virtual elementary particles in the universe. And probably not even as large as the power set of the set of electrons in the universe. Now that's a LARGE number, but it sure isn't unlimited.
Caution: Now approaching the (technological) singularity.
Re:The speed of light does not matter... (Score:1)
Re:Ok (Score:2)
It'll suck the paint off your house and give your family a permanent orange afro.
-B
"basic fundamental physics limitation" - doesn't e (Score:1)
second, keep in mind that the current physical theories generally pose more questions than answers, e.g. we know that there are more things that we don't know than things that we know.
we must remember that our theories are useful tools for many things, but we must also remember that they are just theories. if we don't, then all we have done is replace God with Science.
my physics professor in college said to me "a good theory is one that can be proven wrong". i still don't quite understand what he meant by it, but maybe it was "i'd rather know something is wrong for sure than not know whether it's true or false - and theories, by their very nature, can never absolutely be proven true"
Re:Moore's Law vs Physics (Score:1)
not likely. but possible.
Re:Reversibility and Thermodynamics (Score:2)
If you know an object's initial state, you can compute where it's been. This is not the same as putting it back in its initial state.
If you add 43 to a register, you've had to drain the capacitance from some circuits and charge up others. The capacitance charge is drained to ground, and you will have to expend energy to move those electrons (or some others) back to Vcc. Sorry, no way around that with modern electronics.
Reversible algorithms may make some types of computing efficient, but not in the 'energy' efficient sense.
What to do with all that speed (Score:1)
I was thinking about the same subject from the other direction recently. For my research [umich.edu] I am trying to simulate the growth of silicon germanium crystals. The usual way of approaching a physics problem like this is to make simplified approximations of what actually happens. For example, hard-to-model processes like diffusion are measured by experiment or found from standard relations. But these simplifications can yield a model which is subtly wrong and fails to predict unexpected behavior.
What I really want to do is model the entire crystal growth from the most fundamental physics (quantum mechanics or something like string theory). That way the simulation is not just an approximation of reality but is indistinguishable from reality! How fast of a computer do I need for this full-detail simulation?
Well, that depends on the size of the system I'm trying to simulate. How big a computer do I need to model a 1g chunk of matter? Logically, it seems that at least 1g of computer would be required to model 1g of arbitrary material in real time. (Otherwise, I could simply model a 1g computer with a 0.5g one, which in turn is actually simulated on a 0.25g computer, and so on.) So, perfectly simulating a 10g crystal would take 10g of "ultimate laptop" processing. But if, as the article mentions, the laptop uses ordinary matter rather than plasma, then the computation rate is 10^40 ops/second rather than 10^51 ops/second. This implies that my 1kg super laptop would need 31.7 years to simulate 1 second of growth for a 10g crystal! How much computation would be required to mimic The Matrix? To perfectly simulate life on Earth would require a computer at least as large as the Earth itself!
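The 31.7-year figure checks out from the article's two rates (a sketch using those numbers as given):

# Check of the crystal-simulation estimate: a 10 g chunk needs 10 g
# worth of "ultimate laptop" at the plasma rate (1e51 ops/s per kg),
# but an ordinary-matter 1 kg laptop only delivers 1e40 ops/s.
SECONDS_PER_YEAR = 3.156e7

ops_needed = 1e51 * 0.010   # ops per simulated second for 10 g
ops_available = 1e40        # ordinary-matter 1 kg laptop
wall_seconds = ops_needed / ops_available
print(f"{wall_seconds:.1e} s = {wall_seconds / SECONDS_PER_YEAR:.1f} years")
# -> ~1e9 s, about 31.7 years per simulated second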
So, although 2 GHz processors sound fast and the ultimate laptop in this article seems unfathomable, we have many applications today that can take all the computation speed available. A near future application for ultracomputing is the modelling of protein folding for drug design. There, the amount of matter being simulated is small fractions of a gram and the leveraging of computer weight and time is worthwhile.
AlpineR
No sub-atomic computing anytime soon (Score:1)
It is worth noting that Lloyd's thought experiments in these areas were preceded by similar speculations over 4 years ago in Anders Sandberg's [nada.kth.se] paper The Physics of Information Processing Superobjects: Daily Life Among the Jupiter Brains [transhumanist.com]. Lloyd has extended them a bit by bringing Black Holes into the picture.
Now, what we will be able to engineer in this century, using diamondoid molecular nanotechnology, is solar system sized nested layer Dyson shell [aeiveos.com] supercomputers. This is a unique architecture that I have named a Matrioshka Brain [aeiveos.com]. It will allow us to most efficiently use the entire power output of the sun and compute somewhere in the range of 10^42 to 10^52 ops per second.
Interestingly enough, Michael Frank has a paper, "Reversibility in optimally scalable computer architectures" [ufl.edu], which postulates a solar system sized reversible architecture that would out-compute any non-reversible architecture. This too would use atomic-scale engineering. Unfortunately it requires the power output of an A or B class star (~50,000 suns) and an amount of silicon equal to the mass of Saturn (our solar system doesn't even come close to having that unless we mine the sun for it). After we have developed machines of these architectures, our development comes to a slow halt unless our ability to do sub-atomic engineering can be developed. I'll be quite happy with what we can get out of atomic-scale engineering -- it supplies enough computronium for roughly a trillion-trillion human minds for those who choose to upload [minduploading.org].
Re:Reversibility and Thermodynamics - NOT! (Score:1)
You do need to save the intermediate state information. That is why Frank's design for the ultimate reversible computer is so big (see my other post on this topic). You should keep in mind that you can compute for "free" (if you do it slowly); it's erasing bits that costs you money (generates entropy as heat). Logical AND operations erase information such that you cannot run the calculation backwards. You have to design the hardware such that it has no such information-destroying operations.
Wrong! (Score:1)
Re:Knowledge is unlimited (Score:1)
The NSA (Score:2)
Re:This is not a real distinction (Score:1)
Not possible by the definition
That's like saying something can go slower than 0 mph. Not possible, since speed (not velocity) is always positive. Since temperature is just a measurement of how fast "stuff" is moving around, and absolute zero is when nothing is moving, you can't get colder. Saying otherwise just shows your ignorance.
Re:Reversibility and Thermodynamics (Score:1)
You don't need to actually reverse the operations to 'recover' energy in reversible operations. Reversible operations don't destroy information, and thus don't incur the minimum entropy increase caused by destroying information. This means that it is theoretically possible for them to dissipate arbitrarily small amounts of heat, and thus consume arbitrarily small amounts of energy.
In other words, reversible operations don't allow recovering the energy; they make it possible to not have to pay the energy in the first place.
One problem to note with reversible computing is that your computer has to have enough memory to store every bit of information ever entered into it. I'm pretty sure that it can compress the info, but eventually you'll have to have an energy expensive and very hot 'information dumping' process. I say it's a problem, but of course normal computing requires the same thing, and doesn't let you choose when you do it.
Yaknow, maybe I should've blasted the guy and removed any indications of uncertainty in my reply. That's the only way I've ever gotten modded up...
Regards,
Bobby
Bobby Martin aka Wurp
Cosm Development Team
Re:Reversibility and Thermodynamics (Score:1)
Very nice! I really should investigate this more sometime, but there aren't enough hours in the day.
Thanks for the correction.
Bobby Martin aka Wurp
Cosm Development Team
Re: (Score:2)
No it won't (Score:1)
Re:*Theoretical* limit (Score:2)
> (regarding computational power) to use the
> physics for building fast computers
Different from what? Like I said, the discussion relied on no particular form of technology. The only assumption is that a computer system must "change state" in order to perform any computation. And a computer is basically a "finite state machine". And if you think a computer can compute something without changing state, then how will you know when it's produced the intended result?
Re:Say what? (Score:2)
Re:*Theoretical* limit (Score:2)
Well, we could argue about how complex is "complex", and how high is "high", but that's missing the point.
The outcome of a travelling salesman problem is not a state change, unless you think you can solve the problem by flipping a bit. Even if you use the physical analogue of creating lengths of strings tied together at the various city nodes and pull the "cities" in question until the strings are tight and selecting the tight path, every atom moving in the process is a state change.
And even if you prove that P=NP, you still need an algorithm that traverses the data, examines it, compares different links, etc, and every step in the algorithm is AT LEAST a state change.
Say what? (Score:3)
> like Moore's law).
I think you meant to say, "almost, but not quite, entirely UNlike Moore's Law".
*Theoretical* limit (Score:5)
Re:Say what? (Score:2)
http://www.google.com/search?hl=en&safe=off&q=M
Context is EVERYTHING sometimes, people. Think before you post.
Re:Say what? (Score:2)
http://www.c3f.com/nty0129.html
It is the first link on the Google page.
Science at its best.
Gimmey Moore forever!
Re:Architecture Change Wanted: Apply Within (Score:2)
--
Architecture Change Wanted: Apply Within (Score:3)
Wow! That's some neat physics (only a part of which I understand). But really, do you think we'll need to get anywhere near these sizes and amounts?
The time will come when the theory has advanced far enough that we'll drop the von Neumann style of doing computing and go with something a bit more, shall I say, better? The human brain certainly doesn't have anything near those figures of capacity, and it's about 1-2 kg and occupies about 1 L of space.
And I don't know about you, but I LOVE the graphics. They are kicking some major ARSE. The refresh rate could be a bit higher, though, I still get blurry vision when stumbling home from the bar.
Re:Negative temps are possible (Score:1)
In any case, the underlying definition of the entropy of a system at a given energy is Boltzmann's constant k times the natural log of the number of states the system can assume at that energy. For the dipole system, the energy of the system is proportional to the number of anti-parallel dipoles times the strength of the imposed field, i.e. E = C*Nap*M, or Nap = E/(C*M). So if the system has energy E, you know how many dipoles are anti-parallel, but not which ones. If there are N total dipoles and Nap of them anti-parallel, then there are N!/(Nap!(N-Nap)!) ways to arrange those Nap dipoles, and hence there are that many states. If you examine the behavior of that function (or its natural log), you'll see it increases from Nap=0 to Nap=N/2 and then decreases from Nap=N/2 to Nap=N. This means that the entropy increases as total energy E goes from 0 to (N/2)(C*M) but then decreases as E goes from (N/2)(C*M) to N(C*M).
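A numerical sketch of that sign change (my own illustration in Python; a finite difference stands in for the derivative, and since E is proportional to Nap, the sign of dS/dNap is the sign of 1/T):

# Entropy of N two-state dipoles with Nap anti-parallel:
# S = k * ln( N! / (Nap! (N-Nap)!) ). The slope dS/dE (i.e. 1/T) is
# positive below half-filling and negative above it -- the "negative
# temperature" region.
from math import comb, log

K_B = 1.380649e-23  # J/K
N = 1000

def S(nap):
    return K_B * log(comb(N, nap))

for nap in (100, 400, 499, 501, 600, 900):
    slope = (S(nap + 1) - S(nap - 1)) / 2   # dS/dNap, up to a constant factor
    print(f"Nap={nap:4d}: sign of 1/T is {'+' if slope > 0 else '-'}")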
As I said, see a statistical mechanics textbook.
Negative temps are possible (Score:2)
Temperature is defined thermodynamically as:
1/T = dS/dE
where S=entropy and E=energy.
It is possible (and not merely theoretically) for entropy to decrease as energy increases, in which case temperature does go negative. Weirdly, negative temperatures are considered hotter than any positive temperature. Thankfully (?) this sort of thing can only happen when you have very constrained systems -- the classic example is N magnetic dipoles constrained such that any dipole can only point parallel or anti-parallel to an applied magnetic field. If they are all parallel (lowest energy state) and you apply energy, they will begin to flip over. As they flip over, the entropy of the system rises with the increasing energy, so the system's temperature is positive. But once half the dipoles have flipped and the increasing energy drives more and more of them into the anti-parallel state, the entropy starts decreasing with increased energy, and the temperature goes negative.
Don't believe me?
Re:Ok (Score:2)
Ok (Score:5)
hmmm
That is the equivalent of 542,580,000,000,000,000,000,000,000,000,000,000,000,000 1GHz CPUs.
I think we're covered for a while.
The speed of light does not matter... (Score:2)
About 300 times as fast at the moment through caesium vapour. (Apparently it has a negative refractive index...)
Also, what about non-clocked machines (massively parallel logic machines) such as the one NASA just bought?
The limits are a bit near-sighted, methinks.
D.
Re:The speed of light does not matter... (Score:2)
Re:The speed of light does not matter... (Score:2)
But check this out, and please comment on it:
http://www.time-travel.com/gas.htm
did slashdot forget about networking? (Score:2)
Is it just me or did the author make a mistake? (Score:3)
I'm assuming this is at 9e16 J per second, which means to make his "ultimate laptop", he would have to split the atoms of 1kg of material per second... which means he would need to carry a large nuclear power plant around with him (even then, I don't think they go through 1kg/s).
What he fails to understand is that Einstein's formula is an equivalence, not a potential. Maybe that is the maximum energy a mass can have, but to get at that energy (in J/s) you would have to split enough atoms that that mass was lost (your 'laptop' would get 1kg lighter every second). Unfortunately, his whole article is based on this principle, so you can't use anything he says unless you plan to sustain a nuclear reaction which loses 1kg/s in fission to power this "ultimate laptop".
He correctly used the values in the formula, but he didn't apply it correctly. Maybe he should have done a bit more research.
Computing on a Cosmic Scale (Score:5)
Wow! Imagine if we could make a computer as large as Earth... I believe a computer that big could calculate the answer to the question of the meaning of life, the universe, and everything!
And don't even get me started on what we could do with a Beowulf cluster of those things...
This is great news! (Score:2)
Bring it on!
Re:until the next computer revolution (Score:2)
The only reason for quantum mechanics in this article is the fact that quantum mechanics gives a lower bound for miniaturisation (i.e. you can only keep making computer parts smaller until you run into problems with the Heisenberg Uncertainty Principle).
The article even specifically states that it doesn't refer to a special type of architecture.
--
This signature has been deprecated
Just like the superparamagnetic barrier (Score:5)
He had a great graph of the last 30+ years of GB/square inch, which seemed to coincide with Moore's Law (which, just like this article, addressed processing issues, I know. Bear with me here.). There were red lines drawn every ten years or so representing what scientists had believed to be the superparamagnetic barrier - the point at which it would be physically impossible to cram any more data onto a disk.
The guy had a great line every time one of these came up. "In 19XX Dr. XYZ at ABC University discovered the superparamagnetic barrier.... We broke it X years later." (X was usually a single digit.)
My point is that it will be interesting to watch whether these "scientific" findings will require revision. True, this one may be based on sound scientific principles, but so were all of those that attempted to predict the superparamagnetic barrier.
Re:*Theoretical* limit (Score:2)
No information can escape from a classical black hole: what goes in does not come out. [...] Recent work in string theory suggests that black holes do not actually destroy the information about how they were formed, but instead process it and emit the processed information as part of the Hawking radiation as they evaporate: what goes in does come out, but in an altered form.
If the latter picture is correct, then black holes could in principle be `programmed': one forms a black hole whose initial conditions encode the information to be processed, lets that information be processed by the Planckian dynamics at the hole's horizon, and gets out the answer to the computation by examining the correlations in the Hawking radiation emitted when the hole evaporates. Despite our lack of knowledge of the precise details of what happens when a black hole forms and evaporates (a full account must await a more exact treatment using whatever theory of quantum gravity and matter turns out to be the correct one), we can still provide a rough estimate of how much information is processed during this computation.
"Black Hole" computers sound like a little beyond the limits of the current technology. However, It sheds light on possibilities of the Galactic Structure.
;-)
Check out the Vinny the Vampire [eplugz.com] comic strip
Re:*Theoretical* limit (Score:2)
The science fiction story angle on it is that eventually all black holes are computers.
Also, it is considered that all galaxies have black holes at their centers.
So galactic cores are data centers, among other things.
Bit of a stretch, but it would make a science fiction story anyhow.
Check out the Vinny the Vampire [eplugz.com] comic strip
still a lot of time to go.... (Score:2)
1 MHz = 10^6 operations per second; 1 GHz = 10^9 ops per second. Moore's law seems to apply to speed now too: about 10 times faster every 5 years. So we still have about 200 years to go before we reach this speed.
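A quick sanity check of the 200-year figure under those assumptions (10x every 5 years, starting from a 1 GHz CPU):

# Years of "10x every 5 years" growth needed to climb from ~1e9 ops/s
# (a 1 GHz CPU) to the 5.4e50 ops/s ultimate limit.
from math import log10

start, target = 1e9, 5.4258e50
decades_of_growth = log10(target / start)   # each factor of 10 = 5 years
print(f"{5 * decades_of_growth:.0f} years") # -> ~209 years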
I do not think I will live for another 200 years. No problem there.
Damn! The sun does not go black hole for some 10^9 years. No luck here......
Re:Reversibility and Thermodynamics (Score:2)
Can you imagine:
"Oh man. I had a bug in my program, and accidentally substracted more from my registry that what I originally added. Now my computer is literally frozen. Does anyone have an icepick?"
Re:still a lot of time to go.... (Score:2)
Since no one else responded...
Dark matter is part of an astronomical theory. When the astrophysicists calculate how fast the universe is expanding, and how much matter is in it, they come up short - they figure there should be about 10 times as much matter as there is. Dark matter is the additional stuff that the scientists haven't observed. Either the theory is wrong, or there's a lot of stuff out there that doesn't radiate.
Then there are black body objects in physics, which (if I remember correctly) absorb all light that shines on them. They also emit energy in the form of light, and the light's spectrum matches quantum mechanical theory (in fact, one of the reasons for coming up with the quantum theory is to explain black-body radiation). The sun is the closest black-body object (I know, it seems a little confusing, but some definitions are counter-intuitive).
But that may not be your question. Scientists can observe other stars directly, black holes and dust clouds somewhat indirectly, and anything else that emits energy or blocks or bends light (anything of significant mass). We've even reached the point where we can detect massive planets orbiting other stars by the way they affect the star they orbit. But even if every star had 10 Jupiters, that would only be about 10% or less of the mass of the star. Dark matter, if the current calculations are correct, would outnumber stars 10 to 1 - and we should be able to see the effect.
This either means we have to look elsewhere, or change the theory. If a bunch of aliens kept some massive, galaxy-sized black holes around, at a significant distance from other galaxies, then that may account for dark matter. And what else would the aliens store in those black holes?
Re:still a lot of time to go.... (Score:4)
I find it interesting that, for the most part, Moore's law has been an accurate indicator of future computing speed, and of the accompanying engineering and theoretical knowledge needed to reach that speed. It is amazing to think that, if the law holds up, in 200 years we would have the physics and engineering needed to build such a computer, and perhaps the knowledge to know what steps to take to make an even faster computer!
The logical solution seems to be to tap into parallel universes (quantum calculations done in a massively parallel fashion?). It also seems that intense shielding of some sort would be needed, both to keep quantum influences out and to keep the user from being incinerated. Unless you didn't care, because your computer was outputting to some insignificant parallel universe. Even line-of-sight lasers would be too slow of a bandwidth, since the output would be 3-D light.
Just think, about 300 years from the first radio signals to black-hole computers - no wonder Seti@Home is failing - all the aliens are playing CounterStrike on their black-hole systems! Who would have guessed that the dark matter was simply off-site storage for 5-D pron?!?
until the next computer revolution (Score:3)
Knowledge is unlimited (Score:2)
Re:How many angels on a pinhead? (Score:2)
medieval debates on how many angels can dance on the head of a pin
Of course, the real futility of the argument is that there is no way to demonstrate the existence of angels, let alone count their numbers. Subatomic particles, however, leave visible tracks in the cloud chambers of particle accelerators. Though it's impractical to count them, it's not impossible, and their behavior, while it is inherently uncertain, is not entirely random, and can be predicted to a degree.
*BOOM* (Score:5)
"Ah, just another script kiddie trying to DOS the database."
"I don't understand. He just upped and exploded."
"Yeah, his quantum computer heated up to the temperature of a supernova and then collapsed in on itself like a black hole. Happens all the time."
"Really?"
"You should see it when they try to encode movies with DivX!"
Will our need ever push it this far? (Score:2)
All Microsoft and other OS developers seem to be able to do is add lots of features that never REALLY get used, and a few that do make high-impact improvements. But do smart tags, the start menu, right-click context menus, etc. really require massive improvements in processor speed?
I can't help but think that Win2K on my Pent III 700 laptop is using the bulk of the resources just to RUN vs the load placed on it by any apps I'm using. That seems to make no sense.
So that begs the question. The whole idea of Linux from the start was a free Unix that ran well on OLDER (cheaper, widely available) PCs. Even today that is still true. So if Linux continues to be accepted and moves into the desktop mainstream someday - will that affect the push on PC technology?
It's striking that for less than an Apple I in 1977, I built a 1GHz Athlon server with the latest gadgets (SCSI RAID, LCD-monitored drive sleds, PC133 SDRAM, etc.). A PC with this much power is staggering - even compared to boxen from a year or two ago. But do I really NEED that much power? Not really, CPU-wise, but it didn't make sense to save $20 and get 200 fewer MHz when AMD, at the time, was selling the 1GHz Athlon as the SLOWEST CPU.
We all know that no matter what Intel & AMD come up with, Micro$oft can overload it with a bloated OS upgrade that gains you squat. But in the world of real OSes that treat system resources as something to be used sparingly, when will enough PC power be enough for the bulk of the users (corporate flunkies, personal PCs, and small businesses)? When will we see a split between what is used for servers and what is used in desktop PCs? Today, the latest CPUs are showing up in desktops almost at the same time they go into servers (Xeon excluded, but even there it's getting more blurry).
Just like always, it'll be amazing to see where we are 5 years from now, but I just can't imagine I'll be using a 3GHz desktop PC running RedHat 12.x that probably cost me $1000 :) It boggles the mind much more than the limits physics places on signal transmission on the dies.... :)
Re:Will our need ever push it this far? (Score:2)
And I was arguing more from the point that if Micro$oft spent more time trying to streamline their OS instead of bloating it with more Justice-Dept-attracting 'features', the user's experience might be better and they might be able to get cheaper PCs more suited to their NEEDS 5 years down the road.
It's generally naff. (Score:2)
--Blair
Black hole laptop computing (Score:2)
The idea of black hole computing is obviously heavy, but the requirements on a heat sink capable of handling the matter-energy conversion of one kg are staggering.
Overclocking might of course not be strictly necessary, considering the effects of general relativity.
Staggering might also describe the investment costs of setting up a new singularity for each calculation, given the obvious difficulty of interactivity once a Schwarzschild barrier is in place.
One must, though, admire the article's authors, not only for their interesting essay, but also for the courage involved in imagining the presence of a disappearing black hole in one's lap.
Wrong (Score:2)
His "calculation" is nothing more than a change of state in a quantum system. In real life, any calculation is likely to involve something more complex than this - the time taken for a single change of state is the theoretical minimum time for a single operation.
No matter how the machine works, it must involve state changes in order to have calculation of any kind. Barring completely new physics involving something other than normal matter, his calculation is correct.
*sigh* (Score:2)
He's talking about the theoretical maximum limit of processing power, not what is actually achievable. Even in the article he says that there are good reasons for using less than this, and practical concerns like architecture don't come into it at all.
It's not bad science at all, it's theoretical science.
Backwards reasoning (Score:2)
Physics, however, is man made. In your own counter argument you said Moore's observation applies to technology and knowledge, two inherent ingredients to physics.
To use a phrase: bollocks. Physics is inherent to the Universe, and is independent of what we know about the Universe and how we are able to manipulate it. Obviously, our knowledge of physics changes, but the underlying principles remain the same.
As our technology and knowledge grows so does our ability to penetrate to the "underlying truths of nature". Hence why we no longer believe newtonian physics to be accurate.
But they are still accurate; we now know they are only accurate within a certain domain (speeds much less than the speed of light, low masses). What the author is talking about is how the fundamental physical laws of this Universe constrain processing power. Quantum mechanics (the basis of this article) is undoubtedly not the whole picture (which is why superstrings are the focus of such intense research), but in its domain it is correct, and so are the observations made in this article.
To exceed the limitations described here we will have to do our processing in some other domain - perhaps if we recreate conditions at the very start of the Universe when it was still 10/11-dimensional then we can harness additional computing power, but that wasn't what the article was talking about.
Re:*SIGH* (Score:3)
You're still talking about practical concerns here? This calculation is simply an upper bound on the amount of processing that can be done with 1 kg occupying 1 liter of volume. Nothing more. It's purely about information processing, not even about "computers" per se.
Besides, even if it was concerned with how this might be acheived, how the hell is he going to work all that stuff out? Obviously, such technology would be way beyond what we have today, so there's no way for him to use realistic figures. So trying to add practical details is a waste of time, kind of like trying to calculate how fast we'll be able to travel in the future. We know the theoretical limit (the speed of light), but we can't say what technology will be available in the future and so cannot make any useful predictions.
Not really (Score:5)
Every year we seem to think we know everything there is to know about physics, biology and any other science.
You don't know many scientists do you? :)
If your assertion is true, then why would they bother doing it? If there was nothing left to know, then there would be no point in being a scientist, and no new research projects coming up.
We are convinced that our current theories are laws of nature.
The term "law of nature" is pretty loaded, and I doubt it would apply in many cases. And even then, such laws aren't universal. Consider Newton's "laws". Although they're called such, they're only applicable in certain domains (speeds much less than that of light, relatively low masses) and are only approximations to relativity. Similarly, our current physical theories (general relativity and quantum field theory) are only approximations to some higher theory which contains both. No scientist is convinced what we have now is the final "law of nature".
And every year some discovery shatters that belief in a given discipline.
I'll admit there have been, and probably always will be, some pretty amazing new discoveries that do come as a big surprise, but shattering belief? I think not. If anything, they often serve to spur on research in the various fields.
Whilst scientists can easily be as guilty of hubris as anyone else, you're portraying them in a far worse light than is deserved IMHO.
Not good science (Score:2)
The maximum energy an ultimate laptop can contain is given by Einstein's famous formula relating mass and energy: E = mc^2. Plugging in the laptop's mass, the speed of light, and combining the result with Eq. 2 tells us that the maximum number of operations per second a 1 kg lump of matter can be made to perform is 5.4258 * 10^50. This means that the speed of a computer is ultimately limited by the energy that is available to it.
What he's actually saying is that you are converting the mass of the computer to energy in order to power it. So what part do you convert first? The screen? The RAM? The case? Not to mention that you have to have some way to funnel the energy into the computer without loss - it reminds me of the "massless ropes" and "frictionless pulleys" of a first-semester physics class.
Sorry folks, this article is misleading. We're going to be stuck with batteries for some time to come.
SciFi/Science get closer (Score:2)
It's also worth noting that our two main theories, relativity and quantum mechanics, don't work together, in that they cannot both be correct. Since a new theory to bring them together is being looked for, I personally don't believe quantum computers are the limit.
Re:Knowledge is unlimited (Score:2)