10GHz Processors And Moore's Law 141
AntiFreeze writes "There is an interesting story on MSNBC about Intel's attempts at producing chips capable of running at faster than 10 gigahertz. There was a previous /. article in early December about this here. This article from MSNBC is much more detailed (both technically and non) than the original article referenced from December, and provides a very intriguing look at what Intel's planning to do over the next four years, and what they'll have to show the general public as soon as April 1st. And as always, there's the heated /. argument about Moore's law buried in there, too."
Re:Impossible. (Score:1)
What I find disturbing (Score:2)
But as once was stated at the first lecture I saw about Moore's Law: If we don't have the technology (or software) to "use" this new hardware, what good is it? The gap between software and hardware is getting larger every day.
Just a small sidenote: apart from me running seti@home and some rendering stuff, my PII Celeron 266 mostly sits at a load of 0.02.
Re:Impossible. (Score:2)
Sad.
In order for this to be a "law" (Score:1)
Now, if it was called "Moore's Law of Transistor Growth from 1965 to 2000", where transistor count doubles every 18 to 20 months, then this would be a law for that specific time period (assuming it really held over that entire time frame).
Re:What are the implications... (Score:2)
Well, they travel through your body all the time, 24 hours a day. Are you going to shut down every radio transmitter in the world, or just live in a faraday cage?
-
Uh-huh.... (Score:2)
[rant]
Haven't you learned ANYTHING from history? Apparently you were too busy studying physics to pay attention. Here's a tip - never, ever, ever tell a scientist that something's impossible, unless you want to be proven wrong...
[/rant]
Re:Impossible. (Score:1)
That would be 1.572864 PHz (Peta-Hertz)
Re:Impossible. (Score:1)
Re:Impossible. (Score:1)
Re:Hmmm... (Score:1)
You were probably using a K6, which in the P2 era was comparable to the P2 Celerons.
Re:Errm..... (Score:1)
So, what's Intel planning? Trying to put the wind up AMD by announcing the 10GHz Itanium on April 1st 2001 and hoping they don't realise what day it is? I don't think that they are that desperate just yet...
Re:What I find disturbing (Score:1)
Also missing is the context... (Score:2)
Re:Impossible. (Score:1)
Re:What I find disturbing (Score:1)
But nowadays you can still run the latest software on a computer almost 2 years old. Do you really think it is because we cannot utilize the extra power? The reason the power isn't being used is because people are hanging onto hardware longer now. There is a huge gap between CPU speeds (300 MHz - 1.5 GHz) and software needs to cater for slightly lower CPU speed audiences.
I'm pretty sure any game coming out within the next month could easily have taken advantage of a 1.5 GHz chip and used it to its full potential, but they need to cater for the lesser machines as well. Why buy a 1.5 GHz chip then? It's an investment. Initially, it won't be used 100%, which is what you want when you buy a CPU. If you buy a CPU and it's already being used 100% (and needing more power) then imagine what it'll be like in a few months.
Re:4.77 Ghz (Score:1)
--
Re:Hmmm... (Score:1)
What support have you got? You made 4 unsupported statements, and then made some more unsupported statements based on the unsupported statements. Please, elaborate =)
Re:Impossible. (Score:1)
Re:Impossible. (Score:1)
[H]ard|OCP for some reason comes to mind...
Re:In order for this to be a "law" (Score:1)
Re:Impossible. (Score:1)
The evolution has happened mostly in the consumer department (CMOS), not in the bleeding-edge technologies (Bi, ECL and GaAs).
You could have 50GHz CPUs right now if the demand existed. But it does not. Supercomputers are not that "super" nowadays.
But speed will not improve in the future by means of raw clock rate: improved architecture will make the difference.
In fact, that is already happening a bit.
How the heck... (Score:1)
oh one of those... (Score:1)
Getting the world hooked on computers and your chips = $billions
Making billions around the world depend on computers and computer chips, only to tell them that they cannot advance it any further, on April Fools' Day 2001... priceless....
"End of Silicon" (Score:2)
grunt (Score:1)
Re:Impossible.. Why not? (Score:2)
GALLIUM-ARSENIDE FET AMPLIFIERS have been developed which provide low-noise amplification up to about 30 dB in the 7- to 18-gigahertz range. The power output of many of these amplifiers is relatively low, approximately 20 to 200 milliwatts, but that is satisfactory for many microwave applications. Research has extended both the frequency range and the power output of gallium-arsenide FET amplifiers to frequencies as high as 26.5 gigahertz and power levels in excess of 1 watt in multistage amplifiers.
The web page with this info is located at http://www.tpub.com/neets/book11/45o.htm. There is nothing preventing this being used for computing. Advances need to be made to provide synchronised clock signals across the whole chip, and the power consumption will need to be dealt with. These are analog devices at this time.
heated argument? (Score:2)
What heated argument? They're just saying this is a way to keep it going...
--
How long before we get it. (Score:1)
Re:Using Infiniband for clustering (Score:2)
My point was that the typical processes running on a desktop machine are unsuitable for this type of farming out to a cluster.
Doug
Re:4.77 GHz - 4.77 MHz (Score:1)
Re:They made me make this point! (Score:1)
Re:They made me make this point! (Score:1)
Consider Time Kill's law, circa 1999: Nasdaq doubles every 3 years. It's a statement based on some observations at some point in time. There's data to support it but no explanation to support it. It also seems pretty invalid right now.
Re:Hmmm... (Score:1)
Redundant computing (Score:2)
I think you see the solution to your own problem, so don't go saying it's impossible.
10 GHz? For consumers? For what?? (Score:1)
Smokin' speeds, no wait, and maybe sub-fission temperatures.. Wow!
So what will we use this for? Rendering the highest-quality ever cartoon pr0n and doctoring Natalie Portman pics faster than our neighbor?
Wrong - think of the pipeline (Score:1)
"due to different wire lengths"
At 10GHz, each stage of (say) a 5-stage pipeline gets just one clock cycle, ~0.1 nanoseconds. Electricity can travel ~1 foot per nanosecond, so a signal can only cover about an inch per cycle. So if the data required in (say) a cache fetch stage is farther away than that, or if the cache takes a non-zero amount of time to look it up (which it will indeed), then you're going to have yourself a nice little pipeline stall.
Anyway, I think Intel has a bigger than 5 stage pipeline, which just reduces the distance we can "travel" during each stage.
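To put numbers on the distance budget, here's a quick sketch (the 1 foot/ns figure is the rough propagation speed used above; real on-chip wires are slower):

```python
# Quick sketch: how far a signal can travel in one clock cycle.
C_SIGNAL_FT_PER_NS = 1.0  # rough propagation speed assumed above: ~1 foot/ns

def inches_per_cycle(clock_ghz: float) -> float:
    """Distance budget for one clock tick, in inches."""
    cycle_ns = 1.0 / clock_ghz
    return cycle_ns * C_SIGNAL_FT_PER_NS * 12.0

print(inches_per_cycle(10.0))  # 10 GHz: 0.1 ns per cycle, ~1.2 inches
```

So at 10GHz the reachable radius per cycle is roughly an inch, which is why on-die distances start to matter.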
Consider thyself corrected.
At least Intel's looking into the future... (Score:1)
IMHO, once Intel drops Rambus like the hot turd that they are, many people will see the light and become loyal again. Between the AMD|OC obsession, VIA's spotty chipset drivers that create new bugs while fixing old ones, and the 54% L337 H4X0R contingent of the entire AMD customer base, AMD's cruisin' for a bruisin' from Chipzilla.
Re:R&D cycle VS conspriacy theory (Score:2)
:-) (Score:1)
10GHz but only one accumulator (Score:1)
Re:Dunno (Score:1)
Distributed stuff is great (heck, my primary research is on distributed wireless crap) but I think there is too much hype and many people really don't understand the costs associated with it. Clusters are useful for *very* small number of situations.
Re:the end of software? (Score:1)
I guess what you are proposing is to go to CISC instead of the current direction of RISC processors. Well, I'm sure there have been many discussions about the merits of CISC and RISC processors, but both have their advantages and disadvantages.
But I agree: what is the purpose of a faster general-purpose CPU? Do I really need a faster computer to run Word? Speech recognition and stuff may be cool, but I really don't see any killer apps that will make me upgrade my computer (even games don't seem to stretch the capability of computers as much as they used to).
Re:the end of software? (Score:2)
As for the question about making Word run faster.. This entire discussion assumes an underlying desire to ever increase the speed of processing, and specifically probes the potential limits of Moore's law - namely, continuing human ingenuity.
Note, there are all sorts of problems with hardware-based operation, but so long as we have APIs like OpenGL, POSIX, MFC, etc., we don't have to worry about the specifics of how it's implemented. Is the latest kernel hardware accelerated? Who cares, from a developer's point of view.
Re:the end of software? (Score:2)
Further, I really only see ASICs as stepping stones towards development. Isn't the GeForce a full-blown processor? This is most likely because of the large volumes..
Rapid-switching FPGAs could very well be revolutionary, since you'd have one or two pieces of hardware that are reprogrammed for their environment on the fly. But that's vaporware at this point. FPGAs are (to my understanding) primarily for proof-of-concept, or getting something out the door.
-Michael
Good point (Score:1)
On the contrary, Moore's Law isn't even an established law: it's just something the co-founder of Intel said while trying to pitch his IPO. I don't have the data, so I'm not even sure how accurate the rule is. As it stands, the rule is kind of like saying, "You should only need four or five gallons of gas to make a 100-mile trip."
Re:Using Infiniband for clustering (Score:2)
thankyou.
:)Fudboy
Re:Impossible. (Score:2)
But that won't stop the chip manufacturers from trying of course.
January 15, 2028 - Intel announces their new 400THz processor, which performs 100 billion floating-point operations in the millisecond before it consumes itself in a nuclear explosion. This is a step up from AMD's recent processor which simply fries any nearby user with bolts of plasma energy. Hobbyists are already looking into ways to overclock the chip.
--
Obfuscated e-mail addresses won't stop sadistic 12-year-old ACs.
Re:They made me make this point! (Score:1)
Re:Impossible. (Score:3)
While everything is obviously shielded, it is still amusing to speculate on the cooking potentials of the insides of your PC.
What is more worrisome is the problem of heat. I recall reading someplace that right now a typical processor runs the energy of a 60 watt light bulb through that piece of ceramic.
When we multiply this with the frequency shifts and the number of transistors, it becomes worrisome.
I occasionally have visions of computers glowing like a flying saucer [smile]
Re:They made me make this point! (Score:1)
I've never heard of anyone who thinks Moore's law is an actual law of physics. That would be just stupid.
Of course, I don't live in America, where everything is possible.
Dunno (Score:3)
Re:They made me make this point! (Score:1)
Re:Impossible. (Score:1)
The halting problem is an NP-Complete problem. Since there is still no proof that P!=NP, there is no proof that the halting problem is impossible.
Umm, no. The halting problem has to do with decidability, not computation time. Decidability only requires that we get an answer in finite time and P and NP are both finite, so the question of P?=NP is irrelevant. This means that some computational problems *are* impossible, which, however has very little to do with the topic at hand.
In the world of mathematics it's easy to show that many things are impossible. In the world of engineering, it's generally pretty hard to determine what is possible except within narrow constraints. This doesn't mean that nothing is impossible, it just means that we're not good at predicting what is impossible; some things that appear hard are easy and some things that appear easy are hard.
Re:Advanced cooling will be needed. (Score:1)
Why a stall in Moore's law would be good for us... (Score:1)
Actually, I think a 10 year stall in Moore's law could be good for the industry as a whole.
The vast majority of software is very inefficiently written, making poor use of the processors we have. Given a 10 year lull in processor speed hikes we would be forced to explore the potential of what we have and squeeze out every last drop of performance we could find.
It would be extremely painful but leave us (as an industry) in better shape to exploit future breakthroughs in processor speed.
Re:What about the crypto? (Score:1)
It does. The previous poster just didn't cover enough data points to demonstrate it ... 128-bit = 1 month, 129-bit = 2 months, 130-bit = 4 months ... you know the powers of 2, don't you? :)
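The arithmetic behind the joke, spelled out (baseline assumed: brute-forcing a 128-bit key takes 1 month):

```python
# Each extra key bit doubles the keyspace and hence the brute-force time.
BASE_BITS = 128
BASE_MONTHS = 1  # assumed baseline from the joke above

def crack_months(key_bits: int) -> int:
    """Expected brute-force time, in months, relative to the baseline."""
    return BASE_MONTHS * 2 ** (key_bits - BASE_BITS)

for bits in (128, 129, 130):
    print(bits, crack_months(bits))  # 128 1 / 129 2 / 130 4
```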
Re:OT:Impossible. (Score:1)
-- mov'ing and setting registers as many cycles apart as possible
-- making a table of MULs of 320 (getting the value from memory is faster than MUL'ing it in real time)
-- completely ASM (of course)
-- some other horror stuff (can't really remember, it was years ago).
So I posted this to a certain newsgroup about gaming in response to somebody asking for it, blatantly noting that this should be the fastest way around. It was not.
Sidenote: it had to run on a 286 (so no fancy wide EAX registers available) and it had to be a function.
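The lookup-table trick above, sketched in Python for illustration: 320 is the VGA mode-13h row width (y*320 + x gives a pixel's offset), and the 200-row table size is my assumption, not from the post.

```python
# Trade a MUL instruction for a table lookup, built once up front.
MUL_320 = [y * 320 for y in range(200)]  # assumed 200 rows (mode 13h)

def pixel_offset(x: int, y: int) -> int:
    return MUL_320[y] + x  # lookup instead of y * 320

print(pixel_offset(10, 2))  # 2*320 + 10 = 650
```

On a 286 with slow multiplies the lookup wins; as the poster found out, on later hardware the "obvious" optimization isn't always the fastest.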
Re:They made me make this point! (Score:1)
Re:Good point (Score:1)
I think something which Moore's law has in common with Newton's, but not with the above statement, is that it is an apparently unlikely statement with far-reaching consequences. Nobody would've guessed Moore's law in 1920, say. (Well, I presume not.) And we all know the amazing effects that Moore's law enables - like computers that can beat Kasparov at chess *by brute force*. Whereas the fuel statement above is both obvious-sounding and devoid of amazing consequences.
Re:Errm..... (Score:2)
Re:Dunno (Score:2)
Re:Impossible. (Score:2)
What are the implications... (Score:2)
What's funny is that when I got my 800MHz Athlon, I committed myself to keeping the case cover on all the time for fear of rads. :-) One of my friends and I discussed this, but neither of us knows much about atomic physics.
Will processors running at that speed require shielding?
Re:Impossible. (Score:2)
Yeah, right! The future is DNA computing? It's a hack that happens to work for some obscure computationally intensive problems that can easily be parallelized. You do not want a DNA computer to replace your desktop, trust me on this. It would take hours just to set up a simple computation. It could make for an interesting co-processor, though, but for mainstream use the gains are probably not worth it. Parallel computing? Maybe, but there are lots of interesting problems that are not easily parallelized. Anyway, you forgot to mention quantum computing. That is definitely interesting, if it ever works (and chances are it will not). But for the near future, I'll be willing to bet a lot that Moore's law will still be valid for a few more years.
Re:What I find disturbing (Score:2)
This gap is no bigger than the gaps in the past: the 486 ranged from 25 MHz to 133 MHz during the life of the chip, the Pentium from 60 to 233 MHz.
Since you are comparing the early Pentium IIs to the newest Pentium IIIs, an equivalent comparison would be from the 25MHz 486 to the 233MHz Pentium. Anyway, my point is, there has always been a gap, and it's always been pretty big (at least starting with the 286 and better.)
Your investment argument is weak, since it is an investment that will depreciate almost immediately to nothing. You would have been better off buying a Celeron or PII now and a faster chip later when they are cheaper, than shelling out nearly 1000 bucks for a top-of-the-line chip.
-
Re:What are the implications... (Score:2)
That's obvious. :) Anyway, you are confusing non-ionizing radiation (radio waves) with ionizing radiation (radioactivity). Yeah, your chip is putting off a bunch of microwave energy, but that isn't ionizing radiation; it's the same kind of energy that your cell phone puts out. It's thought that the only health effects of non-ionizing radiation are heating effects. There is some controversy about this, and there could be other chronic effects of long-term exposure, but nothing has been proven yet.
Anyway, keep your case on; it helps seal your electronics from stray electromagnetic fields, and helps keep your computer from causing interference in the same way. I am a ham radio operator, and I can tell you how much of a mess computers make on the radio waves even with the case on. (If your case isn't metal, it doesn't make a difference either way, for all you geeks with plexiglass cases.)
-
Re:line widths (Score:2)
I also imagine, due to cooling requirements, development may go the route of multiple cheaper processors rather than expensive Apollo project processors (processors that push the theoretical limits). When this happens, software will start to morph to take advantage of it, and I predict we will still see gains comparable to Moore's law.
Nonsense! We don't need it! (Score:2)
Re:They made me make this point! (Score:3)
Re:Dunno (Score:3)
Re:Dunno (Beowulf not the solution) (Score:4)
Better off with multiple slower CPUs, like 1.5 GHz, and Beowulf them. More machines to take care of, but better than rushed/poor fabbing of CPUs. Plus you get redundancy and almost unlimited scalability. And ungodly bandwidth if you use gigabit cards instead of just 100bt. It's the way to go for pretty much everything unless you have something custom for one CPU (which is rare these days).
Actually, if you are going to have a system of highly interconnected CPUs like in a Beowulf cluster, then you are limited fairly severely in scalability. This is mostly due to the size of the memory bus. Even if you move up to gigabit ethernet cards, the bus is a big limiting factor.
Secondly, the class of tasks that a cluster is useful for is not that big. It does nothing towards making a really bloated program run any faster. Clusters are not very good for real-time tasks, because once you have chopped up a problem and distributed it to all of the processors, you have very little time to work on it and get the results back in time.
While very useful, the cluster is not likely to be the solution to the potential end of Moore's-law-like growth.
Answer this... (Score:2)
Re:Dunno (Score:2)
And as CPUs get faster, SMP cannot help that much either, because we have to make sure internal CPU caches that represent the same physical memory area stay in sync. Of course there will be advances in this area too, but one cannot throw hundreds of CPUs on a mobo and then program it as if it were a single-processor computer.
We're going to need *much better* compilers in the future.
_________________________
Re:What about the crypto? (Score:2)
"Me Ted"
the end of software? (Score:4)
Here are the alternatives. Get away from pipelining (which is a hack that facilitates ever-increasing clock speeds). Return to optimized and specialized adders / multipliers, etc. Now that we make things in parallel with 2 to 4 adders, simply produce CPUs with 24 adders, each with no intervening pipeline buffers. The number of transistors goes down significantly for each adder, and through the use of highly conductive materials (such as diamond) you can achieve large-surface-area chips. (This assumes that you take on the reverse of existing P4s: you have the control logic and memory interfaces running at 10GHz while each adder runs at, say, 100MHz, with each gate switching with nearly 1/20GHz propagation delay.)
Step two is even more obvious: specialized hardware. In the video world, we have only to compare software OpenGL to hardware OpenGL. Specialized hardware is monumental because it's the ultimate parallel algorithm. Those algorithms such as MFC, or possibly even OS calls, could be hardware controlled. Granted, it makes upgrades a lot harder, but don't we find ourselves spending the money on new video cards every year and a half now? How often does someone upgrade WinNT? It already costs $150 for the OS upgrade, what's an additional $50 for the PCI / adaptive AGP card?
To facilitate smoother transitions, I think that FPGAs or ASICs might have a popularity explosion. As far as I know, they're still manufactured with huge gate widths. Bring an ASIC into the "10GHz" range, and you have the potential for incredible performance.
In fact, the CPU as we know it might fade away into the annals of history over time. A return to cartridges, perhaps?
-Michael
Errm..... (Score:5)
Impossible. (Score:2)
The reason it is impossible is due to heating issues, and also that down at 0.01 microns a single bit is represented by only a few hundred electrons. Quantum mechanics states that the uncertainty of such a conglomeration is about 1 in 200 billion - ie, the 'bit' is only certain to that degree. Given that a processor at this speed will process many times this amount per second, it is impossible for a processor to run at this architectural scale, because one in every 250 billion bits will be corrupted - which is fatal. I have estimated that the top speed we are likely to see is about 3GHz at 0.05 microns. To assert otherwise is hogwash.
The future lies in parallel processing and DNA, mark my words. You can bet AMD and Intel are researching it now. The traditional CPU is nearly dead.
Re:Impossible. (Score:3)
X-Ray lithography? (Score:2)
--
FIRST IMAGES BY APRIL (Score:4)
Wouldn't happen to look like this [lesbell.com.au] would it?
-----
"Almost isn't good enough - but it's almost good enough."
Re:Dunno (Score:3)
The key is to have a very high computation time (the time a processor can keep working by itself, without having to communicate with other processors) to latency ratio. Unfortunately, for most tasks performed on a desktop machine, lots of data must be accessible at all times.
Take a GUI for example. Most tasks involving windows and other on-screen objects are very short, and they require up-to-date information about where the window is, what events have occurred, etc. Trying to farm any of those calculations to other machines would actually make your GUI appear slower to the user, by adding 2x the network latency to every event that occurs.
SMP, of course, is another matter. Shared memory eliminates the incredibly high cost of going to a network subsystem. Synchronization and locking become an issue, however, and you never get a factor-N speedup for N processors.
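A toy model of the compute-to-latency trade-off described above (all numbers made up, and serialization/contention costs ignored): each chunk farmed out pays a network round trip on top of its share of the work.

```python
# Idealized speedup from distributing `work_s` seconds of work over `nodes`
# machines, paying a round trip of 2*latency_s per chunk shipped out.
def cluster_speedup(work_s: float, nodes: int, latency_s: float, chunks: int) -> float:
    distributed = work_s / nodes + 2 * latency_s * chunks
    return work_s / distributed

# A long batch job wins; a sub-millisecond GUI event loses outright.
print(cluster_speedup(100.0, 8, 0.0001, 1000))   # ~7.9x faster
print(cluster_speedup(0.0001, 8, 0.0005, 1))     # < 1: slower than local
```

The second case is the GUI example: adding even one round trip of latency makes the "distributed" version slower than just doing the work locally.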
Doug
Error correction (Score:2)
The tricky part is that not only storage and data paths would need ECC - all processing circuitry would need to support error correction with redundant circuits. Even the most basic building blocks would need to be redesigned and replaced with versions that incorporate ECC sanity checking into their internal design to take into account the fact that any intermediate stage may flip a bit. I imagine designing an error correcting adder or multiplier would be a nightmare but it's possible.
The resulting architecture would probably need to be a very simple processor, VLIW perhaps.
And I bet it would emulate a Pentium using Transmeta-style translation
----
Re:Answer this... (Score:3)
Then why does almost every single linux company I know of (regardless of their field) have *at least* a 6-node beowulf cluster. It's not for SETI, my friend. Some folks need that power without having to get a crazy expensive Sun/HP/SGI/DEC/Aviion or with some performance-crippled 8-way xeon. If you BREAK UP the task, it works better. Gigabit is more than enough for databases, etc.
Like I said, there are certain tasks that a cluster is great for, but there are quite a few that it doesn't do you any good to have a cluster. This is mostly due to the fact that you have traded what amounts to very high memory latency for more available cycles.
Secondly there are some tasks that are very hard to break up into sub-problems and so it is very hard to apply a cluster to those types of problems.
A good way to know if a cluster will help solve a problem is to look at how much the processors must share data while working on that problem. If you can limit a processor to writing to a data space that other processors do not depend on then you have a problem that may be well suited for a cluster. If however the result space and the data space overlap then your memory bus can get easily swamped trying to keep everybody's memory up to date.
OT: HUGE CLUSTER (Score:2)
http://tux.anu.edu.au/Projects/Bunyip/Beo-017.jpg [anu.edu.au]
http://tux.anu.edu.au/Projects/Bunyip/Beo-015.jpg [anu.edu.au]
Now *THAT* would be the ultimate quake server or GIMP beast! But... who can afford the electricity?!?!
Other considerations (Score:2)
If you go to an opto system, speed will always be a coefficient of the medium, ~300,000 km/s. But when you transfer energy, if it isn't converted back into signal, heat is generated too. What would be way kewl (Cool!) is integrated Peltier junctions to help dissipate heat. Built-in heat sink!
Another thing is the inductive coupling of longer wires. There's a reason for all those stupid ground returns on a parallel cable! They redirect the induced signal to gnd. Capacitance effectively blunts the wavefront of a signal, but if they work with soliton pulses (essentially a pre-squished square wave), they have nothing to blunt/induce. Induction is a rise-time effect more than anything else. The trouble with solitons is: when is the bloody thing a 0/1???
There's a foot in your mouth (Score:2)
Power dissipation goes down with reduced size. This makes up for the increase with increased speed.
and also that down at 0.01 microns a single bit is represented by only a few hundred electrons.
Only if they make the transistor that small. .01 micron is the minimum size of a feature, not the size of all features. While smaller transistors are nice, smaller busses are actually more important. Anyway, to take your assumption at face value anyway...
Quantum mechanics states that the uncertainty of such a conglomeration is about 1 in 200 billion - ie, the 'bit' is only certain to that degree. Given that a processor at this speed will process many times this amount per second, it is impossible for a processor to run at this architectural scale because one in every 250 billion bits will be corrupted - which is fatal.
Certainly so -- if you don't design any error correction into the chip. It only requires about a 20% increase in real estate to implement two parity bits which would require two simultaneous bit failures to create a nonresolvable error. This would also slow things down very little as parity checking can be done in parallel with computation -- it's always going on. Thus, instead of crashing once every minute or so as your calculations suggest, it would crash once every several hundred billion minutes or so, which is quite tolerable.
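A minimal illustration of the parity idea: a single even-parity bit per register detects any one flipped bit (actually correcting it takes more bits, e.g. a Hamming code, which is where the ~20% real-estate figure comes in).

```python
# One even-parity bit over a register word detects a single bit flip.
def parity(word: int) -> int:
    """Even parity: 0 if the number of set bits is even, 1 if odd."""
    return bin(word).count("1") & 1

stored = 0b1011_0010
check = parity(stored)             # remembered alongside the register
corrupted = stored ^ (1 << 3)      # a single bit flips in flight
print(parity(corrupted) != check)  # True: the mismatch exposes the error
```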
I have estimated that the top speed we are likely to see is about 3GHz at 0.05 microns. To assert otherwise is hogwash.
You know a lot about physics, but not much about CPU architecture. Your pet peeve will be relatively simple to work around when the time comes.
cpu's (Score:2)
- Amon CMB
Using Infiniband for clustering (Score:2)
Of course, you still have the problem that current clusters require software to be rewritten to take advantage of them. I think someone could design a system that finds other systems across InfiniBand and shares the workload automatically. The more transparent the clustering, the better.
Re:Impossible. (Score:3)
I was proven wrong.
Please read "Impossible for Dummies", this must brighten you up.
They made me make this point! (Score:4)
A more careful journalist would hopefully have written:
--
line widths (Score:5)
Lowering the voltage has some good effects - the main one is that the power consumption drops as the square of the voltage (assuming Ohm's law). However, lowering the voltage causes everything to run slower. The old-fashioned 4000-series CMOS chips were much faster at 15 volts than they were at 5 volts.
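The square-law relation above, sketched with the standard dynamic-power approximation P ~ C * V^2 * f (the component values here are made up for illustration):

```python
# Dynamic switching power of a CMOS node: capacitance * voltage^2 * frequency.
def dynamic_power(cap_f: float, volts: float, freq_hz: float) -> float:
    return cap_f * volts ** 2 * freq_hz

p_5v  = dynamic_power(1e-12, 5.0, 100e6)   # a 1 pF node at 5 V, 100 MHz
p_2v5 = dynamic_power(1e-12, 2.5, 100e6)   # the same node at half the voltage
print(p_5v / p_2v5)  # 4.0: halving V quarters the power
```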
Chips get faster when they shrink because the capacitances decrease as the surface area of a conductor shrinks; cut the feature size by a factor of two in both directions and the capacitance is down by a factor of four. However there is another effect which occurs as everything shrinks; the insulation between features shrinks, and that shrinking feature increases the parasitic capacitance between the two features.
In the past the increase in capacitance caused by the thinning of insulators has not been a significant effect in limiting clock speeds but there comes a point where the effect does become important. In neurons the cell walls are so thin that the capacitance effects of the thin dielectric limit signal propagation speeds in the neuron to about 180 miles per hour or so. Long axons have thick sheaths to cut the capacitance and increase the signal propagation speeds.
This increasing capacitance with the decreasing dielectric thickness, combined with the decreasing speed from the lowered voltages, will eventually put an effective cap on the clock speed of silicon devices. The only big trick left in the book is to switch to diamond-based semiconductors - which are as much better than silicon as silicon was than germanium - and that will give us some more speed. Above a certain frequency Nature itself changes the way it does things. At RF frequencies bulk devices like crystals function - at the frequencies of light waves only atomic devices can switch from one state to another quickly enough.
In other words at some point in the near future we are going to reach a point where simple die shrinking won't be enough to crank up clock speeds any more. Enjoy things while they last - but another factor of a thousand increase in clock speed (Apple II one megahertz to present day one gigahertz) is going to be very difficult to achieve.
Advanced cooling will be needed. (Score:2)
~~~
Re:They made me make this point! (Score:3)
Moore's "Law", based on observations by Intel co-founder Gordon Moore and first stated in 1965, predicts that the density of transistors contained on integrated circuits will double every 18 to 24 months.
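The doubling rule as plain arithmetic (an 18-month doubling period is assumed here; restatements of the "law" vary between 12 and 24 months):

```python
# Projected transistor density after `months`, doubling every `doubling_months`.
def projected_density(base: float, months: float, doubling_months: float = 18.0) -> float:
    return base * 2 ** (months / doubling_months)

# Three 18-month periods = one factor-of-8 jump:
print(projected_density(2300, 54))  # 2300 * 2**3 = 18400.0
```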
Not tricky at all (Score:2)
It actually isn't hard at all to do this. Individual registers can be verified in real time with parity checks. Multiple parity bits can allow parity errors to automatically be resolved without losing data. A clock cycle might have to be skipped while this is done -- once every few hundred billion clocks. Otherwise, it's transparent and consumes rather little chip real estate.
In some cases it would be easier to duplicate entire modules and compare the outputs. It's not necessary to use three blocks with voting; if a compare fails, you redo the operation. It's a computer; until you write the results you still have your starting state to begin from again. So once again, you miss a clock cycle once in a great while.
Remember also that most of the computer is not the CPU and isn't implemented at this level or running at this speed. You only have to harden the parts that are.
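A software sketch of the duplicate-and-compare scheme described above: run the operation on two copies of the module, and on a mismatch simply redo it, since the pre-write state is still intact. The fault rate and 32-bit width are made-up numbers, and two simultaneous identical flips are assumed negligible.

```python
import random

def flaky_adder(a: int, b: int) -> int:
    """An adder with a rare transient fault (assumed one-in-a-million rate)."""
    result = a + b
    if random.random() < 1e-6:
        result ^= 1 << random.randrange(32)  # one bit flips
    return result

def checked_add(a: int, b: int) -> int:
    while True:
        r1, r2 = flaky_adder(a, b), flaky_adder(a, b)  # duplicated modules
        if r1 == r2:   # outputs agree: commit the result
            return r1  # otherwise loop: "skip a clock cycle" and redo

print(checked_add(2, 3))  # 5
```

No voting among three blocks is needed: disagreement alone is enough to trigger a retry, at the cost of an occasional lost cycle.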
Re:They made me make this point! (Score:2)
Screamer? (Score:2)
I had a 4 MHz Z80 system by Amstrad from the late 80s whose word processor blew away WordPerfect on a 12 MHz 8086 at spell-checking documents. (I'd still be using it but it used proprietary disks and you can't get the drives any more.) This was a very powerful, intuitive word cruncher using an extended text mode that could display 512 different characters at a time on its 90x25 screen, and for ease of use it compared favorably with all but the fastest newest Windows-based systems. It also ran CP/M, and sported an interpreted BASIC that made QBasic look like a sick joke.
If we had software written like that for the x86 platform, it would be amazing what these machines could do. Imagine something text based, with pre-emptive multitasking, installable with only the features you need, highly configurable, with optional graphics, and built by people who really care about what they are doing...
Well, I guess we have an operating system like that, but it would be nice to have applications too.
These problems already solved by Cray (Score:2)
And as I pointed out they were wrong.
even if it was running at 10Ghz all the components in the mobo would suffer from heavy timing problems due to different wire lengths.
Cray mainframes of the 80's had this problem as they were refrigerator-sized and operating over 100 MHz; the problem was solved by modularizing the system, desynchronizing the components, and recombining data under controlled circumstances. I remember being told by a beaming engineer in 1982 that some busses had three different addresses on them at the same time.
So it can be done. Now, we'll just see it done 100 times faster with equipment 100 times smaller -- a single chip.
Re:Impossible. (Score:2)
As for software verifiability, plenty of people *have* done research into it. Aside from many problems in the field turning out to be NP-hard or worse (yes, there are such problems), stating exactly the properties you wish to verify is in itself a large intellectual effort.
Notwithstanding all of this, there have been some interesting projects done, some of which have made it to some areas of industry. One simple-to-grasp example of a useful project was an automated termination analyser for the Mercury [mu.oz.au] programming language. While it obviously can't be 100% accurate, it could prove that about 80-90% of the Mercury language's standard library would terminate in finite time for all inputs - and, usefully, it turned out that code for which termination couldn't be proven often contained subtle bugs.
Mercury, however, has features that make this kind of thing much, much easier than conventional programming languages (but make programming in it somewhat of a challenge to learn initially). Trying to do this on pointer-ridden C is probably too difficult and too inaccurate to be useful.
4.77 Ghz (Score:3)
Even if it was possible it would be unafordable (Score:2)