Forget Moore's Law?
Roland Piquepaille writes "On a day when CNET News.com publishes a story titled "Moore's Law to roll on for another decade," it's refreshing to look at another view. Michael S. Malone says we should forget Moore's law, not because it isn't true, but mainly because it has become dangerous. "An extraordinary announcement was made a couple of months ago, one that may mark a turning point in the high-tech story. It was a statement by Eric Schmidt, CEO of Google. His words were both simple and devastating: when asked how the 64-bit Itanium, the new megaprocessor from Intel and Hewlett-Packard, would affect Google, Mr. Schmidt replied that it wouldn't. Google had no intention of buying the superchip. Rather, he said, the company intends to build its future servers with smaller, cheaper processors." Check this column for other statements by Marc Andreessen or Gordon Moore himself. If you have time, read the long Red Herring article for other interesting thoughts."
BBC Article (Score:4, Informative)
But what if Moores law is too slow? (Score:4, Interesting)
Everyone seems to be acting like Moore's law is too fast, that over the next century our technology could never grow as fast as it predicts. However, consider for a moment that perhaps it's too slow, and that technology can and will grow faster than its predictions, like it or not. Yes, silicon has limits, but physics-wise there is no law I know of inherent in the universe that says mathematical calculations can never be performed faster than xyz, or that the rate of growth in calculation ability can never accelerate faster than abc. These constraints are defined by human limits, not physical ones.
In fact, it could be argued that Moore's law is slowing down progress, because investors see any technology that grows faster than it predicts as too good to be true, and therefore too risky to invest in. However, from time to time, when companies have been in dire straits to outdo their competitors, "magical" things have happened that seem to have surpassed it for at least brief periods. Also, from what I understand, the rate of growth in optical technology *IS* faster than Moore's law, but people expect it to fizzle out when it reaches the abilities of silicon - I doubt it.
The last time Intel was declaring the death of Moore's law was when they were under heavy attack from predictions that they couldn't match advances in RISC technology. Funny, when they finally redesigned their CPU with RISC underpinnings, these death predictions silently faded away (at least till now). I wonder what's holding them back this time?
Re:But what if Moores law is too slow? (Score:3, Interesting)
Contrary to what some people think, quantum physics and thermodynamics define the limits of what a computer can do.
For example, you can't send data faster than the speed of light, and you can't have two memory blocks closer together than the Planck length. Likewise, you cannot store more information than the total amount of information in the universe, etc., etc.
According to physics as it is today, there are dead ends for computers where they cannot get any faster, bigger, or more powerful. We may never reach those limits, but they still exist.
clustering (Score:5, Interesting)
I guess it's better to use interconnected devices in an interconnected world.
Where I work, we recently traded our Sun E10k for several E450s, between which we load-balance requests.
It works surprisingly well.
I guess Google's approach is an efficient one, then.
Re:clustering (Score:5, Interesting)
Google's approach is good for Google. If Google wanted to make good use of significantly faster CPUs, they would also need significantly more RAM in their machines (a CPU that is ten times faster can't yield a speed-up factor of ten if the network can't deliver the data fast enough).
For Google it's fine: if a request can be done in, say, half a second on a slower machine, that is a lot cheaper than a machine ten times as fast doing each request in a twentieth of a second.
On the other hand, if you have a job that can only be done sequentially (or can't be parallelized all that well), then having hundreds of computers won't help you very much...
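(For what it's worth, the diminishing return being described here is usually called Amdahl's law. Here's a minimal C sketch; the 5% serial fraction and the processor counts are just numbers picked for illustration, not anything from the parent post:)

#include <stdio.h>

/* Best possible speedup when a fraction of the work is inherently serial. */
static double amdahl_speedup(double serial_fraction, int processors)
{
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors);
}

int main(void)
{
    int counts[] = { 1, 10, 100, 1000 };
    /* Even with only 5% of the job being strictly sequential,
       hundreds of CPUs top out at roughly a 20x speedup. */
    for (int i = 0; i < 4; i++)
        printf("%4d processors -> %.1fx speedup\n",
               counts[i], amdahl_speedup(0.05, counts[i]));
    return 0;
}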
The faster, more expensive servers will definitely cost more when you buy them - on the other hand, they might save you a lot of money in terms of lower rent for the offices (lower space requirements) or - perhaps even more important - savings on energy...
The company where I'm working switched all their work PCs to TFTs relatively early, when TFTs were still expensive. The company said that this step was taken because of the expected cost savings on power bills, plus the savings on air conditioning in rooms with lots of CRTs...
Sounds like Ganesh's law to me (Score:3, Informative)
The entire issue is confused. (Score:3, Insightful)
'His words were both simple and devastating: when asked how the 64-bit Itanium, the new megaprocessor from Intel and Hewlett-Packard, would affect Google, Mr. Schmidt replied that it wouldn't. Google had no intention of buying the superchip. Rather, he said, the company intends to build its future servers with smaller, cheaper processors." '
The parent comment is correct, but the entire issue is confused. In a few years, the Itanium will be the cheapest processor available, and Google will be using it.
What is an example that can't run in parallel? (Score:4, Interesting)
I tend to be a believer that massively parallel machines are the (eventual) future. E.g., just as we used to brag about how many K, and then eventually megabytes, our memory was, or how big our hard disk was, or how many megahertz, I think that in the future shoppers will compare: "Oh, the machine at Worst Buy has 128K processors, while the machine at Circus Shitty has only 64K processors!"
Re:What is an example that can't run in parallel? (Score:3, Informative)
Though the resulting matrix would probably be applied across a lot of data, and that can be done in parallel.
A matrix inversion can be done very fast if you have a very large MPP system (say, effectively 2^32 processors!) like a quantum computer.
Re:What is an example that can't run in parallel? (Score:3, Informative)
I found a nice little read about how to decide if any particular problem you are looking at is easily parallelizable.
It is in pdf (looks like a power point presentation).
http://cs.oregonstate.edu/~pancake/presentation
Re:What is an example that can't run in parallel? (Score:4, Insightful)
That's not to say that every complex problem needs a supercomputer. That's why Cray also sells Intel clusters. Right tool for the right job and all of that.
Re:What is an example that can't run in parallel? (Score:3, Insightful)
Re:What is an example that can't run in parallel? (Score:3, Informative)
Can someone please give an example of a computing task that CANNOT be subdivided into smaller tasks and run in parallel on many processing elements?
The technical issue here is known as "linear speedup". Take chess, for example: the standard search algorithm for chess play is something called minimax search with alpha-beta pruning. It turns out that the alpha-beta pruning step effectively involves information from the entire search up to this point. With only a subset of this information, exponentially more work will be needed: a bad thing.
How do parallel chess computers such as Deep Blue work, then? Very fancy algorithms that still get sublinear but interesting speedups, at the expense of a ton of clever programming. This is a rough explanation of why today's PC chess programs are probably comparable with the now-disassembled Deep Blue: the PC chess programmers can use much simpler search algorithms and concentrate on other important programming tasks. Also, a 10x speedup in uniprocessor performance yields a 10x increase in search speed, whereas using ten slower processors isn't nearly so effective. Note that Deep Blue was decommissioned largely because of maintenance costs: a lot of rework would have been needed to make Deep Blue take advantage of Moore's Law.
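(For readers who haven't met it, here's a minimal serial sketch of the kind of search being described: plain negamax with alpha-beta pruning over a made-up game tree. The branching factor, depth, and the fake evaluation function are placeholders, not anything from a real chess program:)

#include <stdio.h>

#define BRANCH 4   /* fake branching factor */
#define DEPTH  6   /* fake search depth */

/* Deterministic stand-in for a static evaluation of a leaf position. */
static int leaf_score(unsigned path)
{
    return (int)((path * 2654435761u) >> 24) - 128;
}

/* Negamax with alpha-beta: the [alpha, beta] window carries information
   from every branch searched so far, which is exactly what makes the
   cutoff test hard to distribute across processors. */
static int alphabeta(unsigned path, int depth, int alpha, int beta)
{
    if (depth == 0)
        return leaf_score(path);
    for (int move = 0; move < BRANCH; move++) {
        int score = -alphabeta(path * BRANCH + move, depth - 1, -beta, -alpha);
        if (score > alpha)
            alpha = score;      /* best line found so far */
        if (alpha >= beta)
            break;              /* cutoff: depends on earlier siblings' results */
    }
    return alpha;
}

int main(void)
{
    printf("root value: %d\n", alphabeta(1, DEPTH, -1000, 1000));
    return 0;
}

The serial dependence lives in that alpha/beta window: parallel versions have to guess the window for branches searched simultaneously and redo work when the guess turns out wrong, which is roughly where the sublinear speedup comes from.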
That said, many tasks are "trivially" parallelizable. Aside from the pragmatic problem of coding for parallel machines (harder than writing serial code even for simple algorithms), there is also the silicon issue: given a transistor budget, are manufacturers better spending it on a bunch of little processors or one big one? This is the real question, and so far the answer is generally "one big one". YMMV. HTH.
(BTW, why can't I use HTML entities for Greek alpha and beta in my Slashdot article? What are they protecting me from?)
Re:What is an example that can't run in parallel? (Score:4, Informative)
This algorithm is something I'm familiar with. (Not chess, but other toy games in LISP, like Tic Tac Toe, Checkers, and Reversi, all of which I've implemented using a generic minimax-alphabeta subroutine I wrote.) (All just for fun, of course.)
If you have a bunch of parallel nodes, you throw all of the leaf nodes at them in parallel. As soon as leaf board scores start coming in, you min or max them up the tree. You may be able to alpha-beta prune off entire subtrees. Yes, at higher levels, the process is still sequential. But many boards' scores at the leaf nodes need to be computed, and that could be done in parallel. Yes, you may alpha-beta prune off a subtree that has already had some of your processors thrown at its leaf nodes -- you abort those computations and re-assign those processors to the leaf nodes that come after the subtree that just got pruned off.
Am I missing anything important here? It seems like you could still benefit significantly from massive parallel processing. If you have enough processors, the alpha-beta pruning itself might not even be necessary. After all, alpha-beta pruning is just an optimization so that sequential processing doesn't have to examine subtrees that wouldn't end up affecting the outcome. But let's say each board can have 10 possible moves made by each player, and I want to look 4 moves ahead. That's 10,000 leaf boards to score. If I have more than 10,000 processors, why even bother to alpha-beta prune? Now, if I end up needing to examine 1 million boards (more realistic, perhaps) and I can do them 10,000 at a time, I may still be able to take advantage of some alpha-beta pruning. And 10,000 boards examined at once is still faster than 1 at a time.
Vector processors wouldn't be any more helpful here than massively parallel ones, would they?
Of course, whether a mere 10,000 processors constitutes massively parallel or not is a matter of interpretation. Some people say a 4-way SMP is massively parallel. I suppose it depends on your definition of "massively".
Re:What is an example that can't run in parallel? (Score:5, Informative)
Computing the MD5 sum of 1TB of data. :-) MD5's construction chains each block's computation off the previous block's result, so the bulk of the work is inherently non-parallelizable.
Re:What is an example that can't run in parallel? (Score:3, Interesting)
An example round from MD5 (line 198):
MD5STEP (F1, a, b, c, d, in[0] + 0xd76aa478, 7);
This expands to:
( w += (d ^ (b & (c ^ d))) + in[0] + 0xd76aa478, w = w<<7 | w>>(32-7), w += b )
Notice that there are two additions in the first subexpression. The addition (in[0] + 0xd76aa478) can be computed simultaneously with (d ^ (b & (c ^ d))).
This is the only spot where anything could be parallelized. Assuming that all the principal operations can be performed in the same amount of time, you could potentially go 25% faster by computing that addition in parallel.
But that's the furthest you can go.
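(To make the data dependence concrete, here's a small compilable sketch - not a full MD5 implementation - that chains the first four of the 64 round-1 steps using the same MD5STEP macro quoted above, with the standard MD5 initial values and constants. Each step rewrites a word that the very next step reads, and each 512-bit block starts from the previous block's output, which is why hashing 1TB stays serial:)

#include <stdio.h>
#include <stdint.h>

#define F1(x, y, z) ((z) ^ ((x) & ((y) ^ (z))))
#define MD5STEP(f, w, x, y, z, data, s) \
    ((w) += f((x), (y), (z)) + (data), (w) = (w) << (s) | (w) >> (32 - (s)), (w) += (x))

int main(void)
{
    /* MD5's standard initial chaining values. */
    uint32_t a = 0x67452301, b = 0xefcdab89, c = 0x98badcfe, d = 0x10325476;
    uint32_t in[16] = { 0 };   /* one all-zero 512-bit message block */

    /* First four of the 64 steps: each one consumes the value the
       previous step just produced, so they cannot run side by side. */
    MD5STEP(F1, a, b, c, d, in[0] + 0xd76aa478, 7);
    MD5STEP(F1, d, a, b, c, in[1] + 0xe8c7b756, 12);
    MD5STEP(F1, c, d, a, b, in[2] + 0x242070db, 17);
    MD5STEP(F1, b, c, d, a, in[3] + 0xc1bdceee, 22);

    printf("state after 4 steps: %08x %08x %08x %08x\n",
           (unsigned)a, (unsigned)b, (unsigned)c, (unsigned)d);
    return 0;
}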
Re:clustering (Score:3, Informative)
I think you have the right idea, slightly misstated. The crux for Google is that their problem is really one of creating a huge associative memory, many terabytes of RAM. The speed of the processors is not that important; the speed of the RAM is the bottleneck. Pipelining etc. has little or no effect on data lookups, since practically every lookup is going to be outside the cache.
That does not support the idea that Moore's law is dead. It merely means that Google is more interested in bigger and faster RAM chips than in bigger and faster processors.
Long ago, when I built this type of machine, the key question was the cost of memory. You wanted fast processors because you could reduce the total system cost if you had fewer, faster processors with the same amount of RAM. Today, however, RAM cost is not a big issue, and the fastest processors tend to require faster RAM, so you can make savings by having 10 CPUs running at half the speed rather than 5 really fast processors at three times the cost.
Re:clustering (Score:3, Insightful)
If Google screws up 1 in 1000 requests, I wouldn't even notice. Refresh and on my way.
Citibank trades roughly $1 Trillion in currency a day. If they had 5 9's accuracy, they would be misplacing $10,000,000 a day. In that environment, commodity machines are unacceptable.
And it scales down: paychecks? billing records? The video check-out at Blockbuster?
Re:clustering (Score:5, Informative)
NoW (Score:4, Informative)
Here is a link to a seminal paper on the issue if you are interested:
http://citeseer.nj.nec.com/anderson94case.html
Future of Supercomputing (Score:5, Interesting)
Check out who's on top of the TOP 500 [top500.org] supercomputers. US? Nope. Cluster? Nope. The top computer in the world is the Earth Simulator [jamstec.go.jp] in Japan. It's not a cluster of lower end processors. It was built from the ground up with one idea -- speed. Unsurprisingly it uses traditional vector processing techniques developed by Cray to achieve this power. And how does it compare with the next in line? It blows them away. Absolutely blows them away.
I recently read a very interesting article about this (I can't remember where - I tried googling) which basically stated that the US has lost its edge in supercomputing. The reason was twofold: (1) less government and private funding for supercomputing projects, and (2) a reliance on clustering. There is communication overhead in clustering that dwarfs similar problems in traditional supercomputers. Clusters can scale, but their maximum speed is limited.
Before you start thinking that it doesn't matter and that the Beowulf in your bedroom can compare to any Cray, recognize that there are still problems in science whose computations would take ages to complete. These are very different problems from those facing Google, but they are nonetheless real and important.
Re:Future of Supercomputing (Score:3, Informative)
640 processor nodes, each consisting of eight vector processors, are connected by a high-speed interconnection network.
That makes it a cluster (640 processor nodes) of clusters (8 vector processors each).
Re:Future of Supercomputing (Score:5, Informative)
It's worth noting that the Earth Simulator is actually a cluster of vector mainframes (NEC SX-6s) using a custom interconnect. You could do something similar with the Cray X-1 if you had US$400M or so to spend.
If you're referring to the article I think you are, it was specifically talking in the context of weather simulation -- an application area where vector systems are known to excel (hence why the Earth Simulator does so well at it). The problem is that vector systems aren't always as cost-effective as clusters for a highly heterogeneous workload. With vector systems, a good deal of the cost is in the memory subsystem (often capable of several tens of GB/s of memory bandwidth), but not every application needs heavy-duty memory bandwidth. Where I work, we've got benchmarks that show a cluster of Itanium 2 systems wiping the floor with a vector machine for some applications (specifically structural dynamics and some types of quantum chemistry calculations), and others where a bunch of cheap AMDs beat everything in sight (on some bioinformatics stuff). It all depends on what your workload is.
Upgrading Good... (Score:4, Insightful)
Of course, in true
Re:Upgrading Good... (Score:3)
Exactly. And if they run out of capacity, they just add more cheap nodes, rather than buying a crazily expensive supercomputer like eBay has.
Re:Upgrading Good... (Score:3, Funny)
You're thinking of priceline.
Re:Upgrading Good... (Score:5, Insightful)
This is all so obtuse anyway. These articles proclaim that Moore's Law is some crazy obsession, when in reality Moore's Law is more of a marketing law than a technical law: if you don't appreciably increase computing power year over year, no new killer apps will appear (because the market isn't there) to encourage owners of older computers to upgrade.
Re:Upgrading Good... (Score:3, Insightful)
Computers are productivity tools, and a 'Google'-like application would have been perfectly possible 15 years ago; the programmers would just have had to work a little bit harder to achieve the same results. Nowadays you can afford to be reasonably lazy. It's only an economics thing, where the cost of development and the cost of hardware balance at an optimum.
In that light, if Google had been developed 15 years ago it would use 286s, and if it were developed 15 years from now it would use whatever is in vogue and at the economically right price point at that time.
Re:Upgrading Good... (Score:3, Insightful)
Killer apps and hardware have everything to do with each other. Could I browse the Internet on an Atari ST? Perhaps I could do lynx-like browsing (and did), but the Atari ST didn't even have the processor capacity to decompress a JPEG in less than a minute (I recall running a command-line utility to decompress those sample JPEGs hosted on the local BBS to ooh and ahh over the graphical prowess). Now we play MP3s and multitask without even thinking about it, and we wouldn't accept anything less. As I mentioned in another post, I believe the next big killer app that will drive the hardware (and networking) industry is digital video: when every grandma wants to watch 60-minute videos of her grandchild over the "Interweeb," suddenly there will be a massive demand for the bandwidth and the computation power (I've yet to see a computer that can compress DV video in real time).
Moore's law is about cost! (Score:3, Insightful)
Google is taking advantage of cheap, high-performing chips, which is exactly the thing predicted by Gordon Moore.
Re:Upgrading Good... (Score:3, Interesting)
Google surely will upgrade to a more modern processor when they start to replace older hardware. However, they will be driven purely by economics. There is no need to get the top-of-the-line processor; power consumption versus speed, for example, is an issue for them.
The ordinary desktop user
For gamers, current processing power is more than enough. The limit is the GPU, not the CPU, or the memory bandwidth between them, or, in the case of online gaming, the internet connection.
Killer apps are only a question of ideas, not of processing power. And after you have the idea, it takes time to market, money, and workers (coders) to make it a killer app.
The future will move toward more network-centric computing, having all information available everywhere, easily - if possible through dynamically adapting interfaces between information stores (aka UDDI, WSDL and SOAP). A step in that direction is ever-stronger interconnection. We are close to the point where we can have internet connections in which a packet makes only 3 hops between arbitrary points on earth. With ping times around a tenth of a second, approaching 10ms in the next few years, things like grid computing and other forms of pervasive computing will be the money makers.
I expect that chip-producing companies will have a much smaller market in about 15 years than they do now, but until then they will grow. The next closing window after that is the interconnection industry, routers and such, once all points are connected with a ping below 10ms.
So what's open? Not making faster and faster chips.
Different versions of parallel computing: there are a lot of conceivable parallel machines that are not truly implementable, but they could be "simulated" closely enough to indeed get a speedup for certain algorithms. (Instead of the infinite processors some of those machines require, a few million could be enough.)
And of course tiny chips, like radio-activated (RFID?) chips with very low processing power. Implantable chips, likely with some more hardware to stimulate brain areas for Parkinson's patients, to reconnect spines, etc.
Chips suitable for analysis of chemical substances, water and air monitoring, picking up poisons, etc.
Chips to track stuff, like lent books or paper files in big companies.
I'm pretty sure the race is coming to an end, just like in supersonic air flight. At a certain point you gain nothing from more speed, but you still have plenty of room to make "smart" solutions/improvements.
angel'o'sphere
Misapprehensions (Score:5, Insightful)
Re:Misapprehensions (Score:4, Insightful)
This is where FPGAs and other reconfigurable hardware will enter. There are already transparent solutions that convert C code to both machine code and hardware (i.e., a bitstream to download into an FPGA on a PCI card).
When discussing video and audio editing, you must realize that the cause of the huge performance need is not the complexity of the task, but the lack of parallel work in a modern PC. As a matter of fact, smaller computing units, perhaps thousands of CUs inside a CPU chip, would give you better performance when editing video (if the code were adapted to take advantage of them) than a super chip from Intel; see the sketch below.
If you want to test parallelism, bring together a set of Linux boxes and run Mosix. It works great!
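(A minimal sketch of the kind of adaptation meant above: for many filters, video frames are independent, so the work can be farmed out to as many threads - or nodes - as you have. The frame buffers and the filter here are dummy placeholders, not a real video API:)

#include <pthread.h>
#include <stdio.h>

#define FRAMES  240
#define WORKERS 8

static int frames[FRAMES];               /* stand-in for decoded frame buffers */

static void apply_filter(int *frame) { *frame += 1; }   /* dummy per-frame work */

static void *worker(void *arg)
{
    long id = (long)arg;
    /* Each worker takes every WORKERS-th frame; no locking is needed
       because no two workers ever touch the same frame. */
    for (int i = (int)id; i < FRAMES; i += WORKERS)
        apply_filter(&frames[i]);
    return NULL;
}

int main(void)
{
    pthread_t tid[WORKERS];
    for (long i = 0; i < WORKERS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < WORKERS; i++)
        pthread_join(tid[i], NULL);
    printf("processed %d frames with %d workers\n", FRAMES, WORKERS);
    return 0;
}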
Re: (Score:3, Insightful)
Re:But why? (Score:2)
Re:But why? (Score:3, Insightful)
Believe it or not, there are a large number of people who don't see their computer as a toy, and really only want it to do the things they need (like writing letters). Just because the power's there doesn't mean a ton of people will suddenly become independent filmmakers (no matter what Apple's ads tell you).
Google != the edge of cutting edge (Score:5, Insightful)
I agree. Also, the article's conclusion that big companies have no future because Google has no intention of investing in new technology is premature. Google is a great company, a great technology company, but it is just one of many. Google probably does not represent the very edge of cutting-edge technology, either. Work like molecular dynamics simulation requires more computing power; I'm sure that people in such areas can't wait to hear Intel, AMD and Sun announce faster processors, 64-bit, and more scalability.
Re:Misapprehensions (Score:3, Informative)
Re:Misapprehensions (Score:4, Interesting)
The point is that it involved numerical simulations of a plasma - specifically, the plasma created when a solid is hit by a high-intensity, short-pulse laser. I was doing the work on an Alpha-based machine at uni, but having recently installed Linux on my home PC, I thought, "why not see if I can get it running on that; it might save me having to go in every day."
Well, I tried, but always got garbage results, littered with NaNs. I didn't spend too much time on it, but my assumption at the time was that the numbers involved were simply too big for my poor little 32-bit CPU and OS. It looked like things quickly overflowed, causing the NaN results. (The code was all in Fortran, incidentally.)
I am now a programmer at a web agency, but I've not forgotten my Physics "roots", nor lost my interest in the subject. I'm currently toying with doing some simulation work on my current home PC, and would like to know that I'm not going to run into the same sorts of problems. Of course, I can scale things to keep the numbers within sensible bounds, but it would be easier (and offer less scope for silly mistakes) if I didn't have to.
Not only that, of course, but simulations of physical situations can often be memory-limited. Okay, so I can't currently afford 4 gigs of RAM, but if I could, I could easily throw together a simulation that would need more than that. In the future, that limit might actually become a problem.
Yes, I know I'm not a "typical" user - but the point is that it's not only video editing that would benefit from a move to 64 bit machines.
That silence you hear... (Score:5, Funny)
Re:That silence you hear... (Score:3)
It always seemed to me like the money you saved buying a reasonable cooling solution rather than a Peltier would be better spent buying a faster processor that you won't need to overclock.
</offtopic>
Quick Stupid Question (Score:4, Insightful)
Re:Quick Stupid Question (Score:5, Insightful)
"Moore's Theory" or "Moore's Rule of Thumb" would be the best name for it, but "Moore's Law" sounds a bit catchier. Which, I think, is really all there is to it.
Re:Quick Stupid Question (Score:5, Funny)
Long answer: The Origin, Nature, and Implications of "MOORE'S LAW" - The Benchmark of Progress in Semiconductor Electronics, by Bob Schaller [gmu.edu]
64bit matters, for Google, too (Score:5, Insightful)
Well, it's not much different with 32bit address spaces. It's easy in tasks like speech recognition or video processing to use more than 4Gbytes of memory in a single process. Trying to squeeze that into a 32bit address space is a major hassle. And it's also soon going to be more expensive than getting a 64bit processor.
The Itanium and Opteron are way overpriced in my opinion. But 64bit is going to arrive--it has to.
Re:64bit matters, for Google, too (Score:2, Insightful)
Why stay stuck in the Intel world? There's more to computers that what you buy from Dell.
Re:64bit matters, for Google, too (Score:5, Insightful)
Re:64bit matters, for Google, too (Score:2)
Re:64bit matters, for Google, too (Score:3, Interesting)
You can't even memory-map files reliably anymore, because many of them are bigger than 4G, which means that pretty much no program that deals with I/O can rely on memory mapping. Shared memory, too, needs to be shoe-horned into 32 bits. 32-bit addressing has a profoundly negative effect on software and hardware architectures. We are back to PDP-11 style computing.
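(A sketch of the memory-mapping problem just described, assuming a POSIX system; the file name is a made-up placeholder. On a 32-bit build, mapping the whole file simply stops working once the file passes 4GB, since the length may not even be representable, while on a 64-bit build it is routine:)

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    const char *path = "big-dataset.bin";   /* hypothetical file larger than 4GB */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the entire file at once: on a 32-bit system the address space
       (and size_t) tops out at 4GB, so this fails for big files; with
       64-bit addressing it is a cheap, ordinary operation. */
    void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("mapped %lld bytes at %p\n", (long long)st.st_size, p);
    munmap(p, (size_t)st.st_size);
    close(fd);
    return 0;
}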
Why limit ourselves to 64-bit addresses? I can foresee valid applications for 128-, 256-, and 512-bit (and larger) address schemes (consider, for example, distributed grid computing).
Sorry, I can't. 64bit addressing is driven by the fact that we can have easily more storage on a single machine than can be addressed with 32 bits. With 64bit computing, we can have a global unified address space for every single computer in the world for some time to come. There will probably be one more round of upgrades to 128bit addressing at one point, but that's it.
The distinction I'd like to make is one of diminishing returns for typical applications of increasing the "default word length".
First of all, for floating point software, it makes sense to go to 64bit anyway: 32bit floating point values are a complicated and dangerous compromise.
But in general, you are right: 32bit numerical quantities are good enough for a lot of applications. But, as I was saying, we tried making machines that are mostly n-bits and have provisions for addressing >n-bits, and the software becomes a mess. Going to uniform 64bit architectures is driven by the fact that some software needs 64bits, and once some software does, the cost of only partial support is too high.
AMD has a good compromise: they give you blazing 32bit performance and decent 64bit performance, with very similar instruction sets. That way, you can keep running existing 32bit software in 32bit mode.
Re:64bit matters, for Google, too (Score:3, Insightful)
Right now, we are at the point where it's just a waste to build bigger and bigger hammers when you can get 100 smaller hammers to do more than a few bigger hammers would, and do it more quickly, cheaply and efficiently.
Parallel computing is really coming of age now for consumers and small businesses. While in the past only a big megacorp or the government could afford a Cray-class machine, now you can build equivalent power (maybe not up to today's supercomputers, but certainly equivalent to ones from 10 years ago, which is still pretty significant) in your basement with a few Power Macs/PCs, some network cable and open source clustering software.
So it makes more sense for Google to invest in a load of current technology and use it in the most efficient way possible than to spend money on expensive and untested (in the "real world") hardware.
After all, just take a look at what Apple's done with the Xserve. Affordable, small, efficient clustering capability for business. Two CPUs per machine, and you can Beowulf them easily. Add in the new Xserve RAID and you have yourself a powerful cluster that probably (even at Apple's prices) will cost a lot less than a bunch of spanking new Itanium machines.
64 bit will arrive (Probably when Apple introduces it
Damn it! (Score:3, Funny)
well now... (Score:5, Funny)
This makes me feel a lot less like a cantankerous, cheap old fart for not replacing my Athlon 650.
Re:well now... (Score:3, Funny)
Moore ain't a law... (Score:2, Redundant)
And every six months it's either a) dead, b) going to continue forever, or c) dead real soon. Most often it's all three in the same week.
Re:Moore ain't a law... (Score:2, Insightful)
The majority of laws are empirical in nature. Even Newton's laws of motion don't come from theory; rather, they are axioms that underlie it.
Google's got the right idea. (Score:2, Insightful)
Time and time again, it always comes down to this:
Buy them small and cheap, put them all together, and that way if one dies, it's a hell of a lot easier and less expensive to replace/repair/forget.
So Google's got the right idea, they're just confirming it for the rest of us!
Danger (Score:2, Funny)
If only all dangerous things would go away as soon as we choose to forget them...
Transistors? BAH! (Score:5, Funny)
Either that, or it mutates into an evil Steve Wozniak and strangles me in my sleep.
/* Steve */
Re:Transistors? BAH! (Score:2)
Does anybody take Andreessen seriously? (Score:5, Insightful)
He hit the lottery. He was a lucky stiff. I wish I was that lucky.
But that's all it was. And I don't begrudge him for it. But I don't take his advice.
As for google. Figure it out yourself.
Google isn't driving the tech market. What's driving it are new applications like video processing that, guess what, need much faster processors than we've got now.
So while Google might not need faster processors, new applications do.
And I say that loving Google, but it's not cutting edge in terms of hardware. They have some good search algorithms.
Re:Does anybody take Andreessen seriously? (Score:4, Funny)
In the largest, a bunch of guys, the day before reporting to duty for boot camp, held a very wild party. It involved using a sofa as a battering ram. There was a stove-sized hole in one wall. There was a refrigerator-sized one in the other.
Andreessen was the second largest. No major damage, but he just left EVERYTHING. Clothes, furniture, papers, food, everything. They had to clean out a man's entire life. She guessed he left town with a backpack, a change of clothes, and his portable.
When he started Netscape, he saw the niche, left town, and dumped everything on it NOW. Maybe that's luck, but maybe it's being insightful enough to know what risks are worth leaving everything for. I'd give someone who showed that kind of insight a fair shake, if they had something else to say.
He maintained an emacs fork; he was no slouch. (Score:3, Informative)
While you may validly question his business acumen, he has worked with RMS, JWZ, and knows everybody. He is a reasonable coder and a team player; we need more of him.
Re: (Score:3, Interesting)
Now, again... (Score:2, Funny)
Of course we have time. Ain't we reading slashdot?
Andreesen quotes... (Score:4, Insightful)
*gag* Off topic, but has *anyone* become as much of a caricature of themselves as Andreessen?
This business is changing fast? Look entirely different? Thanks for the tip Marc.
Cheers,
prat
Xeon beats Itanium on value (Score:4, Interesting)
The Itanium chip will eventually succeed, but not until the price drops and the performance steps up another gear.
how is this not moore's law? (Score:5, Insightful)
Google had no intention of buying the superchip. Rather, he said, the company intends to build its future servers with smaller, cheaper processors.
How is this not Moore's law? Maybe not in the strict sense of the number of transistors per CPU, but it's exactly that increase in high-end chips that makes mid-range chips "smaller, cheaper" and still able to keep up with requirements.
That's the essence of Moore's law. Pretending it isn't is just headline-writing manipulation, and it's stupid.
Re:how is this not moore's law? (Score:2)
Think about the "price shadow" of products - when a new product comes out, the older/slower/less sophisticated product becomes cheaper. If this happens *really quickly*, then the prices are likely to go down a lot, and very soon. If you've already got what you want, it's a great place to be in.
This doesn't happen much with industries where there aren't many advances (think electric range). A two year old stove is pretty close in price to a brand new one. Whereas, a two year old processor (and 50 cents) will get you a cup of coffee.
It's in the gospel (Score:4, Funny)
Do not mess with our religion
Until the end of the epoch, Amen.
PS. With thanks to a source which I hope is obvious.
Google's decision is economic (Score:5, Insightful)
Naturally there are many more problems which can not be parallelized and are not so easily engineered away. Google's statement is no great turning point in computing. Faster processors will continue to be in demand as they tend to offer better price/performance ratios, eventually, even for server farm situations.
Re:Google's decision is economic (Score:3, Interesting)
Eventually, the dwindling number of remaining problems won't have enough funding behind them to support the frantic pace of innovation needed to support Moore's law. I think that CPU development will hit this economic barrier well before it becomes technically impossible to improve performance.
Mushy writing (Score:5, Insightful)
In fact, the whole article is based around Moore's Law still applying, despite being "unhealthy". Well, duh. I think he had a point to make somewhere, but lost it on the way to the deadline. Personally, I would have appreciated more concrete reasons why Google's bucking the trend is so interesting (to him).
He did bring up one very interesting point, but didn't explore it enough to my taste. Where is reliability in the equation? What happens if you keep all three factors the same, and use the cost savings in the technology to address failure points?
Google ran into bum hard drives, and yet the solution was simply to change brands? The people who are trying to address that very need would seem to be a perfect fit for a story about why Moore's Law isn't the end-all be-all answer.
Moore's Law still valid (Score:2, Interesting)
Just because Google (and I assume many other companies) are looking to use smaller, cheaper processors, it does not mean that Moore's law will not continue to hold.
Moore's Law is a statement about the number of transistors per square inch, not per CPU. Google's statement is more about the (flawed) concept of "One CPU to rule them all" than any indictment of Moore's Law or those who follow it.
The pied piper (Score:2, Troll)
Other possibilities, such as quantum computing, are left to a number of small university groups to study and conduct research in - small compared to the revenue of the chip companies.
Cheaper doesn't mean better either (Score:5, Insightful)
However profit on the low end stuff is very slight because you are competing with chip fabs that don't spend time and money on R & D; buying the rights to older technology instead. (We are talking commodity margins now, not what the market will bear.) So if the market for the latest and greatest collapses the entire landscape changes.
Should that occur my prediction is that R & D will change from designing faster chips to getting better yields from the fabs. Because, at commodity margins, it will be all about lowering production costs.
However I think it is still more likely that, Google aside, there will remain a market for the high end large enough to continue to support Intel and AMD as they duke it out on technological edge. At least for a while.
Don't read too much into Googles response ... (Score:4, Insightful)
The data Google deals with is not real-time. They churn on some data and produce indices. A request comes in to a server; that server could potentially have its own copy of the indices and can access a farm of servers that hold the actual data. The fact that the data and indices live on farms is no big deal, as there is no synchronization requirement between them. If server A serves up some info but is 15 minutes behind server Z, that's OK. This is a textbook application for distributed, non-stateful server farms.
Now eBay: ALL their servers (well, the non-listing ones) HAVE to go after a single or synchronized data source. Everybody MUST have the same view of an auction, and all requests coming in have to be matched up. The "easiest" way to do this is to go against a single data repository (single in the sense that the data for any given auction must reside in one place; different auctions can live on different servers, of course). All this information needs to be kept up on a real-time basis. So eBay also has the issue of transactionally updating data in real time. Thus their computing needs are significantly different from Google's.
Re:Don't read too much into Googles response ... (Score:3)
That's not entirely right. EBay isn't really any more synchronised than Google.
You might have noticed when posting an auction that you can't search for it until quite a bit after posting. That's because the EBay servers don't synchronise and reindex as frequently as one might think. Their pages are kept as static as possible to reduce the load on their servers.
Eh? (Score:5, Funny)
How can Moore's law become dangerous?
If you break it, will you explode into billions of particles?
Re:Eh? (Score:3, Funny)
The danger is that soon enough an Intel processor will get hot enough to trigger a fusion reaction in atmospheric hydrogen, turning Earth into a small star. We must abandon this dangerous obsession with Moore's law before it's too late!
Does Moore's Law actually hold back development? (Score:2, Insightful)
I fully expect this to get modded down, but I still think chip manufacturers are deliberately drip-feeding us incremental speeds to maximise profits. There's not much evidence of a paradigm shift on the horizon; Hammer is an important step but it's still a similar manufacturing process. As a (probably flawed) analogy, if processing power became as important to national security as aircraft manufacture in WWII, look how fast progress could be made!
Both ways lead to growth of computing power... (Score:2, Insightful)
render farms (Score:3, Informative)
On the other hand, say you are running a renderfarm - in that case you want a fast distributed network, the same way google does, but you also want each individual node as fast as freakin possible.
They have been using Alphas for a long time for that exact reason - so now, with the advent of the Intel/AMD 64-bit chips, that will drive prices down on all of it, and I would imagine the render farms are quite happy about that. That means they can either stay at the speed at which they do things now, but for cheaper, or they can spend what they do now and get much more done in the same time... either way leading to faster production and arguably more profit.
The clusters that I am most familiar with are somewhere in between - they don't need the newest fastest thing, but they certainly wouldn't be hurt by a faster processor.
For the stuff I do though, it doesn't matter too much - if I have 20 hours or so to process something, and I have the choice of doing it in 4 minutes or 1 minute, I will take whichever is cheaper since the end result might as well be the same otherwise in my eyes.
what moore said.. (Score:5, Insightful)
Is this the first signs of a turnaround? (Score:4, Insightful)
Granted, it is much MUCH worse on the Windows side. Kiplinger's TaxCut is 11 megabytes in size for the executable... FOR WHAT?? Eye candy and other useless features that don't make it better... only bigger.
Too many apps and projects add things for the sake of adding them... to look "pretty" or just for silly reasons.
I personally still believe that programmers should be forced to run and develop on systems that are 1/2 to 1/3 of what is typically used. This would force them to optimize or find better ways to make that app or feature work.
It sounds like google is tired of getting bigger and badder only to watch it become no faster than what they had only 6 months ago after the software and programmers slow it down.
Remember, everyone: X Windows and a good window manager on Linux RAN VERY WELL on a 486 with 16 megs of RAM and a decent video card. Today there is no chance in hell you can get anything but Blackbox and a really old release of X to run on that hardware (luckily the Linux kernel is scalable, and it happily runs all the way back to the 386).
Moore's Law (Score:2, Interesting)
"Moore's Law" has been bastardized beyond belief. Take an opportunity to read Moore's Paper [wpi.edu] (1965), which is basically Gordon Moore's prediction on the future direction of the IC industry.
Makes a lot of sense... (Score:2)
If you need more power than what a single CPU has to offer, buy an SMP machine. Or make a Beowulf cluster.
And no, this is not a joke: this is exactly what Google has been doing: building a humongous cluster and splitting everything between hundreds of machines, right?
Since Linux and the *BSDs have appeared, pretty much every task can be managed by cheap, standardized machines. It's highly possible that, as the Red Herring article said, we'll see big chip makers 'go under' just because the research costs balloon out of control.
Very interesting articles. Moore's Law may end, not because it's impossible to build a better chip, but because it has become un-economical to build one.
Oh Really? (Score:2, Insightful)
I use a PC of what would have been unimaginable power a few short years ago, and it is still woefully inadequate for many of my purposes. I still spend a lot of my programming time optimizing code that I could leave in its original, elegant but inefficient state if computers were faster. And in the field of artificial intelligence, computers are finally starting to do useful things, but are sorely hampered by insufficient processing power (try a few huge matrix decompositions -- or a backgammon rollout! -- and you'll see what I mean).
Perhaps the most insightful comment in the article is the observation that no one has ever won betting against Moore's Law. I'm betting it'll be around another 10 years with change. Email me if you're taking...
Re:Oh Really? (Score:2)
Maybe at some point we will need to work with that much data, but do we need it today?
Each previous generation of processors was released more or less in tandem with a new generation of apps that needed the features of the new chips. What app do you run that needs an IA64 or a Dec Alpha? If you want raw performance, better to get either the fastest IA32 chip you can or, maybe, a PowerPC with Altivec support. (Assuming you're writing your own app and can support the vector processor, of course....)
Amen. (Score:3)
My experience with 64 bit chips is that they don't offer any compelling advantages over a multi-processor 32 bit system.
The only real advantage they have is a bigger address space and even that doesn't offer much advantage over a cluster of smaller systems.
I actually read them (Score:5, Insightful)
Example 1 - Intel - This company continues to pump out faster and faster processors. They can't stop making new processors, or AMD or someone else will. The cost of making each processor goes up, but the premium for new, faster processors continues to drop as fewer people need the absolute high end. So if you look at Intel's business 5 years ago, they always had a healthy margin for the high end. That is no longer the case, and if you extrapolate out a few years, it is tough to imagine that Intel will be the same company it is today.
Example 2 - Sun - These guys always did a great job of providing tools to companies that needed the absolute fastest machines to make it work. Unfortunately, Moore's law caught up and made their systems a luxury compared to lots of other manufacturers.
The basic problem that all these companies have is that Moore's Law eventually changes every business into a low end commodity business.
You can't stop the future. You can only simulate it by stopping progress
no need for speed (Score:4, Insightful)
More on what Google's CEO said (Score:3, Informative)
From the article:
He gave the Monday keynote at the "Hot Chips" [hotchips.org] conference at Stanford last August.
There is an abstract [hotchips.org] of his keynote.
Moore's Extension? (Score:3, Funny)
IT isn't everything. (Score:3, Insightful)
But when computers are used for crunching numbers we still want machines to be as fast as possible. Supercomputers still exist today. Countries and companies are still spending millions to build parallel machines thousands of times faster than home PCs. They're doing this because the current crop of processors is not fast enough for what they want to calculate.
Current computational modeling of the weather, a nuclear explosion, the way a protein folds, a chemical reaction, or any of a large number of other important real-world phenomena is limited by current computational speed. Faster computers will aid these fields tremendously. More power is almost always better in mathematical modeling- I don't expect we'll ever get to the point where we have as much computational power as we want.
Other implications of Moore's law (Score:4, Insightful)
I think there will always be a market for the fastest chips possible. However, there are other ways for this trend to take us rather than powerful CPU chips. These would include lower power, lower size, higher system integration, and lower cost.
The EPIA series mini-ITX boards are an example of this. Once the VIA processors get powerful enough to decode DVDs well, it is very likely that they won't need to get more powerful for most consumer applications. However, if you look at the board itself (e.g. here [viavpsd.com]),
you'll see that the component count is still pretty high; power consumption, while small, still requires a substantial power supply in the case or a large brick.
When something like this can be put together, capable of DVD decoding, having no external parts other than memory (and maybe not even that), and the whole thing runs on two AAA batteries, then you'd really have something. Stir Bluetooth (or more likely its successors) into the mix and you have ubiquitous computing, capable of adapting to its environment and adapting the environment to suit human needs.
Re:Other implications of Moore's law (Score:3, Insightful)
IIRC, there was an article a while back (discussed here) that reviewed Moore's law as Moore used it over a number of years and found that Moore himself seemed to redefine it every couple of years. It's a marketing term which describes the general phenomenon of faster computers getting cheaper in a regular way.
You're right, though: Moore never really talked about the doubling of 'processor power'; he discussed things in terms of devices such as transistors. Trouble is, sometimes RAM was included in the 'device' total and sometimes not... it's easy to fudge a bit during the slowdowns and speedups if you change how the thing is defined.
Top it off with the fact that the whole thing was eventually cast in terms of the cost-optimal solution. Given the degree to which the size of the market for computers has changed, I'd say that this is a very difficult thing to define. As everyone is likely to point out, commodity desktop PCs have a very different optimum from massive single-system-image computers. Of course, if you consider that a calculator is a computer, be it a $1 cheapo or the latest graphing programmable whoopdeedoo, they are all computers. There are so many markets for computers now, each with its own optimum, that it's pretty artificial to talk about Moore's law at all. I've never seen anyone plot out Moore's law with a bunch of branches. Further, cost-optimal becomes pretty subjective in all these markets when there are so many variables.
Finally, there are points where Moore's law breaks down... the number of devices in cheapo calculators probably hasn't changed much in the last few years, but the price changes. Moore's law doesn't really allow for this sort of behavior: that there is a maximum necessary power for a certain kind of device, and if it doesn't have to do anything else, the complexity levels off and the price goes down. This may well happen at some point in the commodity sector. It is possible that the number of features in a conventional desktop will level off at some point. Hell, with a $200 WalMart cheapo PC, maybe we're there now...
Intuitively, everyone applies Moore's law to desktops, but there's no particular reason to do so. Considering its history, the massive mainframe-style computer is probably the best application of it, but this is seldom done. Mainframes these days can be as complex as you're willing to pay for, which pretty much means that there is a cost-optimal solution for any given problem, not just for fabrication, which is what Moore was talking about. Seems like we have turned a corner; it's time to redefine Moore's law yet again.
This has happened before! (Score:4, Insightful)
Perhaps the next step instead of being towards larger computers will be towards smaller ones. Moore's law remains just as important, but the application changes. Instead of building faster computers, you build smaller, cheaper ones. The desktops will remain important for decades as the repository of printers, large hard disks, etc. And the palmtops/wristtops/fingernailTops/embedded will communicate with them for archiving, etc.
This means that networking is becoming more important. This means that clusters need to be more integrated. I conceive of future powerful computers as a net of nets, and at the bottom of each net is a tightly integrated cluster of CPUs, each more powerful than the current crop. These are going to need a lot of on-chip RAM, and RAM-attached caches, because their access to large RAM will be slow and mediated through gatekeepers. There will probably be multi-ported RAM whiteboards, where multiple CPUs can share their current thoughts, etc.
For this scenario to work, computers will need to be able to take their programs in a sort of pseudo-code and re-write them into a parallelized form. There will, of course, be frequent bottlenecks, so there will be lots of wasted cycles, but some of them can be used on other processes with lower priority. And each cluster will have at least one CPU that spends most of its time scheduling, etc.
Consider the ratio between gray matter and white matter. I've been told that most of the computation is done in the gray matter, and the white matter acts as a communications link. This may not be true (it was an old source), but it is a good model of the problem. So to make this work, the individual processors need to get smaller and cheaper. But that's one of the choices that Moore's law offers!
So this is, in fact, an encouraging trend. But it does mean that the high-end CPUs will tend to be short-term solutions to problems: faster at any particular scale of the technology, but too expensive for most problems, and not developing fast enough to stay ahead of their smaller brethren, because they are too expensive to be used in a wasteful manner.
Perhaps the "final" generation will implement these longer word length cpus, at least in places. And it would clearly use specialized hardware for the signal switchers, just as the video cards use specialized hardware, though they didn't at first. But the first versions will be built with cheap components, and the specialized hardware will only come along later, after the designs have stabilized.
Article Full of Overblown Rhetoric (Score:3, Insightful)
That said, I remember the first time I noticed that technology was 'good enough' and didn't need to double ever again: with the introduction of CDs, and later, CD-quality sound cards. Most people are not physically capable of hearing improvements if the sampling rate of CDs is increased, so we don't need to bother. Certainly, people tried, and the home-theatre-style multi-channel stuff is an improvement over plain stereo CDs, but it is an insignificant improvement when compared to CDs over older mono formats. Similarly, the latest SoundBlaster cards are an insignificant improvement relative to the jump from the early beeps of computers and video games. (Dogs and dolphins might wish that audio reproduction were improved, but they don't have credit cards.)
Back in the early 80s, when most bulletin board access was by 300 baud modem, paging of long messages was optional, since most people can read that fast. Of course, we need faster modems for longer files and applications, but as soon as say, HD-quality video and sound can be streamed at real-time speeds, then bandwidth will be 'enough.'
Because (Score:3, Insightful)
I'm not doing it real justice, but Google (ironic, eh?) for the effects of Moore's law to get a much better explanation.
Re:One size does not fit all (Score:4, Funny)
If I ever need some "Enterprise Web Site Content Management" or some "Site Search Engine Solutions," or even perhaps a website that uses broken javascript to navigate improperly, I'll give you a call.