Intel Pushes Pentium 4 Past 3 GHz 365
denisbergeron writes "Yahoo has the news about the new P4 who will run at nothing less than 3.06 GHz. But the great avance will be the hyperthreading technology (already present in Xeon) that allows multiple software threads to run more efficiently on a single processor."
eh? (Score:5, Funny)
umm... I've got an XT clone that's faster than that... wanna buy it for about $600?
(/sarcasm)
Re:eh? (Score:2, Funny)
Re:eh? (Score:2, Informative)
Re:eh? (Score:2)
A wish about hyperthreading... (Score:2, Interesting)
Re:A wish about hyperthreading... (Score:5, Informative)
Here's an article [arstechnica.com] from Ars Technica on HT/SMT.
Re:A wish about hyperthreading... (Score:4, Informative)
Re:A wish about hyperthreading... (Score:2, Interesting)
Re:A wish about hyperthreading... (Score:2)
You just answered yourself.
Hyperthreading is exactly for dealing with that kind of load.
3.06 MHz is over 3 times faster than a C64... (Score:2, Funny)
Re:3.06 MHz is over 3 times faster than a C64... (Score:2, Insightful)
>
I agree: despite all the claims about Moore's Law and technological advances, this proves that tripling the speed of a good ol' 6510 CPU has some disadvantages as well: give a little to gain a little. 8-)
More seriously: graphics have improved since the early eighties, but what about gameplay? Isn't MAME the only thing that really justifies buying PC hardware now and then?
--
Money is the root of all evil (Send $30 for more info)
Re:3.06 MHz is over 3 times faster than a C64... (Score:2)
Hmmm... (Score:5, Funny)
Yeah, but what's its top speed?
Re:Hmmm... (Score:5, Funny)
I dunno, it depends on the person throwing the computer, I guess?
Re:Hmmm... (Score:5, Insightful)
The truth of the matter is that Intel is going to rely almost entirely on the marketability of a big number with the P4, as its per-clock performance is rather unimpressive when compared to such ordinary designs as those from AMD, which clock poorly, yet crunch happily.
I need not mention G4s and other well-designed chips, as some GHz bunny is certain to point out that they are only at 1.25GHz at the moment.
Re:Hmmm... (Score:3, Insightful)
I disagree. Intel's strategy of designing for higher clock speeds has given them a much more scalable chip, as evidenced by Intel's ability to increase clock speeds frequently while AMD is struggling. And if you look at the latest Tom's Hardware review [tomshardware.com] (it's a couple of weeks old), the P4 2.8 GHz pretty much tied with the Athlon 2800+ (they each won about 14 benchmark tests). But that is much less meaningful when you realize that Tom was testing an Intel chip that has been available for two months against an Athlon that won't be available until December. If you compare the 2.8 GHz P4 with the fastest Athlon available today, the P4 beats it in over 90% of the benchmarks (I'd imagine that a comparison between the 3.06 GHz HT chip and the Athlon 2800+ would be similar). So Intel's strategy is working for performance, and it is more marketable to boot.
And there is a lot of research right now about the optimal pipeline depth, and the conclusion so far is that current pipelines are not deep enough. The optimal pipeline depth for the x86 architecture is around 40-50 stages.
http://systems.cs.colorado.edu/ISCA2002/FinalPape
http://systems.cs.colorado.edu/ISCA2002/FinalPape
BTW- thanks to fobef for these links- I read them yesterday on
will be expensive (Score:3, Insightful)
In the UK the ultimate latest Intel chip usually goes for about 700 UKP; the sweet spot in the price/performance trade-off tends to be around the 200 UKP mark, which will probably be the 2.5GHz part by the time this 3GHz one is out.
graspee
Re:will be expensive (Score:2, Informative)
what's my motivation (Score:4, Interesting)
How can Word appear any faster at 3GHz? I would think that after 1.5GHz, improvement in performance would be hard to notice. Granted, it will be good for people who are still running those 200MHz clunkers but what's the incentive if you're already running in the GHz range?
Incentive? (Score:5, Funny)
Re:Incentive? (Score:2)
I'm running UT2k3 on an Athlon 750 with a GF2. I run at 800x600 with low qual textures/models and most options turned off. Yes, it's much more pretty on a fast system, but it certainly didn't "kick my [...] machine in the nuts".
Yes, I'm planning to upgrade, but I was pleasantly surprised at how well this 2.75 year old machine handled UT2k3.
Re:Incentive? (Score:2)
After all, I'm sure the original Quake runs great on my machine and several slower processor-based iterations, but it's no longer the quality I demand from my games.
Re:Incentive? (Score:2)
UT2003 is CPU limited currently. (Score:4, Informative)
Take a look at this UT2003 benchmark chart:
http://www.anandtech.com/showdoc.html?i=1650&p=3 [anandtech.com]
You can see that the GeForce 4 Ti cards are ALL still getting faster the faster the CPU gets, right up to the bitter end.
That's not to say that a couple of years from now 3D cards won't handle physics and AI onboard-- but they don't exist now, so it's hardly fair to say "A better gfx card will almost always be a bigger win than a faster CPU."
It depends on the game, and the newer they are, the more CPU they'll eat. (See Battlefield 1942)
Since you were too lazy to look at other pages... (Score:3, Informative)
http://www.anandtech.com/showdoc.html?i=1650&p=6 [anandtech.com]
Note that other pages in the article include the Kyro II, Matrox Parhelia, and the older GeForce 2 and 3 lines, as well as the GeForce 4. Keep in mind that the faster the card gets, the faster the CPU must be to keep it fed with data. You may not see CPU saturation with a slow card, because the card is maxed out long before 100% CPU usage. In the Radeon chart, you can see that the faster the Radeon, the more CPU-constrained it is. Just like with Nvidia.
The Radeon 9700 isn't there because it didn't exist when the article was written. It will be even more CPU-constrained than the GF4.
I suspect you are a troll, but I'd hate to see the issue confused any further.
Re:Incentive? (Score:2)
Re:what's my motivation (Score:3, Informative)
Granted, it will be good for people who are still running those 200MHz clunkers but what's the incentive if you're already running in the GHz range?
Unfortunately where I work the secretaries for division heads get these 3GHz machines and run Word on them while the scientists and technicians get to keep working on their Pentium 200MHz system. Maybe if they're lucky they get a hand-me-down from a secretary like a nice PIII-1GHz box.
Re:what's my motivation (Score:3, Interesting)
Re:what's my motivation (Score:2, Interesting)
When I went upstairs though, I noticed that all of the big knob senior civil servants were sitting on fifty pound clunkers from Staples or some such.
I thought the whole deal was quite heartening and very democratic, myself.
Re:what's my motivation (Score:3, Insightful)
Re:what's my motivation (Score:5, Interesting)
For example, we use Microsoft Word with built-in Excel spreadsheets and ODBC queries that update charts in real time from an Oracle database, as well as embedded Visio stencils and other good stuff. This is a 40+ meg file in raw format, and a lowly 1.5GHz box with 512 megs of RAM takes time to re-draw it. We saw a huge performance increase from 1.5GHz to 2.4GHz. Maybe "hyper threading" will help out even more.
BTW, it is about the same performance under Linux using StarOffice or Corel Office. KOffice is even slower, so I know it's not just the tools.
For people who *WORK* using their PC, you can never have "too much" power. It's like race cars: maximizing performance for the job at hand.
Re:what's my motivation (Score:2, Insightful)
Where this kind of power shows, though, is in more intensive processing - as the article suggests, Photoshop, Cinema4D, and so on.
As for Hyperthreading / SMT technology - it absolutely does make a difference. I've been running an HT-enabled system (pre-release silicon) for some time, and there are specific usage models where it shines. Forget about single-threaded or single-tasking type environments - the user who loads Word, does some typing, then loads Photoshop, does some filtering, etc, isn't going to see any benefit at all from HTT. However, once you get into multi-tasking scenarios the story is very different. For example, run a series of Photoshop filters/macros whilst simultaneously virus-scanning the system; or, export a large Outlook folder to a
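The multi-tasking scenario described above can be sketched in Python (one of the tool languages mentioned elsewhere in this thread): two CPU-bound jobs run side by side, which is the case where a second logical processor pays off. This is a rough sketch, not HT-specific code -- `burn` is a made-up stand-in for a Photoshop filter or virus scan, and processes are used rather than threads because CPython threads serialize on the interpreter lock.

```python
from concurrent.futures import ProcessPoolExecutor

def burn(n):
    """A CPU-bound task, standing in for a filter or scan."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def sequential(n):
    """The single-tasking case: one job after the other."""
    return burn(n), burn(n)

def parallel(n):
    """The multi-tasking case: two simultaneous CPU-bound jobs,
    where an extra (logical) processor can actually help."""
    with ProcessPoolExecutor(max_workers=2) as ex:
        a, b = ex.map(burn, [n, n])
    return a, b
```

On a single-threaded, single-core CPU the parallel version buys nothing; with HT enabled it can overlap the two jobs' pipeline bubbles, which is exactly the usage model described above.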
Re:what's my motivation (Score:2, Insightful)
So why are you asking "How can Word appear any faster at 3GHz?" It won't, but Adobe Photoshop blah blah will use as much as you can give.....
Stu.
Don't mind me I drank too much at tonight uni function. tee hihee.
Re:what's my motivation (Score:5, Funny)
The speed is for Clippy, not YOU... he now is 3D ray-traced and has more artificial intelligence built in!
If it wasn't for the idea of WYSIWYG and fonts, I'd still be doing my word processing on AppleWorks for the Apple ][.
Re:what's my motivation (Score:3, Informative)
Unless your system is locked down, click on "Change my preferences" in the Search Pane and choose "Without an animated character." I did that so long ago that I hardly remember that the dog was there in the first place.
Re:what's my motivation (Score:2)
Well, it can't. I recently upgraded my machine from a 200MHz Pentium Pro to 2x1800, and Word hasn't sped up at all. It still pauses to think for fractions of a second, not very long but very noticeable. So I think there are some other problems with Word; it just doesn't seem to work correctly.
All the applications in Linux, though, have sped up tremendously. I knew that SMP evens out the system load, but thought that there weren't that many multithreaded applications; still, the system just feels silken smooth (I first ran it with 1 processor, before modding the XP's L5 bridge so they appear as MPs).
Re:what's my motivation (Score:3, Insightful)
Users do stupid things.
I've seen vector graphics with millions of lines inserted into Word. Fine for a drawing package or desktop publishing app, but god-awful slow in Word. Not really MS's fault; these people should be using a desktop publishing application. Word is for word processing.
Re:what's my motivation (Score:2)
How can Word appear any faster at 3GHz? I would think that after 1.5GHz, improvement in performance would be hard to notice. Granted, it will be good for people who are still running those 200MHz clunkers but what's the incentive if you're already running in the GHz range?
How can you quote something from the article and not even read it? It says you won't see much difference in Word, and you turn around and say "How will Word be faster?". Don't you even read what you cut and paste?
Strange days when people bitch about technology getting better.
Wow, hyperthreading. (Score:3, Insightful)
Anyway they have ramped up the speed, and added something that could have always been, hyperthreading. Xeon has always had it. This is not progress, this is almost not worthy reporting.
Puto
Re:Wow, hyperthreading. (Score:2, Insightful)
Re:Wow, hyperthreading. (Score:2)
I know where you are coming from, and if you look through my past posts you will note they are rife with grammatical and spelling errors. Sometimes I multitask, and the least amount of attention is focused on what I am posting on Slashdot. More content than checking my punctuation.
You are right, though. It is embarrassing to have our community display such glaring errors in a public format.
I think the problem is that all the kiddies are just trying to get an article posted that they gleaned off of some other venue, and in their haste to submit before anyone else, they type whatever they can and submit without proofing. Did I mention, coupled with the fact that they have Slashdot open in a window at work that they keep minimizing when a supervisor walks by?
At least the editors should proof the articles for basic spelling, grammar, and typos, especially when they are as short as the current topic. And if they are reading the comments, they can go in and fix it.
However, what is worse are the 1000 posts pointing it out. Like we do not all see it, like once was not enough.
I would rather see a reduction of the "hey, your stupid" comments, as well as some editorial proofing.
However, Slashdot is not run by language scholars.
We need to clean up the submissions and the comments.
Puto
Re:Wow, hyperthreading. (Score:2)
Actually they got that part right.
Geez, I remember the day when we used to comment on processors, peripherals, parts. Now the community is stuck on whining about typos.
What exactly is there to talk about in yet another processor speed improvement story? Some people will say "who needs this much power?" Some people will respond and explain to these dolts who needs it. Others will make fun of the article submission. And then there are people who complain that the article was posted in the first place.
This is not progress, this is almost not worthy reporting.
See?
Bah (Score:4, Informative)
It has been reported on various sites [amdmb.com] that Athlon XP 2400+ chips (2GHz, new Thoroughbred Revision B core) are trivial to mod for dual CPU operation and easily overclock to 2.25GHz (150MHz FSB, aka 300MHz DDR, which is the most my ASUS A7M266-D will allow) with proper cooling (Thermalright SLK800 being my favorite). The chips are under $200 apiece. Imagine a Beowulf cluster of those...
Proper Athlon MP 2400+'s are due shortly I'd assume.
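The arithmetic behind that overclock is just front-side bus times CPU multiplier; a quick sketch (the multiplier of 15 is the stock value for the 2400+, so this matches the figures quoted above):

```python
def core_clock_mhz(fsb_mhz, multiplier):
    """Effective core clock: FSB speed times the CPU multiplier."""
    return fsb_mhz * multiplier

# Stock Athlon XP 2400+: ~133 MHz FSB x 15 = ~2.0 GHz.
# Raising the FSB to 150 MHz (300 MHz DDR) at the same multiplier
# gives 150 x 15 = 2250 MHz -- the 2.25 GHz figure above.
```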
However... (Score:2)
Also, what end-user oriented software will take advantage of Intel's hyperthreading process right now? Will we have to wait for updates to CAD/CAM, drawing and image editing programs to use hyperthreading? And when will we see updates to multimedia programs such as Windows Media Player, RealOne, Quicktime, software DVD players, etc. that will take full advantage of hyperthreading? We might not see them until early 2003.
Re:However... (Score:2, Informative)
Re:Bah (Score:4, Funny)
Some of us need reliability and warranty. I'm not going to throw an overclocked, modded AMD POS in the server farm at work. If I do, I'm going to make sure my resume is updated and my suit is free from little white fuzzies.
I'm gonna go overclock my laundry machine now and cook a pizza in self cleaning clean mode in my oven. Should be done by the time the laundry dings.
Re:Bah (Score:3, Interesting)
Yippie! (Score:3, Funny)
Hyperthreading ... (Score:5, Interesting)
I mean, is it faster when doing stack swaps or when using the TSS to multitask? *BSD uses the TSS to multitask, taking advantage of the i386's way of quickly swapping registers and stack. Windows doesn't do this.
So, from a purely technical point of view, how does it work? Did they just make TSS switches faster? Some OSes benefit highly from that, but others, well, don't.
Re:Hyperthreading ... (Score:4, Interesting)
This would have been a lot harder a few years ago, but most of the hard parts (like register renaming) had already been done to implement out-of-order execution.
As for what can benefit, it's pretty much anything that can benefit from dual CPUs.
Re:Hyperthreading ... (Score:2)
Re:Hyperthreading ... (Score:2)
The processor can appear as two logical processors and run two threads through the same core. RTFA for more information.
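On the OS side, "appearing as two logical processors" just means the CPU count the kernel reports doubles. A rough Python sketch of telling logical from physical processors on Linux -- the /proc/cpuinfo field names are an assumption about the kernel's output format, and older kernels may not emit them at all, in which case this falls back to the logical count:

```python
import os

def logical_cpus():
    """Logical processors the OS sees; HT doubles this per core."""
    return os.cpu_count() or 1

def physical_cores_linux(path="/proc/cpuinfo"):
    """Count distinct (physical id, core id) pairs on Linux.
    Falls back to the logical count if the fields are absent
    or the file is unavailable (non-Linux, old kernels)."""
    try:
        cores = set()
        phys = core = None
        with open(path) as f:
            for line in f:
                if line.startswith("physical id"):
                    phys = line.split(":")[1].strip()
                elif line.startswith("core id"):
                    core = line.split(":")[1].strip()
                elif line.strip() == "" and phys is not None:
                    cores.add((phys, core))
                    phys = core = None
        return len(cores) or logical_cpus()
    except OSError:
        return logical_cpus()
```

On an HT machine `logical_cpus()` reports twice what `physical_cores_linux()` does; that gap is the whole trick.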
Will this make my modem faster? (Score:3, Funny)
Re:Will this make my modem faster? (Score:2)
Not really an anti-Windows rant, more an anti-winmodem rant.
New Shuttle SB51G support hyperthreaded chips (Score:5, Informative)
I was torn between building another dual-CPU box (currently on twin 533Mhz Celerons with an ABit BP6 board), or going the small form-factor route. Now I can do both.
More at Shuttle's site [shuttleonline.com].
Cheers,
Ian
Re:New Shuttle SB51G support hyperthreaded chips (Score:3, Informative)
Other things that run slower (in general):
video encoding
audio encoding
quite a number of apps with real multi-threading built in.
Re:New Shuttle SB51G support hyperthreaded chips (Score:2)
Hmm. Now that's seriously disappointing - video encoding is what I mainly had in mind. That and a tiny amount of Photoshop - I can live without dual-CPU for that though, as my Photoshop usage isn't that high.
There are other things I do with it - I run various virtual machines using Virtual PC for Windows [connectix.com], and I like the isolation that running on a dual-cpu gives me. Even if the virtual machine starts chewing its way through my CPU power, it generally only starts massacring one at once, thus leaving my native OS and GUI nice and responsive. I'd be looking for a hyperthreaded machine to give me the same advantage. Does that sound likely?
Cheers,
Ian
Re:New Shuttle SB51G support hyperthreaded chips (Score:3, Informative)
Uh, no. Windows NT 4.0 (workstation) and 2K (workstation) support dual CPU out of the box. They have specific multi-CPU kernels that get installed (at OS install time) if the hardware reports dual CPUs.
You only ever pay extra on the server side if you want greater than 2-way.
An oldie but a goodie... (Score:3, Funny)
Who's there?
15 second wait...
Intel
Turbo? (Score:4, Funny)
Does it at least have a turbo switch?
Re:Genuine Question (Re:Turbo?) (Score:4, Informative)
They eventually became disused because instead of dropping down to 4.77 MHz (the original XT speed) they'd just drop to some fraction of the regular CPU speed - down to maybe 7 or 8 MHz, which was still way too fast. Plus, applications stopped doing stupid things like presuming the CPU frequency and using it for timing loops.
Re:Genuine Question (Re:Turbo?) (Score:5, Informative)
Spreadsheets were the killer app that caused the PC to take off, and Lotus 123 came with a super-annoying floppy-based copy protection scheme. They intentionally misformatted the floppy, then the program verified that it was an original by doing low-level tricks with the floppy controller.
The most ridiculous and shortsighted part was that they used CPU-based timing loops to do the timing for their stupid floppy tricks. Of course, these were calibrated to the only CPU speed available at the time, 4.77MHz. As a consequence, if a PC was going to run Lotus 123, it needed to be able to slow down to the original 4.77MHz speed while it read the Lotus floppy. IIRC, Compaq had a nifty patent that automatically slowed the PC whenever the floppy controller was in use. Others had to make do with a manual switch.
The cost to society of this DRM fiasco, hundreds of millions of useless bezel switches, undoubtedly was far greater than any revenue that Lotus made by thwarting piracy. (In fact, their revenue from DRM might be negative, because they were eventually displaced by non-copy-protected competitors.)
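The failure mode is easy to reproduce: a delay loop whose iteration rate is baked in for one clock speed runs proportionally shorter on a faster machine. A Python sketch of the idea (the numbers are illustrative, not Lotus's actual constants):

```python
import time

def calibrate(iters=2_000_000):
    """Measure this machine's busy-loop rate (iterations/second) --
    the step Lotus skipped by hard-coding a 4.77 MHz constant."""
    t0 = time.perf_counter()
    n = 0
    while n < iters:
        n += 1
    return iters / (time.perf_counter() - t0)

def delay_ms(ms, iters_per_sec):
    """Busy-wait for roughly `ms` milliseconds -- but only on the
    machine (and clock speed) that `iters_per_sec` was measured on.
    Double the clock and the real delay is halved, which is why
    Lotus's floppy tricks needed the CPU slowed back to 4.77 MHz."""
    n, target = 0, int(iters_per_sec * ms / 1000)
    while n < target:
        n += 1
```

Calibrating at startup (or better, using a real clock) avoids the problem; hard-coding the rate is what forced everyone else's hardware to slow down.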
Hyperthreading (Score:4, Interesting)
For those of you who don't know: with hyperthreading, the system will appear to have two CPUs. If you have a dual system with hyperthreading, then it will look like four, and so on.
Re:Hyperthreading (Score:2)
There are no real problems if:
-You buy licences for real processors and the software does not check the count.
-The software runs only on the number of (real) processors it is licensed for: it just does not use the hyperthreading. (This is the case for the current Windows XP & Windows 2000.)
There is a problem if:
-The software aborts if it detects more (real) processors than it is licensed for.
I do not know of such software. Or it is software tied so closely to hardware that it fails anyway if it is put on new hardware.
Note that you can still disable hyperthreading in the BIOS.
Update on the megahertz myth (Score:3, Interesting)
it's pathetic really... (Score:4, Insightful)
However, it's also possibly a ploy to keep people posting indignant comments about errors. 50% of posts on these kinds of stories seem to be pointing out these glaring errors. Like the recent story about PS2 games on an Xbox, which had nothing to do with the Xbox at all.
Come on guys, wise up!
Re:it's pathetic really... (Score:2)
I think they just need more quality maintainers, just like you.
Hyperthreading (Score:5, Insightful)
It would be more scalable and easier to implement to use several complete CPUs. The biggest drawback (compared to hyperthreading) would of course be that in special situations some CPU cores would be idle, but this simply corresponds to pipeline bubbles in the hyperthreaded case. This is easily compensated by two facts: 1) multiple CPUs can be made very scalable, and 2) most computer systems today always run multiple threads (i.e. utilization will be good).
Of course, for Intel to maintain their market lead, everything has to be compatible, so they'll have to pay, time after time, for the errors they made in the eighties (the 286 paging + the CISC ISA). By breaking Amdahl's law time after time (SSE, MMX, etc.) they have made an even more complex beast. The only area where they really excel is the production process: they can squeeze out high frequencies and pack the transistors tight. For that, I'll give 'em cred. For their CPU ISAs, I'll just laugh...
Re:Hyperthreading (Score:3, Insightful)
Still, the multiple-CPU solution will be vastly more scalable and far less complex.
By changing the ISA of the CPUs, one can avoid lots of the bubbles (all of them, if one is mean to the compiler). Just introduce branch delay slots and you lose a whole lot of bubbles and complexity. Just imagine how simple a CPU without branch prediction would be...
Re:Hyperthreading (Score:2)
Whoopie. (Score:2, Insightful)
This is all pointless. The entire pentium "architecture" (more like a shanty-town) needs to be dumped entirely. We NEED a clean start.
Even moreso, why is no one addressing the fundamental problem--that the PC is just horribly designed? There are better ways of doing things than just ramming everything through a single CPU. This is 2002--why are we not pursuing better computer design? The "PC" is the bottleneck for crying out loud. 10 years from now will we be reading about the new 10 Ghz PVII chip, still running in 30-year-old hardware? Wonder if I can still get a "Missing Basic ROM" error on my desktop machine...
Be, Inc. tried to redesign the "PC"...they had a very nice design, but they killed it before its time. And how about the Amiga...yeah, everyone is sick of hearing about the Amiga, but it WAS intelligently designed. Instead of shoving everything through the CPU, the Amiga used coprocessors to deal with much of the stuff that bottlenecks PCs, leaving the CPU free for more important stuff. It was a great idea, and it actually WORKED.
I don't care who does it--I want to see a better machine being built. If done right, the Ghz of the CPU won't matter nearly as much.
Re:Whoopie. (Score:2, Insightful)
The reason for integration was price: it's cheaper to produce one chip than two.
Today, we have SPUs (sound processing units), GPUs (graphics processing units) and so on.
You're talking about redesigning the 'PC' when you actually mean redesigning the OS.
Re:Whoopie. (Score:2)
In the 80486 it was already integrated. It was a separate chip in the 80286 timeframe.
In the 80486 era there was the 486SX, which was an 80486 with the FPU fused off.
Dude, you are getting old!
But you are right, the parent poster does not properly make a distinction between OS & hardware.
Re:Whoopie. (Score:2)
Find some DOS and type the following: copy con myprog.com[enter][alt-205][alt-24][ctrl+z]myprog[enter] and you will know:) (Alt-205 Alt-24 enters the bytes CD 18, i.e. INT 18h, the ROM BASIC entry point.)
Disclaimer: no dos here.
Re:Whoopie. (Score:5, Insightful)
Don't worry folks. In a few years he'll graduate and get some real world experience. And then he'll probably realize that while the PC architecture does indeed suck on paper, in reality it's not all that bad. Could it be better? Sure. Should we throw the baby out with the bathwater? No way.
Compare the PC market to the rest of the computer market. Who's made more progress? Who has been rapidly pushing the niche markets into smaller and smaller niches as their "superior designs" find them running slower and more costly than the evil, horribly misdesigned PCs?
Coprocessors? Yeah... have you even bothered to look at a modern video card recently? The damn things are more complex and more powerful than the CPU. Modern audio boards are also powerful all by themselves. For the most part I/O is handled by separate chips as well.
The bus and memory interfaces on PCs could use some work. And that's happening, with 3GIO, PCI-X, and other buses being implemented in the next few years. There's some truly horrid cruft in the core too - the IRQs, DMA channels, etc. are still pretty godawful, but not nearly as godawful as they were back with the ISA bus. The issues haven't so much gone away as they've been hidden, but the performance limitations imposed really aren't all that absurd.
Design a better machine? Go for it. It'll die just like all the rest because while you may have a better electrical design, you've ignored the real world and the fact that people want to be able to make slow transitions from one architecture to another. Doing an all-at-once transition is not an option unless you control the entire market - which no PC manufacturer does (unlike Apple). Of course, the flip side of this is that the competition causes the current implementation to advance far more rapidly than would be otherwise possible. Which is why you can buy a $2000 PC that outperforms a $200,000 server.
My idea for a better PC (Score:2)
I'd have a case with a crossbar-type bus. Into this you'd add CPU cards that had memory and a single daughtercard slot. The daughtercards would be for adding custom interface electronics for specialized tasks, but not actual processors, so a CPU card could be a video card, a SCSI card, NIC, etc.
One CPU card would be the "master" CPU card, which ran the core of the OS kernel plus applications. The other cards would run applications or kernel modules specific to their hardware daughtercards: network stacks, filesystems, display components (renderers, GUI).
Increase performance? Add a CPU module. The kernel or user tools could manage which cards ran which applications -- some apps could be dedicated to a specific CPU card, other apps could be "floated" to CPU cards based on available cycles.
I don't think this is such a terribly new idea -- it's kind of the modularity that the IBM 390 or other NUMA architectures do now, but condensed into a single box. Think of a blade server box, but with a switching bus and the ability to access other systems' memory.
It would require an OS with a lot more modularity. I'm not sure what would happen to apps that wanted RAM beyond a single CPU card's capacity, or how fast or easily you could move an app and its memory space from one card to another. I'm also not entirely sure that even a P3 @ 3.xx GHz would be able to do the work of an NVidia GeForce, even if that's all it had to do, either.
But it would be an interesting way to make a highly scalable platform, and scalable both ways -- big and small. An OS written for such hardware could run on a single-card system (think of a laptop or even a palmtop as a single-card system), and multi-card systems could come in S, M, L, and XL sizes depending on cost and need, as well as eliminating the CPU/Memory/Bus bottlenecks.
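The "dedicated vs. floated" placement described above is essentially a greedy scheduling problem; a toy Python sketch, where all the names and structures are invented for illustration:

```python
def assign(apps, cards):
    """Place apps on CPU cards.
    apps:  list of (name, load, pinned_card_or_None)
    cards: dict of card name -> capacity (free cycles)
    Pinned apps go to their card; floating apps go, heaviest
    first, to whichever card currently has the most free cycles."""
    free = dict(cards)
    placement = {}
    # Dedicated apps first: they have no choice of card.
    for name, load, pin in apps:
        if pin is not None:
            placement[name] = pin
            free[pin] -= load
    # Then float the rest onto the least-loaded card.
    for name, load, pin in sorted(
            (a for a in apps if a[2] is None),
            key=lambda a: -a[1]):
        card = max(free, key=free.get)
        placement[name] = card
        free[card] -= load
    return placement
```

A real kernel would rebalance continuously and account for the cost of moving an app's memory between cards, which is exactly the hard part the post identifies.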
Hyperheating?!? (Score:2, Funny)
Wow! Brilliant! Ace! (Score:2, Funny)
This will totally change my life! It's the announcement that I'd been waiting for! I must rush out and purchase ten thousand of these immediately, if not sooner! And so on!
</sarcasm>, wouldn't it be simpler for Slashdot to just link to every product announcement from a major hardware manufacturer rather than go through the farce of picking one of the dozens of frenzied (and typo'd) submissions from the "f1rz7 5Ubm1z10n, 5uX0rz!" brigade?
Hyperthreading? bah! (Score:4, Funny)
3.06mhz (Score:5, Funny)
big deal (Score:2)
something you don't see everyday (Score:3, Funny)
There's only one explanation to 2 typographical errors in the post.. sex..
Rob posting articles to be posted automatically, Kathleen wants Rob.. if you know what I mean.. Rob tries to rush.. well.. you get the idea..
Re:something you don't see everyday (Score:3, Funny)
Then how do you explain the 3rd typo ("avance")???
Anyone notice this??? (Score:5, Interesting)
Securing the physical pathways that transport data on a computer's motherboard. This will sure help me against those tiny little hackers inside my computer stealing my data!
Oh wait, you mean this is to protect the data against me? Looks like we have about a year before this is built into the PC architecture. Plan your computer buying wisely.
Bastards.
Meanwhile, in a systems context (Score:2, Insightful)
The fact is that for work a 700MHz PIII is usually fast enough given the rest of the system, as well as being reasonably cool and quiet.
So what is the point of this advert? Is it the result of a kind of desperation on the part of Intel? Marketing departments insisting on announcing ever smaller "feature creeps" in an effort to create a buying climate run the risk of the very buyer turnoff they want to avoid. It's like the old Indian auto industry, where the big new feature for each year was something like a differently shaped tail-light molding.
At what expense? (Score:2, Interesting)
Speed of the CPU is good but... (Score:5, Insightful)
Overkill? (Score:2, Insightful)
I know that there are some of you on here who will flame me saying that you DO use that power. And that's fine, you are the 1% of the population I mentioned earlier. But to do it (like most of you would... admit it) just to get another 4fps in UT2003 or whatever is just sick. Yes, eventually I will buy a new computer, but only when my needs exceed the resources in my computer, which hasn't happened just yet (it's getting close though...). If any of you can actually tell the difference between this 3.06GHz P4 and the 2.5GHz P4 (without using a stopwatch that measures in milliseconds) I have a bridge to sell you. Don't let Intel make you think that you need to buy a new computer right now. It may help the economy in the short term, but you will just be wasting precious electricity (in this case gobs of it) just to say you have the latest and greatest. It's becoming a disease!
Re:Overkill? (Score:5, Insightful)
Everyone, of course, believes they're in that 1%.
I used to do commercial 3D video game development on a 450MHz P2. It was a bit slow when compiling, but acceptable otherwise. Then I upgraded to an 866MHz P3 and, even years later, it still feels like lightning. Compiles are quick. Everything is snappy. I've taken to writing tools in Perl and Lisp and Python, and they're snappy as well. I mean, geez, who would have thought ten years ago that you'd ever be able to write 3D geometry manipulation tools in Lisp and have no worries about performance?
Now, of course, you can buy a 2.5GHz P4 in an $800 PC. This is beyond ridiculous. Everything is three times faster than "beyond the point of caring"? I'm going to put C++ aside for almost everything, and just use whatever is the most abstract. Haskell? Yes, please.
Am I in the 1%? Certainly not.
It may help the economy in the short term, but you will just be wasting precious electricity (in this case gobs of it) just to say you have the latest and greatest. It's becoming a disease!
This bothers me, too. Yeah, people don't need all this performance, and that's okay. Who cares if your computer is too fast? But unfortunately you don't get all this performance for free. It's coming at the premature obsolescence of hardware and greatly increased power consumption. Hard drives and monitors are actually improving in this regard, especially with LCD monitors (awesome!). But now we have 70 watt processors and PCs that ship with five or more fans in them, and we're talking bottom end machines from Dell and Gateway here, not crazy high-end monsters. This is bad.
"Does this mean... (Score:3, Funny)
(someone actually asked me this in talking about the 2.2p4)
--Joey
Re:Great.... (Score:4, Insightful)
Well, I have a Citrix farm full of quad Xeons and 4 gigabytes of RAM, and we'd still love some more power, thanks.
Maybe you don't want 3.06 GHz for what you're working on, but our "Enterprise Class Systems" (Win2k application servers) can use all the CPU we can throw at them. Everyone has different needs, and for a lot of folks, faster processors are a good thing.
(I've seen this troll a few times over the last four or five AMD/Intel product announcements. And it's still getting modded up.)
Re:Great.... (Score:5, Insightful)
Are they actually CPU bound, or are they slowed by memory access and bus bandwidth? Apart from certain numerical computations, I have rarely seen cases in which the CPU is really fully occupied, although the tools often report that it is. For example, tools will report when the CPU is idle waiting on a page fault to the swapfile, but not when it's waiting for data to get to or from main memory; that just looks like the CPU is occupied.
Knowing what I know of Citrix, it alone is far bigger than the L2, and that's before even considering the user applications. It requires the CPU to switch context heavily, and constantly flush and reload its L1/2/3 caches. After all, if you need 4G of RAM to run the applications you are using, and you have say an 8M cache, the CPU is going to be spending a lot of time managing its cache rather than doing useful work. Given that, it is bound by memory access, not raw CPU.
Manufacturers, driven by consumer marketing that believes higher MHz == better product, are optimizing in the wrong areas. If they want to talk numbers, they should be pushing fast memory and buses, which are actually a useful measure of a machine's performance, not CPU MHz, which isn't.
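The "looks busy but is really stalled" effect above can be sketched with a toy model. All of these numbers (memory references per instruction, miss penalty) are illustrative assumptions, not measurements of any real chip:

```python
# Toy model of why a cache-thrashing workload looks "CPU bound" to the
# OS: the core is reported busy, but most cycles are stalls on memory.
# All constants are illustrative assumptions.
BASE_CPI = 1.0            # cycles per instruction if every access hits cache
MEM_REFS_PER_INSTR = 0.3  # assumed fraction of instructions touching memory
MISS_PENALTY = 300        # assumed cycles to reach DRAM on a ~3 GHz part

def effective_cpi(miss_rate):
    """Average cycles per instruction including memory stall cycles."""
    return BASE_CPI + MEM_REFS_PER_INSTR * miss_rate * MISS_PENALTY

for miss_rate in (0.01, 0.05, 0.20):
    cpi = effective_cpi(miss_rate)
    useful = BASE_CPI / cpi   # fraction of "busy" cycles doing real work
    print(f"miss rate {miss_rate:4.0%}: CPI {cpi:5.1f}, useful work {useful:.0%}")
```

Even a 5% miss rate in this model leaves the core doing useful work less than a fifth of the time, while every monitoring tool reports 100% CPU utilization.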
Re:Great.... (Score:5, Informative)
Yes, they are slowly improving, but modern PCs are still behind where workstations were years ago, and a modern Intel-based server is well behind a SPARC-based machine.
Intel and AMD will spend their money on whatever generates the most ROI. They have collectively spent literally billions of dollars convincing Joe Public that CPU MHz is the best way to measure the speed of a system - they aren't going to throw that away. A competent manager with R&D dollars to spend will therefore spend them on increasing MHz.
Oh, and your post reeks of being underexposed to any architecture other than x86.
though the cost/benefit is out of whack. A P4 at 2.4GHz with 2MB of L2 would get trounced by a 2.6GHz with 512KB of L2 cache, disputing your claims that CPU speed doesn't matter. Large-cache chips only make sense if you can't get a faster CPU:
Yes, assuming the code to run is 512k in size. If the code is ~2M, so it fits into L2 on the slower processor, then the slower one will have the advantage, because the faster one will have to waste cycles moving the cache back and forth to main memory. Cache size is related to CPU speed only in terms of memory bandwidth: if your CPU cannot get data from main memory fast enough to keep it occupied, then you need faster memory closer to the CPU, which is what a cache is. If you are context switching, then you will have to keep dumping the cache and reloading it, which puts larger caches at a disadvantage.
Ultimately, caches are a hack; an elastoplast solution to the fundamental problem, which is the mismatch between the rate at which a modern CPU can process data, and the rate at which memory can supply it. In an ideal system, there would be no CPU caches at all, because the CPU could get data from main memory fast enough to keep it fully occupied. Systems used to be built like this, before the current obsession with clock speeds.
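The mismatch described above is easy to quantify. A sketch with the clock speed from the article and an assumed (typical-for-the-era, not sourced) DRAM latency:

```python
# Rough numbers behind "caches are a hack": how many clock cycles a
# 3.06 GHz core wastes on one round trip to main memory.
# The DRAM latency figure is an assumption for illustration.
CLOCK_HZ = 3.06e9
DRAM_LATENCY_S = 100e-9   # assumed ~100 ns load-to-use latency

cycles_per_miss = CLOCK_HZ * DRAM_LATENCY_S
print(f"one DRAM access costs ~{cycles_per_miss:.0f} cycles")

# For the CPU to need no cache at all, main memory would have to answer
# in a single clock cycle:
required_latency_ns = 1 / CLOCK_HZ * 1e9
print(f"a cache-free design would need ~{required_latency_ns:.2f} ns memory")
```

Main memory that answers in a third of a nanosecond doesn't exist, which is exactly why the cache hierarchy (the "elastoplast") is there.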
Re:Great.... (Score:4, Insightful)
The bus and memory bandwidth has improved pretty much in lockstep with the CPU computational ability. While it might be nice on paper to have 16GB of memory bandwidth, and it might look good on a ridiculously synthetic memory bandwidth benchmark, in practice such an imbalance would be just a monstrous waste of money: Generally processors actually do something with the data that they're processing, so the two factors have to balance: You need a system design that can keep the processor satiated. In the Athlon world such a situation was demonstrated superbly recently with the ramping up of the memory subsystem speed, DDR ramping up from 266MHz to 400MHz... what improvement did it demonstrate? Virtually none. The processor simply had no real need for the additional memory bandwidth, though I'm sure it will as they come out with the next generation.
Intel and AMD will spend their money on whatever generates the most ROI. They have collectively spent literally billions of dollars convincing Joe Public that CPU MHz is the best way to measure the speed of a system - they aren't going to throw that away. A competent manager with R&D dollars to spend will therefore spend them on increasing MHz.
While I have spent considerable effort in the past disputing the MHz-is-king myth (especially with regard to the P4 versus the Athlon), I think you're promoting just as false a claim. CPU speed DOES matter. By your reasoning, shouldn't these [tomshardware.com] benchmarks show no improvement as the CPU power ramps up, if the processor really is starved for throughput?
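For scale, here are the peak figures behind that DDR-266 to DDR-400 step, sketched under the assumption of a standard single-channel 64-bit (8-byte) memory bus:

```python
# Peak theoretical bandwidth of the DDR speed grades mentioned above,
# assuming a standard single-channel 64-bit memory bus. The grade
# number is the effective transfer rate in MT/s (DDR doubles the
# base clock).
BUS_BYTES = 8   # 64-bit data bus

def peak_bw_gb(mt_per_s):
    """Peak bandwidth in GB/s for a given effective transfer rate."""
    return mt_per_s * 1e6 * BUS_BYTES / 1e9

for grade in (266, 400):
    print(f"DDR-{grade}: {peak_bw_gb(grade):.1f} GB/s peak")
```

A ~50% jump in peak bandwidth with virtually no benchmark improvement is strong evidence the Athlon of that era wasn't bandwidth-starved, which is the point being argued.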
Scaling horizontally... (Score:5, Interesting)
But if you are scaling an application horizontally, the last thing that matters these days is processor speed. Sure, the heavy-duty maths is still sitting on a mainframe and your ERP is still on an AS/400, but that is more about reliability than power. Intel boxes fail, period, so having one box isn't a smart move; having 10 is a more sensible approach.
Dual NIC, external disk via fibre channel. That is where I'll spend the cash. The processor just needs to be fast enough, and I'd like there to be at least two in the box. 2 Boxes doing everything, federated systems.
If you lob everything on one box, then yes you need all the processor speed you can handle, you also need to think about what happens when the box fails.
If Intel announced that this new processor could degrade its performance when issues arose, then I'd be interested. Overheating? Turn off hyperthreading and drop the clock speed. Still got issues? Move down to minimum speed and start a shutdown process.
I like servers that will run for 5-10 years with no down time. But with Intel/AMD boxen I'll stick with lobbing in lots on the basis that they'll fail.
Re:Great.... (Score:2, Funny)
Re:Superfast! (Score:3, Funny)
Jump back in time... even further
MHz = megahertz
mHz = millihertz
Imagine a computer that's triggered every 11 minutes... with hyperthreading!
Wow. It might have stunned Charles Babbage... [vt.edu]
Re:the new P4 *that* will run at 3.06 GHz (Score:2)
Re:avance? (Score:2)
Re:The numbers are deceiving... (Score:2)