Intel Pushes Pentium 4 Past 3 GHz

denisbergeron writes "Yahoo has the news about the new P4 who will run at nothing less than 3.06 GHz. But the great avance will be the hyperthreading technology (already present in Xeon) that allows multiple software threads to run more efficiently on a single processor."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • eh? (Score:5, Funny)

    by lordkuri ( 514498 ) on Tuesday October 29, 2002 @07:45AM (#4554840)
    Yahoo has the news about the new P4 who will run at nothing less than 3.06mhz.

    umm... I've got an XT clone that's faster than that... wanna buy it for about $600?

    (/sarcasm)
    • Re:eh? (Score:2, Funny)

      by MortisUmbra ( 569191 )
      The evils of no proof-reading :) the greatest avance isn't 3.05mhz....
    • Re:eh? (Score:2, Informative)

      by Anonymous Coward
      Processors gain self consciousness at about 3 MHz, therefore we shall refer to them as "who", not "which".
    • Hehehe, now The Taco has edited the article to say "GHz"... watch out buddy, he's on to you. :)
  • I just hope hyperthreading is the real deal, not a load of hyperhype.
    • by Webmonger ( 24302 ) on Tuesday October 29, 2002 @08:00AM (#4554916) Homepage
      Hyperthreading works well for certain types of software, and awful for others.

      Here's an article [arstechnica.com] from Ars Technica on HT/SMT.
    • by Jim Norton ( 453484 ) on Tuesday October 29, 2002 @08:02AM (#4554921)
      A couple of sites have benchmarked Xeons with HT enabled already (Anandtech and Aces Hardware spring to mind.) It provides a boost in some applications but can actually decrease performance in others. It's rumored that Intel has improved their implementation of hyperthreading but I wouldn't expect the 20-25% performance gains in most applications.
      • As the poster before me mentioned, Anandtech did a test where they compared Athlon MPs vs. Xeons, both in single and dual setups. The test, "Database Server CPU Comparison: Athlon MP vs. Hyper Threading Xeon", can be found here: http://www.anandtech.com/IT/showdoc.html?i=1606

        It's actually one of the better tests that they have done. They use their own databases to test the performance: the webDB, the adDB and the forumDB. The smart thing about doing this is that the databases have different characteristics:
        -the webDB: lots of selects (reads)
        -the adDB: some selects, more stored procedures
        -the forumDB: selects, inserts and updates

        After reading this test in April, I wouldn't actually jump to the conclusion that Hyperthreading is a meaningful "desktop feature" if you look at price/performance. Actually, I think it's a bit overhyped.
    • I just hope hyperthreading is the real deal, not a load of hyperhype.

      You just answered yourself.
      Hyperthreading is just for really dealing with the load.
  • ...but the C64 still got better sound.
    • > 3.06 MHz is over 3 times faster than a C64...
      > ...but the C64 still got better sound.


      I agree: despite all the claims about Moore's Law and technological advances, this proves that tripling the speed of a good ol' 6510 CPU has some disadvantages as well: give a little to gain a little. 8-)

      More seriously: graphics have improved since the early eighties, but what about gameplay? Isn't MAME the only thing that really justifies buying PC hardware now and then?

      --
      Money is the root of all evil (Send $30 for more info)
  • Hmmm... (Score:5, Funny)

    by llin ( 54970 ) on Tuesday October 29, 2002 @07:50AM (#4554869) Homepage Journal
    the new P4 who will run at nothing less than 3.06mhz

    Yeah, but what's its top speed?

    • Re:Hmmm... (Score:5, Funny)

      by Jenova ( 27902 ) on Tuesday October 29, 2002 @08:08AM (#4554945)
      >>Yeah, but what's its top speed?

      I dunno, it depends on the person throwing the computer, I guess?
    • Re:Hmmm... (Score:5, Insightful)

      by catwh0re ( 540371 ) on Tuesday October 29, 2002 @09:37AM (#4555449)
      the entire P4 design is what you call "long and narrow", with built-in provisions to cope with things like the time it takes for a signal to cross the chip. Namely, the P4 is solely designed to be clockable to some very large numbers... I'd say expect P4s at 8GHz; they demoed 4GHz chips just before they were releasing 2GHz ones.

      The fact of the matter is that Intel is going to rely almost entirely on the marketability of a big number with the P4, as its handling is rather unimpressive compared to such ordinary designs as those from AMD, which clock poorly, yet crunch happily.

      I need not mention G4s and other well-designed chips, as some GHz bunny is certain to point out that they are only at 1.25GHz at the moment.

      • Re:Hmmm... (Score:3, Insightful)

        by cheezedawg ( 413482 )
        The fact of the matter is that Intel is going to rely almost entirely on the marketability of a big number with the P4, as its handling is rather unimpressive compared to such ordinary designs as those from AMD, which clock poorly, yet crunch happily.

        I disagree. Intel's strategy of designing for higher clock speeds has given them a much more scalable chip, and that is evidenced by Intel's ability to increase clock speeds frequently while AMD is struggling. And if you look at the last Tom's Hardware review [tomshardware.com] (it's a couple of weeks old), the P4 2.8 GHz pretty much tied with the Athlon 2800+ (they each won about 14 benchmark tests). But that is much less meaningful when you realize that Tom was testing an Intel chip that has been available for 2 months against an Athlon that won't be available until December. If you compare the 2.8 GHz P4 with the fastest available Athlon today, the P4 beats it in over 90% of the benchmarks (I'd imagine that a comparison between the 3.06 GHz HT chip and the Athlon 2800+ would be similar). So Intel's strategy is working for performance, and it is more marketable to boot.

        And there is a lot of research right now on optimal pipeline depth, and the conclusion so far is that current pipelines are not deep enough. The optimal pipeline depth for the x86 architecture is around 40-50 stages.

        http://systems.cs.colorado.edu/ISCA2002/FinalPapers/hartsteina_optimum_pipeline_color.pdf [colorado.edu]
        http://systems.cs.colorado.edu/ISCA2002/FinalPapers/Deep%20Pipes.pdf [colorado.edu]

        BTW- thanks to fobef for these links- I read them yesterday on /.
  • will be expensive (Score:3, Insightful)

    by Graspee_Leemoor ( 302316 ) on Tuesday October 29, 2002 @07:52AM (#4554873) Homepage Journal
    It's interesting to see what the cutting edge is capable of, but you pay such a stupidly massive premium for the latest processor that only fools would use their own money to buy it.

    In the UK the very latest Intel chip usually goes for about 700 UKP; the sweet spot in the price/performance trade-off tends to be around the 200 UKP mark, which will probably be the 2.5GHz by the time this 3GHz one is out.

    graspee

    • Re:will be expensive (Score:2, Informative)

      by MikeDX ( 560598 )
      What's nice about that is that the new 1.7 P4's will easily overclock to 2.6GHz+, so you are getting almost a gig free just for knowing which switches to push.
  • what's my motivation (Score:4, Interesting)

    by rob-fu ( 564277 ) on Tuesday October 29, 2002 @07:54AM (#4554879)
    "You won't see a heck of a lot of difference in Word, but software like [Adobe Systems'] Photoshop or video-rendering software will benefit considerably," he said.

    How can Word appear any faster at 3GHz? I would think that after 1.5GHz, improvement in performance would be hard to notice. Granted, it will be good for people who are still running those 200MHz clunkers but what's the incentive if you're already running in the GHz range?
    • Incentive? (Score:5, Funny)

      by Gruneun ( 261463 ) on Tuesday October 29, 2002 @07:59AM (#4554914)
      Unreal Tournament 2003 just kicked my 1.0 GHz machine in the nuts and then made fun of me. If for no other reason, I'm glad to see this announcement, because I can expect a price drop on the 2.6 GHz and 2.8 GHz chips.
      • Uh... turn off a lot of the crap man.

        I'm running UT2k3 on an Athlon 750 with a GF2. I run at 800x600 with low qual textures/models and most options turned off. Yes, it's much more pretty on a fast system, but it certainly didn't "kick my [...] machine in the nuts".

        Yes, I'm planning to upgrade, but I was pleasantly surprised at how well this 2.75 year old machine handled UT2k3.
        • I am running a GF2 at 800x600 with most of the stuff at "normal" settings. It runs fine. The first thing I tried was max settings, then I slowly backed off until the gameplay was acceptable. However, the beauty of the new game is in the texture/model quality and detail. Running at anything less than max settings is unacceptable and defeats the purpose of the new game.

          After all, I'm sure the original Quake runs great on my machine and several slower processor-based iterations, but it's no longer the quality I demand from my games.
    • How can Word appear any faster at 3GHz? ...
      Granted, it will be good for people who are still running those 200MHz clunkers but what's the incentive if you're already running in the GHz range?


      Unfortunately where I work the secretaries for division heads get these 3GHz machines and run Word on them while the scientists and technicians get to keep working on their Pentium 200MHz system. Maybe if they're lucky they get a hand-me-down from a secretary like a nice PIII-1GHz box. :-)

    • by cybrthng ( 22291 ) on Tuesday October 29, 2002 @08:03AM (#4554926) Homepage Journal
      I think you oversimplify today's word & document processing.

      For example, we use Microsoft Word with built-in Excel spreadsheets and ODBC queries that update charts in real time from an Oracle database, as well as included Visio stencils and other good stuff. This is a 40+ meg file in raw format, and a lowly 1.5GHz machine with 512 megs of RAM takes time to redraw it. We saw a huge performance increase from 1.5GHz to 2.4GHz. Maybe "hyperthreading" will help out even more.

      BTW, it is about the same performance under Linux using StarOffice or Corel Office. KOffice is even slower, so I know it's not just the tools :)

      For people who *WORK* using their PC, you can never have "too much" power. It's like race cars: maximizing performance for the job at hand.
    • by Anonymous Coward
      Correct - Word, Excel, and general Windows GUI operations - or, if you prefer, OpenOffice, KDE etc - won't show any improvement, and indeed the benefit for these apps (in terms of what the end-user perceives) began to diminish once processors passed the 6-700MHz mark.
      Where this kind of power shows, though, is in more intensive processing - as the article suggests, Photoshop, Cinema4D, and so on.

      As for Hyperthreading / SMT technology - it absolutely does make a difference. I've been running an HT-enabled system (pre-release silicon) for some time, and there are specific usage models where it shines. Forget about single-threaded or single-tasking type environments - the user who loads Word, does some typing, then loads Photoshop, does some filtering, etc, isn't going to see any benefit at all from HTT. However, once you get into multi-tasking scenarios the story is very different. For example, run a series of Photoshop filters/macros whilst simultaneously virus-scanning the system; or, export a large Outlook folder to a .PST archive file whilst WinZIP-ing a large folder. All quite credible usage scenarios (who here can say, "that's preposterous! no-one would EVER want to do such a thing!"..?) and the difference between an HT- and non-HT-enabled system is dramatic - of the order of 20-30% time saved.
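The multi-tasking claim above is easy to frame in code: two CPU-bound jobs run back to back versus overlapped. A minimal Python sketch follows (`busy_work` is an invented stand-in for a filter or zip pass, not any real application's code); on a multi-core or SMT machine the concurrent run should finish sooner:

```python
# Sketch: sequential vs. concurrent execution of two CPU-bound jobs, the kind
# of multi-tasking scenario where SMT is claimed to help. Processes are used
# (not threads) so the two jobs can genuinely overlap.
import time
from concurrent.futures import ProcessPoolExecutor

def busy_work(n):
    # Invented stand-in for a CPU-bound task (a Photoshop filter, a zip pass).
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_sequential(jobs):
    # One job after the other, like a single-tasking user.
    return [busy_work(n) for n in jobs]

def run_concurrent(jobs):
    # Both jobs at once; an SMT CPU exposes two logical CPUs to schedule on.
    with ProcessPoolExecutor(max_workers=2) as pool:
        return list(pool.map(busy_work, jobs))

if __name__ == "__main__":
    jobs = [2_000_000, 2_000_000]
    t0 = time.perf_counter()
    seq = run_sequential(jobs)
    t1 = time.perf_counter()
    par = run_concurrent(jobs)
    t2 = time.perf_counter()
    assert seq == par  # same answers; only wall-clock time differs
    print(f"sequential {t1 - t0:.2f}s vs concurrent {t2 - t1:.2f}s")
```

Whether the concurrent run comes out 20-30% faster or actually slower depends on the workload mix, which is exactly the split the benchmarks above report.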
    • by Anonymous Coward
      "You won't see a heck of a difference in Word..."

      So why are you asking "How can Word appear any faster at 3GHz?" It won't, but Adobe Photoshop blah blah will use as much as you can give.....

      Stu.
      Don't mind me I drank too much at tonight uni function. tee hihee. :)
    • by bjb ( 3050 ) on Tuesday October 29, 2002 @08:35AM (#4555075) Homepage Journal
      How can Word appear any faster at 3GHz?

      The speed is for Clippy, not YOU... he now is 3D ray-traced and has more artificial intelligence built in!

      If it wasn't for the idea of WYSIWYG and fonts, I'd still be doing my word processing on AppleWorks for the Apple ][.

    • How can Word appear any faster at 3GHz?

      Well, it can't. I recently upgraded my machine from a 200MHz Pentium Pro to 2x1800, and Word hasn't sped up one bit. It still pauses to think for a quarter-second here and there; not very long, but very noticeable. So I think there are some other problems with Word, it just doesn't seem to work correctly.
      All the applications in Linux, though, have sped up tremendously. I knew that SMP evens out the system load, but thought that there weren't that many multithreaded applications; still, the system just feels silken smooth (I first ran it with 1 processor, before modding the XPs' L5 bridge so they appear as MPs).
    • by zenyu ( 248067 )
      How can Word appear any faster at 3GHz?

      Users do stupid things.

      I've seen vector graphics with millions of lines inserted into Word. Fine for a drawing package or desktop publishing app, but god-awful slow in Word. Not really a fault of MS; it's that these people should be using a desktop publishing application. Word is for word processing.

    • ....."You won't see a heck of a lot of difference in Word, but software like [Adobe Systems'] Photoshop or video-rendering software will benefit considerably," he said.

      How can Word appear any faster at 3GHz? I would think that after 1.5GHz, improvement in performance would be hard to notice. Granted, it will be good for people who are still running those 200MHz clunkers but what's the incentive if you're already running in the GHz range?

      How can you quote something from the article and not even read it? It says you won't see much difference in Word, and you turn around and say "How will Word be faster?". Don't you even read what you cut and paste?

      Strange days when people bitch about technology getting better.

  • by puto ( 533470 ) on Tuesday October 29, 2002 @07:56AM (#4554889) Homepage
    Disregarding all of the comments on the 3.06 typo. Geez, I remember the day when we used to comment on processors, peripherals, parts. Now the community is stuck on whining about typos. Read it, chuckle to self, move on.

    Anyway they have ramped up the speed, and added something that could have always been, hyperthreading. Xeon has always had it. This is not progress, this is almost not worthy reporting.

    Puto
    • Yes, mod me offtopic if you like. I think the reason people whine about the spelling errors is simply that /. is no longer a little news service for a bunch of computer geeks. Millions of people all over the world (90% of them smartarses like myself) read it, and it seems really embarrassingly pathetic when the author and the editor do not know the difference between milli (m) and mega (M), and the fact that the P4 must be running at gigahertz (GHz). Well... I am embarrassed for them.
      • Man,

        I know where you are coming from, and if you look through my past posts you will note they are rife with grammatical and spelling errors. Sometimes I multitask, and the least amount of attention is focused on what I am posting on Slashdot. More content than checking my punctuation.

        You are right though. It is embarrassing to have our community display such glaring errors in a public format.

        I think the problem is that all the kiddies are just trying to get an article posted that they gleaned off of some other venue, and in their haste to submit before anyone else, they type whatever they can and submit without proofing. Did I mention this is coupled with the fact that they have Slashdot open in a window at work that they keep minimizing when a supervisor walks by?

        At least the editors should proof the articles for basic spelling, grammar and typos, especially when the errors are as small as in the current topic. And if they are reading the comments they can go in and fix it.

        However, what is worse are the 1000 posts pointing it out. Like we do not all see it, like once was not enough.

        I would rather see a reduction in the "hey, you're stupid" comments, as well as some editorial proofing.

        However, Slashdot is not run by language scholars.

        We need to clean up the submissions and the comments.

        Puto
    • Disregarding all of the comments on the 3.06 typo.

      Actually they got that part right.

      Geez, I remember the day when we use to comment on processors, peripherals, parts. Now the community is stuck on whining about typos.

      What exactly is there to talk about in yet another processor speed improvement story? Some people will say "who needs this much power?" Some people will respond and explain to these dolts who needs it. Others will make fun of the article submission. And then there are people who complain that the article was posted in the first place.

      This is not progress, this is almost not worthy reporting.

      See?
  • Bah (Score:4, Informative)

    by Brian Stretch ( 5304 ) on Tuesday October 29, 2002 @07:57AM (#4554897)
    Or you can build a dual processor Athlon system for less money. No need for HypedThreading.

    It has been reported on various sites [amdmb.com] that Athlon XP 2400+ chips (2GHz, new Thoroughbred Revision B core) are trivial to mod for dual CPU operation and easily overclock to 2.25GHz (150MHz FSB, aka 300MHz DDR, which is the most my ASUS A7M266-D will allow) with proper cooling (Thermalright SLK800 being my favorite). The chips are under $200 apiece. Imagine a Beowulf cluster of those...

    Proper Athlon MP 2400+'s are due shortly I'd assume.
    • ...Dual-CPU Athlon motherboards are not that easy to find in a retail store--you often have to purchase them mail order. :-(

      Also, what end-user oriented software will take advantage of Intel's hyperthreading process right now? Will we have to wait for updates to CAD/CAM, drawing and image editing programs to use hyperthreading? And when will we see updates to multimedia programs such as Windows Media Player, RealOne, Quicktime, software DVD players, etc. that will take full advantage of hyperthreading? We might not see them until early 2003.
    • Re:Bah (Score:4, Funny)

      by hendridm ( 302246 ) on Tuesday October 29, 2002 @10:11AM (#4555760) Homepage
      > are trivial to mod for dual CPU operation and easily overclock to 2.25GHz

      Some of us need reliability and warranty. I'm not going to throw an overclocked, modded AMD POS in the server farm at work. If I do, I'm going to make sure my resume is updated and my suit is free from little white fuzzies.

      I'm gonna go overclock my laundry machine now and cook a pizza in self-cleaning mode in my oven. Should be done by the time the laundry dings.
      • Re:Bah (Score:3, Interesting)

        by Brian Stretch ( 5304 )
        Fine, read this [anandtech.com] and pick out a prebuilt Athlon MP 2200+ server for your server farm. It's STILL better/cheaper than buying a 3GHz P4.
  • Yippie! (Score:3, Funny)

    by Anonymous Coward on Tuesday October 29, 2002 @07:59AM (#4554911)
    I'll be able to run Word really fast now.
  • Hyperthreading ... (Score:5, Interesting)

    by RinkSpringer ( 518787 ) <rink.rink@nu> on Tuesday October 29, 2002 @07:59AM (#4554913) Homepage Journal
    Humm, this raises a point for me. Of course they claim it is faster, but when, exactly?

    I mean, is it faster when doing stack swaps or when using the TSS to multitask? *BSD uses the TSS to multitask, taking advantage of the i386's way of quickly swapping registers and stack. Windows doesn't do this...

    So, from a pure technical point of view, how does it work? Did they just make TSS switches faster? Some OSes benefit highly from that, but others, well, don't.
    • by Doctor Faustus ( 127273 ) <Slashdot&WilliamCleveland,Org> on Tuesday October 29, 2002 @08:26AM (#4555034) Homepage
      Ignore Intel's "Hyperthreading" name. There was already an established name for the technique: Simultaneous Multi-Threading (SMT). The basic concept is that, since much of the CPU's pipeline is usually going to waste due to stalls, especially in a CPU with a pipeline as deep as the P4's, one physical CPU can pretend to be two CPUs. When instructions for one logical CPU stall, the pipeline can switch to instructions for the other.

      This would have been a lot harder a few years ago, but most of the hard parts (like register renaming) had already been done to implement out-of-order execution.

      As for what can benefit, it's pretty much anything that can benefit from dual CPUs.
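The stall-stealing idea in the comment above can be shown with a toy scheduler. This is purely illustrative Python; the `(label, stalled)` stream format and the one-instruction-per-cycle model are invented for the sketch, and real SMT issue logic is far more involved:

```python
# Toy SMT model: one pipeline, two instruction streams. When the active
# stream's next instruction is stalled, the issue slot goes to the other
# stream instead of being wasted.

def smt_schedule(stream_a, stream_b):
    """Return the issue order for two streams of (label, stalled) pairs."""
    issued = []
    streams = [list(stream_a), list(stream_b)]
    current = 0  # which logical CPU owns the issue slot
    while streams[0] or streams[1]:
        other = 1 - current
        if streams[current] and not streams[current][0][1]:
            issued.append(streams[current].pop(0)[0])
        elif streams[other] and not streams[other][0][1]:
            # Current stream is stalled (or done): steal the slot.
            issued.append(streams[other].pop(0)[0])
            current = other
        else:
            # Both stalled: burn a cycle, after which the stalls resolve.
            for s in streams:
                if s:
                    s[0] = (s[0][0], False)
    return issued

# Stream A stalls on its second instruction; B's work fills the gap.
order = smt_schedule(
    [("a1", False), ("a2", True), ("a3", False)],
    [("b1", False), ("b2", False)],
)
print(order)  # ['a1', 'b1', 'b2', 'a2', 'a3']
```

The point of the toy is the elif branch: a cycle that a single-threaded pipeline would waste on a stall is spent issuing the other thread's instruction instead.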
    • So, from a pure technical point of view, how does it work?

      The processor can appear as two logical processors and run two threads through the same core. RTFA for more information.

  • by Anonymous Coward on Tuesday October 29, 2002 @08:04AM (#4554933)
    I have a 56k modem and the internet is soooo slow, will this make it faster? They said with the PIII it would but I didn't see much difference.
    • Actually my parents have a P4 with a winmodem and it still sometimes loses connection if you have too many processes running (like 3 Netscape windows, a text editor, and a bash window). Also, when trying to connect, the whole machine slows down a lot. I would have thought that by now winmodems could work halfway decently, but a cheap hardware modem still beats a winmodem hands down. Slap on a $30 external modem and everything works a lot better.

      Not really an anti-Windows rant, more an anti-winmodem rant. :)
  • by mccalli ( 323026 ) on Tuesday October 29, 2002 @08:06AM (#4554937) Homepage
    Title says it all really.

    I was torn between building another dual-CPU box (currently on twin 533MHz Celerons with an ABit BP6 board), or going the small form-factor route. Now I can do both.

    More at Shuttle's site [shuttleonline.com].

    Cheers,
    Ian

    • by Anonymous Coward
      If you are really used to using multi-processor enabled applications, especially graphics apps like Maya, Digital Fusion, etc., you will not like a hyperthreading machine. In our tests, Maya runs SLOWER, as does Digital Fusion, when hyperthreading is enabled.

      Other things that run slower (in general):

      video encoding
      audio encoding
      quite a number of apps with real multi-threading built in.
      • you will not like a hyperthreading machine. In our tests...things that run slower...video encoding, audio encoding, quite a number of apps with real multi-threading built in.

        Hmm. Now that's seriously disappointing - video encoding is what I mainly had in mind. That and a tiny amount of Photoshop - I can live without dual-CPU for that though, as my Photoshop usage isn't that high.

        There are other things I do with it - I run various virtual machines using Virtual PC for Windows [connectix.com], and I like the isolation that running on a dual-cpu gives me. Even if the virtual machine starts chewing its way through my CPU power, it generally only starts massacring one at once, thus leaving my native OS and GUI nice and responsive. I'd be looking for a hyperthreaded machine to give me the same advantage. Does that sound likely?

        Cheers,
        Ian

  • by imag0 ( 605684 ) on Tuesday October 29, 2002 @08:06AM (#4554938) Homepage
    Knock Knock!
    Who's there?
    ...
    15 second wait...
    ...
    Intel
  • Turbo? (Score:4, Funny)

    by suss ( 158993 ) on Tuesday October 29, 2002 @08:06AM (#4554940)
    Yahoo has the news about the new P4 who will run at nothing less than 3.06mhz.

    Does it at least have a turbo switch?
  • Hyperthreading (Score:4, Interesting)

    by echophase ( 601838 ) on Tuesday October 29, 2002 @08:06AM (#4554941)
    This will make things interesting for software licenses that charge per cpu.

    for those of you who don't know, with hyperthreading, the system will appear to have two cpus. If you have a dual system with hyperthreading, then it will look like 4, and so on.
    • It is explained here at The Inquirer [theinquirer.net] for Windows. Basically it means that if an application is licensed for a single processor, it will use only the first processor (even if it is a dual processor system).

      There are no real problems if:
      -You buy licences for real processors and the software does not check it.
      -The software runs only on the number of (real) processors it is licensed for: it just does not use the hyperthreading. (This is the case for the current Windows XP & Windows 2000.)

      There is a problem if:
      -The software aborts when it detects more (real) processors than it is licensed for.
      I do not know of such software. Or it is software that is tied so closely to hardware that it fails anyway if it is put on new hardware.

      Note that you can still disable hyperthreading in the BIOS.
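What a per-CPU licence check actually sees can be sketched in a few lines of Python. The field names below follow Linux's /proc/cpuinfo conventions, and the sample text is synthetic, not from a real machine:

```python
# Sketch: logical vs. physical CPU counts as licensing code might see them.
# os.cpu_count() reports *logical* processors, so an HT-enabled single-core
# P4 shows up as 2. Counting unique (physical id, core id) pairs from
# /proc/cpuinfo recovers the physical count on Linux.
import os

def logical_cpus():
    return os.cpu_count()

def physical_cores_linux(cpuinfo_text):
    """Count unique (physical id, core id) pairs in /proc/cpuinfo-style text.

    Returns None if the fields are absent (as on very old kernels)."""
    phys = None
    seen = set()
    for line in cpuinfo_text.splitlines():
        if ":" not in line:
            continue
        key, _, val = line.partition(":")
        key, val = key.strip(), val.strip()
        if key == "physical id":
            phys = val
        elif key == "core id":
            seen.add((phys, val))
    return len(seen) if seen else None

# Synthetic example: two logical CPUs sharing one physical core,
# which is what a hyperthreaded single-socket P4 would report.
sample = (
    "processor : 0\nphysical id : 0\ncore id : 0\n\n"
    "processor : 1\nphysical id : 0\ncore id : 0\n"
)
print(logical_cpus(), physical_cores_linux(sample))
```

Licence enforcement that naively counted logical processors would thus double-bill an HT box, which is exactly the concern raised in this thread.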

  • by Ed Avis ( 5917 ) <ed@membled.com> on Tuesday October 29, 2002 @08:08AM (#4554948) Homepage
    Does this mean that AMD's scale for measuring the performance of its CPUs (the Athlon 2200+ runs at 2200 zlotniks) will no longer compare fairly against MHz for the P4? Perhaps a P4 will run about as fast as an Athlon of the same clock speed (if you could get Athlons clocked at 3GHz).
  • by xirtam_work ( 560625 ) on Tuesday October 29, 2002 @08:09AM (#4554951)
    the huge number of story errors that keep popping up. You'd think that the story editors would try to maintain some kind of quality control.

    However, it's also possibly a ploy to keep people posting indignant comments about errors. 50% of the posts on these kinds of stories seem to be pointing out these glaring errors. Like the recent story about PS2 games on an Xbox, which had nothing to do with the Xbox at all.

    Come on guys, wise up!
  • Hyperthreading (Score:5, Insightful)

    by e8johan ( 605347 ) on Tuesday October 29, 2002 @08:14AM (#4554975) Homepage Journal
    Hyperthreading is a complex proof of the limitations of today's CPU architectures. I believe in a CPU architecture containing many small CPU cores on one chip, instead of just multiplying the issue and commit parts and sharing the execution units.
    It would be more scalable and easier to implement to use several complete CPUs. The biggest drawback (compared to hyperthreading) would of course be that in special situations some CPU cores would be idle, but this simply corresponds to pipeline bubbles in the hyperthreaded case. This is easily compensated for by two facts: 1) multiple CPUs can be made very scalable, and 2) most computer systems today always run multiple threads (i.e. utilization will be good).
    Of course, for Intel to maintain their market lead, everything has to be compatible, so they'll have to pay, time after time, for the errors they made in the eighties (the 286 paging + the CISC ISA). By breaking Amdahl's law time after time (SSE, MMX, etc.) they have made an even more complex beast. The only area where they really excel is in production processing. They can squeeze out high frequencies and pack the transistors tight. For that, I'll give 'em cred. For their CPU ISAs, I'll just laugh...
  • Whoopie. (Score:2, Insightful)

    by Anonymous Coward
    Great, so now we'll see nerds nitrogen-cooling these things to get an extra performance boost as well? What a waste of time.

    This is all pointless. The entire pentium "architecture" (more like a shanty-town) needs to be dumped entirely. We NEED a clean start.

    Even more so, why is no one addressing the fundamental problem--that the PC is just horribly designed? There are better ways of doing things than just ramming everything through a single CPU. This is 2002--why are we not pursuing better computer design? The "PC" is the bottleneck, for crying out loud. 10 years from now will we be reading about the new 10 GHz PVII chip, still running in 30-year-old hardware? Wonder if I can still get a "Missing Basic ROM" error on my desktop machine...

    Be, Inc. tried to redesign the "PC"...they had a very nice design, but they killed it before its time. And how about Amiga... yeah, everyone is sick of hearing about the Amiga, but it WAS intelligently designed. Instead of shoving everything through the CPU, the Amiga used coprocessors to deal with much of the stuff that bottlenecks PCs, leaving the CPU free for more important stuff. It was a great idea, and it actually WORKED.

    I don't care who does it--I want to see a better machine being built. If done right, the Ghz of the CPU won't matter nearly as much.
    • Re:Whoopie. (Score:2, Insightful)

      by nempo ( 325296 )
      With the early 486 CPUs we had the extra FPU chip. Later, that was integrated into the main CPU.
      The reason for integration was price; it's cheaper to produce one chip than two.
      Today, we have SPUs (sound processing units), GPUs (graphics processing units) and so on.

      You're talking about redesigning the 'PC' when you actually mean 'redesigning the OS'.
      • With the early 486 CPUs we had the extra FPU chip

        In the 80486 it was already integrated. It was a separate chip in the 80286 timeframe.

        In the 80486 era there was the 486SX, which was an 80486 with its FPU fused off.

        Dude, you are getting old!

        But you are right, the parent poster does not properly make a distinction between OS & hardware.
    • It's pointless indeed... regarding Wonder if I can still get a "Missing Basic ROM" error on my desktop machine...:

      Find some DOS, type in the following: copy con myprog.com[enter][alt-205][alt-24][ctrl+z]myprog[enter] and you will know :)

      Disclaimer: no dos here.

    • Re:Whoopie. (Score:5, Insightful)

      by Zathrus ( 232140 ) on Tuesday October 29, 2002 @09:32AM (#4555400) Homepage
      Whoopie. Another EE student who has realized that the paper design of the PC architecture sucks wind and can't imagine that it works at all.

      Don't worry folks. In a few years he'll graduate and get some real world experience. And then he'll probably realize that while the PC architecture does indeed suck on paper, in reality it's not all that bad. Could it be better? Sure. Should we throw the baby out with the bathwater? No way.

      Compare the PC market to the rest of the computer market. Who's made more progress? Who has been rapidly pushing the niche markets into smaller and smaller niches as their "superior designs" find them running slower and more costly than the evil, horribly misdesigned PCs?

      Coprocessors? Yeah... have you even bothered to look at a modern video card recently? The damn things are more complex and more powerful than the CPU. Modern audio boards are also powerful all by themselves. For the most part I/O is handled by separate chips as well.

      The bus and memory interfaces on PCs could use some work. And that's happening, with 3GIO, PCI-X, and other buses being implemented in the next few years. There's some truly horrid cruft in the core too - the IRQs, DMA channels, etc. are still pretty godawful, but not nearly as godawful as they were back with the ISA bus. The issues haven't so much gone away as they've been hidden, but the performance limitations imposed really aren't all that absurd.

      Design a better machine? Go for it. It'll die just like all the rest because while you may have a better electrical design, you've ignored the real world and the fact that people want to be able to make slow transitions from one architecture to another. Doing an all-at-once transition is not an option unless you control the entire market - which no PC manufacturer does (unlike Apple). Of course, the flip side of this is that the competition causes the current implementation to advance far more rapidly than would be otherwise possible. Which is why you can buy a $2000 PC that outperforms a $200,000 server.
    • The high performance of CPUs makes me wonder why we couldn't do a more interesting type of a machine.

      I'd have a case with a crossbar-type bus. Into this you'd add CPU cards, each with its own memory and a single daughtercard slot. The daughtercards would add custom interface electronics for specialized tasks, but no actual processors, so a CPU card could serve as a video card, a SCSI card, a NIC, etc.

      One CPU card would be the "master" card, running the core of the OS kernel plus applications. The other cards would run applications or kernel modules specific to their hardware daughtercards: network stacks, filesystems, display components (renderers, GUI).

      Increase performance? Add a CPU module. The kernel or user tools could manage which cards ran which applications -- some apps could be dedicated to a specific CPU card, other apps could be "floated" to CPU cards based on available cycles.

      I don't think this is such a terribly new idea -- it's kind of the modularity that the IBM 390 or other NUMA architectures do now, but condensed into a single box. Think of a blade server box, but with a switching bus and the ability to access other systems' memory.

      It would require an OS with a lot more modularity. I'm not sure what would happen to apps that wanted more RAM than a single CPU card could hold, or how fast or easily you could move an app and its memory space from one card to another. I'm also not entirely sure that even a P3 @ 3.xx GHz could do the work of an NVidia GeForce, even if that's all it had to do.

      But it would be an interesting way to make a highly scalable platform, and scalable both ways -- big and small. An OS written for such hardware could run on a single-card system (think of a laptop or even a palmtop as a single-card system), and multi-card systems could come in S, M, L, and XL sizes depending on cost and need, as well as eliminating the CPU/Memory/Bus bottlenecks.
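      The "floating" policy the parent describes can be sketched as a toy scheduler. This is purely illustrative -- the Card and App classes below are made up for the sketch, not any real kernel interface:

```python
# Toy model of the proposed machine: each CPU card has its own RAM and a
# set of bound apps; "floating" an app means binding it to the card with
# the most spare cycles. Illustrative only -- not a real API.

class Card:
    def __init__(self, name, ram_mb):
        self.name, self.ram_mb = name, ram_mb
        self.apps = []

    def load(self):
        # Total cycles consumed by apps currently bound to this card.
        return sum(app.cycles for app in self.apps)

class App:
    def __init__(self, name, cycles):
        self.name, self.cycles = name, cycles

def float_app(app, cards):
    """Bind app to the least-loaded card and return that card."""
    target = min(cards, key=Card.load)
    target.apps.append(app)
    return target

cards = [Card("master", 512), Card("net", 256), Card("disk", 256)]
float_app(App("kernel", 50), cards)   # lands on "master"
float_app(App("httpd", 30), cards)    # "master" is busy, lands on "net"
```

      A real implementation would also have to migrate the app's memory between cards, which is where most of the hard problems mentioned above live.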
  • I read "hyperheating" and almost just scrolled on by. After all, "Pentium? Hyperheating? There's nothing new to see here. Move along."

  • This will totally change my life! It's the announcement that I'd been waiting for! I must rush out and purchase ten thousand of these immediately, if not sooner! And so on!

    </sarcasm>, wouldn't it be simpler for Slashdot to just link to every product announcement from a major hardware manufacturer rather than go through the farce of picking one of the dozens of frenzied (and typo'd) submissions from the "f1rz7 5Ubm1z10n, 5uX0rz!" brigade?

  • by Jugalator ( 259273 ) on Tuesday October 29, 2002 @08:33AM (#4555060) Journal
    I'm personally going to build an octathreading CPU by tricking the OS into thinking it's working with EIGHT processors! Wow, that should give me 8x the performance! Stupid Intel restricting themselves to faking just two processors.
  • 3.06mhz (Score:5, Funny)

    by stud9920 ( 236753 ) on Tuesday October 29, 2002 @08:33AM (#4555064)
    3.06 milliHz! Wow! That means about ten clocks an hour! With the super deep P4 pipeline (20 stages IIRC), it means it will push some 200 "single clock" instructions in just an hour. But beware of pipeline stalls. They better have a solid branch prediction algorithm.
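    A throwaway sanity check on the joke's arithmetic (with one nitpick: a 20-stage pipeline raises latency, not throughput, so the 200-instructions figure overcounts):

```python
# Clocks per hour at the typo'd frequency of 3.06 milliHz.
freq_hz = 3.06e-3                  # 3.06 mHz
clocks_per_hour = freq_hz * 3600   # about eleven clocks an hour

# Once full, a pipeline retires at most one instruction per clock
# regardless of its depth, so steady state is ~11 instructions/hour.
print(round(clocks_per_hour, 2))   # 11.02
```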
  • Slashdotters did this a while ago [slashdot.org] :-)
  • by Anonymous Coward on Tuesday October 29, 2002 @08:40AM (#4555102)
    Yahoo has the news about the new P4 who will run at nothing less than 3.06mhz

    There's only one explanation for two typographical errors in the post.. sex..

    Rob posting articles to be posted automatically, Kathleen wants Rob.. if you know what I mean.. Rob tries to rush.. well.. you get the idea..
  • by FuzzyDaddy ( 584528 ) on Tuesday October 29, 2002 @08:45AM (#4555126) Journal
    The LaGrande initiative will coexist with existing security initiatives such as Microsoft's Palladium to create a more secure computing environment, Otellini said. It will secure the physical pathways that transport data on a computer's motherboard, and will be available for both servers and desktops. The technology will take until at least next year to come to market, however, probably with the next generation of Intel's desktop Pentium processors.

    Securing the physical pathways that transport data on a computer's motherboard. This will sure help me against those tiny little hackers inside my computer stealing my data!

    Oh wait, you mean this is to protect the data against me? Looks like we have about a year before this is built into the PC architecture. Plan your computer buying wisely.

    Bastards.

  • I/O is still the bottleneck, be it to RAM, hard disk or whatever. I don't have a single computer that isn't, at one time or another, sitting around waiting for the hard disk to stop reading or writing, or for data to flow through that sl-o-o-w 100baseT switch.

    The fact is that for work a 700MHz PIII is usually fast enough given the rest of the system, as well as being reasonably cool and quiet.

    So what is the point of this advert? Is it the result of a kind of desperation on the part of Intel? Marketing departments insisting on announcing ever-smaller "feature creeps" in an effort to create a buying climate run the risk of the very buyer turnoff they want to avoid. It's like the old Indian auto industry, where the big new feature each year was something like a differently shaped tail-light molding.

  • At what expense? (Score:2, Interesting)

    by Alethes ( 533985 )
    What did Intel sacrifice to make the number of GHz higher for the sake of marketing? Really, I'd like to know, because I've heard this was the case with previous GHz barrier crossings, and I wonder how it affects the overall performance of the CPU, and of the rest of the computer for that matter.
  • by Cheese Cracker ( 615402 ) on Tuesday October 29, 2002 @09:14AM (#4555301)
    Why not spend more R&D money in increasing the speed of the bus? It would give us way better performance.
  • Overkill? (Score:2, Insightful)

    by ruiner13 ( 527499 )
    Seriously, besides the 1% of the research/development population who may need this, doesn't anyone think this is going too far, too fast? My personal computer is a G4 450, and I have yet to find something that really taxes it. I've upgraded the VC, HDs and RAM, spent maybe $300 doing so (over 3 years), and I have no problems, and I'd say I utilize the computer's resources more than maybe 97% of the population does (I am a programmer/video editor). I don't see the difference between compiling the latest release of Apache in 5 minutes instead of 6.5. The scary thing is, I know people who actually think that tweaking their Athlon XP 2200+'s to eke out another 150MHz or so by using a freakin' pencil is gonna get them somewhere.

    I know that there are some of you on here who will flame me saying that you DO use that power. And that's fine, you are the 1% of the population I mentioned earlier. But to do it (like most of you would... admit it) just to get another 4fps in UT2003 or whatever is just sick. Yes, eventually I will buy a new computer, but only when my needs exceed the resources in my computer, which hasn't happened just yet (it's getting close though...). If any of you can actually tell the difference between this 3.06GHz P4 and the 2.5GHz P4 (without using a stopwatch that measures in milliseconds), I have a bridge to sell you. Don't let Intel make you think that you need to buy a new computer right now. It may help the economy in the short term, but you will just be wasting precious electricity (in this case, gobs of it) just to say you have the latest and greatest. It's becoming a disease!

    • Re:Overkill? (Score:5, Insightful)

      by Junks Jerzey ( 54586 ) on Tuesday October 29, 2002 @10:47AM (#4556043)
      I know that there are some of you on here that will flame me saying that you DO use that power. And that's fine, you are the 1% of the population I mentioned earlier.

      Everyone, of course, believes they're in that 1%.

      I used to do commercial 3D video game development on a 450MHz P2. It was a bit slow when compiling, but acceptable otherwise. Then I upgraded to an 866MHz P3 and, even years later, it still feels like lightning. Compiles are quick. Everything is snappy. I've taken to writing tools in Perl and Lisp and Python, and they're snappy as well. I mean, geez, who would have thought ten years ago that you'd ever be able to write 3D geometry manipulation tools in Lisp and have no worries about performance?

      Now, of course, you can buy a 2.5GHz P4 in an $800 PC. This is beyond ridiculous. Everything is three times faster than "beyond the point of caring"? I'm going to put C++ aside for almost everything, and just use whatever is the most abstract. Haskell? Yes, please.

      Am I in the 1%? Certainly not.

      It may help the economy in the short term, but you will just be wasting precious electricity (in this case gobs of it) just to say you have the latest and greatest. It's becoming a disease!

      This bothers me, too. Yeah, people don't need all this performance, and that's okay. Who cares if your computer is too fast? But unfortunately you don't get all this performance for free. It's coming at the cost of premature hardware obsolescence and greatly increased power consumption. Hard drives and monitors are actually improving in this regard, especially with LCD monitors (awesome!). But now we have 70 watt processors and PCs that ship with five or more fans in them, and we're talking bottom-end machines from Dell and Gateway here, not crazy high-end monsters. This is bad.
  • by Joey7F ( 307495 ) on Tuesday October 29, 2002 @10:43AM (#4556002) Homepage Journal
    "I can surf the web faster or no?"

    (someone actually asked me this in talking about the 2.2p4)

    --Joey
