
Dual Cores Taken for a Spin in Multitasking

Vigile writes "While dual cores are just now starting to hit the scene from processor vendors, PC Perspective has taken the first offering from Intel, the Extreme Edition 840, through its paces in single- and multi-tasking environments. It seems that those two cores can make quite a difference if you have as many applications open and working as the author does in the test." It's worth noting that each scenario consists of only desktop applications, and it'd still be interesting to see some common server benchmarks, such as a database or web server.
  • by jawtheshark ( 198669 ) * <slashdot@nosPAm.jawtheshark.com> on Friday April 22, 2005 @05:55AM (#12311410) Homepage Journal
    SMP system performs better when applications are multithreaded...

    (Dual core is the same as an SMP system, except the cores can communicate a bit faster with each other)
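    As a rough illustration of what "multithreaded" buys you on an SMP/dual-core box, here is a minimal sketch (assuming pthreads; the summing workload and all numbers are made up for illustration, not taken from the article): two threads each chew on half the data, so a second core can actually be put to work.

```c
/* Minimal sketch: a trivially multithreaded workload that an SMP or
 * dual-core machine can spread across both cores.
 * Build with:  gcc -O2 -pthread sum2.c -o sum2   (illustrative only) */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define N 20000000L

static double *data;

struct range { long lo, hi; double sum; };

static void *partial_sum(void *arg)
{
    struct range *r = arg;
    double s = 0.0;
    for (long i = r->lo; i < r->hi; i++)
        s += data[i];
    r->sum = s;                 /* each thread writes only its own slot */
    return NULL;
}

int main(void)
{
    data = malloc(N * sizeof *data);
    if (!data)
        return 1;
    for (long i = 0; i < N; i++)
        data[i] = (double)i;

    /* One thread per core: on a dual-core (or dual-CPU) machine the two
     * halves really do run at the same time; on a single core they merely
     * take turns and nothing is gained. */
    struct range r[2] = { { 0, N / 2, 0.0 }, { N / 2, N, 0.0 } };
    pthread_t t[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, partial_sum, &r[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);

    printf("sum = %f\n", r[0].sum + r[1].sum);
    free(data);
    return 0;
}
```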

    • yeah it is a bit dull to anyone who has been following dual core or even just SMP.

      Cue loads of stupid people saying this technology is crap because most apps aren't multithreaded and the clockspeeds are lower than their current CPU.
      • Re:Newsflash... (Score:3, Informative)

        by hyc ( 241590 )
        The fact is, when you have only one problem to solve, a single fast CPU is always better than an equivalent count of slower CPUs. One 1GHz CPU is better than 2 512MHz CPUs of the same design, for solving a single problem.

        It's another fact that it's easiest for humans to analyze only one problem at a time. So the most straightforward way to handle any computing task is to crunch at it linearly from beginning to end.

        Unfortunately, gains in raw CPU speed have always come slower than the demand for number-cru
        • Re:Newsflash... (Score:3, Insightful)

          by jawtheshark ( 198669 ) *
          when you have only one problem to solve

          That highly depends on the problem. If your problem is highly parallelizable and the application that resolves your problem has been written (correctly) in a multithreaded way, then two CPUs will perform better. (As you say, it doesn't scale in a linear way)

          Of course, you might just say that a parallelizable problem is not one problem, but many small problems that need to be solved separately ;-)
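          For the "doesn't scale in a linear way" part, the usual back-of-the-envelope rule is Amdahl's law: if a fraction p of the work parallelizes across n CPUs, the best possible speedup is 1 / ((1 - p) + p / n). A tiny sketch (the fractions below are made-up examples, not measurements):

```c
#include <stdio.h>

/* Amdahl's law: upper bound on speedup when a fraction p of the work
 * parallelizes perfectly across n processors. */
static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    /* Illustrative fractions only. */
    printf("p=0.50, 2 cores: %.2fx\n", amdahl(0.50, 2));  /* ~1.33x */
    printf("p=0.90, 2 cores: %.2fx\n", amdahl(0.90, 2));  /* ~1.82x */
    printf("p=0.90, 4 cores: %.2fx\n", amdahl(0.90, 4));  /* ~3.08x */
    return 0;
}
```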

        • And again unfortunately, the only way to get good performance out of those designs is by explicitly coding for parallel processing. And that's hard for humans.

          Yes, it's hard now, but it won't always be hard; we will develop tools and methodologies that help us, just as we have before.
        • And again unfortunately, the only way to get good performance out of those designs is by explicitly coding for parallel processing. And that's hard for humans.

          I took a Parallel programming course in university. It was hard. Most students didn't get it. I got an A+ in the course. The hard part is that you really have to forget everything you have learned about programming on a single processor. You have to use completely different algorithms. All the regular algorithms follow a straight line. Paral
        • Re:Newsflash... (Score:3, Insightful)

          by walt-sjc ( 145127 )
          While that's true, these dual core chips (especially Intel's lame single-memory-bus design) really seem targeted towards the desktop market, where the impact is greatest yet the cost differential is relatively small (in relation to the total price of a system including software).

          I'm more interested in what IBM's Cell processor can do. While some problems are definitely single threaded by nature, the majority are not. I have a GIS application that could definitely benefit from as many processors as I can throw
      • Spot on target!

        Some people really just don't "get it". As a relatively early (1993) adopter of Wintel SMP, I can tell you that a multithreaded OS combined with some multithreaded apps can be significantly faster than single processor systems. (I used SMP to improve throughput in Photoshop, and rendering speed in 3D-Studio.)

        Since 1993, I have been doing BYO with computers, and have found that, except for gaming, SMP (combined with SCSI disks) has proved to be a pleasant experience. Multitasking is limit
      • yeah it is a bit dull to anyone who has been following dual core or even just SMP.

        Cue loads of stupid people saying this technology is crap because most apps aren't multithreaded and the clockspeeds are lower than their current CPU.


        Please excuse my ignorance, but I've never really 'got' SMP. How exactly does it work in a typical desktop environment? How do jobs get scheduled across the two separate cores/CPUs in such a way that it maximizes the available resources? Doesn't there need to be some kind
    • Re:Newsflash... (Score:5, Insightful)

      by beezly ( 197427 ) on Friday April 22, 2005 @06:07AM (#12311443)
      That is not necessarily true.

      I know this article is talking about Intel dual core chips, but for well-designed CPUs with integrated memory controllers (Power5, Ultrasparc IV, Opteron), the difference between a single dual-core CPU and two single-core CPUs is significant.

      On chips with built-in memory controllers, as you increase the number of cores on a chip, the memory bandwidth per core decreases; however, as you increase the number of chips in a system, the memory bandwidth per core remains the same while the number of cores increases.

      That can amount to a big performance difference when running memory-intensive jobs.

      Intel seems to be really losing the plot at the moment. In multichip configurations, Intel's memory bandwidth already sucks compared to Opteron. Going multicore per chip is only going to make it FAR worse.
      • Is it just me or did the performance of the single core AMD relative to the Intel dual core in those benchmarks just scream out..."I want the AMD dual core!"?

        Seriously, unless your application can run in the cache on the Intel parts, the AMD is gonna win hands down when running at the same clock rate, which translates pretty closely to the same power consumption. AMD will still be a tad lighter on power consumption just because the stuff is packed more tightly even though it has more active components. Equa
        • Same clock rate?

          The fastest AMD chip for the near future is 2.6GHz while the slowest P4 for the last year or so is 2.8GHz. If you want to compare based on clock speed, the Pentium-M (aka P3-v2) is a much fairer comparison. It has been a well known fact since the P4's launch that the P4's IPC (instructions per clock) sucks when compared to the P3's. (And even more so when compared to the PM's.)

          I have both an A64-3000+ and a P4-3G. My typical workloads usually contain a number of non-trivial tasks. While my
      • Re:Newsflash... (Score:2, Insightful)

        by MuMart ( 537836 )
        On chips with built-in memory controllers, as you increase the number of cores on a chip, the memory bandwidth per core decreases; however, as you increase the number of chips in a system, the memory bandwidth per core remains the same while the number of cores increases.

        Even in a multi-memory controller system the same physical memory is shared, so there has to be some performance hit when running more than one cpu, so I doubt the actual memory bandwidth per chip will be the same. Or is there some architec

        • Re:Newsflash... (Score:3, Informative)

          by beezly ( 197427 )
          That's NUMA - if your OS is NUMA aware it should try to place processes on the same processor as the memory that contains their data.

          But yes, you're right, processes accessing memory on a different processor will suffer a latency (and to some extent bandwidth) hit. A well designed OS will help to mitigate it to some extent, but it's one of the reasons that CPUs don't scale linearly.
    • Re:Newsflash... (Score:5, Informative)

      by The New Andy ( 873493 ) on Friday April 22, 2005 @06:11AM (#12311452) Homepage Journal
      ... assuming the OS chooses which threads are executed on which core well enough. If two threads depend on each other heavily and they are running on different cores, you can get really crappy performance.

      So the obvious answer would be to move one of the processes to the other core. However, this isn't trivial. You either have one scheduler per core or one scheduler per operating system. (You can't have a single thread sent to both cores easily - if both cores run the same thing at the same time there will be chaos)

      If you have one per core, then the scheduler trying to get rid of the thread has to synchronise with the other core, waiting for the other scheduler to come into context, and then tell it to add this new process. Obviously, there is a fair bit of overhead, and if my memory serves me correctly, each core in the current chip has its own cache - so now all the stuff which was cached has to be sent to memory (since it is in the wrong cache), leaving the cache empty and making every memory access slow for the next little while. End result - you can transfer a thread between CPUs, but it is costly.

      It is possible to have a single scheduler which just dispatches threads to each core as each core invokes it. The big issue here is making the scheduler threadsafe - both CPUs could run the scheduler at the same time, so you have to make sure they don't crap on each other. This is a problem we have already solved with common synchronisation primitives. But if you just lock the list of threads to run (*), then a whole lot of CPU time gets wasted just waiting to run the scheduler. It might be acceptable for 2 cores, but it doesn't scale at all.

      (*) You may realise (just as I realised) that a scheduler is more than just a list of threads to run (it is typically implemented as a couple of lists for each priority). The same problem still occurs with more than one list of threads; it is just a bit harder for me to express (proof by bad English skills).

      Finally, I'm expecting someone to tell me that I'm wrong about something I just said. That person is probably correct. My only experience with this stuff is a 3rd year undergrad operating systems course where we played around with OS161 (a toy operating system basically). But, hopefully the end conclusion will be the same: twice the number of processors won't equal twice as much performance, and it is tough to get a fast algorithm that will scale.
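      To make the "lock the list of threads to run" point concrete, here is a deliberately naive sketch of the single-big-lock scheme described above (not how any real kernel does it; names and structure are made up for illustration). Every core serializes on one run queue, which is exactly why it stops scaling as the core count grows:

```c
/* Naive shared-run-queue scheduler sketch: every "core" takes the same lock
 * to pick its next task, so cores spend time waiting on each other.
 * Illustration only - real kernels use per-CPU run queues, priority arrays,
 * load balancing, etc. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct task {
    int id;
    struct task *next;
};

static struct task *runqueue;                      /* shared by all cores */
static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;

static void enqueue(struct task *t)
{
    pthread_mutex_lock(&rq_lock);
    t->next = runqueue;
    runqueue = t;
    pthread_mutex_unlock(&rq_lock);
}

/* Called by each core whenever it needs something to run. */
static struct task *pick_next(void)
{
    pthread_mutex_lock(&rq_lock);                  /* the contention point */
    struct task *t = runqueue;
    if (t)
        runqueue = t->next;
    pthread_mutex_unlock(&rq_lock);
    return t;
}

static void *core(void *arg)
{
    long cpu = (long)arg;
    struct task *t;
    while ((t = pick_next()) != NULL) {
        printf("cpu %ld runs task %d\n", cpu, t->id);   /* "run" the task */
        free(t);
    }
    return NULL;
}

int main(void)
{
    for (int i = 0; i < 8; i++) {
        struct task *t = malloc(sizeof *t);
        t->id = i;
        enqueue(t);
    }
    pthread_t cpus[2];
    for (long i = 0; i < 2; i++)
        pthread_create(&cpus[i], NULL, core, (void *)i);
    for (int i = 0; i < 2; i++)
        pthread_join(cpus[i], NULL);
    return 0;
}
```

      The more cores there are, the more time is burned waiting at rq_lock; per-core run queues avoid that contention but bring back the migration and cache-refill costs described above.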


      • You know, in Windows, you can hit Ctrl+Alt+Del to bring up the task manager, go to the processes tab, right-click on a process, and go to "Set Affinity".

        Just sayin'...
        • You know, in Windows, you can hit Ctrl+Alt+Del to bring up the task manager, go to the processes tab, right-click on a process, and go to "Set Affinity".
          On SunOS it is called pbind [sun.com] and it has existed since well before Windows supported SMP at all.
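          The same thing the Task Manager menu (and pbind) does can be done programmatically; a minimal sketch using the Win32 SetProcessAffinityMask call (the mask value is just an example meaning "CPU 0 only"):

```c
/* Pin the current process to CPU 0 (mask bit 0) - roughly what Task
 * Manager's "Set Affinity" does.  Sketch only; error handling is minimal. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR mask = 1;   /* bit 0 set = run only on the first CPU/core */

    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                (unsigned long)GetLastError());
        return 1;
    }
    printf("now restricted to CPU 0\n");
    /* ... do work here; the scheduler keeps this process on that core ... */
    return 0;
}
```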
      • So the obvious answer would be to move one of the processes to the other core.

        The obvious answer is that the situation isn't a whole lot different from dual processor machines...
      • Re:Newsflash... (Score:4, Informative)

        by dascandy ( 869781 ) <dascandy@gmail.com> on Friday April 22, 2005 @01:14PM (#12314706)
        Actually, the opposite. Two processes that communicate quite heavily SHOULD be run together on two processors, especially since they will share memory and thus cache lines, plus they can spend time spinning on a lock instead of swapping threads. Given short locks and equal speed, they can work a whole lot more efficiently on a dual-core than on a single-core.

        FYI, it's called Gang Scheduling and has been described for quite some time.
  • A matter of time. (Score:5, Insightful)

    by Renraku ( 518261 ) on Friday April 22, 2005 @05:57AM (#12311417) Homepage
    How long before applications start figuring that they should have an entire core dedicated to them?

    Windows, for example. What if the next version of Windows requires a dual-core processor to be usable? You know..Windows gets one core to idle at 80% of its capacity..and spills over into the other core when loading a text file.

    If things stayed the way they were now, and the entire other core could be kept separate from the OS and used for gaming/other applications, it would be a great idea.

    But guess what.
    • There's no way you can dedicate a CPU to a particular application.. not in any form of pre-emptive OS.

      However, you can constrain an application to a particular CPU (in Windows at least) - task manager, set affinity. That's a great way of preventing an application from using your other CPU. If you want a CPU to run a game only, you would have to go through the entire process list and set the other processes to CPU 1 (or write an app to do that), and then set your game process to CPU 2.

      I think you'll get m
      • There's no way you can dedicate a CPU to a particular application.. not in any form of pre-emptive OS.

        What'd'ya mean "any form of pre-emptive OS"? Just because Microsoft doesn't do it doesn't mean it's not possible. You can certainly do this on Irix [sgi.com], for example. And I haven't looked at the Linux processor set tools, but I assume it's similar.
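        On the Linux side, the rough per-process equivalent of Irix processor sets or SunOS pbind is the sched_setaffinity() call; a minimal sketch (pinning the calling process to CPU 1, chosen arbitrarily for the example):

```c
/* Pin the calling process to CPU 1 on Linux.  Sketch only. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(1, &set);                          /* allow only CPU 1 */

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {   /* 0 = this process */
        perror("sched_setaffinity");
        return 1;
    }
    printf("now restricted to CPU 1\n");
    return 0;
}
```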
    • Windows is currently idling at between 4 and 7% of CPU on my PC. Admittedly I'm running a 2.66GHz machine, but then again, according to the processes list that CPU time is mostly being taken up by a couple of apps sitting in the background (and 1% by me typing in IE).

      Why on earth you would want to allocate an entire CPU to that, I have no idea.

      Now you might want to allocate a whole CPU to Doom3 or HL2, but I suspect they'll pretty much get that anyway, as applications are assigned to the quietest CPU, as
    • by jtshaw ( 398319 ) * on Friday April 22, 2005 @08:39AM (#12311980) Homepage
      That would be a total waste of CPU time.

      Very few applications, and OSs in particular, are busy most of the time. I don't know the exact profiling characteristics of Windows, but I do know that in Linux the kernel rarely, if ever, takes up 100% of a CPU, and never does so for a prolonged period of time.

      If you locked one CPU and made it for OS tasks only, you'd be wasting a lot of clock cycles that another application could happily use. The same would go for locking just about any application to a CPU.

      • If you locked one CPU and made that for OS tasks only you'd be wasting a lot of clock cycles that another application could happily use.

        Probably wasted in most situations.

        But I wonder whether an OS sitting on its hands ready to go in a multiple core machine might be useful in soft realtime applications that need a little improvement in latency.

      • That would be a total waste of CPU time.

        Yes, yes. This is Windows we're talking about.
  • Well... (Score:5, Interesting)

    by X0563511 ( 793323 ) * on Friday April 22, 2005 @05:57AM (#12311419) Homepage Journal
    I'm still bumming around with a sub-gigahertz chip, specifically an Athlon T-Bird. I've been out of the loop for too long; can anyone tell me the benefits of using a dual core system (and while we are at it, a 64-bit chip)? Any problems to look out for if I decide to jump on the wagon in my next upgrade?
    • Re:Well... (Score:3, Informative)

      by FidelCatsro ( 861135 )
      For the average user, to be totally honest, right now there is hardly any real use for either. Dual cores would probably help the system appear faster if the average user is switching around a lot of programs, but for the price you would pay it is not worth your time.

      64-bit? Well, unless the average user runs a massive database setup, not much yet, but it will be more useful soon in the x86 world (Athlon 64 processors, though, are excellent because of the on-board memory controller and architecture).

      For the
      • Re:Well... (Score:5, Informative)

        by tomstdenis ( 446163 ) <tomstdenis@gma[ ]com ['il.' in gap]> on Friday April 22, 2005 @06:22AM (#12311497) Homepage
        AMD64 carries more than just "bigger registers". It has more of them and the actual core is an overall improved K7 process with

        - Slightly longer scheduling buffers
        - 128-bit L1 cache bus
        - Larger instruction window (means it can feed the ALUs better when constants/etc are found)
        - more registers [and they're bigger]

        They also run cooler and take less power than their K7 brothers.

        Tom
    • Re:Well... (Score:2, Informative)

      by shyampandit ( 842649 )
      Mainly the difference would be found when running many apps at once. For example, if you are ripping songs and playing a game simultaneously then it would be faster than a single core machine. I run many programs like proxy servers, mail servers etc. for the home LAN and also use it for games. So in this situation dual core will help me run the game lag free.

      Although for the speed boost to materialize in games they will have to be coded to use both cores, so one doesn't just idle away... When more programs g
    • Re:Well... (Score:5, Informative)

      by JollyFinn ( 267972 ) on Friday April 22, 2005 @06:28AM (#12311514)
      The 64-bit is for anyone with more than 2GB of RAM, plus x86-64 gives you more registers besides being 64-bit, so it speeds up recompiled code.

      Dual core means simply you have TWO processors running. Remember the old reviews of SMP dual Celeron A systems and the like. It gives little for games, lots for certain multithreaded applications, since you have two processors running and doing things. And for multitasking, like being able to run an interactive application (Doom 3) while the system is doing some multi-hour compilation in the background.
      Anyway, it mainly keeps the system more responsive when some thread or application takes the CPU.
      To a lesser degree it also helps in other similar situations, where the CPU is tied up with something at EXACTLY the moment you would want it to deal with UI stuff.
      • Dual core means simply you have TWO processors running.

        AFAIK (and I may be wrong, someone please correct me) the Intel dual core shares one memory bus, whereas in most true dual CPU systems they don't. There may also be other bus sharing issues...

        On the other hand, SMP may be more efficient with both cores on the same chip.

        It would be very interesting to benchmark a dual Intel CPU machine against a single CPU dual-core machine running at the same frequency, etc.

    • Re:Well... (Score:2, Informative)

      by Karaman ( 873136 )

      Hi, my older PC was a T-Bird @ 850MHz with 256MB RAM and a 160GB PATA133 HDD (the CPU ran at 50 deg C)

      Now I have an Athlon 64 3000+ (233x8 = 2000MHz) (s939) with 1GB RAM and a 200GB SATA150 HDD (the CPU does not go beyond 37 deg C)

      The difference is that with the older PC I compiled and installed LinuxFromScratch in 4 days (well, I drank a lot of caffeine products),
      while when I switched to the A64 PC I did the job IN ONLY 4 Hours!

      Unfortunately I was unable to compile a stable x86_64 toolchain to compile an x86_64 Linux
      • The parent tried to show the difference in speed between a 32-bit and a 64-bit CPU by comparing an Athlon 'Thunderbird' with an Athlon 64, where they had different clock speeds (850MHz vs. 2000MHz), a significantly different amount of (possibly different speed) RAM, and different hard disks.

        This was then modded Informative.

        WTF?

        Rik
  • Well? (Score:5, Funny)

    by Anonymous Coward on Friday April 22, 2005 @06:03AM (#12311431)
    Does this mean my Windows XP machine won't pause when I put in a floppy or CD-ROM? Wow, sign me up.
    • Re:Well? (Score:5, Funny)

      by EpsCylonB ( 307640 ) <eps&epscylonb,com> on Friday April 22, 2005 @06:07AM (#12311444) Homepage
      No, but it will make your Internet faster.
    • Re:Well? (Score:3, Funny)

      by kabocox ( 199019 )
      Does this mean my Windows XP machine won't pause when I put in a floppy or CD-ROM? Wow, sign me up.

      Nope, that feature isn't scheduled until after we have 16 cores on a chip, 32GB of RAM, 10 terabytes of HD storage, and optical media is at 1 terabyte per disc. They said it was a weird hardware limit and it would require at least that much processing power for Windows XP to read a floppy or CD-ROM and do anything else. You don't even want to know what it will take for Longhorn to do that.
  • Something missing (Score:5, Interesting)

    by FidelCatsro ( 861135 ) <fidelcatsro&gmail,com> on Friday April 22, 2005 @06:04AM (#12311438) Journal
    What this test was really missing was a direct comparison to SMP systems, which for me makes the results entirely boring and expected.
    If he had shoved in a dual Opteron set-up and a dual Xeon set-up then it may have been a little more interesting; as it stands it's like stating the obvious.
    • IMO what's missing is actually testing single vs. dual cores - all else being equal.

      I'm more familiar with the tools available on Macs, but given that there are simple utilities that allow you to turn one CPU off on a dual-CPU system, I assume the same is true on the Intel side and that they would also allow one to turn off one core on a dual -core system.

      Which always makes me wonder why benchmarks that are supposedly testing one vs. two processors (or cores) don't use these tools so they can actually test
  • Anandtech (Score:5, Informative)

    by iamthemoog ( 410374 ) on Friday April 22, 2005 @06:20AM (#12311488) Homepage
    Has the new dual core Opteron up against a quad Xeon with 8MB cache, amongst many others.

    Well worth a read:

    http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2397 [anandtech.com]
    • Re: (Score:3, Informative)

      Comment removed based on user account deletion
    • Re:Anandtech (Score:2, Informative)

      by astro-g ( 548659 )
      I love how he says the only difference between the 8xx Opterons and the 1xx Opterons is the amount of validation testing each chip gets.

      Umm, what about the number of available HT channels?
      There is a reason you can't use the 1xx chips in an 8-way motherboard.
  • One thought I had... (Score:5, Interesting)

    by Kjella ( 173770 ) on Friday April 22, 2005 @06:40AM (#12311548) Homepage
    ...and I'm not quite sure if it's a good one, but for desktops:

    The foreground program has a dedicated core. If you switch programs, put the old one on the "other" core and move the new one off the "other" core. Essentially, your current program has full responsiveness (assuming you don't do things that lock up the application itself), no context switches, no other programs that can run some weird blocking call (on a single core machine it certainly looks that way at least, especially with CD-ROM operations).

    Granted you could end up with your fg processor being idle most of the time. But the way many people work with the computer, the foreground program is the ONLY time-critical application.

    Kjella
    • I think you hit on one of the biggest problems with that. What if the foreground app doesn't need any real horsepower, but others do? Especially if you have real-time processes (video/audio capture...) going that might get CPU starved.

      I think a much better way to handle making the foreground app more responsive would simply be to raise its priority level. That way it only hogs the CPU if it really has something to do.
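      A sketch of the "just raise its priority" idea, assuming a POSIX-style system and the setpriority() call; the -5 nice value is an arbitrary example, and raising priority above the default normally requires root:

```c
/* Bump the current process's priority instead of reserving a whole core.
 * Sketch only: -5 is an arbitrary "a bit more important" nice value, and
 * negative nice values normally require root privileges. */
#include <sys/resource.h>
#include <stdio.h>

int main(void)
{
    if (setpriority(PRIO_PROCESS, 0, -5) != 0) {   /* 0 = this process */
        perror("setpriority");
        return 1;
    }
    printf("priority raised; the scheduler favours this process only when it is busy\n");
    return 0;
}
```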
  • by Anonymous Coward
    Or do I have to wait for Service Pack 3?

    Yours,

    Gator Fan.
    • Actually multi-core systems will make it harder for a 'user' to notice they have spyware... since a 'slowing down' of the system can often be quite noticeable when spyware has hijacked you.
      With dual cores, spyware could potentially take up the better part of a core's processing power, and the luser would never be the wiser...
  • Sluuuurp..... (Score:3, Interesting)

    by Diakoneo ( 853127 ) on Friday April 22, 2005 @07:00AM (#12311613)
    That last page raised my eyebrows. 291 watts under load - that's some serious power draw compared to what I'm used to. And that had to be kicking out some serious heat, too.
    Anybody know what the draw is for a 4x Xeon system? I'd be interested in seeing how they compare.
    I wonder at what point the facilities people will want to use the server farm to heat the building, too. A weird convergence: the PC world is becoming more like the old mainframe world.
    • I've crossed Intel designs off my list for that reason. If power is your concern, the Opteron dual cores will be available in low voltage designs that draw as little as 30 watts.

      Or you could wait for a dual core Pentium M.

  • by Aqua OS X ( 458522 ) on Friday April 22, 2005 @07:01AM (#12311616)
    Who the hell runs benchmarks with Firefox and iTunes?

    If you ask me, the people that desperately need the ability to multitask are folks in the creative industry. Every 5 minutes they bounce back and forth between massive applications rendering huge files.

    Nothing sucks more than opening a 400dpi Photoshop document and not having InDesign respond since your single core CPU is being bogarted.

    SMP is probably the only reason I still find my crusty old dual 450 G4 useful. It does things slowly, but it doesn't "feel" slow. If something is taking its sweet ass time, I can usually do something else without waiting years for windows and menus to draw.
    • Surely those kinds of apps are more memory bound than CPU bound?
    • by Jameth ( 664111 ) on Friday April 22, 2005 @08:30AM (#12311921)
      You're dead on accurate with that one. I want a benchmark that will tell me what kind of performance I can expect if I have a logo I am editing in Illustrator that I open in Photoshop to clean up a bit and then insert into a document in InDesign while I'm trying to make it look similar in the webpage I'm putting together in TextPad, viewing both final documents through Acrobat, IE, FireFox, and Safari, all at the same time. (While listening to music.)

      And no, I'm not being sarcastic. Although I rarely do all of that at once, it has been known to happen. And don't even get me started about what happens when I have something compiling behind all of that. I'm just thankful, in a way, that since I don't do 3D work I'm not tossing Maya into that mix.
    • Who the hell runs benchmarks with Firefox and iTunes?

      Someone who's paid by Intel, in money and/or product, to write a review whose conclusions match Intel's marketing materials?

      That's not a guess, it happens pretty much every day. Some people are naive enough to believe that a website like this can make tons of cash on Google ads and a couple of banners, but not folks that have seen how trade press reviews work.
    • Firefox might not be such a bad benchmark. Go to a bunch of Japanese pages with status bar scrollers (in Japanese), then open up 20+ tabs. I do this every day, and the CPU usage on Linux can go up to a sustained 70-90% or more. If gcc is working in the background, the system can really lack responsiveness (and I'm on an Athlon XP 3000 with 1GB of RAM). Out of all the apps I use, Firefox and MAME take the cake in CPU usage.
    • I think the biggest thing driving the need for dual-core CPUs is that multimedia-editing programs are very CPU-intensive and could benefit from a dual-core CPU.

      The Adobe Photoshop CS example you cited is a good one; imagine being able to use both CPU cores to dramatically reduce rendering times for processing high-resolution images in Photoshop CS. Also, video-editing programs such as Adobe Premiere and its competitors could also benefit from a dual-core CPU, given how mu
    • Who the hell runs benchmarks with Firefox and iTunes?

      Someone who wants to see how they handle the types of apps most people actually use?

      Nothing sucks more than opening a 400dpi Photoshop document and not having InDesign respond since your single core CPU is being bogarted.

      Or your music drops out because you just clicked the link to a web page full of Flash animations and tables.

      Or a "high priority task" like inserting a disk blocks the CPU.

      Even for home use, I prefer what on paper would be a slower
  • by pmadden ( 209229 ) on Friday April 22, 2005 @07:17AM (#12311658) Homepage Journal
    I'll probably get flamed for this....

    Increased performance in CPUs has normally come from faster clock rates and more complex circuitry. As we all know, Intel (and the others) have bailed out on faster clocks. If you add more complex circuitry, the logic delay increases--to keep the clock rate up, you have to burn power.

    What does this mean? The old-fashioned ways of getting more performance are dead--if you try it, the chip will burn up. It's easier to build two 1X MIP cores than one 2X MIP core. Like it or not, dual cores are the only solution; with transistor scaling, we'll have to go to 4, 8, and 16 cores in the next few years. IBM went dual-core with the PowerPC in 2001. Intel, AMD, and Sun are just following suit.

    Not bummed out yet? Massive parallelism works well for people doing scientific computing, but for the average joe, it's useless. I don't care how fast a processor is--I usually have one task that will crush it--but rarely do I have two time-critical things to worry about at the same time. In the article referenced, they had to work hard to find things that would test the dual-core features. Parallel computing and multiple cores sounds great. History buffs will know about Thinking Machines, Meiko, Kendall Square, MasPar, NCUBE, Sequent, Transputer, Parsytec, Cray, and so on.... Not a happy ending.

    So.... we can't get more single processor performance without bursting into flames. And parallel machines are only useful to a small market. IMO, it's gonna get grim. (And before anyone says new paradigm of computing to take advantage of the parallel resource, put down the crack pipe and think about it--we've been waiting for that paradigm for about 40 years. Remember occam? I thought not.)
    • Grim? Hardly. Most users don't even need a 1.5GHz processor. We've got plenty of older boxes around here (733 MHz Compaq P3 systems) which are running the latest office software quite happily.

      It's only going to get grim for gamers - well, not even then. I'm sure Carmack and company will figure out a way of taking advantage of multiple cores and multiple GPUs in their future generations of games.
      • Yeah, actually I think this might become a big boon for gamers (such as myself). The stuff that makes games interesting to me is AI (which is a very wide field, certainly), and I think of such things as finally being able to use reasoning engines (F.E.A.R. is the first game I know of that uses one), better pathfinding (AI can now use focused D* instead of cheating with A*), etc., all of which will finally get some tender loving care.

        With only one CPU, AI was always the ugly step child. "Yeah, sure.. we ca

        • There's this interview with Tim Sweeney [anandtech.com], the leading developer behind the Unreal 3 engine.
          They're working on a multithreaded engine for Unreal 3 - exciting stuff.
          Like you said, AI is a logical chunk of processing that should be on a separate thread. Other logical chunks he mentions are physics, animation updates, the renderer's scene traversal loop, sound updates, and content streaming.

          So at least one multi-threaded game engine is in the pipe. This is good because we don't really have a chicken and egg
    • Maybe that means designers of word processors and the like (average Joe software) will have to be content with 2 GHz, and only designers of games, renderers, and similar programs where parallelization is possible can use more than that. Wouldn't be the worst development, IMO.
    • Not bummed out yet? Massive parallelism works well for people doing scientific computing, but for the average joe, it's useless. I don't care how fast a processor is

      Speaking as a physicist who used 5 CPU-years last week alone, just because it may not be "useful for the average joe" now doesn't mean it's not something worthwhile - people like me will really be able to use dual-core chips (as the parent states) and will buy these things now.
    • If they can't ramp up hardware any more, the next revolution in computing will not be faster hardware; it will be cleaner, more efficient code. Personally, I think that there's a lot of potential left, if not with silicon, then with diamond wafer chips or optical computing.
    • by ciroknight ( 601098 ) on Friday April 22, 2005 @11:33AM (#12313657)
      Well, you're right about what you were saying; those words would attract a good deal of flames. But everything has its place and there's a place for everything. Lemme explain.

      Clockspeed is the easiest race, if you want to think of the CPU industry as a continuous race. All you have to do to crank out a faster CPU is continually shrink the die (because smaller gates flip faster), and make sure that everything is arranged neatly on the chip. When you hit thermal walls like we are now, it's simply time to reduce the voltage, and shrink the die again.

      The only problem is, Intel's flagship for doing this now happens to be one with a lot of baggage. The Netburst core design pretty much dictates that there be at least two of everything, and both of them should be running all the time, especially if Hyperthreading is on. This effectively doubles your transistor count (though in reality it is less than that; there's only a single copy of bus administration, micro-op decode, etc). Keeping them on all of the time also helps drive up the heat production.

      But here's a truth; their CPU clock game could still be running if they would like it to. The Pentium-M is still running extremely cool. Shrink it to a 90 micron core, use SOI, strained silicon, more of their substrata magic, and a healthy dose of SpeedStep, and you could see a Pentium-M hitting 3.5GHz clockspeeds that would put both the Athlon 64 and the Pentium 4 to shame. Sadly, to build this processor is to admit defeat with the Netburst core, and Intel's being very stubborn.

      On the other hand, I believe AMD's got some magic they haven't used yet up their sleeve. Though honestly I couldn't tell you what it is. There has to be a reason they aren't playing up the Turion more other than the fact it isn't scaling down as far as the Pentium-M can. I'm also surprised they're being so slow about ramping their clockspeeds, but this is probably just so their thermal profiles look superior to Intel's. A 3GHz Opteron could easily decimate a dual Xeon setup, but at the same time would probably produce just as much heat, and I think AMD would see that as a defeat.

      • ...happens to be one with a lot of baggage

        I think I need some time away from the computer; my brain initially interpreted that word as "babbage".
    • I don't care how fast a processor is--I usually have one task that will crush it--but rarely do I have two time-critical things to worry about at the same time.

      Maybe that's because you've been trained not to run two intensive applications at the same time. If your subconscious is telling you not to launch a compile job until your DVD transcode is done, then you probably wouldn't get a lot out of an SMP system. Break that habit, though, and you might like what you find.

    • Interesting issues, but I disagree with you on some points.

      First, there's really nothing new about multiple cores IMO. Instead of "core" try the term "execution unit". CPUs used to have a single execution unit. Then something like an Integer Unit was added. Remember the days of math coprocessors? They soon got moved into the CPU by adding more, and more specialized, Integer Units and a Floating Point Execution Unit. Now you have CPUs that have multiple execution units, to the point of having multiple Integ
    • Not bummed out yet? Massive parallelism works well for people doing scientific computing, but for the average joe, it's useless. I don't care how fast a processor is--I usually have one task that will crush it--but rarely do I have two time-critical things to worry about at the same time. In the article referenced, they had to work hard to find things that would test the dual-core features.

      I had a small epiphany reading a review of one of these dual cored setups that released this week. One of the review

    • Actually, multiple processors affect the average Joe too.

      Most people will benefit, albeit only slightly, simply from having the OS get its own core (if it so decides to schedule itself). Your video game can now have a core all to its graphics pipeline, while the kernel has a different core for handling disk and network data, and your mail client checking for new messages will hardly slow down your video game at all.

      Average Joe plays video games with his PC. Average Joe will like multi-core CPUs becaus
    • Not bummed out yet? Massive parallelism works well for people doing scientific computing, but for the average joe, it's useless. ...rarely do I have two time-critical things to worry about at the same time.

      In the 2005 Intel keynote speech, [taoriver.net] distributed computation expert Justin Rattner [taoriver.net] noted that "without language support, this isn't going to work."

      Pretty much all apps can make use of parallel execution. If you have to interpret a big chunk of data, you can usually break it into segments, and process the
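      The "break it into segments" pattern the parent describes, sketched with pthreads; the segment count and the byte-counting job are made-up stand-ins for whatever per-segment work an app actually does:

```c
/* Generic "split a big buffer into segments, one thread per segment" sketch.
 * Counting 'x' bytes stands in for any per-segment work. */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define NSEG 4

struct seg { const char *p; size_t len; long count; };

static void *worker(void *arg)
{
    struct seg *s = arg;
    long c = 0;
    for (size_t i = 0; i < s->len; i++)
        if (s->p[i] == 'x')
            c++;
    s->count = c;
    return NULL;
}

int main(void)
{
    static char buf[1 << 20];
    memset(buf, 'x', sizeof buf);            /* fake "big chunk of data" */

    struct seg segs[NSEG];
    pthread_t tids[NSEG];
    size_t chunk = sizeof buf / NSEG;

    for (int i = 0; i < NSEG; i++) {
        segs[i].p = buf + (size_t)i * chunk;
        /* the last segment also picks up any remainder */
        segs[i].len = (i == NSEG - 1) ? sizeof buf - (size_t)i * chunk : chunk;
        segs[i].count = 0;
        pthread_create(&tids[i], NULL, worker, &segs[i]);
    }

    long total = 0;
    for (int i = 0; i < NSEG; i++) {
        pthread_join(tids[i], NULL);
        total += segs[i].count;
    }
    printf("found %ld 'x' bytes\n", total);
    return 0;
}
```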
  • Dual Core CPU's (Score:2, Interesting)

    by antivoid ( 751399 )
    I feel a good use for Dual-core systems is to put the OS on one core, including all explorer.exe instances and threads.

    The operating system should employ a smart system of monitoring CPU usage per thread and move the high-usage threads to the other core.

    I wonder though, on a slightly different topic - heat dissipation: nobody seems to talk about it, but two cores mean twice as much heat. How the hell do they do away with the heat? It's disappointing, but they might be speedstepping/downclocking the cores
    • Re:Dual Core CPU's (Score:2, Informative)

      by ThaReetLad ( 538112 )
      Actually several of the articles do mention heat, and the answer is this: total thermal output from the top end dual core Opteron is no more than for the top end single core processor (95 watts max). This has been achieved by using more energy-efficient (and slightly lower performance) transistor designs in certain areas. AMD appears not to be doing any thermal throttling either.
    • I feel a good use for Dual-core systems is to put the OS on one core, including all explorer.exe instances and threads.

      I've read this sentiment several times now since the dual-core craze came about. So long as your kernel and web/file browser tasks get scheduled, why should you care which core they're running on?

  • by Craster ( 808453 )
    It's worth noting that each scenario consists of only desktop applications, and it'd still be interesting to see some common server benchmarks, such as a database or web server.


    Except that this is a desktop processor that won't be shipping in server systems. So in actual fact it's worth noting that the entire point is that each scenario consists of only desktop applications.
  • As far as I can tell, this is basically no different from a dual processor system, except that you are probably going to get a little less performance out of the dual core than out of two separate processors. In return, you are going to save a bit on hardware (sockets, etc.) compared to a true dual processor system.

    All these questions about whether Windows will usurp one of the cores or how to schedule the two cores seem positively bizarre, given that the answer is no different from dual processor machine
    • As far as I can tell, this is basically no different from a dual processor system, except that you are probably going to get a little less performance out of the dual core than out of two separate processors.

      I think that's why the Intel chip is a desktop one. To a single user, it's cheaper to get a multicore system than a real two-chip machine; but for servers it's not the real thing.

      And because of Intel's FSB architecture, it's not really a way to turn 2-way designs into a 4-way. The bus design makes

  • In Windows, from the Task Manager "Processes" tab just right-click on the process and select "Set Affinity..." I'm running a dual processor right now and I can force any process to run on either CPU. This is very useful for multi-tasking, not just SMP programs.
