Dual Cores Taken for a Spin in Multitasking
Vigile writes "While dual cores are just now starting to hit the scene from processor vendors, PC Perspective has taken the first offering from Intel, the Extreme Edition 840, through its paces in single- and multi-tasking environments. It seems that those two cores can make quite a difference if you have as many applications open and working as the author does in the test." It's worth noting that each scenario consists only of desktop applications; it'd still be interesting to see some common server benchmarks, such as a database or web server.
Newsflash... (Score:5, Funny)
(Dual core is the same as an SMP system, except the cores can communicate a bit faster with each other)
Re:Newsflash... (Score:2)
Cue loads of stupid people saying this technology is crap because most apps aren't multithreaded and the clock speeds are lower than their current CPU.
Re:Newsflash... (Score:3, Informative)
It's another fact that it's easiest for humans to analyze only one problem at a time. So the most straightforward way to handle any computing task is to crunch at it linearly from beginning to end.
Unfortunately, gains in raw CPU speed have always come slower than the demand for number-cru
Re:Newsflash... (Score:3, Insightful)
That depends heavily on the problem. If your problem is highly parallelizable and the application that solves it has been written (correctly) in a multithreaded way, then two CPUs will perform better. (As you say, it doesn't scale in a linear way.)
Of course, you might just say that a parallelizable problem is not one problem, but many small problems that need to be solved separately ;-)
Re:Newsflash... (Score:2)
Yes, it's hard now, but it won't always be hard; we will develop tools and methodologies that help us, just as we have before.
Re:Newsflash... (Score:2, Troll)
I took a parallel programming course in university. It was hard. Most students didn't get it. I got an A+ in the course. The hard part is that you really have to forget everything you have learned about programming on a single processor. You have to use completely different algorithms. All the regular algorithms follow a straight line. Paral
Re:Graphics (Score:2)
Re:Graphics (Score:2)
Not all data set operations can be split up. If your operation can be split, then great. Clearly, when you see operations running in 1/14th the time on a 16-CPU machine, your data set needs operations that scale well. At some point you will hit a wall, though. Eventually adding more processors will make no difference at all (unless you also add data).
Brute-force encryption cracking scales to 2^(key size) processors (perhaps a little less, because you have to get code and keys to the processors), but after that adding
Re:Graphics (Score:2)
Re:Newsflash... (Score:3, Insightful)
I'm more interested in what IBM's Cell processor can do. While some problems are definitely single-threaded by nature, the majority are not. I have a GIS application that could definitely benefit from as many processors as I can throw
Re:Newsflash... (Score:2)
Re:Newsflash... (Score:2)
Some people really just don't "get it". As a relatively early (1993) adopter of Wintel SMP, I can tell you that a multithreaded OS combined with some multithreaded apps can be significantly faster than single processor systems. (I used SMP to improve throughput in Photoshop, and rendering speed in 3D-Studio.)
Since 1993, I have been doing BYO with computers, and have found that, except for gaming, SMP (combined with SCSI disks) has proved to be a pleasant experience. Multitasking is limit
Re:Newsflash... (Score:2)
Cue loads of stupid people saying this technology is crap because most apps aren't multithreaded and the clock speeds are lower than their current CPU.
Please excuse my ignorance, but I've never really 'got' SMP. How exactly does it work in a typical desktop environment? How do jobs get scheduled across the two separate cores/CPUs in such a way that it maximizes the available resources? Doesn't there need to be some kind
Re:Newsflash... (Score:5, Insightful)
I know this article is talking about Intel dual core chips, but for well-designed CPUs with integrated memory controllers (Power5, Ultrasparc IV, Opteron), the difference between a single dual-core CPU and two single-core CPUs is significant.
On chips with built-in memory controllers, as you increase the number of cores on a chip, the memory bandwidth per core decreases; as you increase the number of chips in a system, though, the memory bandwidth per core remains the same while the number of cores increases.
That can amount to a big performance difference when running memory-intensive jobs. (For example, two cores sharing one dual-channel DDR400 controller split its ~6.4 GB/s between them, while two single-core chips each keep the full 6.4 GB/s.)
Intel seem to be really losing the plot here at the moment. In multichip configurations, Intel's memory bandwidth already sucks compared to Opteron. Multicore per chip is only going to make it FAR worse.
Re:Newsflash... (Score:2, Insightful)
Seriously, unless your application can run in the cache on the Intel parts, the AMD is gonna win hands down when running at the same clock rate, which translates pretty closely to the same power consumption. AMD will still be a tad lighter on power consumption just because the stuff is packed more tightly, even though it has more active components. Equa
Re:Newsflash... (Score:2)
The fastest AMD chip for the near future is 2.6GHz, while the slowest P4 for the last year or so is 2.8GHz. If you want to compare based on clock speed, the Pentium-M (aka P3-v2) is a much fairer comparison. It has been a well-known fact since the P4's launch that the P4's IPC (instructions per clock) sucks compared to the P3's. (And even more so compared to the PM's.)
I have both an A64-3000+ and a P4-3G. My typical workloads usually contain a number of non-trivial tasks. While my
Re:Newsflash... (Score:2, Insightful)
Even in a multi-memory-controller system the same physical memory is shared, so there has to be some performance hit when running more than one CPU; I doubt the actual memory bandwidth per chip will be the same. Or is there some architec
Re:Newsflash... (Score:3, Informative)
But yes, you're right: processes accessing memory on a different processor will suffer a latency (and, to some extent, bandwidth) hit. A well-designed OS will help to mitigate it somewhat, but it's one of the reasons that CPUs don't scale linearly.
Re:Newsflash... (Score:2)
shared FSB (intel) or not (AMD); other benchmarks (Score:5, Informative)
For benchmarks relating to serious DB and web use, see this review by Anand Lal Shimpi: http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2397 [anandtech.com]
or these two at FiringSquad:
http://www.firingsquad.com/hardware/amd_dual-core_opteron_875/ [firingsquad.com]
and http://www.firingsquad.com/hardware/colfax_dual_opteron/ [firingsquad.com]
Re:shared FSB (intel) or not (AMD); other benchmar (Score:3, Informative)
Re:Newsflash... (Score:2)
Re:Newsflash... (Score:2)
However, I believe we will see a change in development practices. Applications will become more multi-threaded as the number of CPUs per die increases. I think AMD have already announced that eventually they will only sell multi-core CPUs (in this market).
Of course, this means that programmers will have to learn how to write parallel code
Re:Newsflash... (Score:3, Interesting)
Each Opteron has a dual bus memory controller on board (granted, only DDR400)... but as I increase the number of CPUs, the number of memory busses increases.
The 4 CPU boxes I'm working on have 8 independent DDR400 memory busses.
The only obvious way to get your memory bandwidth to scale is to have a memory controller per CPU (or, even better, on the CPU itself).
Re:Newsflash... (Score:5, Informative)
So the obvious answer would be to move one of the processes to the other core. However, this isn't trivial. You either have one scheduler per core or one scheduler per operating system. (You can't easily have a single thread sent to both cores - if both cores run the same thing at the same time, there will be chaos.)
If you have one scheduler per core, then the scheduler trying to get rid of the thread has to synchronise with the other core: it waits for the other scheduler to come into context, then tells it to add the new process. Obviously there is a fair bit of overhead, and if my memory serves me correctly, each core in the current chip has its own cache - so all the stuff which was cached has to be written back to memory (since it is in the wrong cache), and now there is nothing in the new cache, making every memory access slow for the next little while. End result: you can transfer a thread between CPUs, but it is costly.
It is possible instead to have a single scheduler which just dispatches threads to each core as each core runs it. The big issue here is making the scheduler threadsafe - both CPUs could run the scheduler at the same time, so you have to make sure they don't crap on each other. This is a problem we have already solved with common synchronisation primitives. But if you just lock the list of threads to run (*), then you will get a whole lot of CPU time wasted just waiting to run the scheduler. It might be acceptable for 2 cores, but it doesn't scale at all.
(*) You may realise (just as I realised) that a scheduler is more than just a list of threads to run (it is typically implemented as a couple of lists for each priority). The same problem still occurs with more than one list of threads, it is just a bit harder for me to express (proof by bad English skills).
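To make the locking problem concrete, here's a rough sketch of that globally locked run queue (C, with a pthreads mutex standing in for a kernel spinlock; the structures are made up for illustration):

    #include <pthread.h>
    #include <stddef.h>

    struct thread_ctl {
        struct thread_ctl *next;    /* next runnable thread */
        /* ... saved registers, priority, and so on ... */
    };

    static struct thread_ctl *run_queue;    /* shared by every core */
    static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Called by each core when it needs something to run. Every core
     * serialises on rq_lock, so as cores are added they spend more and
     * more time waiting here instead of doing useful work. */
    struct thread_ctl *pick_next_thread(void)
    {
        pthread_mutex_lock(&rq_lock);
        struct thread_ctl *t = run_queue;
        if (t)
            run_queue = t->next;
        pthread_mutex_unlock(&rq_lock);
        return t;
    }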
Finally, I'm expecting someone to tell me that I'm wrong about something I just said. That person is probably correct. My only experience with this stuff is a 3rd year undergrad operating systems course where we played around with OS161 (a toy operating system basically). But, hopefully the end conclusion will be the same: twice the number of processors won't equal twice as much performance, and it is tough to get a fast algorithm that will scale.
Re:Newsflash... (Score:2)
You know, in Windows you can hit Ctrl+Alt+Del to bring up the Task Manager, go to the Processes tab, right-click on a process, and go to "Set Affinity".
Just sayin'...
Re:Newsflash... (Score:2)
Re:Newsflash... (Score:2)
The obvious answer is that the situation isn't a whole lot different from dual processor machines...
Re:Newsflash... (Score:4, Informative)
FYI, it's called Gang Scheduling and has been described for quite some time.
A matter of time. (Score:5, Insightful)
Windows, for example. What if the next version of Windows requires a dual-core processor to be usable? You know... Windows gets one core to idle at 80% of its capacity... and spills over into the other core when loading a text file.
If things stayed the way they were now, and the entire other core could be kept separate from the OS and used for gaming/other applications, it would be a great idea.
But guess what.
Re:A matter of time. (Score:2)
However, you can constrain an application to a particular CPU (in Windows at least): Task Manager, Set Affinity. That's a great way of preventing an application from using your other CPU. If you want a CPU to run a game only, you would have to go through the entire process list and set the other processes to CPU 1 (or write an app to do that), and then set your game process to CPU 2.
I think you'll get m
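For what it's worth, that helper app is one Win32 call per process; here's a minimal sketch that pins the current process (error handling kept short):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Restrict this process to CPU 0 (bit 0 of the mask). A real
         * helper would walk the process list and set each mask. */
        if (!SetProcessAffinityMask(GetCurrentProcess(), 0x1)) {
            fprintf(stderr, "SetProcessAffinityMask failed: %lu\n",
                    (unsigned long)GetLastError());
            return 1;
        }
        printf("Pinned to CPU 0.\n");
        return 0;
    }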
Re:A matter of time. (Score:2)
What'd'ya mean "any form of pre-emptive OS"? Just because Microsoft doesn't do it doesn't mean it's not possible. You can certainly do this on Irix [sgi.com], for example. And I haven't looked at the Linux processor set tools, but I assume it's similar.
Re:A matter of time. (Score:2)
Why on earth you would want to allocate an entire CPU to that, I have no idea.
Now you might want to allocate a whole CPU to Doom3 or HL2, but I suspect they'll pretty much get that anyway, as applications are assigned to the quietest CPU, as
Re:A matter of time. (Score:2)
And the browser I use makes _no difference_ to the truth of what I was saying.
Re:A matter of time. (Score:4, Insightful)
Very few applications, and OSes in particular, are busy most of the time. I don't know the exact profiling characteristics of Windows, but I do know that in Linux the kernel rarely, if ever, takes up 100% of a CPU, and never does so for a prolonged period of time.
If you locked one CPU down for OS tasks only, you'd be wasting a lot of clock cycles that another application could happily use. The same would go for locking just about any application to a CPU.
Re:A matter of time. (Score:2)
If you locked one CPU down for OS tasks only, you'd be wasting a lot of clock cycles that another application could happily use.
Probably wasted in most situations.
But I wonder whether an OS sitting on its hands, ready to go, on a multiple-core machine might be useful in soft-realtime applications that need a little improvement in latency.
Re:A matter of time. (Score:2)
Yes, yes. This is Windows we're talking about.
Well... (Score:5, Interesting)
Re:Well... (Score:3, Informative)
64-bit doesn't do much for the average user unless they run a massive database setup, but it will become more useful soon in the x86 world (Athlon 64 processors are excellent regardless, because of the onboard memory controller and architecture).
For the
Re:Well... (Score:5, Informative)
- Slightly longer scheduling buffers
- 128-bit L1 cache bus
- Larger instruction window (meaning it can feed the ALUs better when constants etc. are found)
- More registers [and they're bigger]
They also run cooler and take less power than their K7 brothers.
Tom
Re:Well... (Score:5, Informative)
No, I'm just a happy loyal user. I have both a Prescott P4 3.2GHz and an AMD64 Newcastle 2.2GHz...
For what I do [building software] the AMD64 smokes the P4
The AMD approach is just common sense. Be more efficient at what you do and gradually do it faster. Intel went the market route and said "slow clockrate is for pansies!".
So you end up with a CPU that has a higher clock rate but doesn't win, because the efficiency is too low.
AES on my AMD64 runs around 260 [or so] cycles/block. On the P4 with Intel's compiler I get around 410 cycles/block. If you scale the P4's time from 3.2GHz down to 2.2GHz, that's still effectively 281 cycles [at 2.2GHz]. Doesn't seem like much, but keep in mind that to get this speed they had to draw more power and run at a higher clock rate.
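(Cycles-per-block numbers like these are usually measured with the CPU's timestamp counter. A rough sketch of the method - aes_encrypt_block here is a hypothetical stand-in for the real cipher call:)

    #include <stdint.h>

    /* Read the x86 timestamp counter. */
    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;
        __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    /* Hypothetical stand-in for the real cipher routine. */
    void aes_encrypt_block(const unsigned char *key, unsigned char *block);

    uint64_t cycles_per_block(const unsigned char *key,
                              unsigned char *block, unsigned trials)
    {
        uint64_t start = rdtsc();
        for (unsigned i = 0; i < trials; i++)
            aes_encrypt_block(key, block);
        return (rdtsc() - start) / trials;   /* average cycles/block */
    }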
I did a benchmark a week ago where I built LibTomCrypt with and without hyperthreading, and it took the Prescott with hyperthreading at 3.2GHz to even come close to matching the AMD64's speed. That's only on ~45,000 lines of code.
Now multiply that by say five or ten to get a larger project.
I'm not saying the Prescott isn't a neat design. Overall it's efficient enough to be useful. Just that the AMD64 eats its breakfast and spanks its mother is all I'm saying.
Tom
Re:Mod parent offtopic (Score:2)
Tom
..infact you misunderstood me (Score:2)
Not offtopic, no; as grandparent said
Re:Well... (Score:2, Informative)
Although for the speed boost to materialize in games, they will have to be coded to use both cores, so one doesn't just idle away... When more programs g
Re:Well... (Score:5, Informative)
Dual core simply means you have TWO processors running. Remember old reviews of SMP dual Celeron A systems and the like. It gives little for games, lots for certain multithreaded applications, since you have two processors running and doing things. And for multitasking - like being able to run an interactive application (Doom 3) while the system is doing some multi-hour compilation in the background.
Anyway, it mainly keeps the system more responsive when you have some thread or application taking up the CPU.
To a lesser degree it also helps in other similar situations, where the CPU is tied up with something at EXACTLY the same moment you would want it to deal with UI stuff.
Re:Well... (Score:2)
AFAIK (and I may be wrong - someone please correct me) the Intel dual core shares one memory bus, whereas most true dual-CPU systems don't. There may also be other bus-sharing issues...
On the other hand, SMP may be more efficient with both cores on the same chip.
It would be very interesting to benchmark a dual Intel CPU machine against a single CPU dual-core machine running at the same frequency, etc.
Re:Well... (Score:2, Informative)
Hi, my older PC was a T-Bird @ 850MHz with 256MB RAM and a 160GB PATA133 HDD (the CPU ran at 50 deg C).
Now I have an Athlon64 3000+ (233x8 = 1864MHz) (s939) with 1GB RAM and a 200GB SATA150 HDD (the CPU does not go beyond 37 deg C).
The difference is that with the older PC I compiled and installed Linux From Scratch in 4 days (well, I drank a lot of caffeinated products),
while when I switched to the A64 PC I did the job IN ONLY 4 hours!
Unfortunately I was unable to compile a stable x86_64 toolchain to compile an x86_64 Linux
Re:Well... (Score:2)
This was then modded Informative.
WTF?
Rik
Re:Uber amounts of RAM in 64 bit (Score:2)
About Linux, I'm not sure...but someone else might help out with that question.
In practice, 512MB is comfortable for typical desktop use. My private PC has 1GB, and I still count that as a luxury.
Re:Uber amounts of RAM (Score:2)
Furthermore, in massively multi-threaded applications, you can run out of virtual address space for the prog
Well? (Score:5, Funny)
Re:Well? (Score:5, Funny)
Re:Well? (Score:3, Funny)
Re:Well? (Score:3, Funny)
Nope, that feature isn't scheduled until after we have 16 cores on a chip, 32GB of RAM, 10 terabytes of HD storage, and optical media at 1 terabyte per disc. They said it was a weird hardware limit, and it would require at least that much processing power for Windows XP to read a floppy or CD-ROM and do anything else. You don't even want to know what it will take for Longhorn to do that.
Re:Well? (Score:2)
My personal favorite is the "Saving your settings" wait upon shutdown. After you've installed a few apps and run the system for a while, this can take a few MINUTES.
The CD-ROM thing the GP mentioned is also hella annoying. Floppies aren't as bad as they used to be under the 9x versions of Windows, though (want to format a floppy? can't do ANYTHING else!).
Re:Well? (Score:2)
Something missing (Score:5, Interesting)
If he had shoved in a dual Opteron setup and a dual Xeon setup then it might have been a little more interesting, though as it stands it's like stating the obvious.
Re:Something missing (Score:2)
I'm more familiar with the tools available on Macs, but given that there are simple utilities that allow you to turn one CPU off on a dual-CPU system, I assume the same is true on the Intel side, and that they would also allow one to turn off one core on a dual-core system.
Which always makes me wonder why benchmarks that are supposedly testing one vs. two processors (or cores) don't use these tools so they can actually test
Anandtech (Score:5, Informative)
Well worth a read:
http://www.anandtech.com/cpuchipsets/showdoc.aspx
Re: (Score:3, Informative)
Re:Anandtech (Score:2, Informative)
Umm, what about the number of available HT channels?
There is a reason you can't use the 1xx chips in the 8-way motherboard.
One thought I had... (Score:5, Interesting)
The foreground program has a dedicated core. If you switch programs, the old one is put on the "other" core and the new one is moved off the "other" core. Essentially, your current program has full responsiveness (assuming you don't do things that lock up the application itself): no context switches, no other programs that can run some weird blocking call (on a single-core machine it certainly looks that way at least, especially with CD-ROM operations).
Granted, you could end up with your fg processor being idle most of the time. But the way many people work with the computer, the foreground program is the ONLY time-critical application.
Kjella
Re:One thought I had... (Score:2)
I think a much better way to make the foreground app more responsive would simply be to raise its priority level. That way it only hogs the CPU if it really has something to do.
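(On Windows that's a single API call; a minimal sketch, assuming you already have a handle to the foreground process:)

    #include <windows.h>

    /* Bump a process above the normal scheduling class, so it wins the
     * CPU when it has work to do but doesn't starve everything else. */
    void boost_foreground(HANDLE proc)
    {
        SetPriorityClass(proc, ABOVE_NORMAL_PRIORITY_CLASS);
    }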
Will my Spyware support these? (Score:2, Funny)
Yours,
Gator Fan.
Re:Will my Spyware support these? (Score:2)
With dual cores, spyware could potentially take up the better part of a core's processing power, and the luser would never be the wiser...
Sluuuurp..... (Score:3, Interesting)
Anybody know what the draw is for a 4x Xeon system? I'd be interested in seeing how they compare.
I wonder at what point the facilities people will want to use the server farm to heat the building, too. A weird convergence: the PC world is becoming more like the old mainframe world.
Re:Sluuuurp..... (Score:2)
Or you could wait for a dual core Pentium M.
How 'bout some Adobe CS benchmarks? (Score:5, Insightful)
If you ask me, the people who desperately need the ability to multitask are folks in the creative industry. Every 5 minutes they bounce back and forth between massive applications rendering huge files.
Nothing sucks more than opening a 400dpi Photoshop document and not having InDesign respond, since your single-core CPU is being bogarted.
SMP is probably the only reason I still find my crusty old dual-450 G4 useful. It does things slowly, but it doesn't "feel" slow. If something is taking its sweet-ass time, I can usually do something else without waiting years for windows and menus to draw.
Re:How 'bout some Adobe CS benchmarks? (Score:2)
Re:How 'bout some Adobe CS benchmarks? (Score:4, Funny)
And no, I'm not being sarcastic. Although I rarely do all of that at once, it has been known to happen. And don't even get me started about what happens when I have something compiling behind all of that. I'm just thankful, in a way, that since I don't do 3D work I'm not tossing Maya into that mix.
Re:How 'bout some Adobe CS benchmarks? (Score:2)
Someone who's paid by Intel, in money and/or product, to write a review whose conclusions match Intel's marketing materials?
That's not a guess; it happens pretty much every day. Some people are naive enough to believe that a website like this can make tons of cash on Google ads and a couple of banners, but not folks who have seen how trade-press reviews work.
Re:How 'bout some Adobe CS benchmarks? (Score:2, Informative)
Multimedia programs could use dual-core CPU. (Score:2)
The Adobe Photoshop CS example you cited is a good one; imagine being able to use both CPU cores to dramatically reduce rendering times when processing high-resolution images in Photoshop CS. Also, video-editing programs such as Adobe Premiere and its competitors could benefit from a dual-core CPU, given how mu
Re:How 'bout some Adobe CS benchmarks? (Score:2)
Someone who wants to see how they handle the types of apps most people actually use?
Nothing sucks more than opening a 400dpi Photoshop document and not having InDesign respond, since your single-core CPU is being bogarted.
Or your music drops out because you just clicked the link to a web page full of Flash animations and tables.
Or a 'high priority task" like inserting a disk blocks the CPU.
Even for home use, I prefer what on paper would be a slower
It's bad news, actually... (Score:5, Insightful)
Increased performance in CPUs has normally come from faster clock rates and more complex circuitry. As we all know, Intel (and the others) have bailed out on faster clocks. If you add more complex circuitry, the logic delay increases--to keep the clock rate up, you have to burn power.
What does this mean? The old-fashioned ways of getting more performance are dead--if you try it, the chip will burn up. It's easier to build two 1X-MIPS cores than one 2X-MIPS core. Like it or not, dual cores are the only solution; with transistor scaling, we'll have to go to 4, 8, and 16 cores in the next few years. IBM went dual-core with the POWER4 in 2001. Intel, AMD, and Sun are just following suit.
Not bummed out yet? Massive parallelism works well for people doing scientific computing, but for the average joe, it's useless. I don't care how fast a processor is--I usually have one task that will crush it--but rarely do I have two time-critical things to worry about at the same time. In the article referenced, they had to work hard to find things that would test the dual-core features. Parallel computing and multiple cores sound great. History buffs will know about Thinking Machines, Meiko, Kendall Square, MasPar, nCUBE, Sequent, Transputer, Parsytec, Cray, and so on.... Not a happy ending.
So.... we can't get more single processor performance without bursting into flames. And parallel machines are only useful to a small market. IMO, it's gonna get grim. (And before anyone says new paradigm of computing to take advantage of the parallel resource, put down the crack pipe and think about it--we've been waiting for that paradigm for about 40 years. Remember occam? I thought not.)
Re:It's bad news, actually... (Score:2)
It's only going to get grim for gamers - well, not even then. I'm sure Carmack and company will figure out a way of taking advantage of multiple cores and multiple GPUs in their future generations of games.
Re:It's bad news, actually... (Score:3, Insightful)
Yeah, actually I think this might become a big boon for gamers (such as myself). The stuff that makes games interesting to me is AI - which is certainly a very wide field, but I think of such things as finally being able to use reasoning engines (F.E.A.R. is the first game I know of that uses one), better pathfinding (AI can now use Focussed D* instead of cheating with A*), etc. - all of which will finally get some love and tender care.
With only one CPU, AI was always the ugly stepchild. "Yeah, sure.. we ca
re: Will dual cores revolutionize gaming (Score:2, Interesting)
They're working on a multithreaded engine for Unreal 3 - exciting stuff.
Like you said, AI is a logical chunk of processing that should be on a separate thread. Other logical chunks he mentions are physics, animation updates, the renderer's scene traversal loop, sound updates, and content streaming.
So at least one multi-threaded game engine is in the pipe. This is good because we don't really have a chicken-and-egg
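(For the curious, the one-thread-per-subsystem split described above looks roughly like this sketch - C with pthreads, and the subsystem loops are hypothetical placeholders:)

    #include <pthread.h>

    /* Hypothetical subsystem loops; each would run until the game quits. */
    static void *ai_loop(void *arg)      { (void)arg; /* pathfinding, NPCs */ return NULL; }
    static void *physics_loop(void *arg) { (void)arg; /* integration, collisions */ return NULL; }
    static void *render_loop(void *arg)  { (void)arg; /* scene traversal, drawing */ return NULL; }

    int main(void)
    {
        pthread_t ai, physics, render;

        /* One thread per subsystem, so on a dual-core chip the OS
         * scheduler can spread the work across both cores. */
        pthread_create(&ai, NULL, ai_loop, NULL);
        pthread_create(&physics, NULL, physics_loop, NULL);
        pthread_create(&render, NULL, render_loop, NULL);

        pthread_join(render, NULL);
        pthread_join(physics, NULL);
        pthread_join(ai, NULL);
        return 0;
    }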
Re:It's bad news, actually... (Score:2)
Re:It's bad news, actually... (Score:2)
Speaking as a physicist who used 5 CPU-years last week alone: just because it may not be "useful for the average joe" now doesn't mean it's not something worthwhile - people like me will really be able to use dual-core chips (as the parent states) and will buy these things now.
Re:It's bad news, actually... (Score:3, Insightful)
Re:It's bad news, actually... (Score:4, Interesting)
Clockspeed is the easiest race, if you want to think of the CPU industry as a continuous race. All you have to do to crank out a faster CPU is continually shrink the die (because smaller gates flip faster), and make sure that everything is arranged neatly on the chip. When you hit thermal walls like we are now, it's simply time to reduce the voltage, and shrink the die again.
The only problem is, Intel's flagship for doing this now happens to be one with a lot of baggage. The Netburst core design pretty much dictates that there be at least two of everything, and that both of them run all the time, especially if Hyperthreading is on. This effectively doubles your transistor count (though in reality it is less than that; there's only a single copy of bus administration, micro-op decode, etc.). Keeping them on all of the time also pumps up the heat production.
But here's a truth: their CPU clock game could still be running if they wanted it to be. The Pentium-M still runs extremely cool. Shrink it to a 90nm core, use SOI, strained silicon, more of their substrate magic, and a healthy dose of SpeedStep, and you could see a Pentium-M hitting 3.5GHz clock speeds that would put both the Athlon 64 and the Pentium 4 to shame. Sadly, to build this processor is to admit defeat with the Netburst core, and Intel is being very stubborn.
On the other hand, I believe AMD has some magic up their sleeve they haven't used yet, though honestly I couldn't tell you what it is. There has to be a reason they aren't playing up the Turion more, other than the fact that it doesn't scale down as far as the Pentium-M can. I'm also surprised they're being so slow about ramping their clock speeds, but this is probably just so their thermal profiles look superior to Intel's. A 3GHz Opteron could easily decimate a dual-Xeon setup, but at the same time would probably produce just as much heat, and I think AMD would see that as a defeat.
Re:It's bad news, actually... (Score:2)
I think I need some time away from the computer; my brain initially interpreted that word as "babbage".
Re:It's bad news, actually... (Score:2)
Of course, I could be wrong. For the systems these things will be used in, memory costs will probably dominate. The fact that the dual-core Opterons require no specialized support chips (unlike Intel's solution) means that AMD can grab some of this
Causality? (Score:2)
Maybe that's because you've been trained not to run two intensive applications at the same time. If your subconscious is telling you not to launch a compile job until your DVD transcode is done, then you probably wouldn't get a lot out of an SMP system. Break that habit, though, and you might like what you find.
Re:It's bad news, actually... (Score:3, Insightful)
First, there's really nothing new about multiple cores, IMO. Instead of "core", try the term "execution unit". CPUs used to have a single execution unit. Then something like an integer unit was added. Remember the days of math coprocessors? They soon got moved into the CPU by adding more, and more specialized, integer units and a floating-point execution unit. Now you have CPUs that have multiple execution units, to the point of having multiple Integ
Re:It's bad news, actually... (Score:2, Insightful)
I had a small epiphany reading a review of one of these dual-core setups that was released this week. One of the review
Re:It's bad news, actually... (Score:2)
Most people will benefit, albeit only slightly, simply from having the OS get its own core (if it so decides to schedule itself). Your video game can now have a core all to its graphics pipeline, while the kernel has a different core for handling disk and network data, and your mail client checking for new messages will hardly slow down your video game at all.
Average Joe plays video games with his PC. Average Joe will like multi-core CPUs becaus
Re:It's bad news, actually... (Score:2)
In the 2005 Intel keynote speech, [taoriver.net] distributed computation expert Justin Rattner [taoriver.net] noted that "without language support, this isn't going to work."
Pretty much all apps can make use of parallel execution. If you have to interpret a big chunk of data, you can usually break it into segments and process the
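(The segment-splitting idea looks something like this sketch - C with pthreads, assuming the per-element work is independent; the doubling step is just a stand-in for real processing:)

    #include <pthread.h>
    #include <stddef.h>

    #define NTHREADS 2                    /* one worker per core */

    struct segment { double *data; size_t len; };

    /* Process one segment; the *= 2.0 stands in for real work. */
    static void *process_segment(void *arg)
    {
        struct segment *s = arg;
        for (size_t i = 0; i < s->len; i++)
            s->data[i] *= 2.0;
        return NULL;
    }

    void process_all(double *data, size_t n)
    {
        pthread_t tid[NTHREADS];
        struct segment seg[NTHREADS];
        size_t chunk = n / NTHREADS;

        for (int t = 0; t < NTHREADS; t++) {
            seg[t].data = data + (size_t)t * chunk;
            seg[t].len  = (t == NTHREADS - 1) ? n - (size_t)t * chunk : chunk;
            pthread_create(&tid[t], NULL, process_segment, &seg[t]);
        }
        for (int t = 0; t < NTHREADS; t++)
            pthread_join(tid[t], NULL);   /* wait for every segment */
    }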
Re:It's bad news, actually... (Score:2)
That's only true if the complexity you add is serial.
Sort of. Fanout comes into play very quickly when you start parallelising stuff. Try making a 2-level implementation of a 32-bit adder, for example.
Dual Core CPU's (Score:2, Interesting)
The operating system should employ a smart system of monitoring CPU usage per thread and move the high-usage threads to the other core.
I wonder, though, on a slightly different topic - heat dissipation: nobody seems to talk about it, but two cores mean twice as much heat. How the hell do they do away with the heat? It's disappointing, but they might be speedstepping/downclocking the cores
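(In the meantime you can approximate that thread migration by hand. A minimal Linux sketch that pins a busy process onto the second core - the flip side of the Windows affinity trick mentioned elsewhere in the thread:)

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Usage: pin2 <pid> - move the given process onto CPU 1. */
    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: pin2 <pid>\n");
            return 1;
        }

        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(1, &set);                 /* the second core */
        if (sched_setaffinity(atoi(argv[1]), sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        return 0;
    }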
Re:Dual Core CPU's (Score:2, Informative)
Re:Dual Core CPU's (Score:2)
I've read this sentiment several times now since the dual-core craze came about. So long as your kernel and web/file browser tasks get scheduled, why should you care which core they're running on?
OP Misses the point (Score:2, Insightful)
Except that this is a desktop processor that won't be shipping in server systems. So in actual fact, it's worth noting that the entire point is that each scenario consists only of desktop applications.
why all the speculation and hoopla? (Score:2)
All these questions about whether Windows will usurp one of the cores or how to schedule the two cores seem positively bizarre, given that the answer is no different from dual processor machine
Re:why all the speculation and hoopla? (Score:2)
I think that's why the Intel chip is a desktop one. To a single user, it's cheaper to get a multicore system than a real two-chip machine; but for servers it's not the real thing.
And because of Intel's FSB architecture, it's not really a way to turn 2-way designs into 4-way ones. The bus design makes
Simple (Score:2)
Re:Fundamental question about dual core (Score:2, Insightful)
The total number of CPU cycles is the same, but the average queue length for a process waiting for the CPU is halved, i.e. the latency before your process is scheduled is lower, making it "feel" faster.
Re:Built in coffee pot! (Score:2)
Ah, this would explain why Asus [asus.com] released a barebones solution called S-presso. Just add a couple of dual cores, water cooling, and a fine Italian pump, and poof - the next generation in computers: the e-s-presso. As the water travels over each dual-core chip it's superheated quickly, and the Italian pump takes over, pushing it through the grounds to your demitasse mugs.
Warning: not drinking enough espresso may
Re:Useful benchmarks (Score:2)
Interesting question. A big factor hinges on one of the questions I have: does dual-core have a price/performance advantage over dual-processor? Some factors:
How do the yields compare? I may be wrong, but I'm guessing that the clock rate on dual-cores is lower because you just can't get a good yield at a higher clock rate. You have to go with the clock rate of the lowest performer in the dual-core.
How does performan