Windows 7 On Multicore — How Much Faster? 349

snydeq writes "InfoWorld's Andrew Binstock tests whether Windows 7's threading advances fulfill the promise of improved performance and energy reduction. He runs Windows XP Professional, Vista Ultimate, and Windows 7 Ultimate against Viewperf and Cinebench benchmarks using a Dell Precision T3500 workstation, the price-performance winner of an earlier roundup of Nehalem-based workstations. 'What might be surprising is that Windows 7's multithreading changes did not deliver more of a performance punch,' Binstock writes of the benchmarks, adding that the principal changes to Windows 7 multithreading consist of increased processor affinity, 'a wholly new mechanism that gets rid of the global locking concept and pushes the management of lock access down to the locked resources,' permitting Windows 7 to scale up to 256 processors without performance penalty, but delivering little performance gains for systems with only a few processors. 'Windows 7 performs several tricks to keep threads running on the same execution pipelines so that the underlying Nehalem processor can turn off transistors on lesser-used or inactive pipelines,' Binstock writes. 'The primary benefit of this feature is reduced energy consumption,' with Windows 7 requiring 17 percent less power to run than Windows XP or Vista."
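The "pushes the management of lock access down to the locked resources" idea can be sketched in a few lines (an illustration only, not Windows 7's actual mechanism): each resource carries its own lock, so threads touching different resources never contend the way they would on a single global lock.

```python
import threading

class Resource:
    """Each resource owns its lock, instead of sharing one global lock."""
    def __init__(self):
        self.lock = threading.Lock()
        self.value = 0

def bump(res, times):
    for _ in range(times):
        with res.lock:          # serializes access to *this* resource only
            res.value += 1

a, b = Resource(), Resource()
threads = [threading.Thread(target=bump, args=(r, 10_000))
           for r in (a, b) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Threads working on `a` never wait for threads working on `b`, which is the property that lets lock management scale with the number of resources rather than serializing the whole system.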
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Wednesday October 21, 2009 @08:43AM (#29822649)
    Suck it, nerds.
    • by Anonymous Coward

      Suck it, Microsoft drone.

    • Re: (Score:3, Interesting)

      Comment removed based on user account deletion
      • by ozmanjusri ( 601766 ) <.moc.liamtoh. .ta. .bob_eissua.> on Wednesday October 21, 2009 @09:09AM (#29822995) Journal
        But how does it compare to XP X64?

        It's slower.

        Win7 is basically just a refurbished Vista under the hood.

        • Re: (Score:3, Insightful)

          by lorenlal ( 164133 )

          I have to disagree with the Troll mod.

          I ran XP x64 for a few years, and I liked it a lot. Driver support was dodgy in some cases, but it was a pretty solid OS. 64 Bit Vista was indeed slower, much larger and suffered from the well documented issues we all know...

          Windows 7 is very Vista-like, but with the benefit of:
          1) Two years to get application writers used to the Vista/7 model, and the headaches associated with it.
          2) More driver support from vendors
          3) Hardware that's two years newer
          4) More customizable

          • Re: (Score:3, Insightful)

            by ozmanjusri ( 601766 )
            I have to disagree with the Troll mod.

            Sadly, it's inevitable here if you discuss anything Microsoft doesn't want discussed.

            It's a fact though. They couldn't afford to take any risks after the Vista failure and played it very conservative with Win 7. Of course, admitting that wouldn't generate a lot of hype, so their marketing machine is in overdrive trying to spin a very bland OS as something exciting.

            At least it's showing clearly how much Microsoft has infiltrated Slashdot over the past couple of yea

      • Re: (Score:3, Interesting)

I'm the type of guy that hates change. I used Windows 2000 until '06, when my copy finally quit working due to numerous re-installs (I hated solving problems and would just format whenever something came up). I learned to love XP and used it until about a month ago, when I got an x64 system. I was gonna switch to XP64, but heard the driver support was terrible, especially for gaming. I read one x64 comparison between XP, Vista, and Windows 7 and the reviewers couldn't even get XP 64 stable enough to complete t
    • by jedidiah ( 1196 )

      Suck what? The fact that Windows is finally catching up to Unix in this area?

      What are you going to test this feature out on?

      You can buy 1024 CPU Linux boxes. 100 cpu Unix boxes have been commonplace for awhile.

      Microsoft is last to the party (like always).

  • Less power? (Score:5, Funny)

    by Canazza ( 1428553 ) on Wednesday October 21, 2009 @08:44AM (#29822655)

    Nooo! I was hoping that power consumption would continue to increase! Sooner or later our PCs would require 1.21GW!

  • Not Really (Score:5, Funny)

    by Mikkeles ( 698461 ) on Wednesday October 21, 2009 @08:46AM (#29822685)

    'What might be surprising is that Windows 7's multithreading changes did not deliver more of a performance punch,'

    No, it's not surprising.

    • Re:Not Really (Score:5, Interesting)

      by timeOday ( 582209 ) on Wednesday October 21, 2009 @08:49AM (#29822741)
      It's not surprising because the OS really can't do that much to improve (or mess up) the performance of user-mode code that isn't making many OS calls anyways.

      What is surprising is that power consumption could be so significantly reduced. This story could have come out with an entirely different spin if the headline were simply, "Windows 7 Reduces Power Consumption by 17%."

      • Re:Not Really (Score:5, Interesting)

        by setagllib ( 753300 ) on Wednesday October 21, 2009 @08:55AM (#29822813)

        I disagree - user-mode code, whether it's separated into threads or processes, still relies very heavily on kernel scheduling decisions. It may sound simple enough, but if you study the decisions the kernel has to make (such as which thread to wake first, from a set of 8 all waiting on the same semaphore), you can find lots of ways to get it wrong. We now take it for granted because thousands of man-years have been spent on solutions.
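The "which thread to wake first, from a set of 8 all waiting on the same semaphore" situation is easy to reproduce from user space; in this sketch each release wakes exactly one of the eight waiters, and which one runs first is precisely the kernel scheduling decision being discussed (the 2-second timeout is just so unwoken threads don't hang the demo).

```python
import threading

# Eight threads all wait on the same semaphore; each release wakes exactly
# one of them. Which one runs first is the kernel's decision to make.
sem = threading.Semaphore(0)
proceeded = []
record_lock = threading.Lock()

def worker(i):
    # Give up after 2 seconds so never-woken threads exit cleanly.
    if sem.acquire(timeout=2):
        with record_lock:
            proceeded.append(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for _ in range(3):
    sem.release()          # wake exactly three of the eight waiters
for t in threads:
    t.join()
```

Exactly three workers proceed; the other five time out. Getting the wake order wrong in a kernel (waking all eight, or the coldest-cache one) is one of the many ways to lose performance without any user-visible bug.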

        • Re:Not Really (Score:5, Interesting)

          by SpryGuy ( 206254 ) on Wednesday October 21, 2009 @08:59AM (#29822867)

While actual performance may not be faster, perceived performance almost certainly is. It "feels" snappier and seems to respond better, due to optimizations in locking and in the graphics subsystem that allow visual feedback in one app to not be blocked or held up by work going on in another app.

          • Re: (Score:3, Insightful)

            by Ephemeriis ( 315124 )

While actual performance may not be faster, perceived performance almost certainly is. It "feels" snappier and seems to respond better, due to optimizations in locking and in the graphics subsystem that allow visual feedback in one app to not be blocked or held up by work going on in another app.

            That was one of the first things I noticed when I installed Win7.

            Vista always felt sluggish. Even when things were working properly and I wasn't experiencing any problems, the entire OS just felt like molasses. There were minute pauses everywhere. Not enough to actually say this is taking longer than it did on XP... But it always felt like the OS was struggling to keep up with me.

            With Win7, that hesitation is gone. Everything feels far more responsive. I don't know that I'm actually getting anything d

      • Re: (Score:2, Insightful)

        by goldspider ( 445116 )

        This story could have come out with an entirely different spin if the headline were simply, "Windows 7 Reduces Power Consumption by 17%."

        Welcome to Slashdot!

      • Re:Not Really (Score:5, Insightful)

        by bravecanadian ( 638315 ) on Wednesday October 21, 2009 @09:22AM (#29823129)

        Agreed. A 17% reduction in power consumption doing the same tasks is nothing to scoff at...

        • Re: (Score:3, Insightful)

          by R3d M3rcury ( 871886 )

          That was sort of my reaction.

          From what I read, I got the impression that Windows 7 isn't any faster than Vista, but it will get the same speed using less energy.

          This is a good thing for laptop users, is it not?

      • Re:Not Really (Score:5, Informative)

        by RicktheBrick ( 588466 ) on Wednesday October 21, 2009 @09:51AM (#29823467)
I do volunteer work for World Community Grid. I used to run 7 computers; I now run 4 quad-core machines. A quad will beat 4 separate computers in work done and will use less electricity than 4 computers running at comparable speeds. My electricity bill went down after switching from the 7 computers to the 4 quads, and my daily contribution has more than doubled.
      • Re:Not Really (Score:5, Informative)

        by Jah-Wren Ryel ( 80510 ) on Wednesday October 21, 2009 @10:06AM (#29823689)

        not surprising because the OS really can't do that much to improve (or mess up) the performance of user-mode code that isn't making many OS calls anyways.

        Others have already mentioned scheduling and cache thrashing, I'd like to add memory management. There are lots of ways memory management choices can degrade performance, sometimes drastically.

One example is page sizes and the TLB - each CPU has a hardware TLB [wikipedia.org], which is like a cache of virtual-page-to-physical-page address maps. Hardware TLB look-ups are fast, but the TLB is only of limited size, and when a virtual address is not in the hardware TLB, the OS has to take a fault and walk its own software-maintained table that holds the complete list of virt2phys translations. That's a couple of orders of magnitude slower than getting it from the hardware TLB.

One way to reduce TLB misses is to use larger pages. So an OS that is smart enough to automagically coalesce 4K pages into 4MB (or larger, depending on the hardware) pages can significantly improve TLB performance. In a pathological case, that could result in a 100x-1000x speed-up; in typical cases where it is going to make a difference, you'll probably see ~10% performance improvement.
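The reason larger pages help is simple arithmetic: the same number of TLB entries covers far more address space. The 64-entry TLB size below is an assumed, illustrative figure, not a spec for any particular CPU.

```python
KB, MB = 1024, 1024 * 1024

def tlb_reach(entries, page_size):
    """Address range a TLB can translate without taking a miss."""
    return entries * page_size

small = tlb_reach(64, 4 * KB)   # 64 entries of 4K pages -> 256 KB covered
large = tlb_reach(64, 4 * MB)   # same TLB with 4M pages -> 256 MB covered
```

Going from 4K to 4M pages multiplies TLB reach by 1024, which is why workloads with big working sets can see dramatic gains from large pages.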

Another related example is how shared memory is handled. Every page of virtual memory has a PTE [wikipedia.org] which, at the most basic level, contains the virt2phys translation. When shared memory is used, a decision must be made: are the PTEs shared, or does each process get a separate copy of the PTEs for the shared memory? The downside of sharing PTEs is that the shared memory must be mapped at exactly the same virtual address in each process that uses it, so if one of those processes already has something else at that address, it won't be able to use the shared memory. The downside of using separate copies of PTEs is that you can really suck up a lot of memory just for the PTE list. Imagine 50 processes that all share one chunk of 100MB of memory; if they all get their own PTE copies for that 100MB, it's the equivalent of PTEs for 5GB worth of mappings. If a PTE itself takes up 32 bytes, then that's at least 40MB of PTE entries just to manage that 100MB of memory. A 40% overhead is huge, and then there is the issue of hardware TLB misses which, depending on the implementation, may have to search all PTEs in the system, so the more PTEs there are, the more a TLB miss will hurt performance.
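The numbers above roughly check out; here is the arithmetic spelled out, using the same assumptions (4 KB pages, 32-byte PTEs):

```python
MB = 1024 * 1024
region   = 100 * MB   # the shared chunk mapped by every process
page     = 4 * 1024   # 4 KB pages
pte_size = 32         # bytes per PTE, as assumed above
procs    = 50

ptes_per_proc  = region // page            # 25,600 PTEs per process
bytes_per_proc = ptes_per_proc * pte_size  # ~0.8 MB of PTEs per process
total          = bytes_per_proc * procs    # roughly 40 MB of PTEs in all
overhead       = total / region            # ~39%, close to the 40% figure
```

So 50 unshared PTE copies of a 100MB mapping really do cost on the order of 40MB, nearly half the size of the region they describe.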

    • Re: (Score:3, Insightful)

      No, it's not surprising.

      Should have implemented Grand Central [wikipedia.org], I hear it's free and opensource. Even has the Apache license so that it allows use of the source code for the development of proprietary software.

I mean they already borrowed the TCP/IP stack. [gcn.com]

      • by LO0G ( 606364 )

        Why would Microsoft implement GCD when they already have ConcRT [msdn.com] which appears to be a better (more scalable) implementation of the same functionality?

        And while the NT 3.1 TCP stack was based on the BSD TCP stack, that TCP stack was replaced in Win95/NT4.

        • Re: (Score:2, Informative)

          by Anonymous Coward

          Why would Microsoft implement GCD when they already have ConcRT [msdn.com] which appears to be a better (more scalable) implementation of the same functionality?

          So that when I write code that uses GCD I don't have to rewrite it to port it to Windows?

          And while the NT 3.1 TCP stack was based on the BSD TCP stack, that TCP stack was replaced in Win95/NT4.

          Yet I can still write a piece of software that uses BSD sockets calls and port it to Windows by changing little more than a couple of header includes.
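That portability is visible even from a scripting language: Python's socket module is a thin wrapper over the BSD sockets API on both Unix and Windows, so the classic socket/bind/listen/accept/connect sequence below runs unchanged on either (in C, the per-platform differences are essentially the headers plus WSAStartup on Windows, as the parent says).

```python
import socket
import threading

def echo_once(server):
    # Accept one connection and echo back whatever arrives.
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=echo_once, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server.close()
```

The same call sequence, ported from the BSD stack decades ago, still works everywhere, which is exactly the point about APIs outliving implementations.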

          • by Dog-Cow ( 21281 )

            If you don't understand the difference between copying code and implementing an API, I really hope I never use any code written by you.

      • Re: (Score:3, Informative)

        Correct me if I'm wrong, but GCD seems to be a user-space parallelism library, while TFS is talking about kernel-space task scheduling. I hate the unintended (and bad) pun but I think you were comparing apples and oranges here.
    • >No, it's not surprising.

I'm not surprised. I think we're going to find, as people start taking Win7 apart, that it's not too much different from Vista, because Vista itself was pretty efficient to begin with. The Vista bashing was really unjustified, and after you got over issues like old drivers, old hardware, and pre-SP1 UAC, you pretty much have Win7.

      • Re: (Score:3, Interesting)

        by Lumpy ( 12016 )

        The Vista bashing was really unjustified and after you got over issues like old drivers, old hardware, and pre-SP1 UAC, you pretty much have Win7.

Are you really that deluded? People bash Vista because it deserves it. I have yet to run into one person who genuinely likes Vista and has no problems. Out of 3 of my business clients, 2 requested a downgrade to XP within the past 4 months. They both gave Vista a shakedown on all workstations for 2 years, and finally looked at the numbers we gave th

      • Re: (Score:3, Informative)

        by Ephemeriis ( 315124 )

I'm not surprised. I think we're going to find, as people start taking Win7 apart, that it's not too much different from Vista, because Vista itself was pretty efficient to begin with. The Vista bashing was really unjustified, and after you got over issues like old drivers, old hardware, and pre-SP1 UAC, you pretty much have Win7.

        Vista had issues, no matter how you look at it.

        The lead-up to Vista was just plain stupid. Microsoft was advertising it like the second coming. It's a freaking OS! If you do it right, people don't even notice the OS because it gets out of their way and lets them do their work. With Vista, Microsoft seemed to forget that their job wasn't to produce the single flashiest piece of software on the computer, but rather to make that computer run all the other software better.

        The GUI was an improvement over XP

  • by bsDaemon ( 87307 ) on Wednesday October 21, 2009 @08:49AM (#29822747)
Is this really that surprising? I mean, even splitting threads over different cores, having two cores still isn't going to be that much faster than one. I wouldn't expect to see much of a gain just from this, any more than I would on Linux or BSD. Still, every little bit helps.
  • Power savings (Score:2, Interesting)

    by NoYob ( 1630681 )
    From what I've seen, unless you're on a Core i7, you're not getting the power savings. I'm still running a Core Duo on my Windows XP sp3 box and I don't think it'll do me any good.

    Seeing the performance increase and in some cases decrease from Vista to 7, I don't see that as a selling feature either.

    What does intrigue me is the ability of the OS to allocate threads to the different cores. That is something I would want to learn more about.

    Basically, unless you're on a workstation and running intensive app
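The thread-to-core allocation being asked about is mostly the kernel's job, but the affinity machinery is exposed to user space too. This Linux-only sketch pins the calling process to CPU 0 and then restores its original mask (the calls are guarded because they don't exist on every platform; Windows exposes the equivalent via SetThreadAffinityMask).

```python
import os

if hasattr(os, "sched_setaffinity"):      # Linux-only API
    original = os.sched_getaffinity(0)    # 0 means the calling process
    os.sched_setaffinity(0, {0})          # restrict the process to CPU 0
    pinned = os.sched_getaffinity(0)
    os.sched_setaffinity(0, original)     # restore the original mask
else:
    pinned = {0}                          # skip the demo on other platforms
```

Explicit pinning like this is rarely needed; the point of Windows 7's change is that the scheduler applies the same idea automatically, keeping threads on the cores whose caches already hold their data.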

    • Re:Power savings (Score:5, Informative)

      by VGPowerlord ( 621254 ) on Wednesday October 21, 2009 @09:13AM (#29823041)

      From what I've seen, unless you're on a Core i7, you're not getting the power savings.

The 17% power savings mentioned on page 3 of the article is primarily for the Intel Xeon 3500 and 5500 lines (the Nehalem processors), which shut off power to cores that aren't being actively used. The other linked articles go into this more in depth.

  • by jellomizer ( 103300 ) on Wednesday October 21, 2009 @09:02AM (#29822897)

What the new languages and OSes are doing is just making it easier for developers to write code that runs on parallel processors. However, most of us are not trained to write parallel code, and there are some algorithms that cannot be parallelized. What the modern OSes are doing is taking code that was designed to run multi-threaded or parallel in the first place and, in essence, having it run more efficiently on multiple processors, as well as giving us some tools to make development easier and stop us from working around all the conflicts that distract us from software development. Much like how string classes became common so developers didn't need to fuss around with allocations just to do some basic string manipulation (allocate space, calculate the memory offset, ensure the last character is 0x00), which otherwise made it really easy to introduce buffer overflow errors if you missed a step.
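The manual bookkeeping being described can be mimicked with a fixed-size buffer; this hypothetical helper spells out every step the comment lists. In real C, getting the size calculation wrong by even one byte is exactly how buffers overflow, and it's that bookkeeping a string class takes off your hands.

```python
# Hypothetical helper mimicking C-style manual string concatenation.
def c_style_concat(a: bytes, b: bytes) -> bytes:
    buf = bytearray(len(a) + len(b) + 1)    # allocate space (+1 for the NUL)
    buf[0:len(a)] = a                       # copy the first string
    buf[len(a):len(a) + len(b)] = b         # calculate the memory offset
    buf[len(a) + len(b)] = 0                # ensure the last byte is 0x00
    return bytes(buf[:-1])                  # drop the NUL, Python-style

joined = c_style_concat(b"multi", b"core")
```

The analogy to parallelism tooling is that thread pools and task libraries hide the equivalent fiddly steps (spawn, synchronize, join) the way string classes hide allocate/offset/terminate.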

    • Re: (Score:3, Interesting)

      by DannyO152 ( 544940 )

And who wants to spend money looking at decades-old code in order to make its implicit blocks explicit, or dare to risk breakage by tweaking the code to be concurrency-amenable?

    • by Nursie ( 632944 ) on Wednesday October 21, 2009 @11:20AM (#29824587)

      "What the new languages and OS's are doing, are just making it easier for developers to make code that runs on parallel processors. However most of us are not trained to write parallel code."

      Well you bloody well should be, it's basic stuff.

      Parallelism has been around for over 20 years now, not to mention the related discipline of distributed computing. It's not new. It's not *that* hard. You don't need to parallelise every last goddamn algorithm if you can split the work up into jobs using thread pools, or into similar tasks.

      You think the people that make apache analyse every string comparison they do to see if they could do it more efficiently across a set of vector cores? Well maybe, but most likely they use task parallelism to get multiple threads executing different but comparatively large chunks of code.

      This is not a distraction from software development, it's doing it well. And if you're afraid of a little bit of memory allocation then you're doing it wrong...
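The "split the work up into jobs using thread pools" approach looks like this in Python: a sketch of task parallelism where the work is cut into a few coarse chunks, rather than parallelizing every last operation.

```python
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    """One coarse job: a whole chunk of the data, not a single element."""
    return sum(x * x for x in chunk)

data = list(range(1000))
# Four big jobs, the apache-style approach the comment describes.
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(work, chunks))
```

The same pattern applies whether the chunks are string comparisons, requests, or render tiles: the pool handles scheduling, and the programmer only decides where to cut.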

      • by jellomizer ( 103300 ) on Wednesday October 21, 2009 @11:38AM (#29824807)

1. Parallel software development is normally taught as a Masters-level class in computer science, and multi-processing architectures have only been available in common PCs for the last 3 years. So, sorry, good parallel software development is not a common skill.

2. Having to rethink your coding methods isn't hard, but you need to be retrained to think about problems differently. Multi-threading isn't the only part of real parallel programming.

3. Spending a week making sure your threads start and complete at the right times, and are not just getting lucky with a latent race condition, requires a lot of extra coding that for most applications can be the difference between the software being a benefit or a cost.
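The race-condition point is the crux: a shared read-modify-write is only deterministic when it is explicitly synchronized. The sketch below guards the update with a lock so the final count is guaranteed rather than "lucky" (in a language without Python's interpreter-level serialization, dropping the lock can silently lose increments).

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:            # makes the read-modify-write atomic
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Verifying that a program is correct by design, instead of merely passing a week of test runs, is exactly the extra cost being described.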

  • ...permitting Windows 7 to scale up to 256 processors without performance penalty, but delivering little performance gains for systems with only a few processors...

    So you're disappointed Microsoft doesn't magically speed up your single or dual-core PC? Maybe you're expecting too much.

    • Re: (Score:3, Informative)

      by Hadlock ( 143607 )

      Maybe he's a mac user. 10.1-> 10.2 -> 10.3 all sped up my 550 mhz powerbook back in the day. 10.4 was the first OS update to slow down my computer (10.3.9 was screaming fast on my laptop). 10.4.1 fixed some speed issues, and by the time 10.4.5 came out it was nearly as fast as 10.3.5 or so. So it's possible to upgrade your OS and end up with a faster feeling system. There used to be a mac benchmarking site, mac feats that documented that each release was in fact marginally faster in most every aspect.

      • by TheSHAD0W ( 258774 ) on Wednesday October 21, 2009 @09:44AM (#29823383) Homepage

        Yeah, it's possible for an OS to slow down your computer by improperly handling tasks, but you can't depend on finding and correcting them. (They may not even be there.) It's understandable to be annoyed if an OS update slows down your system; it's something else to expect a speed-up from out of nowhere.

        Also, Windows 7 users are reporting a subjective improvement in response much like you report in OS X's progression.

      • And I bet 10.6 works quite well on there, too.</sarcasm>

        • by Hadlock ( 143607 )

          Supposedly there was a hack to allow 1ghz computers to run 10.5, but I wouldn't have much space left over on the 20GB hard drive after 10.5 would be installed. Sadly like most TiBooks of that era, one of my hinges broke and it's now collecting dust somewhere. Sadly a hinge repair costs almost as much as the entire laptop is worth these days.

  • Ouch (Score:5, Informative)

    by TheRaven64 ( 641858 ) on Wednesday October 21, 2009 @09:12AM (#29823029) Journal

    I should know better than to click on InfoWorld links, but I think I just lost about 10 IQ points as a result of reading that article.

    In summary, Windows 7 now tries to keep threads on the same processor. It has been known for about 15 years that this gives better cache, and therefore overall, performance. Any scheduling algorithm developed in the last decade or so includes a process migration penalty, so you default to keeping a thread on a given processor and only move it when that processor is overly busy, another one is not, and the difference is greater than the migration penalty (which is different for moving between contexts in a core, between cores, and between physical processors, due to different cache layout). This also helps reduce the need for locking in the scheduler. Each CPU has its own local run queue, and you only need synchronization during process migration.
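The migration-penalty rule described above can be written down directly. The cost figures here are invented for illustration; the point is only that they grow with cache distance (SMT sibling < same package < cross-socket), so a given imbalance may justify a cheap move but not an expensive one.

```python
# Keep a thread where it is unless the load imbalance exceeds the cost of
# moving it. Penalties below are illustrative, increasing with cache distance.
SAME_CORE_SMT, SAME_PACKAGE, CROSS_SOCKET = 1, 4, 16

def should_migrate(load_src, load_dst, penalty):
    """Migrate only when the imbalance outweighs the migration cost."""
    return load_src - load_dst > penalty
```

With per-CPU run queues, this check is also the only moment two queues need to synchronize, which is how affinity-aware scheduling reduces locking as well as cache misses.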

If Vista, or even Windows Server 2003, didn't already do this, then I would be very surprised. FreeBSD and Linux have both done so for several years, and Solaris has for even longer. Fine-grained in-kernel locking is not new either; almost every other kernel that I know of that supports SMP has been implementing this for a long time. One of the big pushes for FreeBSD 5 (released almost a decade ago) was to support fine-grained locking, where individual resources had their own locks, and FreeBSD was a little bit behind Linux and a long way behind Solaris in implementing this support.

In the article, the numbers show that Vista SP2 gives a clear edge over Win XP SP3 in every case. I'm surprised that this wasn't commented on, given the general perception of Vista's sluggishness.
They tested Windows ULTIMATE, the best of the newest against the oldest patched-up version of XP. And it only saved a marginal amount of power, and may be slightly faster in some operations. What about the versions that the average Joe is going to be running? There are Starter, Home, Home Premium, Professional, and Ultimate, each with an increasing price requirement (http://windows.microsoft.com/en-us/windows7/products/compare). How does the "basement" version compare to XP SP3 (or against the various f

    • by hibiki_r ( 649814 ) on Wednesday October 21, 2009 @09:47AM (#29823421)

      Come on, look at the feature comparisons, and tell me which actual features of Ultimate make it any faster than Professional, or even Home Premium.

      If Ultimate was actually faster than any other version of 7, wouldn't it be in tech news sites everywhere? Ultimate is about more features, not about more speed.

    • They tested Windows ULTIMATE, the best of the newest against the oldest patched-up version of XP.

      What? They tested the best of the newest version of Windows 7 against the best of the newest version of XP. The oldest version of XP would have no service packs at all.

      And it only saved a marginal amount of power.

      17% is not marginal. What would you consider to be non-marginal? Greater than 100%?

      What about the versions that the average Joe is going to be running?

      Average Joe doesn't use a Xeon processor either. The choice of operating system versions seems appropriate for the level of hardware. Average Joe should just wait until someone does a comparison using games & video encoders if he wants real world tests more

    • Re: (Score:3, Interesting)

      by TheRaven64 ( 641858 )
      Does Ultimate come with a different kernel to Home? I was under the impression that the only differences between the versions were at the userland level. It's not like the older WinNT releases that actually did have slightly different kernels.
This has been how Linux has done it since SMP was introduced. SunOS does it this way, UNIX did it this way; is there actually a multi-threading model that doesn't involve processor affinity? Besides the small textbook examples that are oversimplified and not useful in the real world...
  • If I'm reading the chart correctly, it appears that Vista rivals Windows 7 in all benchmarks and even beats it in a couple.

    Ru-roh, Shaggy. That's not good. I thought Windows 7 was supposed to be the Vista Apology version?

This test used only a single-socket system. Performance differences from XP are going to be greater on NUMA multisocket systems like Barcelona or Nehalem. XP predates NUMA on the PC architecture, while Vista and Win 7 got a lot of tuning for it.

    This can be a big help for video encoding and other highly multithreaded tasks.

  • Comment removed (Score:5, Informative)

    by account_deleted ( 4530225 ) on Wednesday October 21, 2009 @10:09AM (#29823709)
    Comment removed based on user account deletion
