Intel Technology

Quick and Dirty Penryn Benchmarks

An anonymous reader writes "So Intel has their quad-core Penryn processors all set and ready to launch in November. There are benchmarks for the dual-core Wolfdale all over the place, but this seems to be the first article to put the quad-core Yorkfield to the test. It looks like the Yorkfield is only about 7-8% faster than the Kentsfield with similar clock speeds and front-side bus."
  • by DamonHD ( 794830 ) <d@hd.org> on Saturday August 25, 2007 @08:53AM (#20353115) Homepage
    My recent experience with quad-CPU Xeon machines is that multithread performance for a single application is VERY poor, even with great care in coding, presumably because of cache-sloshing between these physically separate dies dropped into one package.

    (I compare with Niagara and even Core Duo which seem much better for threaded apps.)

    Has anyone else tested the threadability of these CPUs, along with power efficiency, sleep states, etc.?

    Rgds

    Damon
  • by swb ( 14022 ) on Saturday August 25, 2007 @09:11AM (#20353193)
    Depends on what you do "at home". Grandma who only sends email and orders flowers will see zero benefits.

    But the rest of the "normal" home users who own camcorders, make DVDs, rip movies, etc., all see a huge benefit. I just put together a Q6600 system and couldn't be happier, but then I've been a dual-CPU workstation user since the PII days.
  • by Sycraft-fu ( 314770 ) on Saturday August 25, 2007 @11:09AM (#20353765)
    Intel tends to do a release of a new architecture, then some refinements on that. While it would be cool to do a whole new architecture each time around, there's just not really money for that. This is one of the refinements. The chips are not likely to be all that much faster than their previous chips at the same clock speed because they are largely the same architecture. Mostly they are just a die shrink (which means lower power and probably better scaling and cost) plus some new instructions that aren't really used yet. They are still Core 2s.

    However, that doesn't mean the next generation will be the same. Indeed, if Intel keeps to its plans, it will be a new architecture and should bring new speed increases.

    As to using multiple cores: well, if you don't know how, perhaps you'd best learn. You not knowing how doesn't mean it can't be done; indeed, it can be done and IS being done. Multi-core is just the way things are going, at least for now. Not only are desktops and servers headed that way, but even things like the Xbox 360 and PS3 are as well. It's simply time to start thinking about software in a different way. No longer is a big while loop the way to go.

    Already that's happening. The number of games (and games are interesting to watch, since they often ride the leading edge in terms of requirements) that make use of two cores has risen dramatically. We are also seeing a couple of games, with more on the horizon, that will support 4 cores. Things like AI and physics get executed in parallel, which makes it possible for them to be much more complex.
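    As a rough sketch of that shift (the thread count and the per-element update below are just stand-ins, not anything from a real game engine), splitting an independent loop across POSIX threads looks something like this:

        /* Rough sketch: hand an independent per-element update to four POSIX
         * threads instead of running one big loop. Build with -lpthread. */
        #include <pthread.h>
        #include <stdio.h>

        #define N        1000000
        #define NTHREADS 4

        static float data[N];

        struct slice { int begin, end; };

        static void *update(void *arg)
        {
            struct slice *s = arg;
            for (int i = s->begin; i < s->end; i++)
                data[i] = data[i] * 0.5f + 1.0f;   /* stand-in for physics/AI work */
            return NULL;
        }

        int main(void)
        {
            pthread_t tid[NTHREADS];
            struct slice part[NTHREADS];
            int chunk = N / NTHREADS;

            for (int t = 0; t < NTHREADS; t++) {
                part[t].begin = t * chunk;
                part[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
                pthread_create(&tid[t], NULL, update, &part[t]);
            }
            for (int t = 0; t < NTHREADS; t++)
                pthread_join(tid[t], NULL);

            printf("data[0] = %f\n", data[0]);
            return 0;
        }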

    Finally, there HAVE been some cool developments on processors, just not ones that most hardware sites like to cover. Some time back Intel introduced a technology they call VT, which is basically a set of instructions that lets you virtualize the protection rings on a processor. It's supposed to make for faster VMs. Currently the implementation is somewhat lacking: VMware claims it is slower than a well-optimised software solution, though others dispute that claim (Xen likes VT). The new 45nm Core 2s add to the existing VT technology with what Intel calls VT-d. Basically, the idea is to allow VM software to pass DMA access to its guests, but in a safe manner that can't hurt the host. This may not be exciting to everyone, but these advances are worthwhile, given that virtual computing is getting more and more use.
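    As a minimal, hypothetical illustration (not tied to VMware or Xen): a user-space check for the VMX flag via CPUID, assuming GCC or Clang on x86. Whether VT is actually usable also depends on the BIOS, which this does not test.

        /* Minimal sketch: ask the CPU whether it reports VMX, the instruction
         * set behind Intel VT. Assumes GCC/Clang on x86, which ship <cpuid.h>. */
        #include <cpuid.h>
        #include <stdio.h>

        int main(void)
        {
            unsigned int eax, ebx, ecx, edx;

            if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
                puts("CPUID leaf 1 not available");
                return 1;
            }

            /* CPUID.1:ECX bit 5 is the VMX feature flag. */
            if (ecx & (1u << 5))
                puts("VT-x (VMX) reported by this CPU");
            else
                puts("no VT-x reported");
            return 0;
        }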

    Processors may not be getting huge gains in single thread performance any more, but that doesn't mean they aren't advancing.
  • by bjackson1 ( 953136 ) on Saturday August 25, 2007 @11:29AM (#20353889)
    Intel's Core microarchitecture is not currently available in a quad-CPU platform. It is understandable, then, that the multithreaded performance would be poor.

    The current quad-CPU architecture is based on Tulsa, which is a 65nm shrink of Paxville, which is essentially a Pentium 4 Smithfield, i.e. two Prescotts shoved onto one chip. Basically, it's two-year-old technology. The new Tigerton chip will be Core-based; however, it's not out yet.
  • by mi ( 197448 ) <slashdot-2017q4@virtual-estates.net> on Saturday August 25, 2007 @12:50PM (#20354413) Homepage Journal

    A 4-core CPU has no use at home unless you are a content creator. I'm a software engineer, and I don't think any of the colleagues I work with knows how to write an app that will take advantage of 2 cores, let alone 4.

    Well, fortunately, some of this software has already been written just for you and your colleagues. Check out the make(1) manual page and look for the -j option...

    And no, it is not only for software engineering, either. Every time I come back from vacation, I use make [algebra.com] to convert my digital pictures from the camera's lossless "raw" format to lower-resolution JPEGs for web pages. Having four CPUs makes that process about four times faster. Great idea, hm?

    Your colleagues may be doofusen, but the people who will finally bring us reliable speech generation and parsing (as an example) will certainly be smart enough to take full advantage of multiple processors.

    Meanwhile, you can schedule a meeting to discuss using OpenMP [openmp.org] in your company's software... Compilers (including Visual Studio's and gcc) have been supporting this standard for some years now.
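    As a sketch of what that looks like in practice, assuming gcc with -fopenmp (convert_to_jpeg() below is a hypothetical stand-in for real conversion code):

        /* Sketch of the OpenMP suggestion above; build with e.g. gcc -fopenmp. */
        #include <omp.h>
        #include <stdio.h>

        #define NUM_IMAGES 200

        static void convert_to_jpeg(int i)
        {
            (void)i;   /* placeholder: decode raw image i, scale it, write a JPEG */
        }

        int main(void)
        {
            /* Each iteration is independent, so the runtime is free to hand
             * them out to as many cores as the machine has. */
            #pragma omp parallel for schedule(dynamic)
            for (int i = 0; i < NUM_IMAGES; i++)
                convert_to_jpeg(i);

            printf("converted %d images using up to %d threads\n",
                   NUM_IMAGES, omp_get_max_threads());
            return 0;
        }

    Of course, make -j4 gets you much the same effect at the process level without touching the code at all.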

  • by Terje Mathisen ( 128806 ) on Saturday August 25, 2007 @03:12PM (#20355363)
    When decoding "full HD" H.264, i.e. 40 Mbit/s Blu-ray or 30 Mbit/s HD DVD at 1080p resolution, current CPUs start to thrash the L2 cache:

    Each 1080p frame consists of approximately 2 M pixels, which means that the luminance info will need about 2 MB, right?

    Since the normal way to encode most of the frames is to have two source frames and one target, motion compensation (which can access any 4x4, 8x8 or 16x16 sub-block from either or both of the source frames) will need up to 2 + 2 + 2 = 6 MB as its working set.
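    A quick back-of-the-envelope check of those figures, assuming 8-bit luma samples and ignoring chroma planes and decoder bookkeeping:

        /* Sanity-check of the working-set estimate in the comment above. */
        #include <stdio.h>

        int main(void)
        {
            const long width  = 1920;
            const long height = 1080;

            long luma_per_frame = width * height;      /* one byte per pixel      */
            long working_set    = 3 * luma_per_frame;  /* 2 references + 1 target */

            printf("luma per frame: %.2f MB\n", luma_per_frame / (1024.0 * 1024.0));
            printf("working set   : %.2f MB\n", working_set    / (1024.0 * 1024.0));
            return 0;
        }

    That comes out to roughly the 2 MB per frame and 6 MB total cited above.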

    Terje
