Intel Releases Broadwell Desktop CPUs: Core i7-5775C and i5-5675C
edxwelch writes: Intel has finally released their Broadwell desktop processors. Featuring Iris Pro Graphics 6200, they take the integrated graphics crown from AMD (albeit costing three times as much). However, they are not as fast as the current Haswell flagship processors, and they will soon be superseded by Skylake, due later this year. Tom's Hardware and AnandTech have the first reviews of the Core i7-5775C and i5-5675C.
About *** time (Score:3)
I was afraid we would have Skylake ultrabook chips before Broadwell desktop. This was a close call.
Behind the news much, Slashdot? (Score:2)
Re:Behind the news much, Slashdot? (Score:4, Informative)
I'm not sure if you're joking, but even though your NUC is technically a desktop machine, it runs a mobile chipset. That's how they get it into such a compact package, by using chips and parts designed for laptops.
Re:Behind the news much, Slashdot? (Score:4, Informative)
No, not joking... Just an idiot who can't read.
Re: (Score:2)
Your NUC uses a BGA mobile chip in a "desktop". This is a CPU that you can drop onto a motherboard and pair with the video card of your choice, as opposed to your NUC.
What about AMD Godavari? (Score:5, Informative)
Tom's didn't test against AMD Godavari, which has a substantially faster GPU than the Kaveri chips Tom's tested against. Godavari is about 20% faster than Kaveri, so it would be competitive with these chips, as well as being about a third of the price.
Re: (Score:2)
Tom's didn't compare it, but AnandTech did, and Godavari is actually slower than Kaveri in more than half the games they tested. Check the benchmarks if you don't believe me. In Alien Isolation, Total War: Attila and GRID Autosport the 7870K is slower than the 7850K. That's 3 out of 5 games where it's slower!
Re:What about AMD Godavari? (Score:4, Informative)
Once you add an R7 240, the AMD chip is faster with dual graphics, and sometimes faster by itself. It's still cheaper buying the AMD chip and the 240 card than one Intel CPU.
The benchmark is somewhat bogus because they didn't enable dual graphics for the AMD chips against the Intel parts. There is a reason you would want an AMD chip if you don't have a lot of money: you can CrossFire it later.
But since nothing is CPU bound (Score:1)
It really doesn't matter. Desktop PCs need better IO of all kinds - disk and network primarily. Eleventy GHz machines don't do anything special coupled to a 3 Mbps DSL line, or running an OS on a 7200 rpm spindle.
Re: (Score:2)
Depends on your application. Most of us who would appreciate faster CPU speeds have already moved to SSDs and gigabit Ethernet for local network storage. Transcoding is processor-intensive even with a local SSD, and lots of media center applications - running on desktop hardware - are now transcoding for remote viewing devices. I appreciate the desire to reduce part count and beat costs down, but stealing CPU performance for an on-board GPU sounds like low-end chip work, not high-end i7 stuff.
Re: (Score:3)
You're misunderstanding the conclusion. Intel did not steal from CPU performance to improve the GPU; in fact the cores on Broadwell are slightly more efficient than Haswell's. Here's a quote from the Tom's Hardware article:
"As host processors, Core i5-5675C and Core i7-5775C should be marginally faster than Haswell-based CPUs at similar clock rates. The issue, of course, is that they employ lower frequencies than a number of previous-gen chips. So, they'll actually post lower scores in workloads that e
Re: (Score:2)
Re: (Score:2)
Clocking them down is not stealing from CPU performance? Your own quote contradicts what you're saying.
Sigh. If you'd read the article, you'd understand why your statement makes no sense. Tom's Hardware goes on to note that Broadwell is ~5% faster than Haswell at the same clock speed. The reason Broadwell shows slightly lower performance on some benchmarks is that it's capable of dropping down to lower clock speeds to conserve power. But when performance is called for, Broadwell quickly ramps up to the same clock speed as its predecessor. So for a sustained workload, Broadwell will be faster. It's onl
Re: (Score:2)
AFAICT Intel and their customers know there are people who want a PC but don't want a big box. They also know that some of those people will have money for a high-end product and don't want attention called to the fact that they're making a performance sacrifice by buying such a box. So they put their top-end brand on a chip that is designed for such boxes, just like they put their top-end brand on laptop chips. For those with a bit less money to burn they market a marginally less powerful version of the
Re: (Score:2)
There are probably some switches you can set to turn your leftover RAM into swap.
Re: (Score:2)
Re: (Score:1)
The i7 supports up to 32GB, so if you want more than that you're better off with a Xeon and a high-end motherboard.
Re: (Score:2)
Say what? This machine is an i7-3930K and has 64GB of RAM...
Re: (Score:2)
Depends on the model. Your i7 supports up to 64GB. The first ones IIRC were 24GB.
http://ark.intel.com/products/63697 [intel.com]
Of course, that all depends on finding a motherboard that supports an i7 (not a Xeon, which is a different socket) and actually accepts more than 32GB of RAM.
Re: (Score:2)
The demand for unbuffered memory limits the number of DIMMs each memory channel can support, and each memory channel adds to the cost of the processor and motherboard. This limits DDR3 and DDR4 to two DIMMs per channel unless the memory clock frequency is lowered.
Server processors and motherboards often support buffered memory, which allows more DIMMs per channel, but there is not enough demand to support this on desktops in the face of market segmentation.
Broadwell is yesterday's news. Bring on Skylake! (Score:1)
Re: (Score:2)
Idiots are waiting for HDMI 2.0.
People with brains are waiting for DisplayPort 1.3.
Re: (Score:2)
Both HDMI and DisplayPort are worth having. HDMI is what TV sets have; if you want to attach an affordable big UHD screen to your computer, HDMI 2.0 (the new version that supports 4K at 60 Hz) is what you need. (You can use an adapter, but a native HDMI port is more convenient.) Computer displays have a variety of inputs: DisplayPort, Thunderbolt, HDMI, and DVI-D (plus those legacy displays with analog VGA connections), so you're going to need adapters or adapter cables as often as not.
In the future all the computer
Re: (Score:2)
DP is superior to HDMI. Yes, trash TVs have HDMI, but that doesn't change the fact that DP is the better choice every single time.
As for Thunderbolt 3 taking over everything? Intel can't let a spec sit still for more than 6 months. It would take 6 years minimum for OEMs to adopt Thunderbolt 3 on hosts and peripherals to the point that they feel safe using it as the primary connection for everything. And by then we'll have Thunderbolt 9 (still over copper instead of optical).
And of course, there's no inc
Re: (Score:2)
albeit costing three times as much (Score:5, Insightful)
For the past 5 or 10 years this has been the story of me building new computers. I don't follow tech pages on architectures much any more, just when I go to build a new computer I go and see what the latest offerings from amd/intel/nvidia are.
For pretty much ever it's been, "AMD is kill, Intel rules all!" Except the fine print is that in order to rule all, you must pay 2x to 3x as much. So all of my performance/gaming computers for 17 years have been AMD/Nvidia (and VIA chipsets before Nvidia). (I have tried ATI a few times and just never cared for them.) And I get 3+ years out of each computer before it needs replacing.
Now, from a heat dissipation and power usage perspective, no amount of price/performance advantage can make up for Intel's lead there. And this is why I have not seen an AMD laptop in quite some time.
So why is AMD constantly on the verge of bankruptcy? Is there some Apple effect on Intel that causes people to throw money at them for no better performance? Do people simply not care how much they spend on computers? Is the laptop/mobile market cutting into PC/server sales that much? Or are they just poorly managed? Over 15 years on, I simply don't get it.
Re:albeit costing three times as much (Score:5, Informative)
The reason is simple - the headline is misleading you about needing to pay 2 to 3 times more. It's comparing the Intel chip to a relatively low-end AMD chip that happens to have a GPU, not to the high-end AMD chips that it actually competes against.
If you go look at the first review, you'll see that in the CPU speed tests, the i5-5675C turns out to be substantially (about 30%) faster than even the FX-9590 (AMD's fastest desktop chip). That and it has a decently fast GPU built in too.
The i5 costs $276 (list price, so likely higher than what you'll actually get it in the shops), the FX-9590 costs $249 (on newegg today). So that's a 10% markup for a 30% faster CPU with a very usable GPU on board. Most people see that as a pretty good deal.
Re: (Score:2)
True, this is all about integrated stuff that I do not care at all about.
But, I don't know, it's been 2+ years since the last time I looked at hardware (had to check my Newegg order history to figure out how long ago), so maybe things have changed; but I really tried to buy Intel last time out. But when you add in motherboards that cost $100 more for high-end boards, I just couldn't find a price point where Intel could match.
The result was I paid $200 for an FX-8350, which probably wasn't AMD'
Re: (Score:3)
Yes, the AMD FX line has not had any updates since you last went shopping, and it has gotten less and less competitive versus Intel's offerings, especially on single-threaded tasks and in work done per watt. It was widely believed they were actually going to abandon that market segment entirely, but the new Zen architecture is now planned to first appear as a revamped FX line.
Re:albeit costing three times as much (Score:5, Insightful)
The result was I paid $200 for an FX-8350, which probably wasn't AMD's fastest chip at the time
That same $200 would have bought you a Core i5, which is faster than the AMD chip in most respects while using less power.
Yes, there are edge cases where the AMD chip is faster. Are you one of those edge cases?
$120 for an ASRock motherboard with onboard RAID.
You can get nice Intel boards for about the same money, the $190 boards are overkill.
Of course, I was already planning a large case with a large heatsink/fan combo, so thermal concerns were not part of my calculation. If I wanted a reasonably sized computer, I would almost have to buy Intel.
Thermal may not matter, but how about your power bill?
The Intel chip will use less power, over 3 years of owning it, the power bill difference can easily wipe out any up front price difference.
And the FX-9590 is 220 Watts?? At this point I should be looking at price/W instead of price/$.
Insane, isn't it? These new Intel chips max out at 65w, and use less when the GPU isn't in heavy use.
However much time your computer is actually in use, times 150 W of power, times three years - how much is that on your power bill?
---
I'll be frank, a few years ago I didn't much consider the power consumption either, until I replaced my HVAC system with something from this century and then replaced all my incandescent bulbs with LED bulbs. I've started to do the math on how much of my monthly power bill is due to electronics, and the percentage is growing.
So I do now consider the typical lifetime power cost of something before I buy it, something I never used to do.
Re: (Score:2)
Remember that the 125W is only the max usage. This is a computer that gets turned off when I am not using it, so the power usage is minuscule compared to a refrigerator. (or the second beer fridge in the basement...)
However, if the new chips are going to double power usage, for very little gain in performance, well perhaps it is time for a change. Eventually... I don't see myself needing a new computer for a couple years. I have no illusions that AMD will actually start to care about power usage anytime so
Re: (Score:2)
Regarding "max power pull", you're right of course, when idle they all use less.
I will say, take a look at the "idle power" of AMD's chips and the "idle power" of the new Intel chips.
One of the reasons I upgraded from Sandy Bridge to Haswell was not speed (part of it, but not all of it), but power consumption.
Re: (Score:3)
Re: (Score:2)
Well I have an AMD computer, so I am obviously a sucker.
But this is interesting. I am using RAID1 mirroring only, as giant drives are so cheap and plentiful. So RAID performance really isn't an issue to me at all. Maybe there is no need for hardware RAID. The super high performance stuff I do all goes on an SSD anyhow.
I know the setup I have works as I did recently replace a drive. I have said many times before that I am done buying spinning disks, the next machine will be all SSD.
Re: (Score:2)
I am not even sure that hardware RAID is any faster than software RAID at this point. The bottleneck is surely the spinning disks. If anything, a CPU can probably do a better job at RAID and have enough spare processing power that you wouldn't notice a performance hit.
I don't think it really matters that much for a simple mirror. I think the only real benefit of software RAID in this case would be better reporting of statistics related to performance and integrity. You can use any software you want to
Re: (Score:2)
The result was I paid $200 for an FX-8350, which probably wasn't AMD's fastest chip at the time
Maybe not, but close - the FX-8370 is just a slightly better binning of the same part.
I remember all of the benchmarks compared it to the i7, which of course trounced it.
Funny thing about that - there were some pretty major discrepancies at the time between benchmarks done using Intel's compiler and those done using GCC. When using GCC, the FX smoked the i7 - it wasn't until the next generation (or possibly the one after that) that the FX started to lag behind. Even today it's reasonably competitive against (if not faster than) Haswell i5s.
The FX-9590 doesn't seem to be a significant step up in performance from the 8350.
The FX-9590 isn't even a step-up - it's the exact
Re: (Score:3)
I've never understood what market wants a powerful CPU paired with a middling, power-crippled, yet still expensive GPU, though, except in a laptop where it's all you've got. Pretty much every benchmark shows that if you want gaming performance, put almost all your money into the graphics card. I mean, the high-end processor is $366; you can get a $64 Intel G3260 and pair it with a $299 Radeon 290X for less, and that'll be a much, much better gaming machine, though it'll use 200 W more when you're playing.
Now if you rea
Re: (Score:1)
The FX-9590 is the fastest AMD CPU you can buy, and it is on par with the i7-4770K (as usual, behind in single-threaded tests, ahead in multi-threaded tests).
http://cpuboss.com/cpu/AMD-FX-... [cpuboss.com]
It IS faster than an i5, if you are after multi-threaded load.
On top of that, it carries the price premium that goes with being "my fastest processor".
It has no GPU.
Your choice of CPUs to compare the reviewed i5 against is questionable, to say the least.
The A10 APUs that AnandTech reviewed cost a half to a third of Intel's, yet are within 20% performance-wise.
Re: (Score:3)
Re: (Score:2)
Also fab access. Intel has kept their fabs far enough ahead to outweigh their other faults most of the time. AMD has arguably had a better architecture than Intel at times, but is stuck a couple of process nodes behind due to not having access to a comparable fab.
I am simply dumbstruck by how flat the performance has been for Intel. The power reductions are impressive, but the speed has been nearly flat for the last 4-5 years.
This latest round reeks of being an Apple specific processor. Anyone wanting a good machin
Re: (Score:2)
Re: (Score:3)
Why Intel generally thumps AMD in business (Score:5, Informative)
So why is AMD constantly on the verge of bankruptcy?
Because AMD has historically built their business model on making a product that is compatible with another company's product, and that other company (Intel) has a cost advantage in making the product and generally controls the architecture. Intel is quite the manufacturing juggernaut in microprocessors, whereas AMD has basically no manufacturing of their own. Intel also has a die-size lead, so AMD is typically playing catch-up. Intel can basically make a smaller, faster processor cheaper and sell it for less any time they want to. Hard to compete effectively with that. AMD has to be smarter than Intel, and they haven't shown themselves capable of doing that on a consistent basis. Even when their designs have been better, Intel has been able to leverage their die-size advantage to overcome design deficiencies. Furthermore, they've made some pretty bad tactical business errors (the acquisition of ATI hasn't been the smoothest) and Intel has been known to engage in some arguably shady business dealings with their customers.
Probably the only reason AMD is still with us is that Intel doesn't want the anti-trust scrutiny that would come with killing them off. Having AMD around gives Intel a "credible" competitor, albeit one that hasn't shown any meaningful ability to compete consistently. AMD has been trying to diversify away from PC microprocessors for a while now, with mixed success.
Re: (Score:3)
Actually for a while it was the other way around. AMD pioneered x86-64 and Intel was the one playing compatible catch-up when they tried to bank on IA-64 and it tanked badly.
However, AMD managed to squander any gains they had there and have fallen to a distant #2 once again.
64 Bit x86 (Score:3)
Actually for a while it was the other way around. AMD pioneered x86-64 and Intel was the one playing compatible catch-up when they tried to bank on IA-64 and it tanked badly.
That situation lasted for all of about 1-2 years, and even then AMD was never really able to capitalize on it, because Intel was better capitalized, had cost advantages, and 64-bit didn't matter enough at the time. While it was a misstep by Intel, it wasn't one they couldn't recover from. Putting out a 64-bit version of x86 wasn't exactly a huge technical challenge for Intel. Intel has made a number of mistakes over the years, but AMD simply has never been smart enough or well funded enough to make Intel pay for them.
Re: (Score:3)
Re: (Score:2)
AMD looks better in benchmarks than they actually perform for a lot of applications.
Let me be clear. If you're doing something like image processing, compression, or video work, an 8-core AMD chip is likely to be faster than a 4-core Intel chip. Except... the 4-core/8-thread i7 is likely to be faster than anything AMD makes, and if you're REALLY doing that kind of work, another $100 or so in computer cost is nothing compared to the time saved.
If you're doing basic Internet surfing, e-mail, Angry Birds, etc. T
Re: (Score:2)
5.5 years ago, I bought an Intel i7 860 and accompanying mid-range motherboard for 350 EUR. That means I've paid ~65 EUR/year, ~5 EUR/month for that combination, which is _still_ serving me ridiculously well (so much so that I really really really need to convince myself that I want to upgrade it -- it's far from necessary, but it 'feels' like it is time).
Taking into account that I work from home, for me it is pretty simple: I just can't be bothered to skimp by going AMD and shave off maybe 3 EUR/month on w
Re: (Score:2)
Why is AMD on the verge of bankruptcy? The lower performance of their chips means they have to sell them at lower price points to get any business at all. The fabrication technology they have access to is a couple of generations behind Intel's, so those lower-performance chips probably actually cost them more to make than Intel's faster chips, but they can't get nearly as much money for them.
For many years, AMD owned its chip fabs. They didn't have the money to make the necessary investments to keep t
Re: (Score:2)
When do we get a real boost over 2013 speeds? (Score:4, Insightful)
I've got a machine over two years old now - I do some pretty heavy number-crunching with GIS map programs and always tell the counter guy I want the nearest thing he's got to a machine that finishes infinite loops. After conceding that the next model up from the i7-3930K was $500 more for another 15% of horsepower, I picked that one.
I'm sure there have been a few percent of gains from two years of subsequent chips, but basically it's the same cores, same GHz. Is this 'Skylake' coming in several more months going to be more than a 10-15% upgrade over my early-2013 chip? (Actually, it's older - it probably came out in 2012?)
I really need to be buying a second machine in just a few months, but I'll endure some inconvenience if we're just a few months after that from a significant upgrade. But frankly, anything under 25-30% speedup in math operations will not be worth the wait.
They say Moore's Law is still going, and in low-power circles, I'd agree. But for the market segment of people who don't mind the computer doubling as a room heater if it'll just crunch numbers on a few million rows of geodatabase table a few minutes faster, it sure feels like Moore's is over for us.
Re: (Score:2)
Aren't GIS problems embarrassingly-parallel enough that you should be worried about finding a faster GPU (or maybe switching to a multi-socket Xeon system) rather than about having the fastest single-threaded performance?
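If the per-row work really is independent, here is a minimal sketch of spreading it across cores with Python's multiprocessing (process_feature is a hypothetical stand-in, not anyone's actual pipeline):

    from multiprocessing import Pool

    def process_feature(row):
        # Hypothetical per-row geoprocessing step; the real work might be
        # computing an area, reprojecting a geometry, a point-in-polygon test...
        return row * row

    if __name__ == "__main__":
        rows = range(1_000_000)  # stand-in for millions of geodatabase rows
        with Pool() as pool:     # defaults to one worker per CPU core
            results = pool.map(process_feature, rows, chunksize=10_000)
        print(len(results))

The catch is that a lot of GIS work bottlenecks on the database or the disk rather than the CPU, in which case extra cores (or a GPU) buy less than the core count suggests.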
Re: (Score:2)
Re: (Score:2)
I didn't think to mention it in my previous post, but a dual-socket AMD Opteron 63xx system might be a reasonable alternative. Opterons appear to be significantly cheaper than Xeons (and in some cases, cheaper than i7s), to the point that you could get a 16-24 core AMD system for close to the same price as a high-end i7 (let alone a Xeon). I have no idea which would win on the benchmarks, though.
By the way, what sort of open-source GIS tools do you use? I occasionally find myself wanting to do GIS-related s
Re: (Score:2)
As far as tools go, if you want to start out and play around first without getting buried under your own ignorance (I speak from experience here) try something like uDig GIS first. It isn't the most powerful, faste
Re: (Score:2)
I'm using PostGIS (PostgreSQL with a plug-in) for the back-end, and QGIS for the client. With the above-mentioned system and 32GB it utterly blows away the performance I get at work from a big ESRI server and Oracle with ESRI's SDE plug-in. That's probably because the corporate server is throttled per-user, but still, it means that home-user GIS with zero software costs is really here in convenient form. I've developed a "mapping system" that loads in about 50 layers for my city with one script (works in
Re: (Score:2)
My experience would indicate this, as I have an i7-3770K (I think that is what it is) and I can load all 8 virtual cores at 100% for extended periods.
If that is the case, then Haswell-E is probably what you need. Another 4 core chip won't help you nearly as much as having 8 real cores will.
Not the cheapest thing in the world, but if you're actually running for "extended periods" at 100% CPU usage, then maybe it is time.
Re: (Score:2)
Re: (Score:2)
Fair enough... I inferred from your post that you wanted more performance, that you wanted to know how to get a 25%+ speed jump.
i7-5960X would do it...
Not cheap, but have you considered such an 8-core, 16-thread system with 64GB of RAM, then using 32GB as a RAM disk and doing your work there rather than off an SSD?
You might find your 5-10 min becomes sub-5 min...
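One cheap way to gauge the RAM-disk idea on Linux: /dev/shm is normally a RAM-backed tmpfs (an assumption - check your system; a dedicated tmpfs mount behaves the same), so a rough timing comparison is a few lines of Python:

    import os
    import tempfile
    import time

    def time_write(directory, size_mb=256):
        # Write size_mb of data to a temp file in `directory` and time it.
        chunk = os.urandom(1024 * 1024)
        start = time.time()
        with tempfile.NamedTemporaryFile(dir=directory) as f:
            for _ in range(size_mb):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())
        return time.time() - start

    print("RAM (tmpfs):", round(time_write("/dev/shm"), 2), "s")
    print("disk/SSD   :", round(time_write("."), 2), "s")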
Keep in mind that your jump from an Athlon 64x2 to an Ivy Bridge was about a 10 year leap in technology. Even the Core2Duo was faster nearly 10 yea
Re: (Score:2)
Re: (Score:2)
Wow, you really are dedicated to major jumps.
Fair enough, more power to you and all. :)
The 486/75 to K6-2 500 is a HUGE jump, I personally couldn't have waited that long. :)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
There are plenty of i7s that support more than 32GB of RAM. I'm using an i7-3930K with 64GB right now, and there are others that support 128GB as well.
dual socket X 16 core = 32 cores (Score:2)
A dual-socket mobo with a 16-core AMD CPU in each socket will probably spank your current Intel system. That's one area AMD excels in: they sell 8- and 12-core CPUs cheap, and 16-core if you're serious.
Re: (Score:2)
The i7-3930K is a 6-core CPU. If you want to stick with a single-CPU system, you could go with a Xeon E5-1691 v3, which gives you 14 cores.
If you are willing to go with a dual-CPU system, you could go with something like the Xeon E5-2698, which has 16 cores per processor (for a
Re: (Score:2)
I do high-performance computing for a living, and Moore's Law has been on its last gasps for a while now.
Until around 2006, the smaller you made a transistor, the faster it could run. This was called Dennard scaling. But once transistors shrink below a certain size, current leakage and thermal issues prevent you from making them faster.
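Concretely (this is the standard constant-field scaling sketch, not something from the comment): shrink dimensions and voltage by a factor kappa and raise frequency by kappa, and per-transistor power falls exactly as fast as area, so power density holds steady:

    % Dennard (constant-field) scaling by a factor \kappa > 1:
    %   C -> C/\kappa,  V -> V/\kappa,  f -> \kappa f,  area -> A/\kappa^2
    \[
      P \propto C V^2 f \;\longrightarrow\;
      \frac{C}{\kappa} \cdot \frac{V^2}{\kappa^2} \cdot (\kappa f)
      = \frac{P}{\kappa^2},
      \qquad
      \frac{P/\kappa^2}{A/\kappa^2} = \frac{P}{A} \quad \text{(power density unchanged)}
    \]

Once leakage stops V from scaling any lower, the V^2 f term stops shrinking, and pushing f up raises power density - exactly the wall described here.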
While they can't drive transistors any faster, smaller processes still allow them to put *more* transistors on a chip. This is why we've gone from single-core to mul
Re: (Score:2)
Keep in mind that Intel has focused much of its efforts the past 6 years on power reduction, not speed.
The speed can come later, at the moment they don't need it due to lack of competition.
---
Consider: in 2009, the i7-920 running at 2.66 GHz was a 130W CPU.
Today, these new CPUs are running at 3.3 GHz with a turbo to 3.7 GHz, with a FAR superior iGPU, along with better IPC, while pulling half the power, 65W.
To get similar performance out of the i7-920 chip you'd need to run it at 4 GHz, perhaps a bit more, to count
Re: (Score:2)
I've got a machine over two years old now - I do some pretty heavy number-crunching with GIS map programs and always tell the counter guy I want the nearest thing he's got to a machine that finishes infinite loops. After conceding that the next model up from the i7-3930K was $500 more for another 15% of horsepower, I picked that one.
It is very rare that 2 years will provide a huge jump in performance.
The exceptions are when major new developments come out.
The Core2Duo was one such development; it was so much faster than NetBurst, it was obvious and major.
I remember the old Athlon Thunderbird chips; those were good, and a nice upgrade over a mid-range Pentium II.
Other times, CPU speed tends to just sit around. The jump from a 486DX2-66 to a Pentium 75 was very ho-hum back in the day, at least until Windows 95 showed up, then it beca
Re: (Score:2)
Maybe for some workloads. For others, a Pentium was a significant upgrade. I used to play Quake, a lot, and upgrading from a 486DX4/133 to a Pentium 133 was like night and day.
Re: (Score:2)
I would expect it would be, given that the 486 was limited to a 33 MHz bus compared to the Pentium 133's 66 MHz bus.
Like I said, the jump from a 486DX2-66 to a Pentium 75 was rather ho-hum; at the time, those two chips were mainstream. The 486 was still selling strong in 1994 when the Pentium 75 came out. By the end of 1995, when the Pentium 133 was released, the 486 was no longer mainstream, being really slow for Windows 95 at that point. Keep in mind that the Pentium 75 had a 50 MHz bus, compared
Re: (Score:2)
That may have been the official name, but many people who sold them called them DX4/133s. I've certainly never heard them called "Am5x86-P75" before, and I've been in the business since the early 80s.
Re: (Score:2)
I owned a small computer networking business back in the 90s, we sold a LOT of those chips, and we of course called them P75s... :)
Re: (Score:2)
Re:When do we get a real boost over 2013 speeds? (Score:4, Informative)
Only so much juice you can squeeze from an orange, dude. I'm still using a setup from 2008 since nothing yet guarantees the 100% improvement that would make me upgrade.
Personally, any upgrade would be for a motherboard with USB 3.1, PCIe 4.0, and DDR4. Basically, faster I/O. I wouldn't be upgrading for more processing power, as the i7-3770 works perfectly fine for just about everything that I throw at it.
Re: (Score:2)
Same here. SATA3 would be nice over my SATA1 for my SSD, but my i7-920 is not very far behind the current crop of CPUs considering it is 6 years old. I at least had hopes for 6-8 cores by now, but 4 is still considered plenty, so that is what can be had without getting too gouged.
The die pictures are pretty disappointing too. The cores are pretty small; surely adding a couple more would not increase the die size much on a percentage basis, but Intel seems to want to more than double the price for their
Re: (Score:3)
Same here. SATA3 would be nice over my SATA1 for my SSD, but my i7-920 is not very far behind the current crop of CPUs considering it is 6 years old. I at least had hopes for 6-8 cores by now, but 4 is still considered plenty, so that is what can be had without getting too gouged.
If you're wondering where all the "improvements" went over the past 6 years...
Your i7-920 is a 130W CPU running at 2.66 GHz. This new Broadwell chip is a 65W CPU running at 3.3 GHz turboed to 3.7 GHz, and it is about 20% faster per clock cycle, making it about as fast as your current system if it were running at 4 GHz, while pulling half the power and having a nearly 10 times better iGPU.
You may not think much has changed, but that is actually a huge change, and it is where the improvements have gone.
If yo
Re: (Score:2)
So a 2x speedup in 6.5 years (unless you take into account the nearly 1.5x overclocking of my i7-920). Sure, lower dissipation is worth something, but sorry if my socks aren't blown off by a speed doubling every 6.5 years.
The Haswell-E drops the clock 10%, so you net a 1.35x speedup at best.
And I am painfully aware of the clock speed drop. We plunked down a serious chunk of change for a dual Xeon box (16 total cores) to run our electromagnetic simulations on, only to have it run about 10% slower than our
Re: (Score:2)
So a 2x speedup in 6.5 years (unless you take into account the nearly 1.5x overclocking of my i7-920). Sure, lower dissipation is worth something, but sorry if my socks aren't blown off by a speed doubling every 6.5 years.
I understand your point... I was simply saying where the development went... Rather than speed, less power consumption was the goal...
Do you not think that a doubling of speed while halving power consumption is impressive? I do in any case... what it also tells me is that for the same power, they could have quadrupled performance (give or take), if that had been their goal, but it clearly wasn't.
And you're right, software hasn't done a great job of adapting to many cores, which is a shame.
Re: (Score:2)
The doubling is impressive (except for it taking 6.5 years - if you computed through the '90s, you remember much faster doublings), but the power consumption is irrelevant. I'm not running a data centre, and I can afford a $2500 computer, so paying 0.130 kW x 9 cents/kWh = 1.17 cents per hour to run the chip is not worth mentioning.
Re: (Score:2)
Perhaps not, but at 8 hours a day, 5 days a week (40 hours a week), that is more than $50 over 3 years, which more or less wipes out any savings from having purchased the AMD chip over the Intel chip.
It increases your carbon footprint (if you care about that sort of thing), and it is twice as bad if you have to air-condition the space, since that same heat must now be cooled, nearly doubling the cost to $100 over 3 years. That is somewhat offset if you would otherwise have heated the space, of course.
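To make that arithmetic easy to check, a minimal sketch (the 130 W and 65 W figures and the 9 cents/kWh rate come from this thread; your rate and duty cycle will differ):

    def power_cost(watts, hours_per_week, years, usd_per_kwh=0.09):
        # Electricity cost of running a part at `watts` on the given schedule.
        hours = hours_per_week * 52 * years
        return watts / 1000 * hours * usd_per_kwh

    # 8 hours/day, 5 days/week = 40 hours/week, over 3 years
    old = power_cost(130, 40, 3)  # a 130 W chip (i7-920 class)
    new = power_cost(65, 40, 3)   # a 65 W chip (these Broadwells)
    print(f"130 W: ${old:.2f}  65 W: ${new:.2f}  difference: ${old - new:.2f}")
    # -> 130 W: $73.01  65 W: $36.50  difference: $36.50

Roughly double whichever figure applies if the room is air-conditioned, as noted above.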
Re: (Score:2)
2008 would be a Core2Duo or Core2Quad, probably running near 3 GHz.
It depends of course on what you're doing with it, and whether you care about power consumption, but a Core i7-4790K will kick the pants off the Core2 line all day long for CPU-intensive work.
The 100% speed boost is there, if you need it.
What am I missing?? (Score:2)
I don't get it. These are slower than, and a downgrade from, other 2013 CPUs for socket 1150. So why would I upgrade from what I have? Did I miss something special? If you can't afford an X99 build, it would make more sense to get a two-year-old i7-4770.
Re: (Score:3)
These are budget CPUs aimed at low power consumption. The initial Broadwell offering was for mobile, and this is their first desktop offering, targeting business and office use with pretty decent integrated graphics, but nothing you'd want for gaming. You would stick with your i7-4770 until the 14nm line begins targeting performance computing.
Re: (Score:2)
They are not priced like budget CPUs; in fact they look like a 5% price increase from what I could dig up. So you can pay more for a top-of-the-line CPU with slightly less awful integrated graphics. Great...
Intel is really not blowing my skirt up.
Re: (Score:2)
I don't get it. These are slower than, and a downgrade from, other 2013 CPUs for socket 1150. So why would I upgrade from what I have? Did I miss something special? If you can't afford an X99 build, it would make more sense to get a two-year-old i7-4770.
The slowest [new gen] CPU is slower than the fastest [last gen] CPU. This is normal. The bazillion-core, global-warming Broadwell will presumably come out later. It's faster per core and lower power.
In a hockey stick situation like this... (Score:2)
Basically, every few additional percentage points of performance cost untold billions in investment. Thus it is possible to tailgate the market leader by producing something only a little bit slower while spending half the money. As long as performance/watt and performance/rack are not outrageously bad (so data centers will not shun you) AND Intel does not engage in monopolistic tactics (big question mark here), it's possible to make a decent living.
We can all be generals now. (Score:2)
Like David Petraeus, we can all now have a Broadwell under our desks.
So answer me this... (Score:1)
Will i5 prices go down this month?
I'm building a gaming/mini-simulation computer, and I have a mix of poor-student syndrome along with excessive computer-drooling disease (much more debilitating). Basically all I want is a fast GPU (Nvidia GTX 960), but I can't help but want a fast processor too, so I've been comparing the i5s, i3s and Pentium G3450s.
Should I wait to buy an i5? or should I stick with the cheap processor? or maybe you know of a motherboard/cpu combo deal that will cut the cost of an i5 ju
Re: (Score:2)
For gaming purposes, the processor is definitely not your bottleneck, as long as it supports enough PCI Express slots to fulfill the needs of your video cards.
Re: (Score:2)
*err PCI lanes
oops, they left the chipset behind (Score:1)
Good. Older parts will come down in price. (Score:2)
It has been a very long time since I bought a "latest and greatest" chip when building a new computer, because 2-3 revisions old still has many times the performance of the machine it's replacing and far more "snort" than I could ever use on day-to-day activities.
With any luck, this announcement and release will bring the price down on the chips I want by another $100 or so by January-February, when I hope to actually be building a new machine.
The bleeding edge is fine for gamers and hard-core video en
Re: (Score:3)
I can see that in the not-too-distant future DRAM will be replaced with flash. Not some fancy DRAM+flash combo, just simple flash storage, since the CPU already has enough fast on-die memory.
That's a horrible idea - flash memory's write characteristics just don't match that use at all: writes are orders of magnitude slower than DRAM's, and flash cells wear out after a limited number of write cycles.
Re: (Score:2)
Re: (Score:3)
If someone is able to talk directly to your computer's network ports without your permission, you have bigger problems than a VNC-like option in your BIOS that you can turn off.
Seriously.
Similarly for uploads, etc. There's a reason that we have things like authenticating proxies, firewalls and all that other jazz that people moan about.
If you're vaguely techy and think this means it's beyond your control, you really shouldn't be on this website.