Intel Releases Broadwell Desktop CPUs: Core i7-5775C and i5-5675C

edxwelch writes: Intel has finally released their Broadwell desktop processors. Featuring Iris Pro Graphics 6200, they take the integrated graphics crown from AMD (albeit at three times the cost). However, they are not as fast as the current Haswell flagship processors, and they will soon be superseded by Skylake, due later this year. Tom's Hardware and Anandtech have the first reviews of the Core i7-5775C and i5-5675C.
  • by Carewolf ( 581105 ) on Tuesday June 02, 2015 @08:45AM (#49821645) Homepage

    I was afraid we would have Skylake ultrabook chips before Broadwell desktop. This was a close call.

  • Did I miss something here? I've run a Broadwell i5 in my NUC for about three months now.
  • by Freedom Bug ( 86180 ) on Tuesday June 02, 2015 @09:08AM (#49821801) Homepage

    Tom's didn't test against AMD's Godavari, which has a substantially faster GPU than the Kaveri chips Tom's tested against. Godavari is about 20% faster than Kaveri, so it would be competitive with these chips, as well as being about 1/3rd of the price.

    • Tom's didn't compare it, but Anandtech did, and Godavari is actually slower than Kaveri in more than half the games they tested. Check the benchmarks if you don't believe me. In Alien Isolation, Total War: Attila and GRID Autosport the 7870K is slower than the 7850K. That's 3 out of 5 games where it's slower!

    • by bongey ( 974911 ) on Tuesday June 02, 2015 @09:56AM (#49822185)
      The Intel CPU is on par with the Godavari over here: http://www.anandtech.com/show/... [anandtech.com] .
      Once you add an R7 240, the AMD chip is faster with dual graphics, and sometimes faster by itself. It's still cheaper to buy the AMD chip and the 240 card than one Intel CPU.
      It's a somewhat bogus benchmark, because they didn't enable dual graphics for the AMD chips in the Intel comparison. There is a reason you would want an AMD chip: you can CrossFire later if you don't have a lot of money.
  • by Anonymous Coward

    It really doesn't matter. Desktop PCs need better IO of all kinds - disk and network primarily. Eleventy GHz machines don't do anything special coupled to a 3 Mbps DSL line, or running an OS on a 7200 rpm spindle.

    • Depends on your application. Most of us who would appreciate faster CPU speeds have already moved to SSDs and gigabit Ethernet for local network storage. Transcoding is processor-intensive even with a local SSD, and lots of media center applications - running on desktop hardware - are now transcoding for remote viewing devices. I appreciate the desire to reduce part count and beat costs down, but stealing CPU performance for an on-board GPU sounds like low-end chip work, not high-end i7 stuff.

      • You're misunderstanding the conclusion. Intel did not steal from CPU performance to improve the GPU; in fact, the cores on Broadwell are slightly more efficient than Haswell's. Here's a quote from the Tom's Hardware article:

        "As host processors, Core i5-5675C and Core i7-5775C should be marginally faster than Haswell-based CPUs at similar clock rates. The issue, of course, is that they employ lower frequencies than a number of previous-gen chips. So, they'll actually post lower scores in workloads that e

        • Clocking them down is not stealing from CPU performance? Your own quote contradicts what you're saying.
          • Clocking them down is not stealing from CPU performance? Your own quote contradicts what you're saying.

            Sigh. If you'd read the article, you'd understand why your statement makes no sense. Tom's Hardware goes on to note that Broadwell is ~5% faster than Haswell at the same clock speed. The reason Broadwell shows slightly lower performance on some benchmarks is that it's capable of dropping down to lower clock speeds to conserve power. But when performance is called for, Broadwell quickly ramps up to the same clock speed as its predecessor. So for a sustained workload, Broadwell will be faster. It's onl

      • AFAICT, Intel and their customers know there are people who want a PC but don't want a big box. They also know that some of those people will have money for a high-end product and don't want attention called to the fact that they're making a performance sacrifice by buying such a box. So Intel puts its top-end brand on a chip that is designed for such boxes, just like they put their top-end brand on laptop chips. For those with a bit less money to burn they market a marginally less powerful version of the

    • Idiots are waiting for HDMI 2.0.
      People with brains are waiting for DisplayPort 1.3.

      • Both HDMI and DisplayPort are worth having. HDMI is what TV sets have; if you want to attach an affordable big UHD screen to your computer, HDMI 2.0 (the new version that supports 4K) is what you need. (You can use an adapter, but a native HDMI port is more convenient.) Computer displays have a variety of inputs: DisplayPort, Thunderbolt, HDMI, and DVI-D (plus those legacy displays with analog VGA connections), so you're going to need adapters or adapter cables as often as not.

        In the future all the computer

        • DP is superior to HDMI. Yes, trash TVs have HDMI, but that doesn't change the fact that DP is the better choice every single time.

          As for Thunderbolt 3 taking over everything? Intel can't let a spec sit still for more than 6 months. It would take 6 years minimum for OEMs to adopt Thunderbolt 3 on hosts and peripherals to the point that they feel safe using it as the primary connection for everything. And by then we'll have Thunderbolt 9 (still over copper instead of optical).
          And of course, there's no inc

          • Good luck if your DP adapter goes missing when you have a midnight panel or movie showing to do. You probably can't just pop down to a local store and buy one, but you can get HDMI cables anywhere. Around here you can buy them 24/7 because they have them at CVS, and I'm sure Wally World also sells them. That's why I'd rather have both ports on my system, and if I have to have just one I'd rather have HDMI.
  • by nitehawk214 ( 222219 ) on Tuesday June 02, 2015 @09:28AM (#49821981)

    For the past 5 or 10 years, this has been the story of my building new computers. I don't follow tech pages on architectures much anymore; just when I go to build a new computer, I go and see what the latest offerings from AMD/Intel/Nvidia are.

    For pretty much ever it is, "AMD is kill, Intel rules all!" Except the fine print is that in order to rule all, you must pay 2x to 3x as much. So all of my performance/gaming computers for 17 years have been AMD/Nvidia (and VIA chipsets before Nvidia). (I have tried ATI a few times and just never cared for them.) And I get 3+ years out of each computer before it needs to be replaced.

    Now, from a heat dissipation and power usage perspective, no amount of price/performance can compensate for Intel's lead. And this is why I have not seen an AMD laptop in quite some time.

    So why is AMD constantly on the verge of bankruptcy? Is there some Apple effect on Intel that causes people to throw money at them for no better performance increase? Do people simply not care how much they spend on computers? Is the laptop/mobile market cutting into PC/server that much? Or are they just poorly managed? Over 15 years and I simply don't get it.

    • by beelsebob ( 529313 ) on Tuesday June 02, 2015 @09:38AM (#49822049)

      The reason is simple - the headline is misleading you about needing to pay 2 to 3 times more. It's comparing the Intel chip to a relatively low-end AMD chip that happens to have a GPU, not to the high-end AMD chips that it actually competes against.

      If you go look at the first review, you'll see that in the CPU speed tests, the i5-5675C turns out to be substantially (about 30%) faster than even the FX-9590 (AMD's fastest desktop chip). That and it has a decently fast GPU built in too.

      The i5 costs $276 (list price, so likely higher than what you'll actually pay in the shops); the FX-9590 costs $249 (on Newegg today). So that's a 10% markup for a 30% faster CPU with a very usable GPU on board. Most people see that as a pretty good deal.

      • True, this is all about integrated stuff that I do not care at all about.

        But, I don't know, it's been 2+ years since the last time I looked at hardware (had to look at my Newegg order history to figure out how long ago), so maybe things have changed; but I really tried to buy Intel last time out. When you add in the motherboards that cost $100 more for high-end boards, I just couldn't find a price point where Intel was able to match.

        The result was I paid $200 for an FX-8350, which probably wasn't AMD'

        • Yes, the AMD FX line has not had any updates since you last went shopping, and it has gotten less and less competitive versus Intel's offerings, especially on single-threaded tasks and in work done per watt. It was widely believed they were actually going to abandon that market segment entirely, but the new Zen architecture is now planned to first appear as a revamped FX line.

        • by FlyHelicopters ( 1540845 ) on Tuesday June 02, 2015 @01:30PM (#49824129)

          The result was I paid $200 for an FX-8350, which probably wasn't AMD's fastest chip at the time

          That same $200 would have bought you a Core i5, which is faster than the AMD chip in most respects while using less power.

          Yes, there are edge cases where the AMD chip is faster. Are you one of those edge cases?

          $120 for an ASRock motherboard with onboard RAID.

          You can get nice Intel boards for about the same money, the $190 boards are overkill.

          Of course, I was already planning a large case with a large heatsink/fan combo, so thermal concerns were not part of my calculation. If I wanted a reasonably sized computer, I would almost have to buy Intel.

          Thermal may not matter, but how about your power bill?

          The Intel chip will use less power; over 3 years of owning it, the power bill difference can easily wipe out any up-front price difference.

          And the FX-9590 is 220 Watts?? At this point I should be looking at price/W instead of price/$.

          Insane, isn't it? These new Intel chips max out at 65w, and use less when the GPU isn't in heavy use.

          However much time your computer is actually in use, times 150w of power, times three years, is how much in your power bill?
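          To make that arithmetic concrete, here is a minimal Python sketch; the 150 W delta matches the figure above, but the hours of use and the $0.12/kWh rate are illustrative assumptions, not figures from any review:

            # Back-of-envelope cost of a 150 W power-draw difference over three years.
            # All inputs are assumptions; adjust for your own usage and utility rate.
            WATT_DELTA = 150        # extra draw of the hungrier chip, in watts
            HOURS_PER_DAY = 6       # time the machine is actually under load
            RATE_PER_KWH = 0.12     # USD per kilowatt-hour (varies by region)
            YEARS = 3

            kwh = WATT_DELTA / 1000 * HOURS_PER_DAY * 365 * YEARS
            print(f"{kwh:.0f} kWh -> ${kwh * RATE_PER_KWH:.2f}")
            # -> 986 kWh -> $118.26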

          ---

          I'll be frank, a few years ago I didn't much consider the power consumption either, until I replaced my HVAC system with something from this century and then replaced all my incandescent bulbs with LED bulbs. I've started to do the math on how much of my monthly power bill is due to electronics, and the percentage is growing.

          So I do now consider the typical lifetime power cost of something before I buy it, something I never used to do.

          • Remember that the 125W is only the max usage. This is a computer that gets turned off when I am not using it, so the power usage is minuscule compared to a refrigerator. (or the second beer fridge in the basement...)

            However, if the new chips are going to double power usage, for very little gain in performance, well perhaps it is time for a change. Eventually... I don't see myself needing a new computer for a couple years. I have no illusions that AMD will actually start to care about power usage anytime so

            • Regarding "max power pull", you're right of course, when idle they all use less.

              I will say, take a look at the "idle power" of AMD's chips and the "idle power" of the new Intel chips.

              One of the reasons I upgraded from Sandy Bridge to Haswell was not speed (part of it, but not all of it), but power consumption.

        • Onboard RAID is for suckers. Just do a software RAID. Processors are really fast now. You don't need dedicated RAID hardware anymore, especially not the cheap chips they put on motherboards just to say they do RAID. Also, with a software RAID you are not relying on a particular chip's implementation of RAID. You can take the drives out, stick them on a different controller in a different computer, and still have access to your data as long as you are running the same RAID software.
          • Well I have an AMD computer, so I am obviously a sucker.

            But this is interesting. I am using RAID1 mirroring only, as giant drives are so cheap and plentiful. So RAID performance really isn't an issue to me at all. Maybe there is no need for hardware RAID. The super high performance stuff I do all goes on an SSD anyhow.

            I know the setup I have works as I did recently replace a drive. I have said many times before that I am done buying spinning disks, the next machine will be all SSD.

            • I am not even sure that hardware RAID is any faster than software RAID at this point. The bottleneck is surely the spinning disks. If anything, a CPU can probably do a better job at RAID and have enough spare processing power that you wouldn't notice a performance hit.

              I don't think it really matters that much for a simple mirror. I think the only real benefit of a software RAID in this case would be better reporting of statistics related to performance and integrity. You can use any software you want to
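              A rough way to see how cheap the RAID math itself is: time an XOR parity pass (the core RAID 5 operation) over two in-memory buffers. A minimal Python sketch; the 64 MiB buffer size is an arbitrary assumption:

                import os, time

                SIZE = 64 * 1024 * 1024                    # 64 MiB per buffer (assumption)
                a = int.from_bytes(os.urandom(SIZE), "little")
                b = int.from_bytes(os.urandom(SIZE), "little")

                start = time.perf_counter()
                parity = a ^ b                             # one XOR pass over both buffers
                elapsed = time.perf_counter() - start
                print(f"{SIZE / elapsed / 1e9:.1f} GB/s")  # typically far above HDD speeds

              Even this pure-Python version tends to outrun a spinning disk by a wide margin, which is the point: parity is not where the bottleneck lives.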

        • by rdnetto ( 955205 )

          The result was I paid $200 for an FX-8350, which probably wasn't AMD's fastest chip at the time

          Maybe not, but close - the FX-8370 is just a slightly better binning of the same part.

          I remember all of the benchmarks compared it to the i7, which of course trounced it.

          Funny thing about that - there were some pretty major discrepancies at the time between benchmarks done using Intel's compiler and those done using GCC. When using GCC, the FX smoked the i7 - it wasn't until the next generation (or possibly the one after that) that the FX started to lag behind. Even today it's reasonably competitive with (if not faster than) Haswell i5s.

          The FX-9590 doesn't seem to be a significant step up in performance from the 8350.

          The FX-9590 isn't even a step-up - it's the exact

      • by Kjella ( 173770 )

        I've never understood what market wants a powerful CPU paired with a middling, power-crippled, yet still expensive GPU, though, except in a laptop where it's all you've got. Pretty much every benchmark shows that if you want gaming performance, put almost all your money in the graphics card. I mean, the high-end processor is $366; you can get a $64 Intel G3260 and pair it with a $299 Radeon 290X for less, and that'll be a much, much better gaming machine, though it'll use 200W more when you're playing.

        Now if you rea

      • by Kartu ( 1490911 )

        The FX-9590 is the fastest AMD CPU you can buy, and it is on par with the i7-4770K (as usual, behind in single-threaded tests, ahead in multi-threaded tests).
        http://cpuboss.com/cpu/AMD-FX-... [cpuboss.com]
        It IS faster than an i5, if you are after multi-threaded loads.
        On top of that, its price carries the margin that comes with being "my fastest processor".
        It has no GPU.

        Your choice of CPUs to compare the reviewed i5 to is questionable, to say the least.
        The A10 APUs that Anandtech reviewed cost a half to a third of Intel's, yet are within 20% performance-wise.

    • Comment removed based on user account deletion
      • Also fab access. Intel has kept their fabs far enough ahead to make up for their other faults most of the time. AMD has arguably had a better architecture than Intel at times, but is stuck a couple of process nodes behind due to not having access to a comparable fab.

        I am simply dumbstruck at how flat Intel's performance has been. The power reductions are impressive, but the speed has been nearly flat for the last 4-5 years.

        This latest round reeks of being an Apple-specific processor. Anyone wanting a good machin

    • by bongey ( 974911 )
      For some reason Anandtech didn't enable CrossFire in any of the benchmarks. Who in their right mind would have an AMD APU and an AMD graphics card and disable CrossFire, unless you are trying to make Intel appear much faster?
    • by sjbe ( 173966 ) on Tuesday June 02, 2015 @10:20AM (#49822377)

      So why is AMD constantly on the verge of bankruptcy?

      Because AMD's business model has historically been making a product that is compatible with another company's product, and that other company (Intel) has a cost advantage in making the product and generally controls the architecture. Intel is quite the manufacturing juggernaut in microprocessors, whereas AMD has basically no manufacturing of their own. Intel also has a lead in process node, so AMD is typically playing catch-up. Intel can basically make a smaller, faster processor cheaper and sell it for less any time they want to. It's hard to compete effectively with that. AMD has to be smarter than Intel, and they haven't shown themselves capable of doing that on a consistent basis. Even when their designs have been better, Intel has been able to leverage their process advantage to overcome design deficiencies. Furthermore, they've made some pretty bad tactical business errors (the acquisition of ATI hasn't been the smoothest), and Intel has been known to engage in some arguably shady business dealings with their customers.

      Basically probably the only reason AMD is still with us is that Intel doesn't want the anti-trust scrutiny that would come with killing them off. Having AMD around gives Intel a "credible" competitor, albeit one that hasn't shown any meaningful ability to compete consistently. AMD has been trying to diversify away from just PC microprocessors for a while now with mixed success.

      • Actually for a while it was the other way around. AMD pioneered x86-64 and Intel was the one playing compatible catch-up when they tried to bank on IA-64 and it tanked badly.

        However, AMD managed to squander any gains they had there and has fallen to a distant #2 once again.

        • Actually for a while it was the other way around. AMD pioneered x86-64 and Intel was the one playing compatible catch-up when they tried to bank on IA-64 and it tanked badly.

          That situation lasted for all of about 1-2 years, and even then AMD was never really able to capitalize on it, because Intel was better capitalized, had cost advantages, and 64-bit didn't matter enough at the time. While it was a misstep by Intel, it wasn't one they couldn't recover from. Producing a 64-bit version of x86 wasn't exactly a huge technical challenge for them. Intel has made a number of mistakes over the years, but AMD simply has never been smart enough or well funded enough to make Intel pay for them.

          • by Kremmy ( 793693 )
            I was bummed as hell that they weren't able to capitalize on it. I watched Apple jump from the 64-bit PowerPC G5 CPUs to 32-bit Intel CPUs, which I considered a HUGE step backwards. AMD was the only company producing a 64-bit desktop CPU that wasn't the G5 at the time, and they got glossed over. I think we'd be seeing a very different field right now if AMD had won the Apple contract. Hoping they can capitalize on their gains being the supplier for the console chips and start giving Intel some real competition
    • AMD looks better in benchmarks than they actually perform for a lot of applications.

      Let me be clear: if you're doing something like image processing, compression, or video work, an 8-core AMD chip is likely to be faster than a 4-core Intel chip. Except... the 4-core, 8-thread i7 is likely to be faster than anything AMD makes, and if you're REALLY doing that kind of work, another $100 or so in computer cost is nothing compared to the time saved.

      If you're doing basic Internet surfing, e-mail, Angry Birds, etc. T

    • 5.5 years ago, I bought an Intel i7 860 and accompanying mid-range motherboard for 350 EUR. That means I've paid ~65 EUR/year, ~5 EUR/month for that combination, which is _still_ serving me ridiculously well (so much so that I really really really need to convince myself that I want to upgrade it -- it's far from necessary, but it 'feels' like it is time).

      Taking into account that I work from home, for me it is pretty simple: I just can't be bothered to skimp by going AMD and shave off maybe 3 EUR/month on w

    • Why is AMD on the verge of bankruptcy? The lower performance of their chips means they have to sell them at lower price points to get any business at all. The fabrication technology they have access to is a couple of generations behind Intel's, so those lower performance chips are probably actually costing them more to make than Intel's faster chips but they can't get nearly as much money for them.

      For many years, AMD owned its chip fabs. They didn't have the money to make the necessary investments to keep t

  • by rbrander ( 73222 ) on Tuesday June 02, 2015 @09:29AM (#49821989) Homepage

    I've got a machine over two years old now - I do some pretty heavy number-crunching with GIS map programs and always tell the counter guy I want the nearest thing he's got to a machine that finishes infinite loops. After conceding that the next model up from the i7-3930K was $500 more for another 15% of horsepower, I picked that one.
    I'm sure there have been a few percent of gains in two years of subsequent chips, but basically it's the same cores at the same GHz. Is this Skylake, several more months out, going to be more than a 10%-15% upgrade over my early-2013 chip? (Actually, it's older; it probably came out in 2012.)
    I really need to be buying a second machine in just a few months, but I'll endure some inconvenience if we're just a few months after that from a significant upgrade. But frankly, anything under a 25-30% speedup in math operations will not be worth the wait.
    They say Moore's Law is still going, and in low-power circles I'd agree. But for the market segment of people who don't mind the computer doubling as a room heater if it'll just crunch numbers on a few million rows of geodatabase table a few minutes faster, it sure feels like Moore's is over for us.

    • Aren't GIS problems embarrassingly-parallel enough that you should be worried about finding a faster GPU (or maybe switching to a multi-socket Xeon system) rather than about having the fastest single-threaded performance?
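      For the embarrassingly-parallel case, a minimal sketch of fanning per-row work across all cores with Python's multiprocessing; the distance function here is a stand-in for real per-feature GIS work, not anyone's actual workload:

        import math
        from multiprocessing import Pool

        def distance_from_origin(point):
            lon, lat = point
            return math.sqrt(lon * lon + lat * lat)  # placeholder computation

        if __name__ == "__main__":
            rows = [(x * 0.001, x * 0.002) for x in range(2_000_000)]
            with Pool() as pool:                     # one worker per core by default
                results = pool.map(distance_from_origin, rows, chunksize=10_000)
            print(len(results))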

      • My experience would indicate this, as I have an i7 3770K (I think that's what it is) and I can load all 8 virtual cores at 100% for extended periods. The biggest bottleneck I had previously was memory, but having 32GB has solved that for now. I could have gotten a multi-socket Xeon, but for amateur work that gets into the silly price range. As far as GPU acceleration goes, I don't know if Esri supports it; I stick with open source tools as I can't afford the phenomenal cost and the open source tools have always g
        • I didn't think to mention it in my previous post, but a dual-socket AMD Opteron 63xx system might be a reasonable alternative. Opterons appear to be significantly cheaper than Xeons (and in some cases, cheaper than i7s), to the point that you could get a 16-24 core AMD system for close to the same price as a high-end i7 (let alone a Xeon). I have no idea which would win on the benchmarks, though.

          By the way, what sort of open-source GIS tools do you use? I occasionally find myself wanting to do GIS-related s

          • At the time when I looked at a machine for what I am doing, the Intel option was the cheaper one. I could have saved $20 by going with a mid-range i5, which I was planning on doing, but the unlocked higher-end i7 was only $20 more than what I was planning on getting, so it was a "why not" purchase.

            As far as tools go, if you want to start out and play around first without getting buried under your own ignorance (I speak from experience here) try something like uDig GIS first. It isn't the most powerful, faste
          • by rbrander ( 73222 )

            I'm using PostGIS (PostgreSQL with a plug-in) for the back-end, and QGIS for the client. With the above-mentioned system and 32GB, it utterly blows away the performance I get at work from a big ESRI server and Oracle with ESRI's SDE plug-in. That's probably because the corporate server is throttled per-user, but still, it means that home-user GIS for zero software cost is really here in convenient form. I've developed a "mapping system" that loads in about 50 layers for my city with one script (works in
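            For flavor, a minimal sketch of what such a layer-loading script can look like with the PyQGIS API (QGIS 3 style; the API differed in the QGIS 2 era). The connection details, schema, and table names below are made up for illustration:

              # Run inside the QGIS Python console; all connection details are fictional.
              from qgis.core import QgsDataSourceUri, QgsVectorLayer, QgsProject

              TABLES = ["roads", "parcels", "hydrants"]   # stand-ins for the ~50 layers

              for table in TABLES:
                  uri = QgsDataSourceUri()
                  uri.setConnection("localhost", "5432", "city_gis", "gis_user", "secret")
                  uri.setDataSource("public", table, "geom")
                  layer = QgsVectorLayer(uri.uri(False), table, "postgres")
                  if layer.isValid():
                      QgsProject.instance().addMapLayer(layer)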

          My experience would indicate this, as I have an i7 3770K (I think that's what it is) and I can load all 8 virtual cores at 100% for extended periods.

          If that is the case, then Haswell-E is probably what you need. Another 4 core chip won't help you nearly as much as having 8 real cores will.

          Not the cheapest thing in the world, but if you're actually running for "extended periods" at 100% CPU usage, then maybe it is time.

            • By extended periods I meant 10-15 minutes, and that is with some very large data sets that will consume 24GB of the 32GB of physical RAM in the machine. I can wait that long without issue, as this replaced an Athlon 64 X2 with 4GB of RAM; I tried running some of those tasks on it and they would take upwards of 5 days with the CPU pegged and the disk spinning like mad paging data in and out.
              • Fair enough... I inferred from your post that you wanted more performance, that you wanted to know how to get a 25%+ speed jump.

              i7-5960X would do it...

                Not cheap, but have you considered such an 8-core, 16-thread system with 64GB of RAM, then using 32GB as a RAM disk and doing your work there rather than off an SSD?

              You might find your 5-10 min becomes sub-5 min...

              Keep in mind that your jump from an Athlon 64x2 to an Ivy Bridge was about a 10 year leap in technology. Even the Core2Duo was faster nearly 10 yea
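                On the RAM disk idea, here is a small Python sketch of the comparison you'd want to run first. It assumes a Linux box where /dev/shm is a tmpfs; the paths and the 256 MiB test size are illustrative:

                  # Compare write throughput to a tmpfs RAM disk vs. the current directory.
                  import os, time

                  def write_throughput(path, size=256 * 1024 * 1024):
                      """Write `size` bytes to `path`, fsync, and report MB/s."""
                      block = os.urandom(1024 * 1024)
                      start = time.perf_counter()
                      with open(path, "wb") as f:
                          for _ in range(size // len(block)):
                              f.write(block)
                          f.flush()
                          os.fsync(f.fileno())
                      elapsed = time.perf_counter() - start
                      os.remove(path)
                      return size / elapsed / 1e6

                  print("tmpfs:", write_throughput("/dev/shm/bench.tmp"), "MB/s")
                  print("disk: ", write_throughput("./bench.tmp"), "MB/s")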

                • I tend to make major jumps in technology; my current machine is robust enough that, if I want, I can crank up the clock speed and RAM speed a fair amount and not have to worry, for some incremental gains. I buy a machine that meets my needs and has room to continue to meet my needs, use it until it becomes painfully slow or starts having hardware problems, and then replace it to get close to a 10x performance improvement across the board. Hence the jump from the Athlon 64 X2 to the i7. As far as GPU acceleration
                • Wow, you really are dedicated to major jumps.

                  Fair enough, more power to you and all. :)

                  The 486/75 to K6-2 500 is a HUGE jump, I personally couldn't have waited that long. :)

                  • I had to as I was poor and couldn't afford to replace things. The 486 worked well enough for what I was doing with it at the time but it did get flaky. I am one of those really odd people who will use something until it is worn out and the repair costs exceed what it would cost to replace it. I am really not dedicated to the huge jumps it is just I grew up poor and am too cheap to replace a functional device until it becomes non functional.
        • Comment removed based on user account deletion
          • There are plenty of i7's that support more than 32GB of RAM. I'm using an i7-3930K with 64GB right now, and there are others that support 128GB as well.

    • A dual-socket mobo with a 16-core AMD CPU in each socket will probably spank your current Intel system. That's one area where AMD excels: they sell 8- and 12-core CPUs cheap, 16-core if you're serious.

    • by wbo ( 1172247 )
      Since most GIS systems are heavily multithreaded, wouldn't you be better off with more CPU cores rather than a few fast cores? The Xeon series of processors is designed for this type of thing much more so than the Core i7 series.

      The i7-3930K is a 6-core CPU. If you want to stick with a single-CPU system, you could go with a Xeon E5-1691 v3, which gives you 14 cores.

      If you are willing to go with a dual-CPU system, you could go with something like the Xeon E5-2698, which has 16 cores per processor (for a
    • by MetricT ( 128876 )

      I do high-performance computing for a living, and Moore's Law has been on its last gasps for a while now.

      Until around 2006, the smaller you made a transistor, the faster it could work. This was called Dennard scaling. But once transistors reach a certain size, current leakage and thermal issues prevent you from making the transistors faster.

      While they can't drive transistors any faster, smaller processes still allow them to put *more* transistors on a chip. This is why we've gone from single-core to mul
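      The scaling argument fits in a line of math; this is a standard statement of Dennard scaling, with $\kappa$ the linear shrink factor per node:

        P_{\mathrm{dyn}} = C V^{2} f, \qquad
        C \to C/\kappa, \quad V \to V/\kappa, \quad f \to \kappa f
        \;\Longrightarrow\;
        P_{\mathrm{dyn}} \to P_{\mathrm{dyn}}/\kappa^{2}

      Transistor density rises as $\kappa^{2}$, so power per unit area stays constant; once voltage can no longer drop (leakage), frequency stops rising with it.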

      • Keep in mind that Intel has focused much of its efforts the past 6 years on power reduction, not speed.

        The speed can come later, at the moment they don't need it due to lack of competition.

        ---

        Consider: in 2009, the i7-920 running at 2.66 GHz was a 130W CPU.

        Today, these new CPUs are running at 3.3 GHz with a turbo to 3.7 GHz, with a FAR superior iGPU, along with better IPC, while pulling half the power, 65W.

        To get similar performance out of the i7-920 chip you'd need to run it at 4 GHz, perhaps a bit more, to count

    • I've got a machine over two years old now - I do some pretty heavy number-crunching with GIS map programs and always tell the counter guy I want the nearest thing he's got to a machine that finishes infinite loops. After conceding that the next model up from the i7-3930K was $500 more for another 15% of horsepower, I picked that one.

      It is very rare that 2 years will provide a huge jump in performance.

      The exceptions are when major new developments come out.

      The Core2Duo was one such development, it was so much faster than Netburst, it was obvious and major.

      I remember the old Athlon Thunderbird chips; those were good, and a nice upgrade over a mid-range Pentium II.

      Other times, CPU speed tends to just sit around. The jump from a 486DX/2-66 to a Pentium 75 was very ho-hum back in the day, at least until Windows 95 showed up, then it beca

      • by Dadoo ( 899435 )

        The jump from a 486DX/2-66 to a Pentium 75 was very ho-hum back in the day

        Maybe for some workloads. For others, a Pentium was a significant upgrade. I used to play Quake, a lot, and upgrading from a 486DX4/133 to a Pentium 133 was like night and day.

        • I would expect that it would be, given the 486 being limited to a 33 MHz bus speed compared to the Pentium 133's 66 MHz bus.

          Like I said, the jump from a 486DX2-66 to a Pentium 75 was rather ho hum, at the time those two chips were mainstream. The 486 was still selling strong in 1994 when the Pentium 75 came out. By the end of 1995 when the Pentium 133 was released, the 486 was no longer mainstream, being really slow for Windows 95 at that point. Keep in mind that the Pentium 75 had a 50 MHz bus, compared

          • by Dadoo ( 899435 )

            AMD did make a 133 MHz version, but called it "Am5x86-P75"

            That may have been the official name, but many people who sold them called them DX4/133s. I've certainly never heard them called "Am5x86-P75" before, and I've been in the business since the early 80s.

            • I owned a small computer networking business back in the 90s, we sold a LOT of those chips, and we of course called them P75s... :)

    • Wait another half a decade. A Haswell 4770K maxed out at ~4.5GHz (the typical max if you do not delid) works out to be roughly equivalent to a 5.6GHz Nehalem i7. This is absolute crap when you consider that it was pretty much 5 years from Nehalem to Haswell, but I needed to upgrade some machines (i7 920 @ 4.1GHz), so I went with Haswell and got only a 37% boost in absolute overclocked performance for my trouble.
  • I don't get it. These are slower, and a downgrade from other 2013 CPUs for socket 1150. So why would I upgrade from what I have? Did I miss something special? If you can't afford an X99 setup, it would make more sense to get a 2-year-old i7-4770.

    • by halivar ( 535827 )

      These are budget CPUs for low power consumption. The initial Broadwell offering was for mobile, and this is their first desktop offering, targeting business and office with pretty decent integrated graphics, but nothing you'd want for gaming. You would stick with your i7-4770 until the 14nm line begins targeting performance computing.

      • They are not priced like budget CPUs; in fact, they look like a 5% price increase from what I could dig up. So you can pay more for a top-of-the-line CPU with slightly less awful integrated graphics. Great...

        Intel is really not blowing my skirt up.

    • I don't get it. These are slower, and a downgrade from other 2013 CPUs for socket 1150. So why would I upgrade from what I have? Did I miss something special? If you can't afford an X99 setup, it would make more sense to get a 2-year-old i7-4770.

      The slowest [new gen] CPU is slower than the fastest [last gen] CPU. This is normal. The bazillion-core global-warming Broadwell will presumably come out later. It's faster per core and lower power.

  • ... AMD has a chance for survival. I think.
    Basically, every few additional percentage points in performance cost untold billions in investment. Thus it is possible to tailgate the market leader by producing something only a little bit slower while spending half the money. As long as performance/watt and performance/rack are not outrageously bad (so data centers will not shun you) AND Intel does not engage in monopolistic tactics (big question mark here), it's possible to make a decent living.
  • Like David Petraeus, we can all now have a Broadwell under our desks.

  • Will i5 prices go down this month?

    I'm building a gaming/mini-simulation computer, and I have a mix of poor-student syndrome along with excessive computer-drooling disease (much more debilitating). Basically all I want is a fast GPU (Nvidia GTX 960), but I can't help but want a fast processor too, so I've been comparing the i5's, i3's and Pentium G3450's.

    Should I wait to buy an i5? Or should I stick with the cheap processor? Or maybe you know of a motherboard/CPU combo deal that will cut the cost of an i5 ju

  • I doubt they're actually faster than AMD at all graphics operations. A Kaveri APU has a memory controller that runs natively at 2400MHz. These new i5 and i7 CPUs run at an utterly pathetic 1600MHz. When I changed a Trinity APU system's memory from 1600 to 2133, the graphics rating in Windows 7 went up 0.7 points. Memory bandwidth is twice as important when you're sharing it with the rest of the system for normal operations as well as graphics operations. They reeeeeally need to upgrade their CPUs and c
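    The gap is easy to quantify: peak DDR3 bandwidth is the transfer rate times 8 bytes per 64-bit transfer times the channel count. A quick Python sketch applying that standard formula to the speeds claimed above, assuming the usual dual-channel configuration:

      def ddr3_bandwidth_gbs(mt_per_s, channels=2):
          # transfer rate (MT/s) * 8 bytes per 64-bit transfer * channels
          return mt_per_s * 8 * channels / 1000   # GB/s

      print(ddr3_bandwidth_gbs(1600))   # 25.6 GB/s -- DDR3-1600, dual channel
      print(ddr3_bandwidth_gbs(2400))   # 38.4 GB/s -- DDR3-2400, dual channel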
  • It has been a very long time since I bought a "latest and greatest" chip when building a new computer, because 2-3 revisions old still has many times the performance of the machine it's replacing and far more "snort" than I could ever use on day-to-day activities.

    With any luck, this announcement and release will bring the price down on the chips I want by another $100 or so by January-February, when I hope to actually be building a new machine.

    The bleeding edge is fine for gamers and hard-core video en
