Intel Technology

Intel Says Its 7nm Chips Are Delayed By At Least a Year (gizmodo.com) 101

If you've been following along with Intel's troubles moving away from its 14nm process to 10nm over the years, you probably won't be surprised to learn that the company is now having trouble getting its 7nm process off the ground. From a report: On its Q2 earnings call this week, Intel revealed that it's pushing back its previously planned 7nm rollout by six months -- and that yields for the process are now a year behind schedule. This means that Intel can't produce 7nm chips in an economically viable way at the moment. Intel originally expected to catch up with AMD's 7nm chips in 2021, but didn't say when in 2021. With these new delays, that puts Intel's 7nm chip debut in 2022, at the earliest. By then, AMD may already be on its Ryzen 6000 5nm chips on Zen 4 architecture, according to its roadmap -- though that's assuming AMD doesn't run into any delays itself.

However, there is some good news on the 10nm front. From Intel's Q2 2020 press release, the company says it's "accelerating its transition to 10nm products this year" and growing its portfolio of 10nm-based Intel Core processors. That includes its Tiger Lake chips and its first 10nm-based server CPU, Ice Lake. Additionally, Intel said it "expects to deliver a new line of client CPUs (code-named Alder Lake), which will include its first 10nm-based desktop CPU, and a new 10nm-based server CPU (code-named Sapphire Rapids)." Intel originally announced its 10nm chips in 2015, but confirmed it was having yield issues and other problems that July.


Comments Filter:
  • by Anonymous Coward

    TSMC, a cell phone chip maker, is already close to 2nm but Intel, who has been making chips for decades, is having trouble hitting 7nm?

    • Re:How is it that.., (Score:4, Informative)

      by prisoner-of-enigma ( 535770 ) on Friday July 24, 2020 @10:25AM (#60326037) Homepage

      TSMC is barely at the beginning of a 2nm process. [extremetech.com] To say they're "already close" is extremely misleading. 7nm is in volume production. 5nm is in early production. 2nm is still in R&D.

      • I thought 3nm was the hard theoretical limit ... What are they using? Degenerate matter from neutron stars?

        • I thought 3nm was the hard theoretical limit ... What are they using? Degenerate matter from neutron stars?

          Unicorn tears, mixed with the blood of the innocent.

        • The names of the processes are pure marketing and have very little to do with reality. Feature sizes vary within a semiconductor and can be measured in more than one way, so saying you have a "7nm process" means very little on its own. The key part is increasing density while maintaining viable yields on a die that's roughly single-digit centimetres in both dimensions. There are other ways to increase density beyond feature-size shrinks; one now being looked into is 3D stacking of components. For

        • by Waffle Iron ( 339739 ) on Friday July 24, 2020 @11:52AM (#60326415)

          I thought 3nm was the hard theoretical limit ... What are they using? Degenerate matter from neutron stars?

          That made me curious about how much such a CPU would weigh.

          If they made a 2 billion transistor CPU, and assuming each transistor needs 2nm x 2nm x 10nm of degenerate matter at 4e17 kg/m^3 density, it surprisingly comes out to only 32kg.

          The downside is that this matter probably has much more binding energy than plutonium. So if you were to drop your heavy CPU, you might risk setting off an explosion larger than the biggest thermonuclear bombs.
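
          For anyone who wants to sanity-check that arithmetic, here's a quick Python sketch; the per-transistor volume and the degenerate-matter density are just the assumptions above, not real figures.

          # Back-of-the-envelope check (values are the post's assumptions, not real device parameters)
          transistors = 2e9                             # 2 billion transistors
          volume_per_transistor = 2e-9 * 2e-9 * 10e-9   # 2nm x 2nm x 10nm, in m^3
          density = 4e17                                # degenerate matter, kg/m^3
          mass_kg = transistors * volume_per_transistor * density
          print(f"CPU mass: {mass_kg:.0f} kg")          # -> CPU mass: 32 kg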

          • That made me curious about how much such a CPU would weigh.

            If they made a 2 billion transistor CPU, and assuming each transistor needs 2nm x 2nm x 10nm of degenerate matter at 4e17 kg/m^3 density, it surprisingly comes out to only 32kg.

            The downside is that this matter probably has much more binding energy than plutonium. So if you were to drop your heavy CPU, you might risk setting off an explosion larger than the biggest thermonuclear bombs.

            That might reduce the size of the DIY market.

          • So if you were to drop your heavy CPU, you might risk setting off an explosion larger than the biggest thermonuclear bombs.

            Don't drop that shit! Pray to God you don't drop that shit...

          • So if you were to drop your heavy CPU, you might risk setting off an explosion larger than the biggest thermonuclear bombs.

            So what you're saying is we're going to need to get better at setting heatsink mounting pressure and those shitty retention clips won't cut it.

            • No, what he's saying is Linus Tech Tips is going to destroy the world.
              • I honestly don't know how people can watch his videos. Something about his voice and intonation just makes me want to hit mute and put subtitles on, but then I would actually see his content, which seems to be just informative enough to be dangerous, judging by the number of "but Linus said" comments you see online when something doesn't work.

        • I remember when 120nm was the limit because of lithography. Then 65nm was the limit because of leakage. Then 23nm was going to be the limit for some other reason. They will continue to find ways to increase the number of gates in a component.
          • At some point you hit a limit "because atoms" and it's hard to see where to go after that.

            • Spintronics and quantum superdense coding.

              Google it.

              A single atom can have multiple states. Think of it like a baseball: it can have features like trajectory and spin (rpm). Each feature state can have a number assigned to it, and that number is data. It doesn't have to be a 1 or a 0; theoretically it can be numbers like 101001 or 16374 etc. in a single atom.

              • And state becomes probabilistic, even truly random. Not so good when you rely on reproducibility.

                • But then there are error checking and correction methods so you can reduce the error probability to an insanely low amount. I mean, if you want to get ridiculous everything is probabilistic.

            • At the very least you hit a limit because of the particular flavor of atom (silicon) and have to switch to a different flavor. That's tough when we have decades of process development invested in manipulating silicon the way we need it.

              The day we make big chips out of something other than Si, expect horrible yields. Like make 100 chips and get 3 or 4 that work back kind of yields. Might be OK for high-margin, low-volume stuff.

              • There aren't "flavors of atoms" with significantly finer interatomic spacing; all candidates lie at roughly the same single-digit angstrom scale.

        • As others have said, the node names are pure marketing and lost any precise relation to physical dimensions long ago.

          Roughly speaking, the so-called 7nm node has a 36 nm metal2 pitch, the distance between trace centers. Lately, this has been shrinking by about 10% per major node, so we can guess that the 3nm node will have about 29 nm metal2 pitch. The lithography for these nodes is imaged with 13.5 nm ultraviolet, and the boundaries that have to be imaged in the mask are considerably finer than the metal2 p
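
          As a quick sketch of that extrapolation (the 36 nm starting pitch and the ~10% shrink per major node are the rough figures above, not spec-sheet numbers):

          # Extrapolate metal2 pitch assuming ~10% shrink per major node,
          # starting from the ~36 nm pitch quoted for the "7nm" node.
          pitch_nm = 36.0
          for node in ("5nm", "3nm"):
              pitch_nm *= 0.9  # ~10% shrink per node
              print(f"{node} node: ~{pitch_nm:.0f} nm metal2 pitch")
          # -> 5nm node: ~32 nm metal2 pitch
          # -> 3nm node: ~29 nm metal2 pitch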

        • Marketing bullshit, as is the case with most of the "Xnm" processes. The gates and components themselves are far larger than the quoted size.
      • Yes, but it is not misleading in explaining that, unlike the lies taught in Am. Hist. 101, Captains of Industry rarely 'forge on to new horizons' but generally, while leading the sector, do as little as possible to remain there, where even a minor slip-up can stop the tide. Read your Shakespeare; it's all there.
    • Re:How is it that.., (Score:5, Informative)

      by robi5 ( 1261542 ) on Friday July 24, 2020 @10:29AM (#60326053)

      Labeling TSMC as a cell phone chip maker is unusual. They're a chip foundry and they have been making all kinds of things. Including desktop x86 CPUs for AMD. They have been manufacturing chips for decades too.

    • TSMC is not a "cell phone chip maker". TSMC is a chip foundry. Their entire business is making chips including AMD CPUs and GPUs, NVidia GPUs and CPUs, etc. Yes they make cell phone chips for Apple but that is not the limit of their business.
    • Re:How is it that.., (Score:4, Informative)

      by GuB-42 ( 2483988 ) on Friday July 24, 2020 @10:42AM (#60326129)

      Intel uses a different naming scheme, where its 7nm is on the same level as TSMC's 5nm. Neither name represents an actual distance; they are commercial designations.

      TSMC and Intel are both doing research at the 2nm level; neither of them is "close".

    • TSMC, a cell phone chip maker, is already close to 2nm but Intel, who has been making chips for decades, is having trouble hitting 7nm?

      Or this could read:

      Intel, wishing it was a cell phone chip maker, isn't anywhere close to 2nm.

  • This couldn't come at a more inopportune time for Intel, with AMD's surging popularity.

    Intel's strength has always been good chip design *and* good fab execution. Both of these have been compromised as of late, what with the raft of Intel security bugs, the 10nm pains, and now 7nm issues.

    What will be fascinating to watch is how third-party fabs respond to Intel outsourcing chip production. TSMC is already booked solid with AMD and nVidia. [wccftech.com] While I'm sure they'd be happy to take Intel's cash, they'd just b

    • The whole "nm" measuring contest is not an equal comparison, though, when it comes to density. Intel is working on packing those nm-scale transistors more densely, so you get more transistors for a given amount of real estate. If Intel gave up this density advantage it could probably produce 5nm tomorrow via a bigger die, but it seems hell-bent on not doing so. "Intel reports a density of 100.76MTr/mm2 (mega-transistors per square millimetre) for its 10nm process, while TSMC's 7nm process is said to land a little behind at 91
      • While you make some good points, the increased density Intel champions isn't helping them with power consumption, heat production, and overall performance vis-a-vis AMD. Right now AMD is eating their lunch in everything except single-core performance, and doing it at a lower price point as well.

        • by Sloppy ( 14984 )

          Right now AMD is eating their lunch in everything except single-core performance, and doing it at a lower price point as well.

          "Thank God for Dwarf Fortress" -- Bob Swan

      • Yes, yes, you can't exactly compare TSMC 7nm with Intel 7nm in terms of geometry. That misses the point that TSMC has been making millions of chips for multiple companies over the last several years while Intel has struggled over the same period. Intel has been trying to make 10nm chips for the last 4 years while TSMC has forged ahead of them.
      • by hattig ( 47930 )

        Note that in the real world, TSMC 7nm designs are coming out higher density than Intel 10nm designs, despite what the processes can achieve on paper or from idealised structures.

        Intel is achieving around 50MTr/mm^2 on 10nm (Lakefield), AMD has 60MTr/mm^2 for Renoir, and Renoir has all the low-density I/O on die as well, which Lakefield has offloaded to a 22nm die.

        We'll know more when Tiger Lake is out, which is likely to be the best 10nm product from Intel so far.

    • by hattig ( 47930 )

      Agreed.

      TSMC could take Intel's money, but they would not risk their existing long-term agreements with long-term reliable customers, some of whom are exclusive to TSMC (Apple, AMD).

      So outsourcing will likely be limited to GPUs, and probably the GPUs for the Aurora supercomputer, which Intel really doesn't want to mess up but which has a deadline that its own fabs now cannot meet.

      • TSMC has valid reasons why they will not fab Intel chips. This is ironic because back when Intel was on top with their fabs, they had a little excess capacity but didn’t do much with it. Intel said they made offers to others. While it is not confirmed, the rumor is that Intel wanted ridiculous amounts of money.
    • by robi5 ( 1261542 ) on Friday July 24, 2020 @10:55AM (#60326185)

      Intel's primary advantage used to be process, as well as flexing its brand and semi-monopoly. That's why they could afford major screw-ups that would have bankrupted many smaller or fabless companies:

      - Itanium fiasco - billions of stockholder and customer money burnt
      - Pentium 4 in general, including:
      - Deep, expensive to flush pipeline (NetBurst haha)
      - Rambus
      - Not properly utilizing the purchased DEC Alpha CPU tech
      - sticking to a front-side bus
      - subpar core interconnects compared to HyperTransport
      - missing the x86-64 bus, letting much smaller AMD dictate the future
      - Northbridge for so long?
      - subpar, horrible integrated graphics; still nothing compared to AMD/nVidia graphics
      - being a player in the mobile market (MessagePad, StrongARM anyone?) but then quitting it and failing in the mobile market ever since
      - the entire line of Atom processors
      - Larrabee? promised raytracing?
      etc etc

      It could ride through all these shots in the foot by virtue of its process and foundry-count advantage, PR clout (perpetuating the megahertz myth; Intel Inside(tm), etc.), bundling/enforcing subpar integrated graphics as if it were Internet Explorer, and forcing or heavily incentivizing exclusives, e.g. getting Dell at the time to remain Intel-exclusive.

      Not much architectural advance since the original Pentium and the Pentium-derived Core architecture, which was developed on the side at the time and came in handy as the P4 went up in flames, almost literally. My 2018 iPad is faster in many ways than my top-shelf 2020 MBP 16"

      • Bubble memory!
      • Don't forget Intel's continued cheating on benchmarks [semiaccurate.com] and "AMD cripple" function [agner.org] shenanigans.

      • by thegarbz ( 1787294 ) on Friday July 24, 2020 @01:03PM (#60326649)

        That's a long list of past technologies which turned out obsolete but few of them were major screw-ups that would have bankrupted anyone.

        - Itanium was an expensive fiasco, yes that is true.
        - Pentium 4 and NetBurst were a highly profitable development which kept Intel at the forefront for an entire processor generation. If anything it caused grief for AMD, which had been a much stronger competitor back in the P2/K6 era.
        - Rambus also netted Intel a metric shitton in the server market. Far from a failure.
        - Most of the rest of your list is just choices made during development, e.g. sticking with a Northbridge for so long didn't make them any less popular or less performant. Hell, this was a time when there was a very real risk AMD might go under completely.
        - Subpar integrated graphics is a euphemism for: it does what it needs to, making it the most popular and widely used GPU on the market. Most PCs will never display a 3D graphic.
        - AMD didn't end up "dictating" anything. x64 is a small extension to x86; it doesn't give AMD any extra power. It's important to realise that both companies had been incredibly cross-reliant on each other's patents since long before that point.

        Its low-power offerings, however, have been a joke, but critically they haven't actually been very expensive jokes. That's kind of why they failed: if you just tweak an architecture slightly it's nice and cheap, but not performant (in the performance-per-watt sense).
        - Larrabee? Promised raytracing? Yeah, that was a fail, but the same applies to the Atom comment. It wasn't an expensive failure.

        Not much architectural advance since the original Pentium and the Pentium-derived Core architecture, which was developed on the side at the time and came in handy as the P4 went up in flames, almost literally.

        Yep, they have been polishing that architecture for a long time now, and look where they ended up: still the fastest IPC on the market with everyone else playing catch-up. I'd like to have seen more real competition pushing Intel to do something new, but the reality is a large part of your list wasn't bad for the company; it made them the silicon powerhouse and gave them a license to print cash.

        Also your history is backwards. It was the Athlons that could be used to cook breakfast, literally, and at the time Intel fans were mocking AMD for AMD's lack of thermal management. The Netburst architecture was power hungry but it never caught fire.

        • Subpar integrated graphics is a euphemism for: it does what it needs to, making it the most popular and widely used GPU on the market. Most PCs will never display a 3D graphic.

          Your point is correct but worded poorly: most desktop/WM GUIs are 3D accelerated.

        • by ras ( 84108 )

          Yeah, most of the things listed are failed experiments. Google does the same thing, and people also ridicule Google for it. Embarrassing perhaps, but they cost stuff-all apart from a bit of spare cash. The reality is if they aren't having a lot of failures they aren't making progress. Everyone knows you have to kiss a lot of frogs to find a prince.

          Even more ironically, the one thing that may really hurt them isn't in those two lists above: the Skylake debacle.

          When I got my Dell Skylake XPS, the dis

      • subpar, horrible integrated graphics; still nothing compared to AMD/nVidia graphics

        Since AMD and nVidia offer discrete GPUs, why would you expect an integrated GPU to be able to compete? The thermal requirements alone to get an APU capable of competing with AMD and nVidia are just not reasonable, especially in desktops where graphics cards can be connected to gigantic radiators. Integrated graphics are mostly for people who don't care about gaming and want a CPU/GPU that can provide decent capabilities at

    • Intel's strength has always been good chip design

      What? Can you double-check your source data, somehow it came through as claiming Intel's strength was ever chip design? You didn't encode it using floating point, did you?

      • What? Can you double-check your source data, somehow it came through as claiming Intel's strength was ever chip design? You didn't encode it using floating point, did you?

        Excluding notable mis-steps like NetBurst vs. Athlon XP, Intel has a history of good chip design compared to competitors. The switch from NetBurst to Core architecture (coupled with AMD's abysmal Bulldozer launch) allowed Intel to dominate for almost two decades, leading IPC and performance-per-watt in most cases. Even now, Intel's single-core performance exceeds that of Zen and its derivatives.

        The problem is fewer and fewer loads are single-core and improving single-core performance is very difficult due

        • by majorme ( 515104 )

          Excluding notable mis-steps like NetBurst vs. Athlon XP, Intel has a history of good chip design compared to competitors. The switch from NetBurst to Core architecture (coupled with AMD's abysmal Bulldozer launch) allowed Intel to dominate for almost two decades, leading IPC and performance-per-watt in most cases. Even now, Intel's single-core performance exceeds that of Zen and its derivatives.

          The problem is fewer and fewer loads are single-core and improving single-core performance is very difficult due to the hard wall of clock scaling beyond mid-4GHz. Intel's various security issues highlight the "shortcuts" they've made to keep that single-core crown. AMD leads with core count by a wide margin, hence their growing appeal. Intel doesn't have a decent counter it can put up against such a paradigm shift.

          No, Intel's main strength for the last 20 years has been fabbing. Core was found to be extremely insecure, no? Combine all the cheats with bleeding-edge manufacturing and an AMD starved of cash due to illegal activity for which Intel was fined in both the EU and US.

          At this point Intel had switched to a very rapid cycle: the famous tick-tock. They had the best fabs and it was this shocking edge that convinced the company to greenlight Larrabee. They were so far ahead they actually thought they could beat everyone el

        • Intel got where they are by cutting corners. Every generation where they were ahead, they had produced a product that was inferior to their own claims, inferior to initial expectations, and inferior to what the competition was building and released right after them.

          They're masters of corner-cutting in response to the business cycle; that's not the same as getting ahead by doing better engineering. This has been true since they first took market share with the 8088, and the house of cards is only falling dow

        • by jabuzz ( 182671 )

          The list of failed Intel CPU designs is legion. There's the iAPX 432, i860, i960, and let's not forget we were all supposed to be using Itanium CPUs right now.

          Then it turns out Intel were playing fast and loose with security on their AMD64 designs.

          Anyone who thinks Intel make good CPUs has been on the crack pipe again.

          • The list of failed Intel CPU designs is legion. There's the iAPX 432, i860, i960, and let's not forget we were all supposed to be using Itanium CPUs right now.

            Then it turns out Intel were playing fast and loose with security on their AMD64 designs.

            Anyone who thinks Intel make good CPUs has been on the crack pipe again.

            And if you look at their competition you find similar lists of failed or otherwise inadequate designs in their histories. Intel is far from alone in this, although they are the most visible due to their market presence.

            I'll agree with you on the security front. Intel's performance advantage seems to have come at the cost of security. Competing designs didn't have these (or at least as many of these) flaws, allowing Intel to claim performance superiority until this came to light. Now that these flaws are

    • The nanometer contest is primarily a dick measuring competition by the marketing departments. Smaller isn't automatically better especially when you consider design tradeoffs required to get it to work.
  • by stabiesoft ( 733417 ) on Friday July 24, 2020 @10:25AM (#60326039) Homepage
    Intel considers outsourcing manufacturing. https://finance.yahoo.com/news... [yahoo.com] That is a tsunami.
    • That is big news, but it is possible it's not as bad as it sounds.

      Current style of processing (i.e. silicon transistors as they look today) is a dead end and the money required to perfect each new node increases substantially. Instead of spending resources on that, it could be a good move to focus on getting new tech (e.g. spintronics, optical, etc.) working as fast as possible.
  • Cause it doesn't sound long off now.

    Will we get massive die sizes for hypothetical CPUs that almost nobody will actually buy, and Intel-like marginal optimization for everyone else until nanites and interstellar space travel are a thing?

    • I thought 2nm was next. But after that who knows. Wormhole technology?
      • More importantly, what are we going to call things once we get below 1nm? Back in the 1990's when I was in college, there was a lot of excitement when Intel introduced "sub-micron" tech (that's anything below 1000nm in today's jargon). Are we gonna start doing things like .75nm or come up with something different and more marketing-friendly? Angstroms anyone?

        • There is a plan to move away from feature size entirely and use density instead. However, the simplest approach would be to use picometres (pm), since the metric system already provides them.
          • Good luck convincing idiot consumers that your 1000pm CPU is somehow better than the 3nm CPU it replaces. Yes, people are that stupid.

    • Layering, hopefully. 3D stacks of circuits.
  • Intel has gone from a leading processor fab company to a company that acquires other companies and sits on their technology doing nothing, while not investing in the new-node scientists and engineers who provided the backbone of their current success. They are now in decline but don't even know it yet.
    • while not investing in the new-node scientists and engineers who provided the backbone of their current success.

      I don't know if that's actually true. Intel spends billions in R&D and has for some time. This is not a thing where throwing large sums of money at the problem always guarantees success (although it doesn't hurt). What it boils down to is TSMC's engineers "guessed" correctly about how to proceed and Intel's didn't.

      Granted it's not really guessing, but there is an element of luck involved in predicting the best way to proceed (out of many) with so many unknowns. Engineering a process that works reliab

      • Also, it could be a case where Intel's approach is better in theory but harder to implement in practice. TSMC, working for multiple customers, has to be more practical and has to keep shipping product.
      • Intel looked at the theoretical limits for each node and pushed transistor density hard. This worked up till 14nm, where they had to massage things a bit initially and saw a minor delay, and then they stumbled hard with 10nm, which was originally supposed to be out in 2015-6 before the minor delay that 14nm saw. The weird part is that they apparently made the same mistake with 7nm.

    • They are now in decline but don't even know it yet

      They've known it for five years at least. They don't know what to do about it.

  • We saw peak Intel some time ago.

  • It shows that the 10nm issues were not a one-off, but indicative of a long-term problem within Intel's fabrication area. Intel's messaging in the past has been very poor regarding 10nm delays - verging on the edge of legality when it comes to its financial results calls. The fact they have called this out (although I'm sure they've known for months already) is a better sign. But investor confidence will be dropping massively - Intel's stock being down 16% today shows this.

    Intel has been running on the momen

  • Intel is behind AMD in lots of ways, and their upcharging for stuff that AMD doesn't charge for isn't helping.
    Intel really needs more PCIe lanes on the desktop and more 4.0 all around.

    • For the average consumer, I wouldn't say PCIe 4.0 is a "need" right now. After all, even next-generation GPUs will not saturate 3.0 bandwidth. For some use cases that need high bandwidth, 4.0 is nice to have.
      • Maybe, but when competitor A offers it, competitor B is at a disadvantage if they don't offer it.
        I for one am still waiting for more PCI Express lanes on standard mobos, with the advent of NVMe SSDs which can be used in x16 PCIe slots with PCI Express bifurcation.

      • Well, with 20 lanes' worth of devices stacked off the DMI, upping the CPU-to-DMI link to 4.0 is needed.

        • For what use case does the average consumer need to stack 20 lanes off the DMI? As I said, there are use cases for 4.0, but the average consumer does not need it right now.
          • Intel boards stack all storage, network, USB, and sound off of it,
            and just have one x16 link for video.

            • You still haven't presented one use case where the average consumer needs to stack 20 lanes. Considering high-end graphics cards don't even use x8 [digitaltrends.com], why do they need x20? There's still x8 left for storage, network, USB, and sound to use up. It would take a consumer maxing out video, M.2 NVMe, USB 3.0, and Ethernet 100% simultaneously for that to happen. Please explain that use case for the average consumer.
              • M.2 NVMe, USB 3.0, and Ethernet are all tied to the DMI bus on Intel desktop boards. AMD boards have an x4 link from the CPU for M.2.

                • I know how PCIe works. What you seem to be missing is the point. In order for the average consumer to be bottlenecked by PCIe 3.0, they would have to be gaming on a 2080 Ti at max settings while writing 4 GB/s of data to their NVMe drive, streaming 5 different 4k videos, and saving 5 GB/s to an external HD via USB. All of that simultaneously. Please explain how the average consumer does all of that simultaneously. You have yet to show that they do. That has been my point.
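
                  As a rough illustration (a sketch only: the ~3.9 GB/s figure assumes DMI 3.0 is equivalent to a PCIe 3.0 x4 link, and the per-device numbers are hypothetical peak loads, not measurements):

                  # When does traffic behind the DMI link actually saturate it?
                  # DMI 3.0 is roughly a PCIe 3.0 x4 link (~0.985 GB/s per lane).
                  dmi_bandwidth = 4 * 0.985      # ~3.94 GB/s
                  loads = {                      # hypothetical simultaneous peak loads, GB/s
                      "NVMe SSD write": 3.0,
                      "USB 3.0 external drive": 0.5,
                      "Gigabit Ethernet": 0.125,
                  }
                  total = sum(loads.values())
                  print(f"Demand {total:.2f} GB/s vs DMI {dmi_bandwidth:.2f} GB/s -> "
                        f"{'bottlenecked' if total > dmi_bandwidth else 'fine'}")
                  # -> Demand 3.62 GB/s vs DMI 3.94 GB/s -> fine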
  • Intel's 10nm is denser than AMD's 7nm; the "node designations" stopped being used to indicate feature size long ago. Intel has the smaller feature size for logic elements.

    • That may be true, but the fact that they're behind their own roadmap by a year is not good. Being unable to stick to your promises is going to make it harder and harder for them to get corporations to buy their stuff—hitching your roadmap to an unreliable supplier is bad for your own profits.

      The most obvious example is Apple, but I'm sure lots of other companies are going to be casting about for a more reliable partner if this is what Intel gives them.

      • I don't think so. Intel is much more massive, with over ten times the revenue, and dominant over AMD because of the server chip market. No one buys a server on the basis of the feature size on the chip die.

        • I agree, but it's not the specific features that are the problem here. Intel is showing itself to be incompetent at managing deadlines and promising features. Right now, it's a matter of them not meeting a specific feature, but it points to a deeper problem with how the company is managed. Someone is making promises they can't keep, and now the company is a YEAR behind schedule. It would be different if they'd told everyone that they were only going to meet their 7nm goal in 2022 to begin with. You can plan

          • Correction: they are a year behind when it comes to 7nm. They are 4 years behind when it comes to 10nm, as they have never quite figured out yields on that process.
      • Being behind on fab density doesn't mean much. Intel can always contract to have their stuff manufactured by TSMC or some other fab if they fall drastically behind. What manufacturing at their own fabs got them was exclusivity and priority. As long as their fabs were ahead of everyone else technologically, their processors had a monopoly on that technology. If Intel's fabs fall behind, they just have to bid against everyone else for priority at third party fabs.

        It's ironic that you point out Apple as a
        • Apple doesn't own any fabs - they contract with TSMC and Samsung to manufacture their SOCs.

          Apple does have at least one fab [appleinsider.com] although the rest of your statement is correct.

        • by jabuzz ( 182671 )

          The point is that for at least the last 20 years Intel had a fab advantage over everyone else. In fact it's about all they had, because truth be told, if Intel CPU designs were on an equal fab footing with everyone else then they would have gone bust years ago. For that matter, if they had not been playing fast and loose with security, their CPUs would have been no better than everyone else's even with a fab advantage.

  • From TFS:

    "Intel originally expected to catch up with AMD's 7nm chips in 2021, but didn't say when in 2021. With these new delays, that puts Intel's 7nm chip debut in 2022, at the earliest. By then, AMD may already be on its Ryzen 6000 5nm chips on Zen 4 architecture, according to its roadmap -- though that's assuming AMD doesn't run into any delays itself." [emphasis mine]

    AMD cannot "run into any delays itself" because they do not fab their own chips. In 2009 AMD divested the chip manufacturing part of the business to create GlobalFoundries. GlobalFoundries then gobbled up Chartered Semiconductor and IBM's semiconductor manufacturing. In 2018 AMD backstabbed GlobalFoundries and announced that from 7nm onwards they would go with TSMC and Samsung only.

    Only TSMC and/or Samsung could run into delays themselves, which, in turn, would affect AMD (and Apple, and Samsung, and Qual

    • by ceoyoyo ( 59147 )

      Typically if one of your critical suppliers cannot deliver, you do indeed "run into delays."

    • They didn't backstab anyone. GloFo failed multiple times to make the jump below "12nm", which arguably was just their own 14nm+. So AMD renegotiated and pulled the CPU chiplets over to TSMC. However, the I/O die remains at GloFo because it doesn't need to shrink, since the smaller nodes actually offer more disadvantages than advantages at this point for I/O.

  • by PeeAitchPee ( 712652 ) on Friday July 24, 2020 @11:25AM (#60326317)

    I've been reading about the upcoming demise of Intel ever since the rebirth of AMD 3-4 years ago. I've seen this movie before. Apologies to the zealots, but it ain't happening, folks. Not even close.

    Is AMD finally in a position to force Intel to actually have to make better, more competitive products again? Absolutely. In fact, if I'm Intel I'm especially worried about EPYC and AMD's excellent performance on multi-threaded workloads in the data center. Server CPUs are Intel's x86 bread and butter -- they did over $7 billion in revenue in Q4 2019 in the data center [spglobal.com] -- and now they have no choice but to commit to neutralizing that threat. Expect heavy investment from Intel in that arena.

    Intel has also reclaimed the single-threaded performance lead (for now) with their 10th gen Core desktop CPUs. Don't be dismissive: many common computing activities can only be broken down so far until parallel processing doesn't help you any more (e.g. optical character recognition; the quick Amdahl's-law sketch at the end of this comment illustrates the point). Know Thy Workload has always been and always will be critical when selecting the right tool for the job. Plus, it's unrealistic to think that we're going to re-write the entirety of the world's code so *everything* runs optimized for high-core processors. It ain't happening. It is likely that AMD and Intel will continue to leapfrog each other in single-threaded performance for the next several years at least.

    Simply put: Intel is utterly massive. Their market cap is over a quarter of a *trillion* dollars [fool.com] (that's trillion, with a T) -- over four times that of AMD. Moreover, their revenue is over 10x that of AMD's. They have a huge war chest of cash and in-house R & D to bring to bear, and now that they actually have to compete, AMD needs to be careful. Intel is also a marketing machine, and based on that alone AMD is at a significant disadvantage when it comes to competing for enterprise dollars.

    Most importantly: why would anyone root for the demise of Intel? Put your hate aside for a sec and think about it. The lack of two viable chipmakers means no real competition. We (anyone buying CPUs) all win when these two have to go at it and actually compete. It's fantastic for anyone doing anything with x86 CPUs. If you want better performance and more innovation, you want this battle to be as long-running and bloody as possible. And do you really believe if AMD were to somehow vanquish Intel they wouldn't resort to the same anti-competitive behavior that Intel has done? Come on.
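
    Here's that quick Amdahl's-law sketch (a minimal illustration; the parallel fractions are hypothetical workload mixes, not benchmarks):

    # Amdahl's law: with parallel fraction p, speedup on n cores is 1 / ((1 - p) + p / n)
    def amdahl_speedup(p: float, cores: int) -> float:
        return 1.0 / ((1.0 - p) + p / cores)

    for p in (0.50, 0.90, 0.99):
        print(f"p={p:.2f}: 8 cores -> {amdahl_speedup(p, 8):.1f}x, "
              f"64 cores -> {amdahl_speedup(p, 64):.1f}x")
    # p=0.50: 8 cores -> 1.8x, 64 cores -> 2.0x
    # p=0.90: 8 cores -> 4.7x, 64 cores -> 8.8x
    # p=0.99: 8 cores -> 7.5x, 64 cores -> 39.3x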

    • It is likely that AMD and Intel will continue to leapfrog each other in single-threaded performance for the next several years at least.

      Crazy that AMD actually caught up to Intel in single-thread performance, when Intel held such a strong lead for so long.

      Then again, I'm still laughing about Intel hiring will.i.am as the "director of creative innovation".

    • The only thing I really hear people rooting for WRT Intel's demise is the demise of their (essentially) monopoly marketshare.

      Two more equally-sized competitors would seem better all the way around so a single bad generation doesn't kill off AMD.

    • Their market cap is over a quarter of a *trillion* dollars (that's trillion, with a T) -- over four times that of AMD.

      Wow, that is almost as big as Nortel!

    • "Most importantly: why would anyone root for the demise of Intel?"

      You're the third post and nobody has rooted for the demise of Intel. I think more people are interested in the idea that Intel is being challenged to do more than ride its marketing group, and that there may be a sea change if Intel seriously explores outsourcing production. Living in Phoenix, where Intel has tens of billions of dollars in infrastructure, that would be the kind of monumental shift in the industry that hasn't been seen since M

  • Intel thought they were ready for production, until researchers realized that they had mistakenly developed a 7cm chip.

    Back to the drawing board ...

  • Perhaps had Intel spent the money on R&D instead of stock buyback it could have hired more engineers to solve its engineering problems.

    I don't think manufacturing jobs are coming back to America without meaningful tax reform. Why spend money on costly capital improvement that looks bad on the balance sheet and gets you punished by the stock market, when the same money can be spent to artificially pump up the stock price and make the shareholders happy in the near term? Damn the long-term prospect of the c

  • Firstly, Samsung and TSMC don't use the standard that Intel uses, so their 7nm is really not different than Intel's 10nm.

    And all of this ignores many other issues. Feature size and density are super important as well. In fact, Intel's 10nm might have a slight lead in transistor density over the others. In the end, it's yield that wins.

    It's not surprising that Intel is having problems; things just get weird around nominal 7nm. EUV didn't pan out as well as hoped, and the other tricks all have an impact on th
