IBM Research Alliance Has Figured Out How To Make 5nm Chips (cnet.com) 56

IBM, GlobalFoundries, and Samsung said Monday that they have found a way to make thinner transistors, which should enable them to pack 30 billion switches onto a microprocessor chip the size of a fingernail. The tech industry has been fueled for decades by the ability of chipmakers to shoehorn ever smaller, faster transistors into the chips that power laptops, servers, and mobile devices. But industry watchers have worried lately that technology was pushing the limits of Moore's Law -- a prediction made by Intel co-founder Gordon Moore in 1965 that computing power would double every two years as chips got more densely packed. From a report: Today's chips are built with transistors whose dimensions measure 10 nanometers, which means about 1,000 fit end-to-end across the diameter of a human hair. The next generation will shrink that dimension to 7nm, and the IBM-Samsung development goes one generation beyond that to 5nm. That means transistors can be packed four times as densely on a chip compared with today's technology. "A nanosheet-based 5nm chip will deliver performance and power, together with density," said Huiming Bu, IBM's director of silicon integration and device research. Take all those numbers with a nanograin of salt, though, because chipmakers no longer agree on what exactly they're measuring about transistors. And there's also a long road between this research announcement and actual commercial manufacturing. IBM believes this new process won't cost any more than chips with today's transistor designs, but its approach requires an expensive shift that chipmakers have put off for years: the use of extreme ultraviolet light to etch chip features onto silicon wafers.
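The "four times as densely" claim in the summary is simple geometry: areal density scales with the inverse square of the linear feature size. A quick sketch, treating the node name as a literal linear dimension (which, as the article itself cautions, it no longer strictly is):

```python
# Rough sanity check of the density claim: halving the linear feature
# size from 10nm to 5nm packs ~4x as many transistors per unit area.
def density_scaling(old_nm: float, new_nm: float) -> float:
    """Areal density multiplier when shrinking a process node."""
    return (old_nm / new_nm) ** 2

print(density_scaling(10, 5))  # 4.0 -- "four times as densely" as today
print(density_scaling(10, 7))  # ~2.04 for the intermediate 7nm generation
```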
Comments Filter:
  • Moore's law (Score:5, Informative)

    by Whatanut ( 203397 ) on Monday June 05, 2017 @11:14AM (#54551913)

    I'll just leave this here.

    https://en.wikipedia.org/wiki/... [wikipedia.org]

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Since Whatanut doesn't spell it out: Moore's law is the observation that the number of transistors in a dense integrated circuit doubles approximately every two years. This is a tech site, we should get it right -- shame on msmash. Really, why go to the trouble of naming Moore's first name and the year, only to get what he actually said wrong?

      OK, enough discussion of the inaccuracy. What do y'all think of their 5nm claim?

      • by Khyber ( 864651 )

        "What do yall think of their 5nm claim?"

        They just about hit the limits of atomic transistors, if this is true and the feature size they refer to is not the trace but the transistor itself.

        From that, I expect to see at least an IPC improvement of 2 or 3 per thread given the shitty bloated coding everyone does nowadays, which should put us back on par with how things worked when we were on server-class P3 dies and didn't need these extra bullshit instructions.

        • But it's easier to build a faster processor than it is to retrain all of the shitty programmers writing bloated code.

        • Increasing instructions per clock is already extremely difficult and hardware intensive. I doubt that we'll see a doubling of IPC in a decade, and it may require compilers to optimize code for high IPC.

          Branches are a major consideration in increasing IPC. In order to prevent stalls caused by erroneous branch decisions, both paths may be speculatively executed as a new thread, and each branch may in turn hit another branch and in turn require a new speculative thread, all of which must be executed simultaneo
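The cost the parent comment describes compounds quickly: eagerly executing both sides of every branch doubles the number of in-flight speculative paths at each nesting level. A toy sketch of the arithmetic (illustrative only, not how any real core is organized):

```python
# Eager two-way speculation: every nested branch doubles the number of
# speculative paths that must be executed simultaneously, so the hardware
# cost grows as 2^depth. This is why real CPUs predict one likely path
# rather than executing all of them.
def speculative_paths(branch_depth: int) -> int:
    """Paths in flight if both sides of every branch are executed."""
    return 2 ** branch_depth

for depth in (1, 4, 8):
    print(depth, speculative_paths(depth))  # 2, 16, 256 paths respectively
```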

      • > What do yall think of their 5nm claim?

        Considering Silicon can't scale past ~5 GHz, whereas GaAs can scale to 500+ GHz, I see this as a stop-gap solution. Sure, 5nm will allow for more cores on a die, but single-threaded performance is STILL a bottleneck. No one seems to be looking at the long game -- bio-computing -- due to the billions invested in existing infrastructure.

        At some point we need to ditch Silicon -- this is just keeping it on its dying (performance) legs.

    • <SIGH>, I know. Moore's Law (observation, actually) is that transistor count (approximately) doubles every 2 years, not "power" (current times voltage?), or "processing power" (Whetstones perhaps?), and certainly not "speed" (GHz times bus width?). This process seems to obey the transistor count rule, but with heat already being the problem it is, it's hard to say what quadrupling the density actually buys you.
      • by tlhIngan ( 30335 )

        This process seems to obey the transistor count rule, but with heat already being the problem it is, it's hard to say what quadrupling the density actually buys you.

        Really, you cannot imagine there are any chips in your computer or smartphone today that will not benefit from a quadrupling of density?

        There are many transistor-limited devices out there - where the ability to stick more transistors in a smaller area is a net plus (or more transistors in the same area).

        I'm talking about memory. And memory in t

  • by SPopulisQR ( 4972769 ) on Monday June 05, 2017 @11:37AM (#54552027)
    We need to switch from nanometers to picometers. 5nm is 5000 picometers. The diameter of a silicon atom is 210 picometers, so 5nm equals approximately 24 silicon atom diameters, which provides valuable perspective. I understand that measuring transistors in silicon atom diameters sounds like measuring lakes in Olympic swimming pools, but in this situation there is a hard limit on how small a transistor can exist in practice.
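The parent's unit conversion as a quick sketch, using the ~210pm silicon atom diameter quoted above:

```python
# Express a process feature size in silicon-atom diameters.
SI_ATOM_DIAMETER_PM = 210  # approximate diameter of a silicon atom, picometers

def atoms_across(feature_nm: float) -> float:
    """How many silicon atoms fit end-to-end across a feature of this size."""
    return feature_nm * 1000 / SI_ATOM_DIAMETER_PM  # nm -> pm, then divide

print(round(atoms_across(5)))   # ~24 silicon atoms across a 5nm feature
print(round(atoms_across(10)))  # ~48 atoms for today's 10nm transistors
```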
    • It really is pretty exciting how close to atomic our transistors are becoming. We could conceivably create transistors that are smaller than some atoms. Regardless of what unit of measurement is used, it's really hard to understand the concept of how small that is.
    • We've done it before, in the switch from micrometers to nanometers [wikipedia.org]. The 80386 processors were built on a 1um process.

      To be honest, though, I don't remember anyone talking about process size until 90nm CPUs. Prior to that, everyone measured CPU performance by clock speed. So I guess we needed a new performance metric to talk about (along with multi-core), since clock speeds more or less stalled.

      • by mentil ( 1748130 )

        I recall enthusiast sites talking about the 0.13 micrometer process; it wasn't often put in terms of nanometers until the 90nm process.

        • That could be. Maybe it's just today that we frame the measurements in such a way with the benefit of hindsight.

    • I prefer Angstroms, because that's closer to the dimensions of atoms.
  • EUV (Score:5, Informative)

    by edxwelch ( 600979 ) on Monday June 05, 2017 @11:42AM (#54552055)

    "but its approach requires an expensive shift that chipmakers have put off for years: the use of extreme ultraviolet light"

    Actually, EUV has been planned for 5nm all along (even for 7nm). It makes the process cheaper, not more expensive (by reducing the number of masks).

    • But EUV is much, much slower. For volume production, you end up with fewer wafers per day.

      • Immersion steppers have higher throughput, but as they require more production steps they are effectively slower.

    • But I'm sure that the shift to EUV is expensive, which is what the article is saying.

    • It make the process cheaper, not more expensive (by reducing the number of masks)

      You are so dating yourself. Just ten seconds on Wikipedia concerning EUV lithography would shave a decade off your musty knowledge.

      The whole point of the post-2009 lithographic era is that nothing traditionally used as a benchmark of progress comes for free.

      Advantage: fewer masks
      Disadvantage: vastly longer step time

      Moore's law is still hobbling along, but it definitely lost a testicle circa 2004–2009. You can see it in

  • The way we are building processors (primarily lithography) is old, tired, and reaching its limits. What we need now is to focus on figuring out how to make small machines that can work cooperatively to build new things, including copies of themselves. It's been known for quite some time that the future of computing is in massive distributed systems, so why aren't we applying the same concepts to manufacturing processors?

    • What we need now is to focus on figuring out how to make small machines that can work cooperatively to build new things, including copies of themselves.

      Surely, nothing could go wrong [wikipedia.org] with that?

    • by mentil ( 1748130 )

      We'd probably have to use lithography to create these nanomachines in the first place. Using nano-tweezers to manipulate one atom at a time to create an array of nanoconstructors is infeasible, so it's more likely we'll start with larger machines that can make smaller machines. The problem is they'll be so small they can't rely on optics, yet will require some autonomy; humans can't just dump a sack of copper atoms into a hopper, they need to corral the atoms themselves. Is Intel really going to invent a Un

  • EUV has been worked on for a while, but at the scales they're ending up at, there are more problems that the article doesn't address. Electromigration starts to become a real problem when the transistors are only a few tens of atoms wide, for example.

  • ... for using the correct definition of Moore's law!
  • I understood that when foundries announce improved density nowadays, it is more because of 3D stacking than because of miniaturization.

    But the mention of extreme ultraviolet light suggests this time it is indeed about miniaturization. Does anyone have more information?
