The World's Tallest Chip Defies the Limits of Computing: Goodbye To Moore's Law? (elpais.com)

Longtime Slashdot reader dbialac shares a report from EL PAIS: For decades, the progress of electronics has followed a simple rule: smaller is better. Since the 1960s, each new generation of chips has packed more transistors into less space, fulfilling the famous Moore's Law. Formulated by Intel co-founder Gordon Moore in 1965, this law predicted that the number of transistors in an integrated circuit approximately doubles each year. But this race to the minuscule is reaching its physical limits. Now, an international team of scientists is proposing a solution as obvious as it is revolutionary: if we can't keep reducing the size of chips, let's build them up.

Xiaohang Li, a researcher at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia, and his team have designed a chip with 41 vertical layers of semiconductors and insulating materials, approximately ten times higher than any previously manufactured chip. The work, recently published in the journal Nature Electronics, not only represents a technical milestone but also opens the door to a new generation of flexible, efficient, and sustainable electronic devices. "Having six or more layers of transistors stacked vertically allows us to increase circuit density without making the devices smaller laterally," Li explains. "With six layers, we can integrate 600% more logic functions in the same area than with a single layer, achieving higher performance and lower power consumption."

  • We've never had more than 4 layers of silicon? That's surprising to me.

  • Cool (Score:5, Interesting)

    by liqu1d ( 4349325 ) on Tuesday November 04, 2025 @07:43PM (#65773956)
    I'd always assumed thermals and cost would be the limiting factor in chip thickness.
    • Re:Cool (Score:5, Insightful)

      by MachineShedFred ( 621896 ) on Tuesday November 04, 2025 @08:25PM (#65774022) Journal

      This was my thought too - if you have essentially 6 layers of silicon with insulators between them to create 600% of the transistor density, you're also consuming >600% of the power (nothing is ever 100% efficient) and therefore producing >600% of the wattage to dissipate, without a corresponding increase in radiative surface area.

      How do you not cook the center of the cube when we're already throwing 70W into a single chip the size of your fingernail? Maybe central heat pipes that each layer hooks up to, running vertically through the die? And how much area per layer do you lose to that, at what increased manufacturing complexity (read: cost and reduction in yields)?

      I'm sure those are all answerable engineering questions if the value is there. And my guess is that, since it's very obvious that stacking chips is a sure-fire way to increase transistor density, the value hasn't been worth the added complexity to solve the inherent problems, because die shrinks were always cheaper and easier to do... right up until they aren't.

      It's good that someone is asking the question and showing that it can be done. But I'd wager [paywall so couldn't RTFA to confirm] that they aren't exactly stacking up the highest performance Xeon or Epyc chips 41 high and running them at full throttle.
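
      A quick back-of-envelope of that worry in Python (illustrative only; the 70W fingernail figure is from above, the die size is assumed):

      # Power scales at least linearly with layer count, but the heat still
      # has to leave through roughly the same top surface, so areal power
      # density climbs with every layer you add.
      DIE_AREA_MM2 = 100.0        # assumed 10 mm x 10 mm die
      POWER_PER_LAYER_W = 70.0    # the "70W fingernail chip" figure above

      for layers in (1, 2, 4, 6):
          total_power = layers * POWER_PER_LAYER_W   # >= linear in practice
          density = total_power / DIE_AREA_MM2       # W per mm^2 of top surface
          print(f"{layers} layers: {total_power:.0f} W total, {density:.2f} W/mm^2")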

      • Re: (Score:1, Informative)

        by Anonymous Coward

        What paywall? I didn't have any issue viewing either the elpais or nature.com sites.

      • Re:Cool (Score:5, Interesting)

        by CaptQuark ( 2706165 ) on Wednesday November 05, 2025 @12:41AM (#65774262)

        How do you not cook the center of the cube when we're already throwing 70W into a single chip the size of your fingernail? Maybe central heat pipes that each layer hooks up to, running vertically through the die? And how much area per layer do you lose to that, at what increased manufacturing complexity (read: cost and reduction in yields)?

        They partially answered that in the elpais article:

        To demonstrate the viability of their design, the team made 600 copies of the chip, all with similar performance. The researchers used these stacked chips to implement basic operations, achieving performance comparable to traditional non-stacked chips but with significantly lower power consumption: just 0.47 microwatts, compared to the typical 210 microwatts of state-of-the-art devices.

        That's nearly a 450:1 power reduction from stacking the chips. Part of it makes sense. If you have 10 items about the size of poker chips, stacking them vertically means much shorter interconnects than arranging them flat with horizontal connections between them. Driving a signal across traces three poker chips wide takes more voltage or current than driving it the short vertical distance. There are also possibly fewer capacitive interactions without the longer, parallel traces.
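
        The arithmetic, plus a toy model of the interconnect argument (the capacitance and toggle numbers are assumed ballparks, not from the paper):

        # The article's figures: 0.47 uW stacked vs 210 uW conventional.
        print(f"power ratio: {210 / 0.47:.0f}:1")  # ~447:1

        # Dynamic power ~ C * V^2 * f, and wire capacitance grows roughly
        # linearly with trace length, so short vertical hops cost less.
        C_PER_MM = 0.2e-12   # assumed ~0.2 pF/mm of trace (ballpark)
        V, F = 1.0, 1e6      # assumed 1 V swing, 1 MHz toggle rate

        for label, length_mm in (("horizontal, three chips wide", 30.0),
                                 ("vertical, through the stack", 0.1)):
            power_w = C_PER_MM * length_mm * V**2 * F
            print(f"{label}: {power_w * 1e6:.3f} uW per toggling wire")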

      • This reminds me of something that was done back in the (I think) 90s for one of the Pentium chips. Instead of lying flat on the motherboard, it had all of its connectors along one edge and stood upright on that edge in a special mount that kept it vertical, so that all of it was exposed to the air and it didn't need a heat sink or special fan. Yes, it had its drawbacks, mostly that it couldn't be used in a laptop and needed a tall case, but it worked and worked well. I know, because I used one for several years.
        • Re:Cool (Score:4, Informative)

          by thesandbender ( 911391 ) on Wednesday November 05, 2025 @02:40AM (#65774348)
          All Slot I/II's required heatsinks, and most had fans (some OEMs didn't include one, but the fan was intended to be installed by the OEM). Now, the heatsink was often preinstalled on (or part of) the cartridge... maybe that's what you were thinking of? The max TDP was around 20-30W; not crazy, but it still required a fan or a chonky passive heatsink. The card/slot was also not done for cooling reasons; it was done so they could bundle L2 cache with a dedicated bus instead of having it on the motherboard (L2 still wasn't on the chip package at that point).
          • That's going far past anything I would know; I'm software, not hardware, and it was a long time ago. All I remember is that there was a pair of uprights that helped hold the chip up, and it went from the motherboard almost to the top of the case.
            • If it went almost to the top of the case, it must have been an SFF system, because Slot 1 and Slot A processors alike were less than half the height/thickness of a typical ATX case.

        • You're misremembering. The card-edge thing was to try to get rid of the ZIF socket; it had nothing to do with thermals. My 233 and 266 came with fans on them, and thermals were pretty ass even though those chips were not hot or nasty by any means.
      • by tlhIngan ( 30335 )

        There are ways to mitigate the heat, though. The problem is not heat itself, but heat density. You don't want all the transistors in the same location busy at the same time, for example, because that creates a hot spot. So if you can distribute your logic design so that the busy parts active at any one time are spread out or located in different parts of the die, it's much less of a problem.

        This is why you can have "turbo boost" on CPUs - where if you have idle cores, you can run some cores faster than normal.
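
        A minimal sketch of that budget idea, with invented numbers:

        # Invented numbers: a fixed package power budget shared by 4 cores.
        PACKAGE_BUDGET_W = 65.0
        BASE_PER_CORE_W = 15.0   # all four cores busy: 60 W, within budget
        IDLE_CORE_W = 1.0

        # With three cores idle, the remaining core can absorb the leftover
        # budget and clock higher without blowing the package thermal limit.
        boost_budget = PACKAGE_BUDGET_W - 3 * IDLE_CORE_W
        print(f"single active core may draw up to {boost_budget:.0f} W "
              f"vs {BASE_PER_CORE_W:.0f} W when all cores are busy")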

    • by gweihir ( 88907 )

      It actually is interconnect first, thermals second, and then things like clock distribution and power. There was always more space for logic, but that logic could then not be used.

    • by JBMcB ( 73720 )
      Same thought here. Heat is a major limiting factor for modern CPUs. The die of the IBM POWER chips isn't that much bigger than an AMD EPYC, but the chip package is huge, mainly to accommodate a massive heat spreader with lugs for water cooling.
  • by 93 Escort Wagon ( 326346 ) on Tuesday November 04, 2025 @07:46PM (#65773962)

    Where's the "defies the limits of computing" part?

    • by AvitarX ( 172628 )

      I assume they mean a 10x jump is outpacing Moore's law.

      I would bet though that it'll take long enough to commercialize that it will fall right in place with Moore's law.

    • by Jeremi ( 14640 ) on Tuesday November 04, 2025 @08:29PM (#65774028) Homepage

      Where's the "defies the limits of computing" part?

      Defies the thermal limits, probably.

    • by cusco ( 717999 )

      Moore's Law assumes a two-dimensional architecture; I don't think three-dimensional layouts were even considered possible at the time, much less something anyone would want to do.

    • by gweihir ( 88907 )

      Nowhere. This is a meaningless stunt.

      • Really? What do you base that claim on, that it's only a stunt? Multilayer with this many layers is really something special at this time.
        • by gweihir ( 88907 )

          Some understanding of chip making helps. There is a reason people stop at 6 layers or so. Yes, you can stack more. No, it does not make sense to do so.

          • Some understanding of chip making helps. There is a reason people stop at 6 layers or so. Yes, you can stack more. No, it does not make sense to do so.

            Nobody is stopping. Flash memory is stacked into dozens of layers, and modern TSMC and Intel processes sport over a dozen layers.

            • by gweihir ( 88907 )

              FLASH is chip-stacking, not layer stacking. Fundamentally different.

              • FLASH is chip-stacking, not layer stacking. Fundamentally different.

                Even limiting your comments to layering they are still comically wrong.

                • by gweihir ( 88907 )

                  What is it with you people, always trying to claim you are right, when clearly you are not? How do you ever learn anything?

          • And technology moves forward: decades ago we had trouble even doing one layer, and progress proved us wrong as we managed more. And now this Chinese scientist shows that even more layers than before can be achieved using a new method.
        • IT DOESN'T WORK LIKE THAT, FUCKWIT. Thermals and interconnects don't scale linearly when you stack.
          • And apparently this Chinese scientist has found a way to do it properly. People like you always think they know better, but in reality you don't know jack shit, as you have no clue how this new technique is done. Thanks to advances in technology and understanding, we get better new technologies. And don't go saying "oh, but laws of physics..."; those "laws" aren't set in stone, as we still have a lot to learn. Those "laws" are just made by humans to describe something, and they're not definite, as we still have more to learn.
  • by Tschaine ( 10502969 ) on Tuesday November 04, 2025 @07:54PM (#65773978)

    How do you get the heat out of a bunch of CPU cores that are sandwiched between layers of additional heat-generating cores?

    I wonder how much effort it would take to get the defect rate low enough to be commercially viable. It's a hard problem today. Does this make it easier because you can use wider features and just build upward to get the desired number of features? Or does everything just get a lot harder to do consistently?

    • by Coius ( 743781 )

      I wonder if the Peltier effect / Peltier cooling would be the solution to this, obviously in a very thin form. You could use the outside edges of the chip substrate to move the heat to; coolers would require a redesign, but that's a possible way to limit the heat between the layers: adding Peltier-type cooling in the sandwich.

      • Nah, put a hole down the center of the chip, where the heat will be concentrated, and pipe it out.
      • by Khyber ( 864651 )

        No, because whatever side is cool, the other side is hot.

        This means one chip layer gets cooled while the one on the opposing side of the cooler is getting cooked.

    • by Tschaine ( 10502969 ) on Tuesday November 04, 2025 @07:58PM (#65773990)

      Turns out, they're targeting low power applications, so heat isn't the same challenge that it is with desktop and server CPUs.

      But since they're not trying to push the envelope of Moore's law, I wonder why stacking is needed in the first place.

      • by MachineShedFred ( 621896 ) on Tuesday November 04, 2025 @08:35PM (#65774042) Journal

        My guess is that they aren't targeting performance, but rather making a lower-power system-on-chip that really is a fully-featured system-on-a-chip, incorporating lots of low-power, low-heat peripheral crap like I2C / serial / USB / SATA in addition to RAM, flash storage, NIC, etc. Put the highest-wattage bits on top for direct interface with the heat spreader, and stack the other stuff below, with some thermal magic in the sandwich to move as much heat from the lower layers to the edges as possible, so you aren't adding to the thermal load of the CPU core from below.

        This kind of thing could be really cool in the low-power embedded / industrial controller space where nobody is looking for laptop performance out of a chip. But you are trading one complexity for another: instead of having to use a lot of geometric area to mount and connect all the peripherals to the CPU, you end up with a shitload of thermal management problems for a very compact system without the geometric area requirement.

        Unfortunately, that geometric area really helps with the thermal problems.

      • But since they're not trying to push the envelope of Moore's law, I wonder why stacking is needed in the first place.

        Moore's law isn't about computing performance; it's purely about transistor density, and that is exactly what they are pushing, given that Moore framed his observation in two dimensions.

        • Sure, but the question still stands: what low-power scenarios will benefit from chips that have a smaller surface area?

          We're already making fingernail sized microcontrollers, for example, but there's only so much IO you can do with something so small, so the CPU and storage needs are also pretty small.

          I don't deny that this will make cool stuff possible, I just wonder what sorts of things they have in mind. It's a profound change of direction, but what direction are we going?

    • How do you get the heat out of a bunch of CPU cores that are sandwiched between layers of additional heat-generating cores?

      Easy, you just use synthetic diamond for the substrate. /s

  • One of the things about making something with a larger volume is that heat generation grows with the volume while the surface area available to shed that heat grows more slowly.

    How are they going to cool these new 3D chips?
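
    The square-cube intuition behind that question, as a tiny Python sketch (arbitrary units, purely illustrative):

    # Heat generated scales with volume (~ s^3); the surface available
    # to shed it scales with area (~ s^2), so the ratio worsens as you grow.
    for s in (1.0, 2.0, 4.0):   # cube edge length
        volume = s ** 3
        surface = 6 * s ** 2
        print(f"edge {s:.0f}: volume {volume:.0f}, surface {surface:.0f}, "
              f"volume/surface {volume / surface:.2f}")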

    • by Tablizer ( 95088 )

      I suspect they'll eventually end up being designed akin to tall buildings with corridors between adjacent buildings every few floors. The coolant will run in between.

    • by rossdee ( 243626 )

      I think it would have to be liquid cooling

      • Liquid immersion cooling can do this, especially if there are micro-channels built into the chip to aid fluid flow. This is the kind of thing that fluid manufacturers have been waiting for (disclaimer: I work for one of those manufacturers)

  • From my understanding, the number of layers was never the issue; the thermal issues were. When you stack, you start compounding the heat issues. With the tech these guys are talking about, performance starts to degrade at 50°C. Any real-world scenario using these chips will require highly specialized cooling solutions.

    So while it's great that they managed to stack vertically higher than others before... is the size/density benefit offset by the cooling requirements? Or is this one of those theoretical wins that never makes it to production?

  • It's sadly still relevant.

    • by Jeremi ( 14640 )

      OTOH the nice thing about software is that it's easy to update, so anyone is free to replace their slow/inefficient software with a faster/efficient version as soon as they obtain it, at which point their fast hardware should run the efficient software very quickly. Nothing (except possibly bad management decisions?) is preventing anyone from creating efficient software, either.

      • by narcc ( 412956 )

        The industry has forgotten how to write efficient software. We've had multiple generations brought up believing that memory is free and that increasingly fast computers will magically solve any performance problems.

        For reasons I can't even begin to comprehend, people still believe that nonsense.

        Ages ago, we'd caution developers against 'premature optimization' as a hedge against needless complexity. These days, the trade-off is the same, just flipped on its head: the code that's easy to read and maintain wins by default, and the optimization never comes.

  • Every Two Years (Score:5, Informative)

    by crunchy_one ( 1047426 ) on Tuesday November 04, 2025 @08:13PM (#65774006)
    Moore tells us that density doubles every two years, not every year. Also, Samsung has been producing 96-layer V-NAND dies at scale since 2019.
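
    The two-year cadence as a one-line projection (seeded with the Intel 4004's often-quoted ~2,300 transistors in 1971; illustrative, not a fit to real products):

    # Density doubles every two years under the revised (1975) formulation.
    def transistors(year, base_year=1971, base_count=2300):
        return base_count * 2 ** ((year - base_year) / 2)

    for y in (1971, 1981, 1991, 2001, 2021):
        print(y, f"{transistors(y):,.0f}")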
    • This is what I was thinking - the RAM guys have been stacking chips for a hell of a long time: first as actual package stacks on the DIMM (I remember seeing some very dense modules in the DDR2 days with stacked packages), and then, as you correctly point out, Samsung has been layering dies inside the package for years as a microscopic version of stacking chip packages.

      It's also why DIMMs have heat spreaders on them now.

      • Stacking chips is not the same as a monolithic chip with multiple active layers. When chips are stacked, each chip can be tested before stacking, and the final yield becomes a question of successful interconnect and of not damaging chips during assembly. With multilayer chips, every layer must be perfect for the device to work.

        I'd guess that they're not using a process with the smallest geometry. That way they can have a process that is basically very high yield for each layer, and the resulting die will have acceptable yield overall.
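
        The yield math that drives this, with hypothetical per-step yields:

        # Monolithic: every layer must be perfect, so per-layer yields multiply.
        PER_LAYER_YIELD = 0.95   # assumed
        N = 6
        print(f"monolithic {N}-layer yield: {PER_LAYER_YIELD ** N:.1%}")  # ~73.5%

        # Chip stacking with known-good dies: test each die first, so only
        # the bonding steps remain as risk (assume 99% per bond).
        BOND_YIELD = 0.99
        print(f"known-good-die stack yield: {BOND_YIELD ** (N - 1):.1%}")  # ~95.1%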

    • by gweihir ( 88907 )

      What Samsung does is stack dies on top of each other. What these people here do is a die with more layers. But this is very likely just a stunt that is non-viable for any real use.

    • by narcc ( 412956 )

      It's not a law of nature. It's also quite a stretch from what he actually said. This is where it comes from [intel.com]


    • Samsung has been producing 96-layer V-NAND dies at scale since 2019

      There are many fundamental differences between V-NAND flash cells used to store charges and chips designed to perform logic tasks. Not just the construction, the design, and the way the transistors are manufactured, but also the packaging and assembly. It's one of the reasons your FinFET-based CPU can burn itself to a crisp but your SSD does not. This technology is something that so far has only found meaningful application in charge storage. The closest you get to traditional logic being stacked is some cache stacked on top of a die.

    • Yeah - SK hynix 3D NAND is now over 300 layers.

  • "Intel bricks inside!"

  • by PPH ( 736903 ) on Tuesday November 04, 2025 @11:10PM (#65774200)

    ... can it get before all the engineers start speaking different languages?

  • To be fair... they did say 600% more logic, so 6x and not 41x (for a 41-layer stack), so they probably accounted for heat dissipation in addition to pads/vias.

    Cuz with 41 layers and no heat-dissipation concerns... you could probably do over 39x the logic...

    Also, Moore's law referred to one chip, not 41 stacked chips... so no. If it were one package, then we could have easily wired 10,000 chips into one big-ass package back in the 70s, and Moore's law wouldn't even be a thing...
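
    Back-solving the overhead that claim implies (a hypothetical model, not from the paper):

    # If 41 physical layers only buy ~6x logic, most of each layer must be
    # going to vias, power delivery, and thermal relief rather than logic.
    LAYERS = 41
    CLAIMED_GAIN = 6.0
    usable_fraction = CLAIMED_GAIN / LAYERS
    print(f"implied usable fraction per layer: {usable_fraction:.0%}")  # ~15%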

  • It isn't about density of components or any metric other than cost. This is just a feedback loop where people can afford to have more components in a given gadget, so they add them, driving further cost reductions.

    "For simple circuits, the cost per component is nearly inversely proportional to the number of components, the result of the equivalent piece of semiconductor in the equivalent package
    containing more components." ...

    "Thus there is a minimum cost at any given time in the evolution of the technology.

  • If I remember correctly, around 2000 Siemens-Fujitsu was testing a "vertical" chip for supercomputers, but I think it never reached the market.
    • by mccalli ( 323026 )
      Motorola/IBM too, same time period. I talked to the head of their chip division at the time. POWER (not PowerPC) was investigating vertical chips.

      Mind you, this guy also told me there would never be a chip faster than 2GHz due to thermal limits so... yeah.
  • Stacked layers trap heat. Heat makes chips die. Also, how do you route all that shit? No duh, 6 layers is 6x as much shit in theory. Also, the fabrication cost just went up and yield went down by a large margin (multiply the yields of each layer, and don't fuck up on the last ones, or it's scrap).
  • There is no reported working model. The designed-only chip doesn't defy anything - as presented here at this time it's 100% hype.

  • Before you know it, these things will look like a miniature Borg cube. On the other hand, maybe we've always been wrong about Borg cubes, maybe they *are* miniature and only *look* enormous in the movies!

  • Diamond is one of the best heat conductors, and it is electrically insulating. I believed for a long time that we would create chips with layers separated by diamond.

"Though a program be but three lines long, someday it will have to be maintained." -- The Tao of Programming

Working...