Intel Technology

We're Not Prepared For the End of Moore's Law (technologyreview.com) 148

Gordon Moore's 1965 forecast that the number of components on an integrated circuit would double every year until it reached an astonishing 65,000 by 1975 is the greatest technological prediction of the last half-century. When it proved correct in 1975, he revised what has become known as Moore's Law to a doubling of transistors on a chip every two years. Since then, his prediction has defined the trajectory of technology and, in many ways, of progress itself. Moore's argument was an economic one. It was a beautiful bargain -- in theory, the more transistors you added to an integrated circuit, the cheaper each one got. Moore also saw that there was plenty of room for engineering advances to increase the number of transistors you could affordably and reliably put on a chip.

Almost every technology we care about, from smartphones to cheap laptops to GPS, is a direct reflection of Moore's prediction. It has also fueled today's breakthroughs in artificial intelligence and genetic medicine, by giving machine-learning techniques the ability to chew through massive amounts of data to find answers. But what happens when Moore's Law inevitably ends? Or what if, as some suspect, it has already died, and we are already running on the fumes of the greatest technology engine of our time?
  • Maybe the easy benefits (profits) of IT and automation have already been realized, and that has nothing to do with transistor count?
    • Yeah, not only did I prepare, I'm already there, sipping lemonade.

      • Same here. And the same goes for 99% of the world as well: their computers, phones, whatever are fast enough for whatever they need to do. I've got a ~14-year-old PC (remember Q6600s?) that, after an SSD upgrade some years ago, is pretty much indistinguishable from a PC fresh off the shelf today, apart from it running Windows 7 rather than Windows SuckingChestWound. In either case virtually nothing I do on it pushes the CPU past 5%. So when/if Moore's Law ends, most people won't even notice.
        • by jbengt ( 874751 )

          . . . for 99% of the world as well, their computers, phones, whatever are fast enough for whatever they need to do.

          Back in the 70s and early 80s, people could truthfully have said the same thing about their calculators, phones, electronic typewriters, pencils, and paper. Thing is, people won't know what help more calculation speed can be until they figure out how to use it, and 99% of people won't figure out how to use it if it isn't available yet.

    • by Roger W Moore ( 538166 ) on Monday March 02, 2020 @05:12PM (#59789116) Journal
      The lack of benefit from transistor count has been true for quite a while. The incremental speed gain of a single core has been extremely low for a decade or more. Clock speed has maxed out at around 3-4 GHz, and the only single-core improvements now come from better designs. The fact that modern CPUs come with multiple cores doesn't help much either, because not a lot of code can properly utilize them. Writing code that is truly multi-threaded is far harder than writing for a single core, so not a lot of people do it, and even those that do usually end up with inefficient designs that lead to blocking, which slows things down.
      • Most tasks just aren't suited for being parallelized. For the ones that are, we use GPUs or Hadoop.
        • That depends on the task. In particle physics, there are a lot of tasks that GPUs are not good at because the time it takes to transfer the data to and from the GPU eats up most of the gain of using the GPU.
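
          As a rough illustration of that trade-off, here is a back-of-the-envelope sketch in Python; the PCIe bandwidth and speedup figures are illustrative assumptions, not measurements, and the point is only that the transfer term can dominate for short tasks.

            # GPU offload only wins when the compute time saved exceeds the
            # time spent shuttling data over the bus. All numbers here are
            # illustrative assumptions, not measurements.
            def gpu_offload_wins(data_bytes, cpu_seconds, gpu_speedup=20.0,
                                 pcie_bytes_per_s=12e9):
                transfer = 2 * data_bytes / pcie_bytes_per_s   # there and back
                gpu_total = transfer + cpu_seconds / gpu_speedup
                return gpu_total < cpu_seconds

            print(gpu_offload_wins(4e9, 0.5))   # False: transfer eats the gain
            print(gpu_offload_wins(4e9, 60.0))  # True: enough work to amortize it
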
          • If the task performance is databound, then how does it help to have multiple CPUs?
            • At the very least you can thread the call for data so that the main UI does not freeze; even better would be if you could thread it in a way that lets you carry on with other tasks and data calls while other calls are in progress. There is definitely a very big need for more software to use more CPUs, which is why they keep coming out with easier and easier ways to kick off threads and handle callbacks.
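
              For what it's worth, a minimal sketch of that pattern in Python; fetch_rows and on_done are hypothetical names used only for illustration.

                # Run a slow data call on a worker thread so the main (UI) thread
                # stays responsive, then pick up the result via a callback.
                from concurrent.futures import ThreadPoolExecutor
                import time

                def fetch_rows(query):          # hypothetical slow data call
                    time.sleep(2)
                    return ["row for " + query]

                def on_done(future):            # runs when the call completes
                    print("data arrived:", future.result())

                pool = ThreadPoolExecutor(max_workers=4)
                future = pool.submit(fetch_rows, "SELECT ...")
                future.add_done_callback(on_done)
                print("UI thread is still free to handle events here")
                pool.shutdown(wait=True)        # a real app would keep the pool alive
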
              • There is definitely a very big need for more software to use more CPUs,

                Not really seeing it. The only examples you give work just as well with pre-emptive threading as they do with multiple cores. Also, you seem unfamiliar with common data paradigms like select() and epoll().
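
                For anyone unfamiliar with those, here is a minimal single-threaded sketch using Python's selectors module (which sits on top of select()/epoll()); the address and port are arbitrary, and the point is that one thread can multiplex many connections without extra cores.

                  import selectors, socket

                  sel = selectors.DefaultSelector()      # epoll on Linux, kqueue on BSD, else select
                  server = socket.socket()
                  server.bind(("127.0.0.1", 9000))
                  server.listen()
                  server.setblocking(False)
                  sel.register(server, selectors.EVENT_READ)

                  while True:
                      for key, _ in sel.select():        # blocks until a socket is ready
                          if key.fileobj is server:
                              conn, _ = server.accept()  # new client connection
                              conn.setblocking(False)
                              sel.register(conn, selectors.EVENT_READ)
                          else:
                              data = key.fileobj.recv(4096)
                              if data:
                                  key.fileobj.send(data)      # echo back (sketch ignores partial writes)
                              else:
                                  sel.unregister(key.fileobj)
                                  key.fileobj.close()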

      • by nonBORG ( 5254161 ) on Monday March 02, 2020 @06:19PM (#59789440)
        Actually, I will disagree. Multiple cores and the like all take more transistors, and when we went to 14nm there was also a drop in power consumption, which allows higher density and speed.

        There is not a lot of preparing to do, in my opinion. For the past >10 years we have been told it is the end of Moore's Law, and every time Moore (whose law was based on an observed trend rather than a scientific principle) keeps on coming through.

        We are still seeing Moore's Law helping with 5G, phone SoCs, and more memory in smaller packages. However, things are getting to the point where you have to ask how much Moore we need. Do we need to double the speed of phone processors, or get more megapixels in our cameras, or more storage in our watches? The pressure for more (Moore) is possibly easing off on the applications side, but what do I know; I am not going to predict the end of Moore's Law anytime soon. Didn't some guy say we could close the patent office because everything that will be invented has been invented? It takes a great deal of arrogance to predict the end, and many have tried and fallen under the momentum of Moore.
        • by thomst ( 1640045 )

          nonBORG pointed out:

          There is not a lot of preparing to do, in my opinion. For the past >10 years we have been told it is the end of Moore's Law, and every time Moore (whose law was based on an observed trend rather than a scientific principle) keeps on coming through.

          Wish I had points to give you the +1 Insightful upmod this deserves.

          I've been calling it Moore's Observed Trend since the mid-1990s, because "the number of transistors on a chip" will eventually, inevitably crash into the brick wall of physical limits (such as the minimum number of atoms' width of separation between traces needed to prevent electron leakage from one to the next), and BANG! - there will go the densification of the neighborhood.

          "Moore's Law" is still the sexier name, though, I have

        • I can't help but notice that your user ID is more than 5 million. You can't be very old. Do you even remember when cores used to get dramatically faster every year? Moore's law is indeed already effectively over. It was over 10 years ago.

          Are there some gains still being eked out of die shrinks and optimization? Yes, but it is no longer following any sort of dramatic increase every year. At least not for CPUs. GPUs still have not hit their wall because they focus on embarrassingly parallel tasks and can just ad

          • Comment removed based on user account deletion
            • by Aereus ( 1042228 )

              This isn't a good comparison when AMD floundered for close to an entire decade with very minimal gains, and it's only very recently that they've hit upon a tenable architecture again. Two data points don't prove a trend.

              Also, Threadripper is significantly larger than a regular processor and is made of numerous separate dies, so it's already cheating in a way. (Eyeballing it, it appears to be 2-3x the die area of a standard quad-core CPU.)

        • and every time Moore (whose law was based on an observed trend rather than a scientific principle) keeps on coming through.

          Except it hasn't been true for over a decade!

        • ...how much Moore do we need?

      • "The fact that modern CPUs come with multiple cores is something that not a lot of code can properly utilize."

        This is irrelevant. The vast majority of the really CPU-intensive tasks are highly parallelizable. Most of the things that you can't parallelize aren't CPU-limited, they're I/O limited. This is especially true for things that lots of users do, like compression, decompression, video encoding/decoding, graphics filters, audio encoding, and yes, gaming. All of these have opportunities for parallel exec
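
        A small sketch of what that looks like in practice, compressing independent chunks of a file across cores; the chunk size, worker count, and file name are arbitrary assumptions, and real tools (pigz, zstd -T) handle framing and ordering far more carefully.

          import zlib
          from concurrent.futures import ProcessPoolExecutor

          CHUNK = 4 * 1024 * 1024            # 4 MiB work units (arbitrary)

          def read_chunks(path):
              with open(path, "rb") as f:
                  while block := f.read(CHUNK):
                      yield block

          def compress_file(path, workers=8):
              # map() keeps input order, so each chunk comes back in sequence
              # as its own independent zlib stream.
              with ProcessPoolExecutor(max_workers=workers) as pool:
                  return list(pool.map(zlib.compress, read_chunks(path)))

          if __name__ == "__main__":
              chunks = compress_file("some_large_file.bin")   # hypothetical path
              print(len(chunks), "compressed chunks")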

      • Correct. We have H.265, we have graphics cards, we have 5G; that's all I ever need, besides DNA edits, and that is here now. The speed of light is a constant, the time for charge saturation, at least on silicon, is also a constant, and x-ray lithography is also at a hard boundary. Even if that were solved, charge reliability would remain. The next kid on the block is associative memory, which should not be called AI, and Z80-style alternative register sets, which kinda appear on GPU cards. Thus only supercond
    • Your suggestion has nothing to do with transistor count.
  • by fahrbot-bot ( 874524 ) on Monday March 02, 2020 @04:50PM (#59788968)

    ... from a Law to Moore's Strongly-Worded Recommendation.

    • by organgtool ( 966989 ) on Monday March 02, 2020 @05:00PM (#59789022)
      That's all it ever was - it should never have been called a "law". In science, that word is reserved for limits that can never be broken. That's not to say that Moore's contribution hasn't been extremely useful, but it should have been named something like "Moore's Trend" or "Moore's Observation".

      /rant
      • (discussing the title of the "Rules of Acquisition" book)
        QUARK: Then why call them Rules?
        GINT: Would you buy a book called "Suggestions of Acquisition"? Doesn't quite have the same ring to it, does it?
        QUARK: You mean it was a marketing ploy?
        GINT: Shh. A brilliant one. Rule of Acquisition two hundred and thirty nine. Never be afraid to mislabel a product.

      • That's not to say that Moore's contribution hasn't been extremely useful, but it should have been named something like "Moore's Trend" or "Moore's Observation".

        Barry Manilow's Mandy Moore's Law & Order Special Victim's Unit of Measurement

        • We still think of CPUs and even GPUs as 2-dimensional devices. As temperature and fab changes emerge, you'll see 3D chips.

          The problem is, Intel screwed up when it started making multi-core machines that had no software and no mechanism for getting rid of dirty cache between and among cores. Blame them for bad design; the Oracle SPARC had it, but Oracle's purchase of Sun's assets was a shitshow in and of itself.

          Will the density be renewed as we go 3D? Maybe. The x64/x86 designs waste a lot of space because o

          • Eventually the x64 ISA legacy is going to become an insurmountable problem. Modern processors are still pretending they are an 80386 in some ways.

            • Largely. The grafting-on of under-CPUs, and memory protections for virtualization (along with speculative guess-aheads), have made both Intel and AMD CPUs recklessly insecure.

              The AMD PSP core is a crime-- an insane design. The inability to correctly manage cache is another travesty, but the list is long.

              Certainly it will take time for ARM fundamentals to catch up. But Samsung, Apple, and others are desperate to get away from Intel's oil-well-in-the-basement problems, and we all understand WHY.

          • I like what you wrote because it makes a point that I've been seeing on /. for the last 8-14 years. Everyone looks at Moore's Law in a flat plane (thinking 2D). Why? Can't it be 3D, in the sense that it might just look like some Star Trek data cube, maybe a processing cube?

            We need a newer and different standard for measuring a computer's ability to process data.

            Maybe use a mathematical standard of how fast it hits 1 billion digits of pi (or a trillion), and then moves the result from one internal point of memory to another.
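
            A crude sketch of such a benchmark using the mpmath library (which would need to be installed); the digit count is scaled way down from a billion so it finishes in seconds.

              import time
              from mpmath import mp

              DIGITS = 100_000                  # scale this up on a machine you trust

              t0 = time.perf_counter()
              mp.dps = DIGITS + 10              # working precision in decimal places
              digits = str(+mp.pi)              # force evaluation of pi to a string
              compute = time.perf_counter() - t0

              t0 = time.perf_counter()
              copy = bytes(digits, "ascii")     # "move it from one point of memory to another"
              move = time.perf_counter() - t0

              print("compute: %.3fs  copy: %.6fs  (%d bytes)" % (compute, move, len(copy)))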

            • In a 2D chip, heat dispersion characteristics are down to algorithms that are pretty light. 3D heat dispersion, noise, crosstalk, junction noise, and a myriad of other characteristics (like how do you do QA when you can't see through layers) are all problems yet to be solved for production quantities.

              555 timers and their logic are still around, embedded into the USART sections of SystemsOnChips/SoCs. I'm guessing x86 logic and instruction sets will be around long after I'm dead, just as the logic in 4004s stil

              • Comment removed based on user account deletion
                • >>I'm not buying moore's law is effectively dead

                  That's a fair and valid point. I might have stated that it should be, and that was not the point I wanted to express.

                  I think Moore's law is now used for goal setting.

                  What I wanted to get across is that we need newer standards to understand what a chip can do within its type, and newer goals to achieve.

      • by MightyMartian ( 840721 ) on Monday March 02, 2020 @05:22PM (#59789164) Journal

        The notion of "laws", by and large has been dropped by physicists, it's more a holdover from the era of Classical Physics, in no small part because there are sneaking suspicions that in the very early universe, or at extremely high energies, some of those laws might not apply, or at least apply in quite the same way. Physical theories from that era are still referred to in that way, but nowadays, you don't see anyone creating new "laws" per se. For instance, we talk about the "laws of gravity", which are a shorthand for Newtonian laws of motion and gravitation, but by the late 19th century astronomers and cosmologists knew those laws didn't fully encompass observation, and ultimately Newton's mechanics became subsumed into General Relativity as how, for the most part, objects behave at non-relativistic velocities.

      • by Kjella ( 173770 )

        That's all it ever was - it should never have been called a "law". In science, that word is reserved for limits that can never be broken. That's not to say that Moore's contribution hasn't been extremely useful, but it should have been named something like "Moore's Trend" or "Moore's Observation".

        I guess someone should have a talk with Murphy and Betteridge and Sturgeon too. Or stop being an anal retentive pedant, one or the other... there's no possible way to confuse this with an actual law of nature unless you're a total cunt waffle.

        • >Murphy and Betteridge and Sturgeon
          Those can be considered laws because they are based on humor and not rigorous testing.

        • by ceoyoyo ( 59147 )

          Those laws of nature are the same thing: an observed relationship. A few of them, particularly the ones you learn in school, just turned out to hold a little more often than others. Still not really universally, though.

      • by tg123 ( 1409503 )

        "... In science, that word is reserved for limits that can never be broken ..." /rant

        No "Moore's Law" has been correctly used in this case as its a prediction and in Science that's really what Laws are mathematical equations that given a set of information you can use to predict the outcome. Laws are predictions as nothing is absolute in Science. https://www.youtube.com/watch?... [youtube.com]

    • ... from a Law to Moore's Strongly-Worded Recommendation.

      *Pirate Voice* "They're more what you'd call Moore's Guidelines than an actual law"

      • ... from a Law to Moore's Strongly-Worded Recommendation.

        *Pirate Voice* "They're more what you'd call Moore's Guidelines than an actual law"

        Paraphrasing Eddie Izzard [wikipedia.org] from his show Dress to Kill [wikipedia.org]: Before he had worked it out, the Heimlich Maneuver was more of a gesture.

    • Comment removed based on user account deletion
  • What? (Score:5, Insightful)

    by Thyamine ( 531612 ) <.thyamine. .at. .ofdragons.com.> on Monday March 02, 2020 @04:51PM (#59788978) Homepage Journal
    It feels like this is trying to lead to an 'everything has been invented' sort of argument. Obviously more breakthroughs will occur, and they may have nothing to do with Moore's Law. In the meantime, developers will have to go back to actually coding efficiently and not just dumping terrible code onto an 8-core processor.
    • by Shaitan ( 22585 )

      "In the meantime, developers will have to go back to actually coding efficiently and not just dumping terrible code into a 8 core processor."

      Bingo, and this can and will lead to greater gains than Moore's Law. Of course, you might have to have developers go back to developing and bring back competent admins... this is probably a good idea anyway, since churning out buggy dev-grade code directly into production with a handful of automated checks isn't a sustainable game plan for the long term.

      • by Shotgun ( 30919 )

        Bill Gates called.....

        • by Shaitan ( 22585 )

          Shhh... that is the scariest logic of all. All the suits will see are $$$'s everywhere, and in the meantime this crap is overtaking the way airlines, energy, rail switching, and, my favorite, entire hospital systems run (I know, I'm helping deploy the crap). People do remember there were reasons admins slowed down the dev upgrades and deployments. DevOps isn't about overcoming technical obstacles admins couldn't solve, it is about bypassing the controls admins put in place to protect the integrity, reliability, an

    • by urusan ( 1755332 )

      I'm pretty sure dumping terrible code into an 8 core processor is going to continue unabated.

      • Re:What? (Score:5, Insightful)

        by urusan ( 1755332 ) on Monday March 02, 2020 @06:58PM (#59789568)

        I should probably clarify what I'm talking about.

        Writing terrible code is only marginally related to hardware performance. Sure, it helped it to flourish back when the modern mainstream software development culture was being founded, but if hardware stopped getting better today we would continue to see terrible, bloated software everywhere.

        The biggest reason for terrible software is our software business culture. Software developers have to keep moving forward as quickly as possible in the short term both because that's the business culture and because software that gets out first tends to succeed regardless of how shitty it is (at least as long as it's not totally unusable). Once a piece of software has "won" market share in some space, it can get by with far less effort, only losing out after years to decades of terrible decisions and only if something consistently good is waiting in the wings to take its place. It additionally puts the business with the shitty but fast-to-market software in a position where it can root out competition while the competition is still weak, further strengthening its position. Thus, getting software to market fast has to take top priority for any successful business, and our software development culture has grown up around this primary force.

        Bloat in particular is an easy quality hit to accept for development speed, since it tends to have less of a negative impact on adoption than other quality issues.

        On top of that, the costs of bloat have almost always been paid by someone other than the party developing the software. With boxed software, the person buying the software pays the costs of bloat, not the person developing the software and making the sale. With cloud software, the person using the software online pays for the client side of the bloat, which is often where most of the bloat is as a result of these forces.

        Still, as long as the software is "good enough" and the bloat isn't so extreme as to make the software no longer "good enough", then bloat will not dissuade most consumers, and so it will continue, and our business culture will continue to amplify this problem.

        Also, we're never going to reach an "end" to software development and finally have perfect software to put in place and call it a day. In addition to the constant changes to business requirements, software goes through cycles of fashion, leading to a never-ending treadmill of change. In addition to the internal fashion cycles that are likely more obvious to software developers (which could arguably be ended one day if the perfect solution came along), there's external fashion cycles driven by users (and designers). A piece of user-facing software that still works perfectly well can be seriously hurt by not keeping up with these external fashion cycles for long enough.

        (Side note: assuming there is a perfect endpoint to software development, it would make this whole discussion irrelevant, since we'd be done with development once we wrote the perfect software and wouldn't have to go back and work on the perfect software. Diving into the perfect code would just be an intellectual exercise from there on out.)

        As long as there's no end to software development, then we'll continue to see speed-to-market win out over quality. Even when things have topped out, those who can chase fashion trends the best will tend to be the most successful. After all, this is true for other mature technologies as well (clothing technology hasn't radically changed in decades and keeping ourselves clothed is trivial, yet people continue to spend money on new trendy clothing) except when regulation forces a higher level of quality (cars are not immune to fashion trends either, though the safety aspects of cars require them to conform more tightly around the current optimum).

        It also doesn't help that there are so many people working in IT and development that probably shouldn't be there in the first place. It's relatively well paying work and there's staggering demand for developers,

    • Most industries go through their exponential growth, followed by a maturing into much slower improvements, and sometimes getting replaced by something else entirely.

      We still see occasionally better steel come out, despite the iron age being decidedly played out. Maybe ICs are on that path, and maybe it is not a terrible thing that computing power will plateau and some of our collective focus will move on to the next big thing (whatever that is).

      Most folks have more computing power in their hands or on the

    • I'd argue the last major breakthrough was the semiconductor. Everything since then has been incremental improvements to existing discoveries.

  • by mykepredko ( 40154 ) on Monday March 02, 2020 @05:00PM (#59789028) Homepage

    More efficient, less bloated & easier to follow (i.e., not a million methods in the way of getting to the essential function).

    More seriously, I suspect that we'll continue to improve processes and innovate with new technologies - essentially keeping Moore's Law "True" (in the sense of getting twice the capabilities every 18 months or so).

    • by TWX ( 665546 )

      Frankly this can't happen soon enough.

      Though I suspect that some companies will simply start demanding essentially two computers in one, the first to be the actual brains, the second to run the UI.

      Given how far GPUs have come, we might already be there.

    • by Calydor ( 739835 )

      Unfortunately the trend seems to be heading more towards turning programming into a game of LEGO. You get building blocks and you put them together to eventually have something that looks kinda like what you intended. It's just that most of the blocks aren't quite the same shade, and they aren't nearly as standardized as LEGO blocks are so they don't always fit well together.

      • I've been programming professionally for 35+ years - people have always done that: grabbed blocks and tried to get them to fit together. It's incredibly frustrating when you're trying to fix problems with their "code".

        This is indicative of a bigger problem: the actual number of people who can really program is small (I think about 25% of the people who are coding today).

  • by quax ( 19371 ) on Monday March 02, 2020 @05:00PM (#59789030)

    Our current software stacks are incredibly bloated, compromising performance for ease of use.

    If hardware is no longer magically getting better every year, then we need to again learn to squeeze more performance out of the hardware that's at hand.

  • It has been dead for some time. There were literally decades where the die could shrink, offering the same power for less money, since the smaller a transistor can be made, the more of them you get from the same size piece of silicon. As the limits of this got closer, things slowed down and leakage current became a bigger issue. Now the limits are looking quite near, and the incremental cost of smaller sizes is often high enough that there is no cost advantage, just a speed or density advantage. There's no d
    • Moore's law only ever talked about transistor density. A side effect of that was that cost decreased or performance increased, but those were not part of the law itself. We've still got a ways to go before we hit the absolute limits of what's physically possible, but there are plenty of engineering problems to solve along the way. However, a bigger limiting factor is that what we have today is already "good enough" for most people. We've hit the point where Moore's law means that you can put an incredibly p
      • Loosely, I think phones are about as powerful as a high-end desktop computer was a decade ago.

        10 years ago would have been the start of the Core architecture, or more specifically Westmere [wikipedia.org]. So that could be a 10-core 3.5 GHz CPU, probably overclockable to 4.3 GHz. I highly doubt any current phone SoC can compete with that, but I'd love to see a benchmark.

        I am still using an i7-4770K Haswell, launched in June of 2013, and it can play every modern game that I've tried on it without any problem at all. I would certainly like to see a benchmark of that against a Snapdragon 865. I would be very surprised if

  • by Gravis Zero ( 934156 ) on Monday March 02, 2020 @05:04PM (#59789052)

    There is waaaaay too much software that is written like absolute shit and eats an obscene amount of resources to do a simple job. Abstractions on top of abstractions and virtual machines on top of virtual machines have all led to abominations of programming like the Electron "platform". Maybe I'm just showing my age, but bragging that you cut your 130MB application down to 8.5MB is plain pathetic when proper applications capable of the same things are measured in KB.

    The sooner people are confronted with limitations the sooner someone will write better applications.

    • It's tough to blame people. As someone who didn't get a master's in IT, I find it extremely hard to find online sources that teach efficient coding, especially in specific languages. Maybe someone here knows what those places might be.
    • Companies don't like to hire programmers to optimize their code unless speed is something users widely complain about, but yes, I think Electron apps are some of the absolute slowest. I tried some Electron-based HTML editor that was way too slow even on Linux. I think it was Atom. This is totally absurd for a text editor. Text editors ran just fine on my 486-33.

      • by jbengt ( 874751 )

        Text editors ran just fine on my 486-33.

        For that matter, text editors ran just fine on my first personal computer, a Tandy 286, and even on my first work computer, an IBM 8086. The main difference is that they were not WYSIWYG and only showed the display font.

    • Eroom's law: Software gets slower as hardware gets faster

  • by Mirek Fidler ( 6160990 ) on Monday March 02, 2020 @05:19PM (#59789152)

    I feel like I read this same thing over and over again, for at least 15 years.

    Still, the price of a single transistor in an IC keeps going down more or less exactly as the law predicts. 8-core CPUs are now mainstream, soon 12. Gargantuan CPUs are built with as many as 50 billion transistors, and you can actually buy them without robbing a bank.

    Please note that Moore's law says nothing about frequency or performance. Those are collaterals...
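
    As a quick sanity check on that, take the roughly 2,300 transistors of the 1971 Intel 4004 against the ~50 billion figure above (the 4004 numbers are widely cited; the 50 billion is the claim in this comment):

      from math import log2

      doublings = log2(50e9 / 2300)     # ~24.4 doublings since the 4004
      years = 2020 - 1971
      print("one doubling every %.2f years" % (years / doublings))   # ~2.0, per the 1975 revision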

    • Please note that Moore's law says nothing about frequency or performance. Those are collaterals...

      It also says nothing about how many cores are on a single chip. It is specifically about how many transistors are on a chip.

    • That's because it is generally accepted in academic circles that Moore's Law ended a decade ago.
      > more or less exactly as the law predicts
      I mean, if you consider a margin of 25% "more or less exactly", then yes.
  • In what way is Moore's law impacting anyone other than PHBs and idiot journalists?

    We'll use whatever capabilities are built into the chip, same as it ever was.

  • by Dutch Gun ( 899105 ) on Monday March 02, 2020 @05:26PM (#59789186)

    What's amusing to me about Moore's Law is the never-ending string of failed predictions of its demise over the past several decades. I distinctly remember that at 90nm there were plenty of pundits who thought we were at the absolute limit of physics, and predicted a hard wall then and there. Obviously, with modern chips now approaching 7nm or even 5nm, that death was called just a wee bit early.

    In fairness, the article does admit it's more of a tapering-off than a "death". And I'm certainly not saying we won't ever hit either a hard or a practical (economic) limit. Physics tells us you can't shrink the dies forever, of course. And we've long since stopped seeing exponential single-core speed increases. But there may still be some techniques, or even some other materials, that will allow for the same sort of improvements, albeit in a slightly new direction.

    Frankly, I'm not sure the death of Moore's Law would be a real tragedy. Personal computers and phones are already ridiculously overpowered for what people typically use them for, and plenty powerful even for some pretty CPU-intensive work. It might even be nice to buy a phone and expect it to last a decade, which is about how old my plenty-still-powerful PC is.

    • What's amusing to me about Moore's Law is the never-ending string of failed predictions of its demise over the past several decades. I distinctly remember that at 90nm there were plenty of pundits who thought we were at the absolute limit of physics, and predicted a hard wall then and there.

      You are clearly younger than I am. I remember the exact same predictions when feature sizes were hitting one micron. And I'll bet some older engineers could say the same thing about ten micron feature sizes.

      Obviously th

    • Arguably, Moore's Law already died 10 years ago. You can see that transistor counts have already slowed down: https://upload.wikimedia.org/w... [wikimedia.org]

      If you're going on the original definition, transistors per IC, there's no fundamental wall. We can theoretically make the chips any size, but heat, signaling delays, production yield, etc, will fight you the whole time.

      Doing so in a practical manner is harder. For the last few decades it's largely based on shrinking feature sizes.

      The limit you mention at 90nm or

  • Number of transistors is one thing, but we are not going to hit a wall anytime soon:

    1: We are using a brain-dead concept of a computer in the first place. A shift to a passive backplane architecture with compute boards for everyday machines will help everything, even if we can't get things going past a certain point.

    2: Our biggest bottleneck right now is I/O. Figuring that one out will do a lot more for computing than a lot of other things.

    3: Software is written like garbage. A lot doesn't even bother

    • With respect to I/O being the bottleneck, there are ways to figure it out and plenty of ongoing work on it. We use transceivers, which are high-speed serial data channels. We were at 5Gbps a few years ago and are now up over 20Gbps (this is per lane). The underlying technology is LVDS (low voltage differential signalling), even though LVDS itself is a standard and not the current standard. If you need some serious I/O you can design it into your chip (it obviously needs to be designed in at both ends). Of course
  • The feedback loop is cost-driven. People can afford more transistors because they cost less than they once did, so they buy more. It is not about any specific metric like transistor count, density, process, etc. Even if you hit a wall and stop investing in new technologies to enable the next node, cost can still continue to be driven down by other factors.

    Moore's law to me is better reflected in the $60 SBC powerful enough to replace a desktop PC or 1TB SSD for less than $100.

    • This is where we run into trouble, as the tooling cost at 14nm and below goes up so much that we may not be achieving any real cost benefit. However, over time these things will get cheaper. There is plenty of room just to move existing block-diagram-level designs to a smaller node size to achieve better results than at the node size they were originally done at.
  • I haven't seen my computer getting twice as fast every two years; I still wait the same amount of time for stuff to happen as I did with Win 3.2. Of course, more stuff is happening, but I don't want to wait for every friggin' program to phone home all the time.
  • by 140Mandak262Jamuna ( 970587 ) on Monday March 02, 2020 @06:52PM (#59789546) Journal
    Electro-chemical batteries are showing a trend similar to Moore's Law: every seven years the energy density doubles and the price halves. So far it shows no sign of running out of steam, and we might get two or three more doublings before it does.

    We seem to be hovering around $120/kWh for cells and $150/kWh for packs. $75/kWh packs are a total game changer for the transportation sector and for intermittent power sources like wind and solar. At that price it would be cheaper to build solar+wind+storage than to run fully paid-for natural gas power plants. kWh/kg might become competitive even in aviation.
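
    Taking the $150/kWh pack figure and the seven-year halving at face value, a quick projection (both numbers come from the comment above, not independent data):

      def pack_cost(years_out, today=150.0, halving_years=7.0):
          return today * 0.5 ** (years_out / halving_years)

      for y in (0, 3, 7, 14):
          print("year +%2d: ~$%.0f/kWh" % (y, pack_cost(y)))
      # The $75/kWh "game changer" level lands about one halving (~7 years) out.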

  • by Jodka ( 520060 ) on Monday March 02, 2020 @07:10PM (#59789610)

    Engineers still have a few more rabbits to pull out of the physics hat: germanium can be clocked higher than silicon, ternary or quaternary arithmetic seems feasible on materials with larger band gaps, stacking increases gate count, and then there is quantum computing.

    Not that all of those things will prove feasible, yet there is some hope for the future after shrinking feature size runs out.

  • https://hardware.slashdot.org/... [slashdot.org]

    This is the great leap of last century's programmable ASICs brought forward into the present. Whether it can be programmed by bots will determine whether industry bites.

  • It's not like it's hard to stand up processing pipelines on cloud networks these days, and programmers are already putting a fair bit of effort into breaking down jobs so that they can farm them out to those pipelines. Massively parallel is the next big thing and it's already been here for years. Hell, they were starting to really talk about breaking up jobs for parallel processing when I was in college, and that was decades ago.
  • Hate to break it to you - Moore's Law as a doomsday story will hardly have an impact on the market.
  • Predictions of the end of Moore's Law seem to come back every two years, but it keeps going.

  • For centuries we lived without Moore's Law and we will again.
    No other aspect of technology improves exponentially the way computers were doing, and yet we're all still alive.
  • by FeelGood314 ( 2516288 ) on Tuesday March 03, 2020 @12:56AM (#59790522)
    The cost of a leading-edge foundry has been doubling every 3 years. There are now really only three leading-edge foundries in the world: Intel in America, Samsung in South Korea, and the Taiwan Semiconductor Manufacturing Company (TSMC). Worse, they all get their photolithography machines from ASML in the Netherlands, since that is the only company that can make extreme ultraviolet lithography equipment. The number of players has been getting smaller and smaller with every generation of chips.
  • Every 18 months, actually. Which works out to multiplying by roughly the golden ratio (~1.6) every year.
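
    The arithmetic behind that, as a quick check:

      yearly = 2 ** (12 / 18)           # doubling every 18 months, compounded per year
      print(round(yearly, 3))           # 1.587, close to the golden ratio 1.618
      print(round(yearly ** 1.5, 3))    # 2.0 after 18 months, as expected
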
  • Every so many months, yet another new post about Moore's Law coming to an end.
    It's like the 'Year of the Linux Desktop', but for CPUs.

  • Moore's Law stopped translating into better performance more than a decade ago. Instead of using the vast growth of available transistors to give us faster, more complex cores, the CPU manufacturers simply started doubling the number of cores in a CPU for the same price, as if more cores automatically translated into more performance. And so we got exponential growth in cores without a real increase in performance for most real-world tasks. We
