Software IT Technology

David Patterson Says It's Time for New Computer Architectures and Software Languages (ieee.org) 360

Tekla S. Perry, writing for IEEE Spectrum: David Patterson -- University of California professor, Google engineer, and RISC pioneer -- says there's no better time than now to be a computer architect. That's because Moore's Law really is over, he says: "We are now a factor of 15 behind where we should be if Moore's Law were still operative. We are in the post-Moore's Law era." This means, Patterson told engineers attending the 2018 @Scale Conference held in San Jose last week, that "we're at the end of the performance scaling that we are used to. When performance doubled every 18 months, people would throw out their desktop computers that were working fine because a friend's new computer was so much faster." But last year, he said, "single program performance only grew 3 percent, so it's doubling every 20 years. If you are just sitting there waiting for chips to get faster, you are going to have to wait a long time."
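As a quick back-of-the-envelope check of that last figure (not from the article itself): 3 percent compound annual growth implies a doubling time of log(2)/log(1.03), roughly 23 years, in the same ballpark as the quoted "every 20 years." A minimal C sketch of the arithmetic:

```c
/* Sanity check (my own arithmetic, not the article's): doubling time at 3%/year. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double annual_growth = 0.03;   /* 3% single-program performance growth per year */
    double years_to_double = log(2.0) / log(1.0 + annual_growth);
    printf("Doubling time at 3%%/year: ~%.0f years\n", years_to_double);   /* ~23 */
    return 0;
}
```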
This discussion has been archived. No new comments can be posted.


  • by Anonymous Coward on Friday September 21, 2018 @12:38PM (#57355588)

    We've only had three new ones come out this week. We need M0AR! M0AR languages!! M0AR syntaxes!!

    M0AR of all the things!

    In fact, it should be a requirement for all CS majors to develop their own language before graduation, so everyone can be *THE* subject matter expert in a language. That would be awesome. Everyone would be able to charge $500/hr for being the ONLY expert in their language.

    What could be wrong with this??

    • by mccalli ( 323026 )
      To be honest, it used to be. What we now call domain-specific languages, we used to call Lex and YACC exercises. You had to learn various grammars etc. and be capable of developing your own. This would be 1990-92; for all I know it still is a requirement, though I would imagine the tooling has changed.

      The belief in syntax as immutable is wrong - it's a tool like any other; change it if it holds you back. I'm thinking now about the continued wedding to things like C etc. - they have their place, but they
  • Starting in 2005 (Score:4, Interesting)

    by Jogar the Barbarian ( 5830 ) <greg@NOsPaM.supersilly.com> on Friday September 21, 2018 @12:41PM (#57355608) Homepage Journal

    A SPECint graph shared on Quora shows this slowdown starting back in 2005.

    https://qph.fs.quoracdn.net/ma... [quoracdn.net]

    • That graph seems to confirm Moore's law is still holding.

      "Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years." - Moore's law [wikipedia.org]
      • That graph seems to confirm Moore's law is still holding.

        Yep. It leaves off multi-core performance, which should follow the transistor count growth.

      • From wikipedia: Intel stated in 2015 that the pace of advancement has slowed, starting at the 22 nm feature width around 2012, and continuing at 14 nm. Brian Krzanich, the former CEO of Intel, announced, "Our cadence today is closer to two and a half years than two." Intel is expected to reach the 10 nm node in 2018, a three-year cadence.

        So Moore's Law is slowing from 2 to 3 years.

        • by DRJlaw ( 946416 )

          From wikipedia: Intel stated in 2015 that the pace of advancement has slowed, starting at the 22 nm feature width around 2012, and continuing at 14 nm. Brian Krzanich, the former CEO of Intel, announced, "Our cadence today is closer to two and a half years than two." Intel is expected to reach the 10 nm node in 2018, a three-year cadence.

          So Moore's Law is slowing from 2 to 3 years.

          Which slipped to 4Q2019 [extremetech.com], with little prospect of an (Intel-scale) 7 nm process following on any reasonable timescale.

          3 years my

    • by pz ( 113803 ) on Friday September 21, 2018 @01:49PM (#57356184) Journal

      People confuse Moore's law with performance. Moore observed that the total number of transistors on a chip was doubling every 18 months. For a long time, that meant that the clock frequency was also doubling.

      Then, a nasty habit of physics to smack us in the phase --- err, face --- came along in the form of speed of light limitations. Given the size of contemporary chips, it just is not (and is unlikely to ever be, if what we know about fundamental physics is correct) possible to communicate from one side of a 1 cm die to the other much faster than in the range of a handful of gigahertz clock speeds, give-or-take. Even with photons going in straight lines in perfect vacuum (none of which happens on a chip) the best you could hope for would be a 30 GHz clock rate, a paltry ten times faster than today's CPUs.

      One obvious solution is to make circuits that are smaller, and thus we started to get more CPUs on a single die. Still, those CPUs need to synchronize with each other, the cache system, etc., so there remain chip-spanning communications constraints.

      The limits on the size of transistors, and thus perhaps on the total number on a chip, are looming but haven't arrived yet. The limits of raw clock speed most definitely have. It is safe to say that our chips will continue to get faster for a while, but the heady days of generation-to-generation massive improvements in single-thread CPU performance are over.
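
      To put rough numbers on that limit (a back-of-the-envelope sketch assuming vacuum speed of light and a straight-line path, which real on-chip signals never get):

      ```c
      /* Crossing a 1 cm die once per cycle at the speed of light caps the clock rate. */
      #include <stdio.h>

      int main(void) {
          double c = 3.0e8;                           /* speed of light, m/s */
          double die = 0.01;                          /* die width, 1 cm in metres */
          double crossing_time = die / c;             /* ~33 ps */
          double max_clock_hz = 1.0 / crossing_time;  /* ~30 GHz */
          printf("Crossing time: %.1f ps, max clock: %.0f GHz\n",
                 crossing_time * 1e12, max_clock_hz / 1e9);
          return 0;
      }
      ```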

      • In Star Trek TNG they used warp fields to enable data transfers faster than light speed. (Yes I know warping & FTL is just fiction.)

  • And? What if we don't need it to keep getting endlessly faster?
    • Many things people hope will come true in the future are predicated on processor power continuing to increase. If it doesn't, we have hit a wall technically. Much of the progress in the last 50 years is based on digital computing.
      • by gweihir ( 88907 )

        We have hit that wall. This technology is mature, and everything it can do well, it can do now at close to maximum possible speed. Sure, software sucks today and coders are mostly incompetent, and there is some speed increase to be expected from that angle, but that is it. That there was an era of seemingly exponential growth in no way implies it will continue without limits. And it does not.

        • Correct. We have hit that wall. And that means a lot of things aren't going to come true that people were depending on. People were spoiled by digital computing and think that progress is inevitable. My point is that it isn't.
          • by gweihir ( 88907 )

            I completely agree with that.

            Turns out that here, as in any other area, the past is not a reliable predictor of the future.

        • We have hit that wall.

          Not really. We have come to an obstacle that requires a change in tactics. Processing power is still increasing at Moore's law rates, but only for multi-core applications. Once we adapt to the new paradigm, where to double your performance you need to double the number of cores, things will pick up again. However, this is a really hard change to make and it is going to take some time to adapt.
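
          To make that paradigm concrete, here is a minimal sketch (an illustrative example, not anything from TFA) of what "double the cores to double the throughput" looks like in practice: the work has to be explicitly partitioned across threads, which is exactly the restructuring that takes time.

          ```c
          /* Minimal sketch: splitting a sum across threads with POSIX threads.
           * Compile with: cc -O2 -pthread sum.c */
          #include <pthread.h>
          #include <stdio.h>
          #include <stdlib.h>

          #define NTHREADS 4
          #define N 1000000

          static double data[N];

          struct chunk { size_t lo, hi; double partial; };

          static void *worker(void *arg) {
              struct chunk *c = arg;
              double s = 0.0;
              for (size_t i = c->lo; i < c->hi; i++)
                  s += data[i];
              c->partial = s;   /* each thread writes only its own slot: no locks needed */
              return NULL;
          }

          int main(void) {
              for (size_t i = 0; i < N; i++)
                  data[i] = 1.0;

              pthread_t tid[NTHREADS];
              struct chunk chunks[NTHREADS];
              for (int t = 0; t < NTHREADS; t++) {
                  chunks[t].lo = (size_t)t * N / NTHREADS;
                  chunks[t].hi = (size_t)(t + 1) * N / NTHREADS;
                  pthread_create(&tid[t], NULL, worker, &chunks[t]);
              }

              double total = 0.0;
              for (int t = 0; t < NTHREADS; t++) {
                  pthread_join(tid[t], NULL);
                  total += chunks[t].partial;
              }
              printf("sum = %.0f\n", total);   /* prints 1000000 */
              return 0;
          }
          ```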

    • by gweihir ( 88907 )

      It does not really matter, because we will not get it going much faster than today anyway. There is no "new architecture" or "new language" that will change that. Massively parallel systems failed last century, and did so several times. Vector architectures have just hit the same brick wall as conventional ones. There really is nothing else.

  • by david.emery ( 127135 ) on Friday September 21, 2018 @12:47PM (#57355642)

    I worked on the BiiN project. https://en.wikipedia.org/wiki/... [wikipedia.org] A 'capability' was a specific hardware-protected feature that was set up to be unforgeable and contain access rights. This computer architecture approach dates back to the Burroughs 6500 https://en.wikipedia.org/wiki/... [wikipedia.org] and even back to some aspects of MULTICS.

    They're definitely not von Neumann architectures, since a capability pointing to executable code is a very different thing than a capability pointing to data. In many respects, these would be "direct execution engines" for object-oriented languages (even C++, with some restrictions on that language's definition).

    A huge part of this is getting over the illusion that you have any clue about the (set of) instructions generated by your compiler. If you're working on a PDP-8 or even PDP-11, C might well be close to 'assembly language'. But with the much more complex instruction sets and compiler optimizations to support those instruction sets, most languages are far removed from any knowledge of what the underlying hardware executes.
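
    As a concrete illustration (my own example, assuming gcc or clang): even a trivial loop compiles into object code most people would not predict, and the only way to know is to dump it and look.

    ```c
    /* sum.c -- trivial source, non-trivial object code.
     * See what your compiler actually emits, e.g.:
     *   gcc -O3 -S sum.c                  (assembly listing in sum.s)
     *   gcc -O3 -c sum.c && objdump -d sum.o
     * At -O3, recent gcc/clang will typically unroll and vectorize this loop
     * using SIMD instructions that appear nowhere in the source. */
    int sum(const int *a, int n) {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }
    ```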

    • I have a few wafers of the 432 in my desk drawer at work.

    • by WorBlux ( 1751716 ) on Friday September 21, 2018 @01:50PM (#57356188)

      Are you familiar with the proposed Mill architecture? Their work with what they call turfs and portals sounds very similar. It allows secure calls across protection boundaries, hardware-managed data stacks, unforgeable process IDs, and byte-level permission granularity. It's definitely not a RISC machine, but it's not a C machine either, with hardware features that treat pointers as a type of their own which contain hardware-managed metadata bits useful for accelerating garbage collection.

  • Various alternate architectures have been tried out over the decades. A lot of other programming models have been tried out as well. They all basically failed or live on only in niches because people could not hack coding for them.

    Performance increases for most tasks are over. Deal with it and stop proposing silver bullets. It only makes you look stupid.

  • by bkmoore ( 1910118 ) on Friday September 21, 2018 @12:55PM (#57355692)
    Moore's law predicted early exponential growth in semiconductors, but as in all fields, it eventually hits an inflection point and becomes asymptotic; infinite transistor density will never happen.
    • Of course, but have we hit that inflection point? By all accounts we're only slightly behind the times in transistor count in ICs, with them doubling every 3 years now instead of every 2. Still very much a large exponential gain.

  • Software Devs (Score:5, Interesting)

    by Anonymous Coward on Friday September 21, 2018 @01:08PM (#57355838)

    This all points back to software devs. I've spent 2 decades dealing with low-level drivers and optimizations in assembly language. Not that I would expect developers to write assembly language, but the problem I run into is that software developers of high-level languages can't even write efficient code at their level. On top of that, they don't even understand how the language stack works, or what code constructs give better performance in one language versus another. In addition, they can't even profile their code anymore or look at logs.

    If anything needs changing, it's software developers first. They keep eating up all the computer resources and say "get more this/that for your computer." No, pull your head out of your 4th point of contact and learn to write efficient code. We were doing this shit in the 90s all the time. We even advertised for assembly programmers in NEXT Generation magazine, constantly!

    While there's nothing wrong with using high-level languages, programmers today have lost the art of what it means to be lean and mean. I don't hire any developer unless they can demonstrate they know the stack for the language they use.
    Me: "Oh, no assembly language experience?"
    Applicant: "Oh, no. Is that required here?"
    Me: "In rare cases, but I'm trying to understand if you even understand how a computer works at a fundamental level. In fact, have you ever worked with state diagrams?"
    Applicant: "No."
    Me: "Okay, you write an application that simply opens a file. What are the failure modes of your application and the opening of the file? Can you draw a state diagram for this?"
    Application: "A flow chart?"
    Me: "No, a state diagram. Given a set of inputs, I want you to diagram all outputs and failure modes for each state."

    Applicants could answer these questions in the 90s and early 00s, but rarely anymore. I blame software devs for this problem. Hardware engineers are always having to pick up the slack and drag everyone uphill because software devs can't pull their own weight.
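
    For what it's worth, here is the kind of answer I hope for, sketched in C (an illustrative example with a made-up file name, assuming a POSIX system where fopen sets errno on failure): the single "open a file" step already has several distinct failure states that deserve explicit transitions.

    ```c
    /* Sketch: explicit states for the "open a file" step and its failure modes. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    enum open_state { OPENED, NOT_FOUND, NO_PERMISSION, OUT_OF_DESCRIPTORS, OTHER_ERROR };

    static enum open_state open_log(const char *path, FILE **out) {
        *out = fopen(path, "r");
        if (*out != NULL)
            return OPENED;
        switch (errno) {                  /* each errno value is a distinct state/edge */
        case ENOENT:  return NOT_FOUND;
        case EACCES:  return NO_PERMISSION;
        case EMFILE:
        case ENFILE:  return OUT_OF_DESCRIPTORS;
        default:      return OTHER_ERROR;
        }
    }

    int main(void) {
        FILE *f;
        switch (open_log("app.log", &f)) {          /* "app.log" is a made-up name */
        case OPENED:             fclose(f); puts("ok"); break;
        case NOT_FOUND:          puts("no such file"); break;
        case NO_PERMISSION:      puts("permission denied"); break;
        case OUT_OF_DESCRIPTORS: puts("descriptor limit hit"); break;
        default:                 printf("open failed: %s\n", strerror(errno));
        }
        return 0;
    }
    ```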

  • I assume such a laptop would be pretty fast.
  • I know we've said this before, but I think we really have reached the point where the overwhelming majority of users can no longer tell, use, or appreciate an increase in processing speed. It wasn't that long ago that it was necessary to have a cutting edge CPU to do a lot of important end-user tasks. Now I do the majority of my work - which is vastly more computationally intensive than work I did not long ago - on my laptop. This isn't a cutting-edge gaming laptop or workstation replacement laptop eithe
    • "Can we make processors even faster yet? Sure"

      No, we can't. That is the point. The processor you get next year will only be marginally faster than the one from this year.
      • No, we can't. That is the point.

        Define faster. Single-core performance may only increase by a few percent, but the number of cores keeps increasing. So if your algorithm can use multiple cores, it will be faster; if not, it won't.

    • Ditto. My Surface Book Pro is only faster than the Thinkpad it replaced due to faster SSD architecture. Both were bought at a $3k price point (work allowance), and I gotta say I prefer the lighter SB even if its keyboard and lack of a mouse nipple almost suck.

      • As bad as the SB touchpad may be, I would bet money it's better than the touchpad I've had to deal with lately when helping a colleague who uses an HP laptop. Apparently she ordered a "gaming" laptop from HP as it was the one on the company list under "high performance". That touchpad is so infuriatingly awful that I won't meet with her unless I have a mouse with me. To make it even worse (as hard as that is to believe) it has a touch screen as well, which I found out by accident once. Why anyone thinks a
  • Can't read the link so I assume it's about parallelism.

    I think we welcome languages that encourage users to divide a problem into many smaller ones. But do we really need them?

    What I mean to say is that the value of software lies in the APIs and libs you develop. Having it perform well in a parallel environment takes a bit of clever thinking but most of us will hack it.

    There are quite a few programming models and frameworks that already allow astonishing things to happen in parallel. What is Patterso

  • In his blog https://caseymuratori.com/blog... [caseymuratori.com] Casey Muratori advocates moving away from drivers to instruction set architectures (ISAs). Back in the day, individual software could boot the entire computer in relatively few lines of code and still do its job while fitting on a single-sided, single-density floppy disk. Even today, you don't see game vendors making bootable Linux versions of their games that could theoretically work on both Mac and Windows, but I get his point.
    • I'm not sure I understand, but I don't really see the point in having individual applications be bootable on hardware. If anything, it'd make more sense to me to push more stuff from the OS into the firmware so that the firmware would present a standard set of APIs/protocols and the OS wouldn't need to worry about drivers. And then, in turn, standardize APIs across operating systems so that cross-platform apps would be easier.

      Either way, good luck getting any meaningful change out of the computing indust

      • I think that was kind of his point too. He goes into great detail on many aspects of a new ISA and even discusses the vendor issue toward the end. I highly recommend the video.
  • by Tangential ( 266113 ) on Friday September 21, 2018 @01:31PM (#57356032) Homepage
    Maybe we can eliminate shared libraries, dynamic linking, and other archaic constructs that came into existence to protect scarce resources like RAM and disk space. Let's put each process in its own 'jail'-like existence with closely monitored mechanisms for communication between processes.
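
    The "closely monitored mechanisms" part already has a familiar shape today. A minimal POSIX sketch (my own illustration, not a proposal from TFA): two processes that share no memory and communicate only over a pipe.

    ```c
    /* Sketch: parent and child share nothing; the pipe is the only channel. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];
        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                      /* child: write one message and exit */
            close(fd[0]);
            const char *msg = "result=42";
            write(fd[1], msg, strlen(msg) + 1);
            close(fd[1]);
            _exit(0);
        }

        close(fd[1]);                        /* parent: read (and could vet) the channel */
        char buf[64] = {0};
        read(fd[0], buf, sizeof buf - 1);
        printf("child said: %s\n", buf);
        close(fd[0]);
        waitpid(pid, NULL, 0);
        return 0;
    }
    ```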
  • Transistors are doubling every 24 months or so, on par with Moore's original enunciation of the law, and slightly off the 18 months of his revision of said law.

    What is not working anymore is *people's interpretation* of said law, which dictates that computers should be 2x faster every 18 months. Moore never said that. He only said that in a given square centimeter of silicon, the optimum number of transistors would double every 24 months. Then he later revised the number to every 18 months.

    When Moore's law w

  • I think it's time for teleporters, holodecks, and replicators. Is everyone with me??
  • by mrwireless ( 1056688 ) on Friday September 21, 2018 @02:22PM (#57356412)
    So... slightly delayed, then?
  • Why is it that there are always those people on here who think we don't need anything new or faster?
    Have you never run any development system and thought, OMG, why is this taking so long?

    We need way faster CPUs and computers. In fact, we need quantum computers.
    I would love to compile 1.3 million LOC for 10 different platforms in 3 seconds.

    The faster we go, the bigger the systems we can build. Try running a neural network the size of your brain!
    A NN with 10^11 (one hundred billion) neurons and 10^15 synaptic

  • It is that computers are becoming more tied to the Cloud, where to do anything the computer has to be online. And then there are constant upgrades to upgrade in order to meet the next upgrade (and pay more money). For most of my stuff I don't need a faster computer, just something to do stuff without having to deal with downloading crap I don't need.
  • Patterson's argument is blatantly intellectually dishonest... he talks about single program performance as if single programs are never parallel these days. He mischaracterizes Moore's law as being about single program performance (his creative definition). It is not; it is about transistor density, which continues to increase roughly according to Moore's law, and with no end in sight. Sure, process node shrink is slowing down, but parallelism is increasing rapidly, roughly balancing that. And 3D stacking is

  • How much of this slowdown is marketing driven? There's no reason to release a chip that's 50% faster if people are buying plenty of the older chip. You want to spread that out over time.

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...