
Google Mourns Veteran Engineer Luiz Andre Barroso Who Invented the Modern Data Center (wired.com)

Brazilian engineer Luiz Andre Barroso, who ripped up the rulebook at Google, has died. His radical ideas for data centers laid the foundations for cloud computing. Wired: Luiz Andre Barroso had never designed a data center before Google asked him to do it in the early 2000s. By the time he finished his first, he had overturned many conventions of the computing industry, laying the foundations for Silicon Valley's development of cloud computing.

Barroso, a 22-year veteran of Google who unexpectedly died on September 16 at age 59, built his data centers with low-cost components instead of expensive specialized hardware. He reimagined how they worked together to develop the concept of "the data center as a computer," which now underpins the web, mobile apps, and other internet services.

Jen Fitzpatrick, senior vice president of Google's infrastructure organization, says Barroso left an indelible imprint at the company and that his contributions to the industry are countless. "We lost a beloved friend, colleague and respected leader," she writes in a statement on behalf of the company.


Comments Filter:
  • Uh oh (Score:2, Funny)

    He's not the guy who was following Google Maps directions and drove off that broken bridge in North Carolina, was he?

  • Uh wut? (Score:5, Informative)

    by iAmWaySmarterThanYou ( 10095012 ) on Friday September 22, 2023 @06:06AM (#63868239)

    I was doing data center work before google existed. "Server farm" was not a term invented at google. They took it to the next level by building their own super stripped servers to reduce costs but that level of customization was only possible by google and a few other huuuuuge companies. Everyone else bought Dell or whatever.

    • Re:Uh wut? (Score:5, Insightful)

      by Ed Avis ( 5917 ) <ed@membled.com> on Friday September 22, 2023 @06:15AM (#63868269) Homepage

      I do remember back in the day Google was known for its contrarian approach. Consultants would tell you that for an "enterprise" data centre you needed expensive servers, redundant power supplies on each unit, RAID on each unit in case a disk failed, ECC memory and so on. But Google decided to get the reliability at the large scale, throwing together large numbers of cheap systems with off-the-shelf parts and if one of them fails, well you just leave it there and use the remaining ones.

      Nowadays it's conventional wisdom that servers should be "cattle, not pets". Perhaps even in 1999 the smart people knew that. Perhaps I am setting up a straw man with these "consultants" who wanted an expensive, gold-plated approach. For sure it would have happened anyway without Google. But this guy did have to swim against the current. (A toy sketch of the failure-masking idea follows below.)
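
      To make the contrast with per-box redundancy concrete, here is a minimal Python sketch of the failure-masking idea described in the comment above. It is purely illustrative and not anything from Google's actual stack; the REPLICAS addresses and the injected `send` transport function are made-up placeholders.

      ```python
      import random

      # Hypothetical pool of cheap, interchangeable replica servers (placeholder addresses).
      REPLICAS = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]

      def serve_request(payload, send, max_attempts=3):
          """Try a request against randomly chosen replicas, skipping dead ones.

          Reliability comes from having many cheap machines rather than from
          RAID, redundant PSUs, or other per-box hardware redundancy: if one
          replica is down, the request simply lands on another.
          """
          for host in random.sample(REPLICAS, k=min(max_attempts, len(REPLICAS))):
              try:
                  return send(host, payload)  # `send` is a caller-supplied transport callable
              except ConnectionError:
                  continue  # leave the broken box where it is and use the remaining ones
          raise RuntimeError("all attempted replicas were unreachable")
      ```

      The broken machine is never repaired inline; it simply stops receiving traffic, which is essentially the "cattle, not pets" posture.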

      • Before 1999 (years before), we were buying shitty low-end servers by the rack with no server-level redundancy. No offense to this guy, RIP, but at best he brought years-old concepts to big companies. In fact, I doubt anyone at the time with any level of traffic was building big-ass web servers with redundant power supplies etc. Scaling was a solved problem at the time and those same pre-Google concepts are still the underpinnings of data center building today. The only "new" concept was multi regional

        • He's not being credited with the idea of using a bunch of low cost components to run decentralized applications.

          He's being credited with conceiving of the data centre as a computer.

          The insight wasn't in the selection of the components, but how the components were integrated, and that concept of integration is what evolved into the cloud.

          • Sun taught us that back in the 90s.

          • by Jzanu ( 668651 )
            Precisely [springer.com].

            From the abstract of his book:

            This book describes warehouse-scale computers (WSCs), the computing platforms that power cloud computing and all the great web services we use every day. It discusses how these new systems treat the datacenter itself as one massive computer designed at warehouse scale, with hardware and software working in concert to deliver good levels of internet service performance.

            • Shrug, lots of others were doing similar at a smaller non-billion $ scale before google existed. I am not making any claims to having come up with anything new. My first internet company was already doing this stuff before I joined.

              To do what google did required lots of money. The concepts weren't new. We just couldn't afford them. We did the parts that were justifiable financially.

              • by Jzanu ( 668651 )
                Scale is the point. Thousands and even millions of devices operating together in an organized fashion is what Barroso's design enabled. Just as it was not based on simply cheap hardware, it is also not simply a larger financial investment. His work was a change in the architecture, creating a new type altogether. All of those thousands of devices are used to execute the same software together for operating cost, load management, power usage, and overall reliability improvements that make everything before
        • At the time, it was called a Beowulf cluster. Ideally, your code does checkpoints and restarts so that a failure on a node would result in the computation moving to a different location in the cluster.
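
          As a rough illustration of that checkpoint-and-restart pattern (a generic toy, not code from any Beowulf toolkit; the checkpoint file name and the workload are invented), a long-running job can periodically persist its progress so a surviving node can resume from the last checkpoint instead of starting over:

          ```python
          import json
          import os

          CHECKPOINT = "job.ckpt"  # hypothetical checkpoint file, e.g. on shared storage

          def load_state():
              """Resume from the last checkpoint if one exists, otherwise start fresh."""
              if os.path.exists(CHECKPOINT):
                  with open(CHECKPOINT) as f:
                      return json.load(f)
              return {"next_i": 0, "total": 0}

          def save_state(state):
              """Write the checkpoint atomically so a crash mid-write keeps the old copy intact."""
              tmp = CHECKPOINT + ".tmp"
              with open(tmp, "w") as f:
                  json.dump(state, f)
              os.replace(tmp, CHECKPOINT)

          def run(n=1_000_000, every=10_000):
              """Toy long-running job: sum 0..n-1, checkpointing every `every` iterations.

              If the node running this dies, another node can load the checkpoint
              and continue from `next_i` instead of redoing the whole computation.
              """
              state = load_state()
              for i in range(state["next_i"], n):
                  state["total"] += i
                  if (i + 1) % every == 0:
                      state["next_i"] = i + 1
                      save_state(state)
              state["next_i"] = n
              save_state(state)
              return state["total"]
          ```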
      • This is basically correct from my memories as well. Very contrarian; I remember business people in 2007 saying that Google sales guys showing up in jeans to a C-suite meeting was unheard of back then.

        At some point they switched to just being a vague hollow shell that has been taken over by the government, like Microsoft.

      • But Google decided to get the reliability at the large scale, throwing together large numbers of cheap systems with off-the-shelf parts and if one of them fails, well you just leave it there and use the remaining ones.

        That strategy is not only older than Google, it's older than the popularity of the internet. See: DQS [yale.edu]. If a host goes down, your jobs just don't get submitted to it. At Silicon Engineering (née Sequoia Semiconductor) we used it to send Verilog, Magic, and SPICE jobs to idle nodes. If we had a problem with one node, it didn't affect business.
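
        In the same spirit, here is a generic toy dispatcher (not the actual DQS interface; the `probe` and `submit` callables are placeholders) that only hands work to hosts that pass a liveness check, so a dead node silently drops out of the rotation:

        ```python
        def dispatch(jobs, hosts, probe, submit):
            """Round-robin jobs over the hosts that currently pass a liveness probe.

            `probe(host)` and `submit(host, job)` are caller-supplied callables;
            a host that fails the probe simply never receives work, so one broken
            machine doesn't stall the batch.
            """
            live = [h for h in hosts if probe(h)]
            if not live:
                raise RuntimeError("no live hosts available")
            for n, job in enumerate(jobs):
                submit(live[n % len(live)], job)

        # Example: pretend host "b" is down, so jobs go only to "a" and "c".
        dispatch(
            jobs=["verilog_run", "spice_run", "layout_check"],
            hosts=["a", "b", "c"],
            probe=lambda h: h != "b",
            submit=lambda h, j: print(f"{j} -> {h}"),
        )
        ```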

      • When Google started, they were quoted a Sun E10K and a few giant Internet pipes. Somebody (Rubin?) said that just the Internet plan would wipe out his budget. So they started building their own servers using lots of consumer PC motherboards. I would like to know why they removed the AC power supply and went with DC.
    • Re:Uh wut? (Score:5, Informative)

      by mjwx ( 966435 ) on Friday September 22, 2023 @07:24AM (#63868417)

      I was doing data center work before google existed. "Server farm" was not a term invented at google. They took it to the next level by building their own super stripped servers to reduce costs but that level of customization was only possible by google and a few other huuuuuge companies. Everyone else bought Dell or whatever.

      Erm... they didn't say "invented the datacentre", the headline says "modern datacentre". TFS went on to elaborate that he bucked the trend by using low cost parts.

      • using low cost parts.

        Revolutionary. Nobody would've ever thought to do that.

      • I never said they or anyone else invented the data center. That's not what the article is about nor what my reply is about.

        The article and my reply are both about the evolution of the data center over time.

        Who said anything about "invented the data center" besides you?

      • Bucked which trend? Remember when we called those "Beowulf clusters", named after the Beowulf cluster built, IIRC, by NASA several years before Google was founded?

        • by bgarcia ( 33222 )

          Yes I do. Beowulf clusters were used for big parallel computing projects. They were the cheap alternative to supercomputers.

          But that handled huge compute, not data serving. Google was the first company to realize that you could build a datacenter using cheap computers. And they went much more extreme. They put bare motherboards on the rack [wikipedia.org] - no cases. Made it easier to cool without a huge case in the way.

          • True, but this is getting ever more qualifiers.

            Now it's not that he came up with the idea of using cheap computers at scale, it's that he came up with the idea of using cheap computers at scale for a different task.

            Sure, and a bunch of iterating on the cheapness to make it cheaper.

            But he still didn't invent the idea of scaling using cheap computers; that predated Google's existence by 4 years.

      • Erm... they didn't say "invented the datacentre", the headline says "modern datacentre". TFS went on to elaborate that he bucked the trend by using low cost parts.

        Ha! We were already doing that in our server room back then. Not because we thought it was a good idea... it was just that my boss was very cheap. /j

    • Much like they had food delivery before Uber Eats and Grubhub. But nowadays everybody acts like something is new and never seen before just because some tech company is "disrupting" it.

    • I saw one of the early server racks from Google. They were a set of motherboards in a stack, separated by thin slabs of cork. Seriously. They keep it around possibly so the young employees can marvel at it. Of course, that was probably from the early startup days when money was scarce and there was so much more competition from the big boys.

  • That is an impressive legacy, yet such a young age to go. It's almost as if he opened the doors to people everywhere wanting to work in tech. RIP
