IT Technology

Second Life for Server Components (ieee.org) 31

Scientists have developed a method to reuse components from decommissioned data center servers, potentially reducing the carbon footprint of cloud computing infrastructure.

The research team from Microsoft, Carnegie Mellon University and the University of Washington demonstrated that older RAM modules and solid-state drives can be safely repurposed in new server builds without compromising performance, according to papers presented at recent computer architecture conferences.

When combined with energy-efficient processors, the prototype servers achieved an 8% reduction in total carbon emissions during Azure cloud service testing. Researchers estimate the approach could cut global carbon emissions by up to 0.2% if widely adopted. The cloud computing industry currently accounts for 3% of global energy consumption and could represent 20% of emissions by 2030, according to computing experts. Most data centers, including Microsoft's Azure, typically replace servers every 3-5 years.
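
As a quick sanity check, the summary's numbers hang together: applying the prototype's 8% per-fleet reduction across an industry responsible for a few percent of global emissions lands right around the quoted 0.2%. A minimal sketch in Python; the cloud-emissions share used below is an assumption, not a figure from the papers.

```
# Rough sanity check on the summary's figures. The emissions share is
# assumed (~2.5%, roughly in line with the ~3%-of-energy figure above).
cloud_share_of_global_emissions = 0.025  # assumption, not from the papers
fleet_reduction = 0.08                   # 8% reduction measured on the Azure prototype

global_reduction = cloud_share_of_global_emissions * fleet_reduction
print(f"estimated global reduction: {global_reduction:.1%}")  # -> 0.2%
```
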
Comments:
  • by bill_mcgonigle ( 4333 ) * on Thursday November 14, 2024 @02:35PM (#64946057) Homepage Journal

    ```
    older RAM modules and solid-state drives can be safely repurposed in new server builds without compromising performance
    ```

    We're not gonna pretend that faster memory and faster storage don't benefit some workloads.

    But yes, often it doesn't matter and wasting old hardware is foolish.

    New hardware may use less energy while running, but beyond the manufacturing energy and pollution costs, every dollar that goes into the purchase had energy and pollution costs in its creation, and 'economists' bizarrely tend to set that to zero.

    Some guy on here was bragging about how he does software encryption, secure-wipes the drive, then puts a bullet through it.

    Yay, more landfill that can't go into RoHS recycling streams. Or something.

    A myopic focus on carbon is like 5% of the environmental story.

    • Also density, though that has slowed some. There was a time when I had a pile of 2GB RAM sticks on my desk. I was asked why we could not use them or sell them; it was because a system needed more memory than they could add up to, and their value dropped accordingly. I offered them up as keychains, but the boss did not smile.

      • by Anonymous Coward

        My boss just gave away old servers to employees who wanted them, when they were barely even worth selling.

        • I couldn't even give away the ram.

          • was it registered? 'cause that's unusable in most non-server systems

            • by KlomDark ( 6370 )
              Registered?? You mean ECC?
              • Not all ECC is registered.

                • by KlomDark ( 6370 )
                  What is registered? Been doing hardware for over 30 years and no idea what you're talking about.
              • The Registration Authority for Memory (RAM) is an international body where you can register your sticks of memory. By registering the memory, the authority can keep track of what each page is being used for, what program it's assigned to, etc. This means that swapping applications in and out is quicker. With modern internet-connected servers, you can even apply to the Registration Authority for a copy of a page of memory if your copy has become corrupted, or if you want to restore it to another server.

  • by XXongo ( 3986865 ) on Thursday November 14, 2024 @02:37PM (#64946065) Homepage
    Wow, Second Life is still around? Who knew?
  • by Anonymous Coward on Thursday November 14, 2024 @02:40PM (#64946085)

    Scientists have developed a method

    "Buying used hardware" just doesn't sound as sophisticated, but that's what it is.
    Scientists have also discovered that the secondary market often has lower prices than retail.

  • RAM usually doesn't get repurposed because there's typically a new standard by the time you're ready to build new servers.

    SSDs get faster with every generation; they're on PCIe 5 now, for example. HDDs don't necessarily always get faster, sometimes they only get more dense, but SSDs are still speeding up consistently.

    Sometimes you haven't got a new standard for RAM in the upgrade period, so yeah you could reuse that. But in 3-5 years, SSDs absolutely will have sped up. If it's just the OS on them then no big deal, hopefully. If you're storing data on them, and many are now doing that, then that's going to affect you.

    What Microsoft discovered here is that for a refresh in a specific time scale, reusing those things didn't affect performance. It's not a general finding, it's specific to the time period studied.

    • TFS left out a few interesting details that make the story noteworthy. The salvaged RAM and SSDs absolutely do not hold up to 3-5 year newer tech, but these components are new enough to be used with CXL controllers for backwards compatibility, and they're paired with more powerful and efficient AMD Bergamo processors to keep them relevant. They are essentially making a B-team of server capacity and have created a software layer in their cloud infrastructure (a rough sketch of that placement idea follows this thread) that will assign tasks to it that won't suffer from the su

      • Here's why I don't think it's noteworthy: Support.

        Here's another reason: They discovered older, slower parts were acceptable for older, slower servers?

        "wow"

  • Virtualization and The Cloud exist because most legacy applications don't need the full horsepower of a 386 to accomplish their goals. Of course a cloud workload can run on older hardware. They should run the hardware until it dies. Every new SaaS running on the cloud is just emulating a dBase III application, poorly, and charging you a monthly fee for the BS.

    • by lsllll ( 830002 )

      You sound like old man yelling at cloud. There are reasons why dBase III didn't survive to serve today's application needs. Hell, Ashton Tate didn't even introduce SQL until dBase IV, let alone have many of the features desirable by today's applications for their DB back-end. As someone who has written SaaS relying on only OSS software (who was very proficient in Advanced Revelation - a DB way ahead of its time in the 80s) I can assure you dBase III has nothing on even the most minute open source DB solu

  • by Jayhawk0123 ( 8440955 ) on Thursday November 14, 2024 @03:28PM (#64946193)

    If we're talking about energy consumption and density/hardware upgrades... how about a focus on making software more efficient?

    If software needs fewer clock cycles, or memory, or storage to run... you reduce the need for that hardware sprawl, increase workload and extend the lifecycle of current builds.

    I know there are a few coders who take efficiency to the extreme... but the vast majority don't even know what I am talking about. "Compute is cheap"... was and is the mantra of all those coding bootcamps and most academic courses.

    • Most code is write once, read never. Code on servers is written to add some functionality. If there is a bug, more code will be written to handle the specific bug case. The code is often just too big for the average programmer to actually go find the bug, and the coders who are good enough are too valuable to be wasted on most bugs. Faster to just throw more code at the problem.

      More scary is that this is how hardware is done now. I've worked on a sub-$10 processor that had 3 AES engines, each slightly d
    • by MerlynEmrys67 ( 583469 ) on Thursday November 14, 2024 @04:36PM (#64946357)

      I don't know of a single server configuration guide that doesn't say something about turning off power management in the BIOS. This one setting seems to reduce latency on the first hit, but causes a huge increase in the power needed by the system over time.
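
Relatedly, a quick way to see whether a Linux box is running with power management effectively turned off is to check the cpufreq governor on every core. A small sketch, assuming the standard sysfs layout; 'performance' means the core is trading power for latency, which is exactly the trade those configuration guides make.

```
from pathlib import Path

# List each core's cpufreq governor; 'performance' disables most frequency
# scaling, i.e. the "power management off" setting described above.
for gov_file in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*/cpufreq/scaling_governor")):
    core = gov_file.parts[-3]  # e.g. 'cpu0'
    print(f"{core}: {gov_file.read_text().strip()}")
```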

  • by Anonymous Coward

    I laughed at the idea of someone using old hard drives and server boards to make flying dicks.

  • Yeah... (Score:4, Insightful)

    by AntronArgaiv ( 4043705 ) on Thursday November 14, 2024 @04:29PM (#64946337)

    All of my computers are in their second (or sometimes, higher) lives. Recycled computers still work, processor or memory "pulls" from eBay haven't failed me yet, and at that price, who cares? I'm typing this on a system running a Gigabyte motherboard I got off eBay, and which runs the current version of Linux Mint just fine.

    I am perfectly willing to use the castoffs of those who must always have the "latest and greatest", or of companies that require their machines to be under manufacturer warranty. People "usually" take pretty good care of their work PCs, at least the people who have used the Dell Latitudes I've bought second hand.
    Add an SSD to an older machine, install Linux, and something that lugs along under Win10 can be quite zippy when running Mint.

  • by BrendaEM ( 871664 ) on Thursday November 14, 2024 @07:15PM (#64946771) Homepage
    Presently, I have a Ryzen 3900X with 64 GB of RAM, but OpenFOAM (an aerodynamic Computational Fluid Dynamics solver) is pretty RAM hungry. I am working on an electric vehicle design and have run over 12 CFD studies on the vehicle. With the RAM I have, I can only mesh/solve down to 10mm; for a decent test I need to be at 5mm (the height of a windshield wiper blade), and for a good one I want 2.5mm, the width of a shut-line or the height of an emblem, which is also the point of diminishing returns. BUT, to do so, I would need 256GB of memory for 5mm, or 512GB for 2.5mm (a scaling sketch follows this thread). If you have an old server, please donate it to me; contact me via my channel, in my sig line. I only need mediocre graphics to set up and launch the case with FreeCAD, as I do my design on my other system, so such a machine would only need a graphics card slot with a few lanes. Well, thank you for reading this far.
    • by jezwel ( 2451108 )

      ... to do so, I would need 256GB of memory for 5mm, or 512GB for 2.5mm.

      Sounds ripe for recoding to use swap on a PCIe-connected NVMe drive with high read/write speeds. These things are 4+TB now, and the speeds are similar to old DDR2 RAM sticks. Is it at all feasible to revisit code in this way?
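
One way to "recode for the drive" without touching the solver's math is an out-of-core array: memory-map the big fields onto the NVMe drive and let the OS page them in and out. A minimal sketch with NumPy; the array name and cell count are made up, and a real solver would need access patterns friendly to paging.

```
import numpy as np

# Keep a large field on the NVMe drive instead of in RAM; only the pages
# actually touched get pulled into memory by the OS.
n_cells = 100_000_000  # hypothetical; ~0.8 GB of float64 for the demo
pressure = np.memmap("pressure.dat", dtype=np.float64, mode="w+", shape=(n_cells,))

pressure[:1000] = 101_325.0  # work on a slice; only those pages hit RAM
pressure.flush()             # write dirty pages back to the drive
```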
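
And on the parent's memory figures: for a uniform 3D mesh, halving the cell size multiplies the cell count (and roughly the solver memory) by eight. A back-of-envelope sketch using the poster's 64 GB at 10mm as the baseline; the quoted 256/512 GB targets imply refinement that is local rather than uniform.

```
# Uniform-refinement estimate: memory ~ (h0/h)**3 times the baseline.
base_mem_gb, base_h_mm = 64.0, 10.0  # from the post: 64 GB suffices at 10mm cells

for h_mm in (5.0, 2.5):
    estimate = base_mem_gb * (base_h_mm / h_mm) ** 3
    print(f"{h_mm} mm cells: ~{estimate:.0f} GB if refined uniformly")
# -> 512 GB at 5mm and 4096 GB at 2.5mm; the poster's 256/512 GB figures
#    suggest refining only near the body, not everywhere.
```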

  • The central problem with using old gear is that ongoing electrical costs are really going to bite per unit of RAM/CPU versus new gear. And that doesn't touch the support or reliability concerns that also increase costs. Plus, running more, and nonuniform, models in your fleet is $$ in developer and data center hands' time.
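
Back-of-envelope on the parent's point, with every number assumed: even when salvaged gear is nearly free, the electricity per unit of delivered work can dominate.

```
KWH_PRICE = 0.12       # assumed $/kWh
HOURS_PER_YEAR = 8760

def yearly_power_cost_per_unit(watts: float, perf_units: float) -> float:
    """Electricity cost per unit of delivered performance, per year."""
    return watts / 1000 * HOURS_PER_YEAR * KWH_PRICE / perf_units

old = yearly_power_cost_per_unit(watts=400, perf_units=1.0)  # salvaged box (assumed)
new = yearly_power_cost_per_unit(watts=300, perf_units=2.5)  # current-gen box (assumed)
print(f"old: ${old:.0f} per unit-year, new: ${new:.0f} per unit-year")
```
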
  • The hoax that keeps on giving. If anything, recycling old components is good for us plebs, who can now buy a 100 Gbps Mellanox for 90 bucks.
