Memory Prices Have Nearly Doubled Since Last Quarter (counterpointresearch.com) 40

Memory prices across DRAM, NAND and HBM have surged 80 to 90% quarter-over-quarter in Q1 2026, according to Counterpoint Research's latest Memory Price Tracker. The price of a 64GB RDIMM has jumped from a Q4 2025 contract price of $450 to over $900, and Counterpoint expects it to cross $1,000 in Q2.

NAND, relatively stable last quarter, is tracking a parallel increase. Device makers are cutting DRAM content per device, swapping TLC SSDs for cheaper QLC alternatives, and shifting orders from the now-scarce LPDDR4 to LPDDR5 as new entry-level chipsets support the newer standard. DRAM operating margins hit the 60% range in Q4 2025 -- the first time conventional DRAM margins surpassed HBM -- and Q1 2026 is on track to set all-time highs.
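The quoted figures are easy to sanity-check. A quick calculation using the summary's numbers (the 15% Q2 growth rate is an illustrative assumption, not Counterpoint's figure):

```python
# Sanity-check of the RDIMM price figures quoted in the summary.
def pct_change(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# 64GB RDIMM: $450 (Q4 2025 contract) -> $900+ (Q1 2026)
rdimm_q4, rdimm_q1 = 450, 900
print(f"64GB RDIMM: {pct_change(rdimm_q4, rdimm_q1):.0f}% QoQ")  # 100% QoQ

# Even a modest assumed 15% QoQ rise in Q2 (an assumption, not a
# tracked figure) clears the $1,000 mark Counterpoint expects:
print(f"Q2 projection at +15%: ${rdimm_q1 * 1.15:,.0f}")
```

So the headline "nearly doubled" actually understates the RDIMM case, which doubled exactly; the 80 to 90% range applies across the broader DRAM/NAND/HBM basket.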
  • DDR, NAND, and even spinning rust have gone through the roof.

    Lucky for me, I did all my buying during Microsoft's upgrade forcing, so I won't need any memory until after the AI hardware market collapse.

    • by Anonymous Coward

      What gauge is it?

    • by gweihir ( 88907 )

      Same here. The only thing I got was a spare 2TB SSD, still at old prices.

      The collapse of the LLM insanity cannot come soon enough.

      • by bartoku ( 922448 )

        Why would the "LLM insanity" collapse? The demand is strong and growing! I cannot remember a "technology" with a faster adoption rate in my lifetime.

        Four out of five of the top AI companies have weathered multiple economic downturns and have other business ventures funding the LLM wing. OpenAI is the only one with all its eggs in one basket, and I am sure Microsoft, Google, or Meta would buy it up in a heartbeat just for the server space. The fifth, nVidia, makes the gold-rush shovels, a consumable: GPUs, t

        • If only pets.com had said they would ship for free for only $139/year, then they would have been as successful as Amazon.

          • by bartoku ( 922448 )

            Or maybe pets.com should have started with something lighter, like books for dogs, that the US postal service would ship cheaply.
            Then over 25 years build up a shipping and warehouse network that fits their shopping regions while using USPS and UPS to fill in any holes.

    • Yet I look at actual prices and add it all up, and it looks only slightly higher than what I paid for my current setup some years ago. I wouldn't balk at these prices if I was looking to upgrade. Although a 2022-era machine with 64GB of RAM still does fine in 2026.
  • by silvergig ( 7651900 ) on Friday February 06, 2026 @06:10PM (#65973676)
    No problem for the AI hyperscalers... they'll just do some 100-trillion-dollar deals back and forth and it will all work out.
    • No problem for the AI hyperscalers... they'll just do some 100-trillion-dollar deals back and forth and it will all work out.

      I genuinely don't think the circle-jerk deals are the real force behind this. Microsoft "investing" in OpenAI doesn't inherently create datacenter demand. OpenAI "investing" in nVidia doesn't either.

      What does create demand is all the people asking CoPilot to summarize an e-mail for them. Or to write a paragraph of code they could Google the results of. Or create a picture of their wife's head on a walrus. The load is what's screwing us. All the useless, pointless, wasteful load.

      • No problem for the AI hyperscalers... they'll just do some 100-trillion-dollar deals back and forth and it will all work out.

        I genuinely don't think the circle-jerk deals are the real force behind this. Microsoft "investing" in OpenAI doesn't inherently create datacenter demand. OpenAI "investing" in nVidia doesn't either. What does create demand is all the people asking CoPilot to summarize an e-mail for them. Or to write a paragraph of code they could Google the results of. Or create a picture of their wife's head on a walrus. The load is what's screwing us. All the useless, pointless, wasteful load.

        Agree. That could be solved by the AI-bros actually charging for the services, which will have to happen someday. They will have to start making actual profit on this shit, someday. Although, I'd also think that a lot of the useless shit people ask AI to do could be handled by local models, which would be cheaper to run. I would think summarizing an email should certainly not require 10 top of the line GPUs to do by now.

        • by ceoyoyo ( 59147 )

          Although, I'd also think that a lot of the useless shit people ask AI to do could be handled by local models, which would be cheaper to run. ...
          I would think summarizing an email should certainly not require 10 top of the line GPUs to do by now.

          It doesn't, and they wouldn't. Your request doesn't get 10 top of the line GPUs, it probably gets one for a second or so, or maybe a few for a fraction of that. Then the next of your billion friends gets to use it. By the time they've had their turn you've had a chan
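The time-slicing point above is easy to put rough numbers on. A back-of-envelope, where the per-request GPU time and fleet size are purely illustrative assumptions, not measured figures:

```python
# Back-of-envelope: how many requests a time-shared GPU fleet can absorb.
# All inputs are illustrative assumptions.
GPU_SECONDS_PER_REQUEST = 1.0      # assume ~1 GPU-second per summary request
SECONDS_PER_DAY = 24 * 3600

requests_per_gpu_per_day = SECONDS_PER_DAY / GPU_SECONDS_PER_REQUEST
fleet_size = 100_000               # assumed number of GPUs serving requests

daily_capacity = requests_per_gpu_per_day * fleet_size
print(f"one GPU: {requests_per_gpu_per_day:,.0f} requests/day")
print(f"fleet of {fleet_size:,}: {daily_capacity:,.0f} requests/day")
```

Under these assumptions a single GPU handles tens of thousands of short requests a day, which is why no individual request ever "owns" ten top-of-the-line cards.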

          • No problem for the AI hyperscalers... they'll just do some 100-trillion-dollar deals back and forth and it will all work out.

          I genuinely don't think the circle-jerk deals are the real force behind this. Microsoft "investing" in OpenAI doesn't inherently create datacenter demand. OpenAI "investing" in nVidia doesn't either. What does create demand is all the people asking CoPilot to summarize an e-mail for them. Or to write a paragraph of code they could Google the results of. Or create a picture of their wife's head on a walrus. The load is what's screwing us. All the useless, pointless, wasteful load.

          Agree. That could be solved by the AI-bros actually charging for the services, which will have to happen someday. They will have to start making actual profit on this shit, someday. Although, I'd also think that a lot of the useless shit people ask AI to do could be handled by local models, which would be cheaper to run. I would think summarizing an email should certainly not require 10 top of the line GPUs to do by now.

          They'll monetize this shit through shoveling ads at the users.

          And while you are correct that local models would fit most use cases, there's *MUCH* more incentive to keep all of this based on datacenters. The companies behind it want your data. They *NEED* your data. They desperately need your data. It is their lifeblood. And not just a little of your data. No, if you have a thought that registers high enough for you to ask an AI a question, they want that question, and your response to the answers, because

    • They can trade between themselves all they like, but the AI companies will have to pay full price for RAM.
      • Nah. That's the miracle of the circular deal:

        The RAM manufacturer invests $100 Million into the AI company, in return the AI company buys $100 Million in RAM.

        The AI company gets the RAM they need, and their market valuation is inflated by the new investment.

        The RAM manufacturer gets to book a $100 Million sale, AND they have an asset on their books of a share in an ever-increasingly valuable AI company -all of which looks good to their investors and drives their share prices higher. The RAM company loses
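The circular-deal mechanics described above can be sketched as toy bookkeeping (account names and the flat $100M figure are illustrative; real deals involve equity terms this ignores):

```python
# Toy ledger for the circular deal: the RAM maker invests $100M in the
# AI company, which immediately spends it on $100M of RAM.
DEAL = 100_000_000

ram_maker = {"cash": 0, "ai_equity": 0, "revenue": 0}
ai_co     = {"cash": 0, "ram_assets": 0, "valuation": 0}

# Step 1: RAM maker "invests" $100M for a stake in the AI company.
ram_maker["cash"]      -= DEAL
ram_maker["ai_equity"] += DEAL
ai_co["cash"]          += DEAL
ai_co["valuation"]     += DEAL   # new money inflates the headline valuation

# Step 2: AI company buys $100M of RAM from the same manufacturer.
ai_co["cash"]        -= DEAL
ai_co["ram_assets"]  += DEAL
ram_maker["cash"]    += DEAL
ram_maker["revenue"] += DEAL     # booked as a real sale

# Net cash movement for the pair is zero, yet both report growth.
print(ram_maker, ai_co)
```

Note the punchline: no net cash changes hands, but the RAM maker reports $100M of revenue plus an equity asset, and the AI company reports a fatter valuation plus $100M of hardware.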

  • More than doubled (Score:5, Informative)

    by NormAtHome ( 99305 ) on Friday February 06, 2026 @06:33PM (#65973718)

    Up until September, the G.Skill memory kit I'd been buying to build computers was $119.99; now it's over $400, more than a 230% increase.
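The jump from $119.99 to over $400 can be put in exact terms:

```python
# Percentage increase from the old kit price to the new one.
old_price, new_price = 119.99, 400.00
increase = (new_price - old_price) / old_price * 100
ratio = new_price / old_price
print(f"{increase:.0f}% increase ({ratio:.1f}x the old price)")
```

That is, the kit costs a bit over 3.3 times what it did in September, a roughly 233% increase (the common slip is to read "3.3x" as "330% increase", or round it up to 400%).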

  • by ffkom ( 3519199 ) on Friday February 06, 2026 @07:03PM (#65973758)
    ... wait until "running AI" earnestly competes with you for energy, water, land, sunlight... the little nuisance of "gaming PC expensive!" will then soon be forgotten.
    • It's no competition really. Billionaires get to decide the priority, and it will be for their money making data centers and not the nearby communities.

  • Well boys and girls (Score:5, Interesting)

    by OrangeTide ( 124937 ) on Friday February 06, 2026 @07:29PM (#65973776) Homepage Journal

    The bloatware show is over. We coders aren't going to be allowed to import a massive library to solve some tiny problem in our projects. We might have to actually design and code up lightweight solutions again if we want our programs to actually fit on people's computers. People who put 8GB in their mid-range laptop are going to be stuck there for quite some time, at least another two CPU generations.
    And that doesn't mean you can write your apps to eat up 7GB of RAM; no, people will still want to run more than one program on their computer at a time.

    • LOL, as if coders pushing text messaging apps like WhatsApp (which is currently using just shy of 1GB on my machine) give a rat's arse about consumers. No, the bloatware won't change. Libraries save coding time, coding time is dollars, and dollars are the only thing anyone cares about. The consumer is no longer right. We live in the enshittification age now: we remove features while increasing resource consumption.

    • Computer prices have varied a lot in the past 40 years, and especially if you take into account inflation, they've been at the cheap, commodity priced, position for a while now. There have been plenty of times it cost $2,000+ for what was considered, if not an entry level PC, a "good but not premium" PC. That is, the type of thing your office would equip you with.

      What I'm trying to say I guess is that, alas, no, developers aren't going to stop producing bloated stuff that imports hundreds of unverified thir

    • The bloatware show is over. We coders aren't going to be allowed to import a massive library to solve some tiny problem in our projects. We might have to actually design and code up lightweight solutions again if we want our programs to actually fit on people's computers. People who put 8GB in their mid-range laptop are going to be stuck there for quite some time, at least another two CPU generations. And that doesn't mean you can write your apps to eat up 7GB of RAM; no, people will still want to run more than one program on their computer at a time.

      Bull. We'll be told that running "programs" at all on the local machine is no longer possible by the powers that be, to justify yet more data-suck. All tasks easily performed on the weakest of systems will be relegated to the cloud, and we'll be left with overpowered dumb terminals at home because someone out there covets your data. Nobody's going to tell us to turn-around on coding bloat beyond bloat. Not when there are non-existent datacenters that need a justification for eventually existing.

  • I need to upgrade my ZFS pool from an old Xeon workstation to a proper server, but there are no deals to be had anymore. Everyone is pulling the RAM out of used servers and selling it separately. Used storage isn't getting any cheaper either; I'd like to upgrade my 8TB SAS drives to something larger, but not at these prices.
