
AMD's Monstrous Threadripper 7000 CPUs Aim For Desktop PC Dominance (pcworld.com)

AMD's powerhouse Threadripper chips are back for desktop PCs. Despite declaring the end of consumer Threadripper chips last generation, AMD announced three new Ryzen Threadripper 7000-series chips on Thursday, with up to 64 cores and 128 threads -- and the option of installing a "Pro"-class Threadripper 7000 WX-series chip for a massive 96 cores and 192 threads, too. PCWorld: Take a deep breath, though. The underlying message is the same as when AMD released the Threadripper 3970X back in 2019: these chips are for those who live and breathe video editing and content creation, and are optimized for such. Nevertheless, they almost certainly represent the most powerful CPU you can buy for a desktop, for whatever purpose.

The key differences between the older workstation-class Threadripper 5000 series and these new 7000-series processors are simple: AMD has brought its Zen 4 architecture to Threadripper alongside higher core counts, faster boost frequencies, and a generational leap ahead to PCI Express 5.0. Consumers will need new motherboards, though, as the consumer Threadripper platform uses the new AMD TRX50 HEDT (high-end desktop) chipset and sTR5 socket. And did we mention they consume (gulp) 350W of power? In some ways, though, the new Threadripper 7980X, 7970X, and 7960X consumer offerings are familiar. They top out at AMD's tried-and-true 64-core configuration, the same as the Threadripper 5000 series, with options moving down to 24 cores; the 12- and 16-core configurations from the prior generation have been trimmed off.

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Moved my Ryzen to being a Linux box doing compiles and other stuff. That way I don't have to listen to it.

    • 1. This article is about Threadripper, not your puny Ryzen.
      2. Anyone who can afford to spend that much on a Threadripper can certainly afford a couple extra grand for a beefy, quiet watercooling system.

      • by HBI ( 10338492 ) on Thursday October 19, 2023 @09:45AM (#63936919)

        That's supposed to make things better? You still have to get rid of the heat somehow, so you've just added water pump noise to the mix, and you're still blowing fans across a radiator. Also, my 'puny' CPU already runs too hot for desktop use, so how are these current CPUs going to improve on that? 350W of dissipated heat is similar to a space heater.

        • I have a TR Pro, and am using a very silent air-cooled heat sink. If you buy something loud, you get something loud. If you buy something quiet, you get something quiet. You just have to pay more for quiet.

          • by slaker ( 53818 )

            I have a 3960X about 1m from where I sleep. I tried using an AIO water cooling setup but found the pump to be louder than a Be Quiet! HSF that actually worked better for cooling anyway.

            A lot of water cooling setups aren't designed to deal with the larger dies or different hot spots found on Threadrippers, so they're less than ideal in just about every way compared to high end air cooling.

            • I have a 3960X about 1m from where I sleep.

              Well, that's your main mistake. Desktop HSF (and water cooling) setups are meant for a box under your desk - possibly on your desk if you wear headphones a lot of the time and don't mind spending extra on quiet cooling. Nobody is targeting ambient noise levels of "Here, you should sleep with your head right next to it!"

          • I have a TR Pro, and am using a very silent air-cooled heat sink. If you buy something loud, you get something loud. If you buy something quiet, you get something quiet. You just have to pay more for quiet.

            Did I see a picture of that silent air-cooled heat sink on the FanlessTech website?

            I can only imagine how big that air-cooled heat sink might be without an added mechanized cooling apparatus (fans).

        • by jacks smirking reven ( 909048 ) on Thursday October 19, 2023 @10:27AM (#63936981)

          It's all about fan design, speeds, and pressures, as well as decent case design. Having high-quality fans like Noctua or the high-end Corsair models on a large radiator, like a 360 or 420mm, can be surprisingly quiet and effective, since the fans won't have to run at max speed, which is where a lot of the noise comes from. The larger Noctua air coolers can also be very quiet.

          • Not to mention the almighty MoRa 420 :)
            I mean, hell, if you place it in a cool-ish room (e.g. AC controlled environment at 21-23 degrees Celsius), you could even passively cool that Threadripper.

        • by war4peace ( 1628283 ) on Thursday October 19, 2023 @10:27AM (#63936987)

          I take it you haven't really used watercooling, and that's okay.
          Pump noise is minimal. I just built a machine last Saturday with an RTX 4090 and a 13900K, both watercooled, a MoRa 420 with 9x 140mm fans, and dual D5 EKWB pumps in a volute. The fans turn at 900 RPM, ramping up to 1500 RPM while the machine is rendering with CPU and GPU usage at 100% and total dissipated power at around 700W, give or take. You can't hear the pumps, even when the machine is idle; paradoxically, the loudest component is the analog flow meter (the little plastic wheel inside clicks while turning). Flow sits at a comfortable 290 L/h. Maximum liquid temperature is 40 degrees Celsius, and the CPU core temperature (highest core) is 91 degrees Celsius, which is expected.

          A 350W Threadripper would probably not make the fans go above 1000 RPM. And if you want REALLY quiet, you could always put 4x 200mm Noctua fans on that MoRa 420 at 550-600 RPM; they will do the job perfectly. I have similar fans on my personal build, and yes, the loudest component is the pump, but that's a stupidly overspecced Eheim 1262 which vibrates a bit because I was overconfident and used hard tubing coming off it. I should have used 16/10 ZMT soft tubing instead, but oh well. Still, that's 32-34 dB at 50 cm away from where the pump is.

        • by Anonymous Coward

          You can get quiet pumps. You can get quiet fans. 350W is half an average space heater on the low setting.
          Before I dealt with cooling, I just assumed all fans were the same. Then I replaced the shitty loud Chinese fans that came with my water cooler with premium Noctua fans, and it made all the difference, going from annoyingly loud to pleasant background white noise. I also hung the radiator loosely inside my PC with rubber instead of rigidly mounting it. Didn't cost me a couple extra grand either!

        • by AmiMoJo ( 196126 )

          350W isn't all that much these days. Many GPUs exceed that, and the last two generations of Intel consumer CPUs reach just shy of 300W under load.

          With a good size radiator you won't get too much noise.

          That's supposed to make things better? You still have to get rid of the heat somehow, so you've just added water pump noise to the mix, and you're still blowing fans across a radiator. Also, my 'puny' CPU already runs too hot for desktop use, so how are these current CPUs going to improve on that? 350W of dissipated heat is similar to a space heater.

          Big case + big fans = quiet PC. The more area you have, the slower the fans have to turn to move the same volume of air.

          I don't have anything particularly special: an air-cooled 2x 120mm CPU cooler and 4x 140mm case fans (PSU included), in addition to whatever the GPU has. Even above 500 watts total, I barely notice the fans spooling up.

    • Maybe buy a good fan?

      My Threadripper runs silent with a large Noctua dual-fan 140mm cooler.

  • I'm all for over-the-top CPUs, but I can't imagine a desktop user needing that much power. Isn't 'content creator' a fancy word for a YouTuber and why would a YouTuber need so much CPU power?
    Maybe some graybeards could use it attempting to solve the universe in FORTRAN?
    • Re:96 cores?? (Score:4, Insightful)

      by Courageous ( 228506 ) on Thursday October 19, 2023 @09:25AM (#63936871)

      Most gamers would do best with a PC that has the highest single thread performance.

      • by Rei ( 128717 )

        I mean, let's face it, the vast majority of work, and especially *threadable* work, these days is done by GPUs.

        • You're not wrong. Although you'll typically see at least one thread become CPU bound in many games, especially simulations. That's why it's good to have high single thread.

        • GPUs have some severe RAM size limitations though, at least compared to system memory for a given fixed amount of money.
      • Agreed. The Ryzen 7 5800X3D or 7800X3D are beasts for gaming, especially Factorio. [factorio.com]

        Many games are using Unreal Engine, and while it is multi-threaded, having strong single-thread performance like you stated is 100% spot on.

    • That just means lazy devs can add another abstraction layer or two. Software and the internet in general doesn’t feel any faster than it did 30 years ago.

      • It does if you try to run modern software or the internet on a 30-year-old PC.
        • by jvkjvk ( 102057 )

          I don't think your comeback has the zing you intended if you try and parse it at all.

          I believe you are trying to say that, because of all the graphics and algorithmic processing, we are doing much more today than 30 years ago, and that such software would run slowly on 30-year-old systems.

          Which might make sense but not really from a personal compute standpoint and not as a response to the parent post.

          When I type this, there is nothing here that couldn't have been done with 30 year old hardware, just as fast, just

      • I have and use an Amiga 1200. Even with its 68060 accelerator card, I can tell you it is definitely slower than even low-end PCs of today.

    • Considering AVX-512 support, solving the universe in Fortran might not be a terrible use case for these. Just don't use Intel's gimped MKL.
    • Re:96 cores?? (Score:5, Insightful)

      by JBMcB ( 73720 ) on Thursday October 19, 2023 @09:45AM (#63936915)

      I'm all for over-the-top CPUs, but I can't imagine a desktop user needing that much power. Isn't 'content creator' a fancy word for a YouTuber and why would a YouTuber need so much CPU power?

      Well, content creators also make movies, commercials, cartoons, advertisements, and music. Ever see how many triangles, or quads, make up a single Pixar character? Even with its enormous render farm, it takes hours to render a single frame of a Pixar film. Now imagine someone having to create that frame on their desktop.

      Also: large scale GIS. Medical imaging. EDA (this is a huge one.) Developing large projects (a full build of the software I work on takes an hour on the dedicated build farm that has dozens of Xeon cores.)

    • Re:96 cores?? (Score:4, Insightful)

      by DarkOx ( 621550 ) on Thursday October 19, 2023 @09:51AM (#63936927) Journal

      Isn't 'content creator' a fancy word for a YouTuber

      Usually, but not always. It might be the guy/gal who is a 3ds Max wizard generating all the animations for the local car dealerships and ice-cream parlors, or the guy taking architectural drawings and turning them into full high-fidelity walkthroughs of buildings for the construction companies' clients.

      His/her workflow is fundamentally changed if they can see what 'it' looks like right-bloody-now with full textures and lighting, versus click-and-wait-an-hour, or seeing it without textures and with only partial lighting.

      How many of those guys are still making enough money to justify buying kit like this, I am not sure, but there are still independents out there.

      I also suspect there is a place in academia for this stuff. A lot of researchers have enough programming skills to get themselves into a heap of trouble. They know how to read and write files, create threads, and express the math they want, but maybe not in the most computationally efficient manner. Now that SSDs have given us TBs of affordable storage with high IOP counts, it's possible to do a lot of big data stuff right on your workstation if you have enough compute to match it.

      These guys don't know how to do complex distributed architectures with reliable message buses and so forth, but they can get a local Python script together. They can try things and iterate without having to manage an entire crew of developers to support their effort. $15k for a badass workstation is a good value proposition for them versus trying to get the comp-sci department's grad students to show up to the same meeting.

    • 640K should be enough for anyone sonny.

    • I'm all for over-the-top CPUs, but I can't imagine a desktop user needing that much power. Isn't 'content creator' a fancy word for a YouTuber and why would a YouTuber need so much CPU power?

      Maybe some graybeards could use it attempting to solve the universe in FORTRAN?

      I've got the 3960X 24-core version. The biggest use I've found is in data science: if I need to do a bunch of operations on a big dataset, I turn on multiprocessing, watch all 48 CPUs hit 100%, and do a 10-minute operation in seconds. Though I don't really see the need to go higher.
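
      As a rough illustration of that fan-out pattern (a minimal sketch, assuming a chunkable in-memory dataset; process_chunk and the chunking are invented for the example):

      # Hypothetical sketch: spread per-chunk work across every core with multiprocessing.
      from multiprocessing import Pool, cpu_count

      def process_chunk(chunk):
          # Stand-in for the real per-chunk work (cleaning, feature extraction, etc.)
          return [x * x for x in chunk]

      def split(data, n):
          # Cut the dataset into n roughly equal chunks.
          size = (len(data) + n - 1) // n
          return [data[i:i + size] for i in range(0, len(data), size)]

      if __name__ == "__main__":
          data = list(range(10_000_000))   # placeholder for the "big dataset"
          workers = cpu_count()            # 48 on a 3960X, 128 on a 7980X
          with Pool(processes=workers) as pool:
              results = pool.map(process_chunk, split(data, workers))
          print(sum(len(part) for part in results))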

      There's some other weird edge cases like running a bunch of VMs at once, though there the machine taps out around 8 even though I have RAM and cores to spare.

      For ordinary computing I don't find it any quicker than my laptop with a

      • by ls671 ( 1122017 )

        There's some other weird edge cases like running a bunch of VMs at once, though there the machine taps out around 8 even though I have RAM and cores to spare.

        I run about 50 VMs on an 8 x Intel(R) Xeon(R) CPU E3-1270 v6 @ 3.80GHz (1 socket), 48 threads (shows 48 CPUs in /proc/cpuinfo), with 192 GB RAM, and the bottleneck is IO. Check your IO delay/wait. I rarely get to use more than 25% of CPU power.

        Unless you run purely CPU-intensive stuff like some scientific simulations or, an even better example, cracking keys with multithreaded software using CPUs instead of GPUs (see hashcat), your bottleneck will always be IO unless you have a $100,000 SAN with fiber links.
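
        For that IO delay/wait check, a quick way to eyeball it on Linux without extra tools is to sample the iowait column of /proc/stat (a minimal sketch; the 5-second window is an arbitrary choice):

        # Rough Linux iowait check: sample /proc/stat twice and compare the deltas.
        import time

        def cpu_times():
            with open("/proc/stat") as f:
                # First line: "cpu  user nice system idle iowait irq softirq steal ..."
                return [int(x) for x in f.readline().split()[1:]]

        before = cpu_times()
        time.sleep(5)                 # arbitrary sampling window
        after = cpu_times()

        delta = [b - a for a, b in zip(before, after)]
        iowait_pct = 100 * delta[4] / sum(delta)   # field 5 of the cpu line is iowait
        print(f"iowait over the window: {iowait_pct:.1f}%")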

        • There's some other weird edge cases like running a bunch of VMs at once, though there the machine taps out around 8 even though I have RAM and cores to spare.

          I run about 50 VMs on an 8 x Intel(R) Xeon(R) CPU E3-1270 v6 @ 3.80GHz (1 socket), 48 threads (shows 48 CPUs in /proc/cpuinfo), with 192 GB RAM, and the bottleneck is IO. Check your IO delay/wait. I rarely get to use more than 25% of CPU power.

          Unless you run purely CPU-intensive stuff like some scientific simulations or, an even better example, cracking keys with multithreaded software using CPUs instead of GPUs (see hashcat), your bottleneck will always be IO unless you have a $100,000 SAN with fiber links, I guess.

          Well, I guess you could also max out all CPUs to 100% if you launch as many bash scripts with endless do-nothing loops as you have threads available, but that's hard to do with real-life VM use cases.

          These were single-threaded fluid dynamics simulations, and on the VM itself they pin a CPU if available. There was some IO at the start and finish, though, and they were Windows VMs, so I wouldn't be shocked if there were OS services mucking around with the IO.

    • They could be generating AI likenesses of actors. Do you think emulating Fran Drescher's voice is easy?

      • Thank you for triggering PTSD flashbacks of a red-eye flight aftermath where I fell asleep in a New Jersey hotel room with the TV on. I woke up hearing this horrible noise, and while disoriented to AM versus PM, eventually realized that the horrible noise was Fran Drescher laughing.
    • I was wondering this also. I assume it's for people doing high end graphics projects, like making movies 100% in CGI or something like that. Maybe if you had to compile large projects often.

      As a gamer it doesn't make sense to me.

  • Prices (Score:5, Informative)

    by EvilSS ( 557649 ) on Thursday October 19, 2023 @09:27AM (#63936879)
    7960X - 24C/48T - $1,499
    7970X - 32C/64T - $2,499
    7980X - 64C/128T - $4,999

    And these use DDR5 RDIMMs, not UDIMMs, so RAM is going to be costly as well.
    • A decent rackable Ryzen system can be well under $500 with at least 8/16. I'd think dispatching your renders to a rack in the back room would be much more enjoyable than trying to stuff it all into one super-hot desktop system.

      I guess if you need a luggable it makes sense.

    • Re: (Score:3, Interesting)

      by boulat ( 216724 )

      7960x: $31.25/thread, 7970x: $39.06/thread, 7980x: $39.06/thread

      Meanwhile:

      Ryzen 9 7950x ($550) 16/32: $17.18/thread
      Ryzen 9 5950x ($500) 16/32: $15.63/thread

      You could probably make a cluster of 4x 5950x (420W) at $2K vs 1x 7980x (350W) at $5K and put the difference into GPUs and more RAM

      The better question is: can you find tasks that you can parallelize, scatter/gather and store results in shards, and scale linearly by adding more hardware?
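
      For what it's worth, that scatter/gather-into-shards shape is simple enough on one box (a minimal sketch; the shard count, paths, and per-shard job are all invented for the example, and the same layout extends to several machines if the shards sit on shared storage):

      # Illustrative scatter/gather: fan work out over shards, write one result file
      # per shard, then gather. Paths and the per-shard job are placeholders.
      import json
      from concurrent.futures import ProcessPoolExecutor
      from pathlib import Path

      OUT = Path("results")

      def process_shard(shard_id: int) -> Path:
          # Stand-in for the real per-shard computation.
          result = {"shard": shard_id, "value": sum(range(shard_id * 1000))}
          path = OUT / f"shard_{shard_id:04d}.json"
          path.write_text(json.dumps(result))
          return path

      if __name__ == "__main__":
          OUT.mkdir(exist_ok=True)
          shard_ids = range(128)                  # e.g. one shard per hardware thread
          with ProcessPoolExecutor() as pool:     # worker count defaults to the core count
              paths = list(pool.map(process_shard, shard_ids))
          # Gather step: combine the per-shard outputs.
          total = sum(json.loads(p.read_text())["value"] for p in paths)
          print(f"{len(paths)} shards, combined value {total}")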

      • As far as I'm concerned, this takes the cake as "dumbest analogy of the day", and it's almost midnight where I live.
        I'm positive we have a winner here.

        First off, price per thread is a meaningless measure. Judging this way, an EPYC 7742 beats all of them. 128 threads for $700 must be the best there is, right?

        Then there's this gem:

        You could probably make a cluster of 4x 5950x (420W) at $2K vs 1x 7980x (350W) at $5K and put the difference into GPUs and more RAM

        No, you put the difference into 3x motherboards, 3x PSUs, 3x cases and 3x base storage, let alone RAM and GPUs for each.

        Threadripper platform is more than just a CPU and some suppor

        • by boulat ( 216724 )

          Looking at specs the 'crapton' comes out to 16 more lanes, unless I'm missing something?

          Also assuming that motherboard is at least $800 compared to say X670E-A at $335, you are still comparing compute with storage before GPU:

          4x 7950X with motherboards: (4x $335), 4x PSUs (4x $100), 4x 4x128GB DDR5 (4x $450), 4x 4x 4TB NVMe M.2 drives (16x $300): $8,340
          1x 7980x with motherboard: (1x $800), 1x PSU ($100), 4x 128GB DDR5 ($450), 4x 4TB NVMe ($1200): $6750

          You are getting 512GB of RAM and 64TB of NVMe vs 128GB of RAM and 16TB of NVMe.

          • 1. He said 5950x, not 7950x. That's a two generations difference.
            2. You can't just add up RAM and disk space and call it a day. Spreading workload across 4x different machines might not even be possible, depending on which software you use, and even if possible, the bottleneck will be networking throughput. That's why one CPU with 64 cores and 128 threads will always beat 4x separate CPUs with 16 cores and 32 threads each, provided software could scale as such, of course.

  • by roc97007 ( 608802 ) on Thursday October 19, 2023 @12:33PM (#63937265) Journal

    Serious question, not trying to be argumentative. Besides screaming fast frame count in games, what's the use case for this product for home users? It seems like, save for gaming, computers have been overpowered for most people's use for years now.

    Possible counterargument: A friend of mine who is not terribly computer savvy has much higher-end hardware than I, as I tend to use refurbished gear from a few years back and let someone else pay the list-price penalty.

    She's always coming to me saying her computer is "slow". She does banking and shopping and email and a few low-resource games like Sudoku.

    I look at her computer, and it really IS slow. So I start digging into it, and there's just a huge slew of stuff running that shouldn't be; her internet connection has been hijacked by a proxy, and when I reboot it, a bunch of screens pop up that she can't recognize or explain.

    So I spend an hour uninstalling stuff and cleaning things up and fixing obvious security issues, and it "runs faster", so I hand it back.

    Then six months later, it's a bus stop toilet again.

    Obviously, she doesn't practice safe computing. She goes to sketchy sites because "that's where the games are" and clicks on anything that offers coupons or discounts. I've tried to educate her, but to no avail so far.

    The point being, I wonder if the true use case for increases in CPU speed, number of cores and so forth, is to run all the cruft that regular users accumulate, and maybe have enough left over for email?

    • Hmm, I think, as mentioned in other posts, the real use case for this kind of tech is rendering massive 3D projects. One post mentioned architectural walkthroughs with lighting, another mentioned compiling a huge project locally, and a third mentioned massive crunching of data with sub-optimal scripting languages. I could also see really hard back-to-back math calculations, or simulations with many different weights, being a use case.

      Your friend, as you mention, clicks everything, agrees to let

      • You're right, she doesn't close tabs and iconifies applications instead of dismissing them. This is another big factor. Another thing I didn't account for is that I use an ad blocker and she does not, so you're right, she probably has videos playing in all those tabs. Yeesh. So that's why computers have been getting faster.

        Her machine took several minutes to boot and spewed a massive number of screens before the desktop finally stabilized. One app, for instance, was "daily bible verse". I have to wond

    • Serious question, not trying to be argumentative. Besides screaming fast frame count in games, what's the use case for this product for home users?

      The use case for this product is anything BUT gaming. Most games (with very few exceptions) would run much faster on most current-gen desktop CPUs.
      It's not for home users either. Of course, it can be used as such, but it's like cracking nuts with a microscope, just because it's heavy.

      No, this type of CPU is great for massive transcoding (in certain cases), rendering (in certain cases) and task parallelization. You could slam a bunch of VMs on it through a hypervisor, e.g. Proxmox VE. You could have a virtua

      • I guess what confused me was the line "AMD's powerhouse Threadripper chips are back for desktop PCs". This implies to me that you go into Best Buy and the PC on the end display is equipped with AMD's New Powerhouse Threadripper, and I was wondering, why?

        For servers or perhaps serious workstations for niche applications, sure, I guess.

        In perspective, the primary use for my workstation is photo editing. I do mass changes to hundreds or thousands of photos at once, and I don't need this kind of power.

        • I guess the editors were trying to differentiate between server-grade platforms (EPYC) and HEDT (which means High-End DeskTop), where Threadripper sits.
          In the end, it's a CPU aimed for a desktop PC, ok, a workstation, but still... tomayto, tomahto.

    • by devloop ( 983641 )

      She's always coming to me saying her computer is "slow".

      I look at her computer, and it really IS slow. So I start digging into it, and there's just a huge slew of stuff running that shouldn't be; her internet connection has been hijacked by a proxy, and when I reboot it, a bunch of screens pop up that she can't recognize or explain.

      So I spend an hour uninstalling stuff and cleaning things up and fixing obvious security issues, and it "runs faster", so I hand it back.

      Stop simping. You're just enabling her to never learn how to keep her system clean.

      • That's a point. In my defense, I've talked to her repeatedly about practicing safe computing, but she insists she just can't do without all these sketchy applications. As she's a childhood friend of my wife, I can't really stop helping her without causing trouble at home.

        BTW, her alternate solution was to just buy a new computer. I sometimes wonder if there's a conspiracy going on.

    • by tlhIngan ( 30335 )

      We use it as a build machine. 128 threads is very useful when you want to compile a large codebase. Sure, a single build is unlikely to use all 128 threads, but it does allow us to run 4 builds simultaneously.
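
      Something like this is all the orchestration that split takes (a hypothetical sketch; the project paths and the 4-way, 32-jobs-each split are assumptions, not our actual Jenkins setup):

      # Hypothetical launcher: run several builds at once, giving each a slice of the
      # machine's threads via make's -j flag. Directories are placeholders.
      import subprocess

      THREADS = 128
      projects = ["proj-a", "proj-b", "proj-c", "proj-d"]   # imaginary checkouts
      jobs = THREADS // len(projects)                       # 32 make jobs per build

      procs = [subprocess.Popen(["make", f"-j{jobs}", "-C", proj]) for proj in projects]
      failed = [p.args[-1] for p in procs if p.wait() != 0]
      print("failed builds:", failed or "none")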

      We built a 64 core threadripper machine with 256GB of RAM and 40TB of SSD storage (4x 8TB SATA SSDs, 4x 2TB NVMe SSDs) for use as a build server. Users could build their projects on it, and it serves as a Jenkins build node.

      It could also be used as VM - with 128 threads suddenly giving VMs 4 cores doesn't s

  • It is disappointing that these new AMD CPUs don't support older sockets. I am on a Ryzen 9 and would have purchased this new CPU right away, but not if I have to do a full system update (mobo, RAM), as what I have is just good enough.
    • This is a totally different product line: the AM5 part has 24 PCIe 5.0 lanes (plus 4 for chipset connectivity) and 2 channels of DDR5; the Threadripper has 48 PCIe 5.0 lanes, plus another 32 PCIe 4.0 lanes, and 4 channels of DDR5. The Threadripper Pro has 128 lanes of PCIe 5.0 and 8 channels of DDR5, while a full EPYC has 128 lanes and 12 DDR5 channels. Those are substantially different requirements in terms of signal pins, plus the extra power and ground pins required by the higher-core-count parts.
      • by sinij ( 911942 )
        I understand why the socket update was done for these CPUs, but marketing 7000-series server/data center CPUs to workstation users and gamers is a mistake. More to the point, what is needed is a $500-range drop-in (i.e., AM5) upgrade, not an entirely new socket.
  • The lack of a stacked cache option seems somewhat curious.

    The extra cache only matters for certain applications; but that list includes certain workstation things and, where applicable, the benefit is not small or subtle. Are they banking on those people being sufficiently performance sensitive that they'll just suck it up and go with full Genoa-X parts; or is that coming later?
  • I have a Threadripper 3970X ( 32 core version ) and, if I had to do it all over again, would opt for fewer cores and a higher clock speed instead.

    Most software still isn't written to take advantage of multi-core processors.
    Some exceptions are CPU rendering, but even those have been moving over to GPU rendering instead.

    For the rest, the majority of applications will saturate one core to 100% while the other 31 cores sit idle doing absolutely nothing most of the time.
    I don't game on this machine and it's usua

  • > And did we mention they consume (gulp) 350W of power?

    14th-gen i7 & i9 pulled 397W & 428W respectively [anandtech.com] while merely being 20c/28t & 24c/32t.

  • That many cores would make sense for professional use and applications tailored towards exploiting that much concurrency; however, for consumer desktop use, even in high-end gaming, this would be a waste.
  • After buying some smart plugs and looking at statistics and comparison on power use between my 5900X (IIRC 105W TDP) and a Mac Studio with M2 Ultra, which in quite a few workloads is similarly fast (Lightroom just flies on that machine), I'm shocked at how inefficient the Ryzen is. Moving to a 350W TDP CPU is fully out of the question, given the temperature in my home office in the summer.

    I'm even contemplating moving more workloads - well, not FPS games, of course - to the Mac, because it's quieter, smalle

    • After buying some smart plugs and looking at statistics and comparison on power use between my 5900X (IIRC 105W TDP) and a Mac Studio with M2 Ultra, which in quite a few workloads is similarly fast (Lightroom just flies on that machine), I'm shocked at how inefficient the Ryzen is. Moving to a 350W TDP CPU is fully out of the question, given the temperature in my home office in the summer.

      I'm even contemplating moving more workloads - well, not FPS games, of course - to the Mac, because it's quieter, smaller, and just as fast, it seems. Sure it cost an arm, a leg and one kidney, for sure, especially as it's not upgradable, but the SOC architecture and Arm makes for a much more efficient system.

      Where Macs have an advantage is the unified memory between CPU and GPU. Sure, high-end discrete GPUs have way more power and bandwidth and will run circles around a Mac, but for some apps having more memory is way more important.

      In terms of this 96-core monstrosity with only 8 channels of DDR5, the proportions seem truly out of whack. This works out to an abysmal (by today's standards) 3 GB/s/core (3x worse than my relatively "low end" workstation), and I highly doubt they will ever see anything remotely
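
      A back-of-the-envelope check of that per-core figure (assuming DDR5-5200 RDIMMs; the actual supported speed may differ):

      # Rough bandwidth-per-core estimate; DDR5-5200 and 96 cores are the assumptions.
      channels = 8
      transfers_per_sec = 5200e6     # DDR5-5200
      bytes_per_transfer = 8         # 64-bit channel
      cores = 96

      total = channels * transfers_per_sec * bytes_per_transfer / 1e9
      print(f"{total:.0f} GB/s total, {total / cores:.1f} GB/s per core")
      # -> roughly 333 GB/s total, ~3.5 GB/s per core, the same ballpark as the ~3 GB/s/core above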

  • by AcidFnTonic ( 791034 ) on Thursday October 19, 2023 @05:11PM (#63937939) Homepage

    I have a 128 thread last-gen Epyc and see no reason to upgrade. Thing is wicked quick.
