PCI Express 7.0 Standard Provides Eight Times the Bandwidth of Today's Connections (arstechnica.com)
The group responsible for developing and updating the PCI Express standard, the PCI-SIG, aims to update that standard roughly every three years. From a report: Version 6.0 was released earlier this year, and the group has announced that PCIe version 7.0 is currently on track to be finalized sometime in 2025. Like all new PCI Express versions, its goal is to double the available bandwidth of its predecessor, which in PCIe 7.0's case means that a single PCIe 7.0 lane will be able to transmit at speeds of up to 32GB per second. That's a doubling of the 16GB per second promised by PCIe 6.0, but it's even more striking when compared to PCIe 4.0, the version of the standard used in high-end GPUs and SSDs today. A single PCIe 4.0 lane provides bandwidth of about 4GB per second, and you need eight of those lanes to offer the same speeds as a single PCIe 7.0 lane.
Increasing speeds opens the door to ever-faster GPUs and storage devices, but bandwidth gains this large would also make it possible to do the same amount of work with fewer PCIe lanes. Today's SSDs normally use four lanes of PCIe bandwidth, and GPUs normally use 16 lanes. You could use the same number of lanes to support more SSDs and GPUs while still providing big increases in bandwidth compared to today's accessories, something that could be especially useful in servers.
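The arithmetic in the summary can be sketched with the article's round numbers (PCIe 4.0 at roughly 4 GB/s per lane, doubling every generation). The function names here are just for illustration, and real throughput depends on encoding and protocol overhead:

```python
# Per-lane bandwidth by PCIe generation, using the article's round
# numbers: PCIe 4.0 ~= 4 GB/s per lane, doubling each generation.
def lane_bandwidth_gbps(gen):
    """Approximate one-direction bandwidth of a single lane, in GB/s."""
    return 4 * 2 ** (gen - 4)

# How many lanes of an older generation match one newer-generation lane?
def lanes_for_parity(old_gen, new_gen=7):
    return lane_bandwidth_gbps(new_gen) // lane_bandwidth_gbps(old_gen)

for gen in range(4, 8):
    print(f"PCIe {gen}.0: {lane_bandwidth_gbps(gen)} GB/s per lane")

print(lanes_for_parity(4))  # 8 lanes of PCIe 4.0 == 1 lane of PCIe 7.0
```

This is the "same work with fewer lanes" point in miniature: a device that needs a 4.0 x4 link today would, by these figures, need only a fraction of one 7.0 lane.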
Why is this news? (Score:1)
Yeah, n.0 4x == (n+1).0 1x every generation; why is this news?
Re: (Score:2)
Re: (Score:2)
Because it is tech info instead of political bullshit. Quit being pedantic.
Re: (Score:2)
Seriously? The rare time we get actual news for nerds you've got criticism?
Re: (Score:2)
Re: (Score:1)
1. It's a six-digit number which is from Jan 2003
2. This isn't news or stuff that matters (it's an expected outcome from a decade+ of the same improvements)
3. PCIe 5.0 is barely available; 7.0 definitely doesn't matter
4. Elon can go fornicate with himself
Re: But do we need it, really? (Score:2)
Not PCI-SIG's problem, though; they're just doing their job, and doing it pretty well, it seems.
Re: (Score:2)
Re: (Score:2)
Maybe not for your average desktop computer, but there are server-level tasks that require more bandwidth.
Re: (Score:1)
Like what, fucking crypto mining?
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
If you want to put a Dual 100Gigabit ethernet card on a server you're pretty much forced to use a whole PCIe gen3 x16 slot to keep it fed.
Correction: that should be PCIe gen4 for the dual port card, gen3 only works for the single.
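The correction above checks out as rough arithmetic, assuming approximate usable per-lane figures (gen3 ~1 GB/s, gen4 ~2 GB/s) and ignoring protocol overhead; the names and numbers here are just a sketch:

```python
# Rough check: can a gen3 or gen4 x16 slot keep a one- or two-port
# 100GbE NIC fed? Approximate usable per-lane bandwidth, in GB/s.
GBPS_PER_LANE = {3: 1.0, 4: 2.0}

def slot_bandwidth(gen, lanes=16):
    return GBPS_PER_LANE[gen] * lanes

ETH_100G = 100 / 8  # 12.5 GB/s of line rate per 100GbE port

for gen in (3, 4):
    for ports in (1, 2):
        need = ETH_100G * ports
        ok = slot_bandwidth(gen) >= need
        print(f"gen{gen} x16, {ports} port(s): need {need} GB/s -> "
              f"{'OK' if ok else 'too slow'}")
```

By these figures a gen3 x16 slot (~16 GB/s) covers one port but not two (~25 GB/s), while a gen4 x16 slot (~32 GB/s) covers both, matching the correction.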
Re: (Score:2)
Re: (Score:1)
Who could possibly use 32 GB per second?
I think there's a local rule requiring that to be phrased as "32 GB per second should be enough for anyone".
Re: (Score:2)
Higher resolution porn? As always?
Re: But do we need it, really? (Score:2)
Video codecs need it. Try pushing 8K uncompressed video frames around at 60 fps between RAM, the GPU, and other devices like capture/playout cards. Then try doing it with multiple streams or renditions concurrently.
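A back-of-envelope calculation shows why. Assuming 8-bit RGB (3 bytes per pixel) for simplicity; real pipelines often use 10-bit formats, which only makes this worse:

```python
# Raw bandwidth of one uncompressed 8K 60 fps stream,
# assuming 8-bit RGB (3 bytes/pixel).
W, H, BPP, FPS = 7680, 4320, 3, 60

bytes_per_frame = W * H * BPP            # ~99.5 MB per frame
gb_per_second = bytes_per_frame * FPS / 1e9
print(f"{gb_per_second:.1f} GB/s per stream")

# A PCIe 4.0 x4 link (~8 GB/s) fits one such stream with little headroom;
# several concurrent streams need far more.
print(int(8 // gb_per_second), "stream(s) fit in ~8 GB/s")
```

One stream is already close to saturating a 4.0 x4 link, so multiple concurrent streams or renditions quickly demand faster lanes.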
Re: (Score:1)
How about try finding a useful application for 8K video first.
Re: (Score:3)
We've had requests for 16K 60 fps live HEVC encoding, mostly from companies doing VR and the like.
Sports seem to be the vehicle that is promoting 8K more generally though.
Personally, I'd rather panel manufacturers focused on reproducing 100% of the BT.2020 colour space at existing resolutions instead of adding more pixels. I will never have a TV big enough to benefit from 4K in my living room, let alone 8K. But if there's money to be made from providing 8K tooling, why not?
Re: (Score:1)
Eh, I guess so, maybe, but standard def was good enough for half a century, and when we went to 1080p they had to start purposefully blurring the content feed so you don't see the pores in the talk show hosts' skin. In the context of my experience, 4K seems like it could be useful for medical equipment but 8K still seems just ridiculous.
But what are you going to plug in? (Score:2, Insightful)
Re:But what are you going to plug in? (Score:4, Interesting)
For the desktop, this is true, to an extent.
However, the datacenters want to suck it all up in short order. A single port of NDR infiniband needs 16 lanes of PCIe 5. They want to have not only fast NVMes, but a lot of them. Ditto for GPUs.
Even if the appetite for 'faster' doesn't increase, if an SSD only warrants a single lane of pcie7, if a gpu only warrants 4 lanes, etc, then there's budget for processors to start dialing back the pcie lanes to traditional numbers to be cheaper to implement and facilitate.
By the same token, one could imagine a class of desktop processors going to, say, 8 lanes total with no extra lanes from a southbridge. Faster lanes, cheaper by reducing the count.
Re: (Score:2)
Presumably you could devise a PCI-Express switch which would take one lane of 7.0 in, and spit a whole bunch of lanes of 4.0 out.
Re: (Score:2)
Considering it is currently possible (though frequently not supported by consumer hardware) to take an x16 slot and use it as four x4 channels, you're probably right. This ability most likely does lie within the spec, and most likely will be implemented in real hardware. Whether consumer grade hardware will support it is another matter. Who knows?
Re: (Score:2)
There are also switches which take 1 lane in and put multiple lanes out, obviously time-divided somehow. It seems a smaller jump from that to this.
Re: (Score:2)
For the desktop, this is true, to an extent.
However, the datacenters want to suck it all up in short order. A single port of NDR infiniband needs 16 lanes of PCIe 5. They want to have not only fast NVMes, but a lot of them. Ditto for GPUs.
Even if the appetite for 'faster' doesn't increase, if an SSD only warrants a single lane of pcie7, if a gpu only warrants 4 lanes, etc, then there's budget for processors to start dialing back the pcie lanes to traditional numbers to be cheaper to implement and facilitate.
By the same token, one could imagine a class of desktop processors going to, say, 8 lanes total with no extra lanes from a southbridge. Faster lanes, cheaper by reducing the count.
The datacentre is the least likely place to adopt a new standard before it's tested and popular, especially outside of niche uses. When you buy a server you're buying it for years. 3-5 minimum. 7 is not unusual, and I've not been to a DC yet that didn't have a 10+ year old box sitting around because it does something that they haven't figured out (or more likely, the client just hasn't been arsed) to move to newer hardware. OS virtualization has helped this a lot, but that 15 yr old Solaris box is still there
Re: (Score:2)
The datacenter has extremes. AVX512 spent years as datacenter-only. No home is doing QSFP at all. Datacenters are the only places that bother with Infiniband or Fibre Channel (though neither is common). Nvidia's GPUs nowadays split their microarchitecture to be entirely different for the datacenter, to the point of things like the SXM form factor, which is datacenter-only. On Ethernet, home is just now starting to see some 2.5G Ethernet, while the datacenter has long been well beyond that.
More directly, PCIe gen 4
make better use of less lanes? have an switch on M (Score:2)
Make better use of fewer lanes? Have a switch on the motherboard that can take, say, x16 5.0 or 6.0 and spit out two or more full x16 3.0/4.0 slots?
Keep the chipset link from being overloaded?
Re: (Score:2)
Make better use of fewer lanes? Have a switch on the motherboard that can take, say, x16 5.0 or 6.0 and spit out two or more full x16 3.0/4.0 slots?
Keep the chipset link from being overloaded?
It exists already; it's called "PCIe bifurcation".
Infuriatingly, I've yet to find a motherboard that lets me manually budget this. I would love to have a mobo capable of utilizing 12 NVMe SSDs, even if it's only at x1 speed. However, the only options I see for this will split a PCIe x16 slot into four x4 links, which is great, but still makes density problematic.
It's possible to add a truckload of SSDs to a Threadripper motherboard because those have 64 lanes or something absurd like that, but both the processors and m
Re: (Score:2)
"PCIe bifurcation" does not let you get 2 full links of X16 3.0 to an 3.0 devices out an 4.0 x16 from cpu.
Now an switch can take that 4.0 or higher from the cpu and setup outgoing links at 3.0 X16 with out overloading the cpu link.
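The bandwidth accounting behind that claim can be sketched with the same rough per-lane figures as before (gen3 ~1 GB/s, gen4 ~2 GB/s, overhead ignored):

```python
# Bifurcation splits lanes; a switch multiplexes bandwidth. Check whether
# two full gen3 x16 downstream slots oversubscribe a gen4 x16 uplink.
gen3_x16 = 1.0 * 16   # ~16 GB/s per downstream slot
gen4_x16 = 2.0 * 16   # ~32 GB/s uplink from the CPU

downlinks = 2
oversub = downlinks * gen3_x16 / gen4_x16
print(f"oversubscription: {oversub:.1f}x")
```

At a ratio of 1.0x, both gen3 x16 slots can run flat out behind one gen4 x16 uplink, which bifurcation alone (splitting the 16 physical lanes into two x8 groups) could never provide.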
Re: (Score:1)
Re: (Score:3)
Maybe they should stop creating a new standard each year and actually try to create devices that can make use of that bandwidth.
Maybe you should RTFS and see that we have devices which do actively use such bandwidth. There's a reason slots in your motherboard currently have 16x parallel lanes and the complexity that involves.
now we've jumped to 7.0?
No we haven't. Read the first quoted sentence in TFS. Unless you're a time traveller from the future, we do not have PCI Express 7.
Now when you're done talking about the fact that broadband is pointless and 56kbps modems should be enough for everyone simply because you don't see a use case for something many years
Re: (Score:2)
broadband is pointless and 56kbps modems should be enough for everyone simply because you don't see a use case
Large Javascript frameworks, ads, and trackers, of course!
Re: (Score:2)
Re: (Score:2)
Large Javascript frameworks, ads, and trackers, of course!
A great example of things that didn't exist at the time.
Re: (Score:3)
Re: (Score:2)
They are taking a break from new standards each year. They are only coming out with new standards every three years so they actually take two years off between new standards.
We haven't moved to PCIe 7 yet since the standard won't even be finalized for another 3 years.
Re: (Score:2)
Even NVMe SSDs are handicapped by old file-system protocols at this point.
You should look at Microsoft's DirectStorage. It was originally created for Xbox Series X and is making its way over to PCs now. The big gains are less CPU overhead and lower latency. You can do things like load a texture and use it in the next frame. Another gain from it is by making the loads less CPU driven, you can load data directly into GPU memory and have the GPU decompress it. If you're just going to transfer data from one PCIe device to another, the extra bandwidth certainly helps.
Why change so often? (Score:2)
Maybe it's because it's easier to change in smaller steps and test things out, but why create so many standards so closely together? Or are they artificially limiting progress to allow it later (to keep the pace)?
I'd be surprised if they "invented" technology to use every 3 years like clockwork. I guess they could be piggybacking on silicon lithography improvements like most chip tech is, and just following their curve.
Part of me would rather have the same tech for much longer, with a bigger jump so it's
Re: (Score:2)
With the highest of high end GPUs currently, say a 3090, you can expect a single digit percentage performance boost from the latest and greatest PCIe standard. But it will have considerably more of an effect on your NVMe throughput, if that's not already saturated in the hardware itself.
When SATA 3 was still the best way to attach an SSD, nobody bothered making anything capable of more than 600 MB/s. It just didn't make sense. Attaching the SSD directly to PCIe lanes was what was required to bust that situation
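That 600 MB/s ceiling falls straight out of the SATA 3 link parameters: a 6 Gb/s line rate with 8b/10b encoding, which puts 10 bits on the wire for every data byte:

```python
# Where SATA 3's 600 MB/s ceiling comes from.
line_rate_gbps = 6.0        # gigabits per second on the wire
bits_per_byte_on_wire = 10  # 8b/10b encoding: 10 wire bits per data byte

mb_per_second = line_rate_gbps * 1e9 / bits_per_byte_on_wire / 1e6
print(f"{mb_per_second:.0f} MB/s")  # 600 MB/s
```

Once the interface itself capped throughput at 600 MB/s, there was no market incentive to build faster flash controllers until NVMe over PCIe removed the cap.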
Re: (Score:2)
Oh, for an example of a product that was too good for its own good, see the IBM T220 and T221 [wikipedia.org]. If today's standards for 4k had existed then, then I think two things would be true now:
1. The T220 and T221 would still work with modern video cards, and
2. We would have had 4k on the desktop many years sooner.
Instead what we had were $15,000 displays running on bespoke hardware that weren't 100x better than what they replaced.
Re: (Score:2)
Why x8 lanes? Running multiple parallel lanes is nothing more than an admission that the system wasn't fast enough in the first place. The real comparison is 16x vs 1x. And while currently there's no meaningful difference between 8x and 16x, there definitely is a bottleneck at 4x and below.
Never enough. (Score:2)
By the time PCIe 7.0 is in consumer products, GPUs will need the extra bandwidth. PCIe 4.0 is already six years old, so if the pattern holds we're looking at 2031 or so before seeing PCIe 7.0 on consumer desktop motherboards.
Re: (Score:3)
By the time PCIe 7.0 is in consumer products GPUs will need the extra bandwidth.
What do you mean? They need the extra bandwidth now, already. There's a reason the GPU has 16x PCIe lanes going to it. It uses far more bandwidth than PCIe 4.0 provides. So do NVMe drives.
It's not always about increasing total bandwidth. Sometimes it's about potential to reduce complexity.
Too much focus on PCs (Score:2)
Ok (Score:3)
Not bad. You could drive three ports on a 10 gigabit ethernet card simultaneously at full speed off a single lane, or 4 lanes of a quad Infiniband card. Let's say you dedicated 32 lanes to Infiniband, you could drive 12 links of NDR at close to full capacity, or about 75% of 4 lanes of GDR. That's hellishly impressive. So there are networking requirements for a bus this fast.
There's only one catch. There's bugger all you can do with this much data. True, you could use it to build a very nice NDR Infiniband switch, but there are probably faster switches out there today and therefore they can't be using this approach.
Imagine a Beowulf cluster of these (Score:2)
Slots.