

PCI Express 7.0's Blazing Speeds Are Nearly Here, But PCIe 6 is Still Vapor (pcworld.com)
An anonymous reader shares a report: PCI Express 7 is nearing completion, the PCI Special Interest Group said, and the final specification should be released later this year. PCI Express 7, the next revision of the interconnect that forms the backbone of the modern motherboard, is at stage 0.9, which the PCI-SIG characterizes as the "final draft" of the specification. The technology was at version 0.5 a year ago, almost to the day, and work on it began in 2022.
The situation remains the same, however: while modern PC motherboards are stuck on PCI Express 5.0, the specification itself moves ahead. PCI Express has doubled its data rate about every three years, from 64 gigatransfers per second in PCI Express 6.0 to the upcoming 128 gigatransfers per second in PCIe 7. (Again, it's worth noting that PCIe 6.0 exists solely on paper.) Put another way, PCIe 7 will deliver 512GB/s in both directions across an x16 connection.
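For the curious, the headline figure falls out of simple arithmetic. Here is a minimal sketch of the raw line-rate math (before encoding and protocol overhead, which shave a few percent off in practice):

    # Where the "512GB/s across an x16 connection" figure comes from.
    # Raw line rate only; usable throughput is a bit lower after encoding overhead.
    gt_per_lane = 128   # PCIe 7.0: 128 GT/s, numerically 128 Gbit/s raw per lane
    lanes = 16          # a full-length x16 slot

    raw_gbit_per_direction = gt_per_lane * lanes           # 2048 Gbit/s each way
    raw_gbyte_per_direction = raw_gbit_per_direction / 8   # 256 GB/s each way
    bidirectional = raw_gbyte_per_direction * 2             # 512 GB/s, the number quoted above
    print(raw_gbyte_per_direction, bidirectional)            # 256.0 512.0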
It's worth noting that the PCI-SIG doesn't see PCI Express 7 living inside the PC market, at least not initially. Instead, PCIe 7 is expected to be targeted at cloud computing, 800-gigabit Ethernet and, of course, artificial intelligence. It will be backwards-compatible with the previous iterations of PCI Express, the SIG said.
Even 5.0 would be nice (Score:3)
AMD's chipsets (which, after Intel's collapse, are most chipsets from a consumer retail standpoint) are all connected using a PCIe 4.0 x4 link, which is easy to bottleneck when you're hanging so much off them: all the USB controllers, multiple M.2 slots, PCIe slots, etc. Even their latest and highest-end chipsets have this bottleneck. Even just moving the chipset link alone to PCIe 5.0 would be a big improvement.
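As a rough illustration of why that uplink is the pinch point, here is a back-of-the-envelope sketch in Python; the bandwidth figures are approximate and the downstream device mix is hypothetical, not any specific board's layout:

    # PCIe 4.0 x4 chipset uplink vs. the devices typically hanging off the chipset.
    # Usable bandwidth figures (GB/s) are approximate; the downstream mix is hypothetical.
    uplink_gen4_x4 = 7.9  # ~2 GB/s per lane after 128b/130b encoding, times 4 lanes

    downstream = {
        "Gen4 x4 NVMe SSD":    7.0,  # a single fast SSD can nearly saturate the uplink alone
        "second Gen4 x4 NVMe": 7.0,
        "USB 20Gbps port":     2.5,
        "4x USB 10Gbps ports": 5.0,
        "2.5GbE NIC":          0.3,
    }

    total = sum(downstream.values())
    print(f"uplink capacity : {uplink_gen4_x4:.1f} GB/s")
    print(f"worst-case load : {total:.1f} GB/s")
    print(f"oversubscription: {total / uplink_gen4_x4:.1f}x")

Everything rarely peaks at once, which is why the oversubscription usually goes unnoticed, but two simultaneous chipset-attached NVMe transfers are already enough to hit the ceiling.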
Re: (Score:2)
AMD desktop CPUs have:
Up to 2 x4 links to M.2
USB links directly in the CPU.
x16 (can be split x8/x8)
x4 chipset link.
Re: (Score:3)
Up to 2 x4 links to M.2
Yes, but the second set of x4 lanes is dedicated to the USB4/Thunderbolt controller, so using that M.2 slot requires permanently stealing half or all of its lanes.
USB links directly in the CPU.
Only two USB 10G and two USB2 ports. All other USB ports connect through the chipset. On X870, that will typically include 1x 20G, 4x 10G, and 6x USB2.
x16 (can be split x8/x8)
Only on some motherboards; most can't bifurcate that. And even on some that can, using the second slot may starve the GPU fans of air.
x4 chipset link.
But only PCIe 4.0. And you're hanging a *lot* off that link. If
Especially stable and low-power 5.0 would be nice (Score:2)
It's all datacenter (Score:2)
PCIe, Nearly Fast Enough for Video Cards (Score:2)
Re: (Score:3)
But if we're talking about games large enough for this to matter, I would assume the use of DMA, since that's kind of exactly what it's for. Especially since once you get under the hood and read the actual tra
Re: (Score:2)
DirectStorage promised direct NVMe-to-GPU communication, bypassing the CPU, but to my knowledge the Windows version of it still lacks this feature -- it is mostly just direct-on-GPU texture decompression.
Vulkan also has the on-GPU decompression as an extension, but also no CPU bypass.
Re: (Score:2)
the texture load tries to go from NVMe
I'm still using HDDs you insensitive clod.
Re: (Score:2)
the texture load tries to go from NVMe
I'm still using HDDs you insensitive clod.
An 8 inch floppy should be enough for anyone!
Re: PCIe, Nearly Fast Enough for Video Cards (Score:2)
Some of us are still using tapes.
Actually.. More Than 800GigE (Score:3)
>It's worth noting that the PCI-SIG doesn't see PCI Express 7 living inside the PC market, at least not initially. Instead, PCIe 7 is expected to be targeted at cloud computing, 800-gigabit Ethernet and, of course, artificial intelligence.
PCI-E 6.0 will support 800GigE NICs because a 16-lane slot will handle 1Tbit/sec. These exist now; Nvidia's already launched the 800G CX8 NIC even though there's not yet a server motherboard to connect it to.
PCI-E 7.0 will support 1600GigE (yes, it's a thing) NICs because a 16-lane slot will handle 2Tbit/sec. It'll likely be eons before we see a motherboard that can do it.
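A quick sanity check of those numbers in Python, using raw line rates only (real headroom is a little lower once encoding and protocol overhead are counted); the PCIe 5.0 / 400GbE row is an extra comparison point, not from the post above:

    # Can an x16 slot feed a given Ethernet NIC? Raw line-rate check only.
    def x16_raw_gbps(gt_per_s_per_lane):
        # Per-lane raw Gbit/s numerically matches the GT/s figure; 16 lanes per slot.
        return gt_per_s_per_lane * 16

    for gen, gts, nic in [("PCIe 5.0", 32, 400), ("PCIe 6.0", 64, 800), ("PCIe 7.0", 128, 1600)]:
        slot = x16_raw_gbps(gts)
        verdict = "enough" if slot > nic else "too slow"
        print(f"{gen}: x16 ~ {slot} Gbit/s raw -> {nic}GbE NIC: {verdict}")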
Re: (Score:2)
My quoting sucks. As does my spelling.
Re: (Score:2)
The CX8 can be installed in a PCIe Gen5 system by occupying two x16 slots through an auxiliary PCIe connection card. This ability began with the ConnectX-6, when they started to support Gen4.
Re: (Score:2)
The CX8 can be installed in a PCIe Gen5 system by occupying two x16 slots through an auxiliary PCIe connection card. This ability began with the ConnectX-6, when they started to support Gen4.
Sure, assuming you have the extra PCI-E slot(s) to offer it, along with the lanes to drive it.
Re: (Score:2)
>It's worth noting that the PCI-SIG doesn't see PCI Express 7 living inside the PC market, at least not initially. Instead, PCIe 7 is expected to be targeted at cloud computing, 800-gigabit Ethernet and, of course, artificial intelligence.
PCI-E 6.0 will support 800GigE NICs because a 16-lane slot will handle 1Tbit/sec. These exist now; Nvidia's already launched the 800G CX8 NIC even though there's not yet a server motherboard to connect it to.
PCI-E 7.0 will support 1600GigE (yes, it's a thing) NICs because a 16-lane slot will handle 2Tbit/sec. It'll likely be eons before we see a motherboard that can do it.
Given the development lead times for server processors that support PCIe 6.0 (and the mainboards to go with them), late 2025 is the earliest realistic expectation for public availability, and only at the server level. The transition to PAM4 signalling has increased costs to the point that PCIe 5.0 may be the stopping point for consumer boards for another few years (at least at pricing most consumers expect).
Don't really care that much. (Score:3)
We're at a point where passively cooled mini-PCs that cost less than 2,000 euros are closing in on the petaflops range, with system clocks sometimes exceeding 5GHz, RAM clocks doubling or even quadrupling that, and single desktop CPU dies coming with 64 cores and a GFX unit built in. With standard mini-boards that can shove terabytes around in seconds.
I'm writing this on an older Tuxedo laptop with 24GB RAM and 1TB of storage, with two screens and a bizarre range of applications running that each waste obscene amounts of resources, because these days everything that renders pixels to a screen has to lug around its own 50MB Electron stack, even if it's only a glorified and hideously bloated XChat-client ripoff with nice colorful animations. And the computer doesn't even budge. The fan only spins up when I'm doing an Ubuntu update or compiling something big in the summer.
If I ever need my private AI bot/assistant/buddy, my requirements might grow again and catch up with the current state of the art, but for now I'm more than good. I still clearly remember my Cyrix P200+ with its 75MHz sysclock, the first desktop CPU that needed a fan, smaller than a matchbox, and its 32MB of RAM. We've come a long way since then and have at least tripled everything we had wished for. I can say for sure that I'm totally OK and certainly do not need yet another bus upgrade for my computing needs. And I expect it to stay that way, with an 80-90% chance.
My 2 cents.
Re: (Score:2)
All I Want Is More Slots (Score:5, Interesting)
It's such a pain to get a motherboard with expansion slots these days, that doesn't cost a king's ransom.
Back in the PCI days, a motherboard would have 4-8 slots, and they all worked, all the time. Buy card -> add card -> install driver -> use card. Done.
Now, it's a game of whack-a-mole...
The motherboard has four slots: an x16, an x8, and two x1s. The x16 ratchets down to x8 if the x8 slot is also occupied. The first x1 slot is useless because it's immediately adjacent to the x16 slot, so the GPU fans cover it. The only way you can use it is if the GPU is in the x8 slot, which isn't a win because the second x1 slot is itself immediately adjacent to the x8.
Meanwhile, the x8 slot shares its bandwidth with one of the NVMe slots, so if there's an NVMe drive on the board, it ratchets down to x4, but if the x16 slot is populated AND the first NVMe slot is populated, then the x8 slot doesn't work at all. This leaves one working x1 slot, but shoot me now, it's not one of the open-ended ones, so I can't fit this x4 card in and let it run at x1 speeds, so I have to buy a new x1 variant of the x4 card I already have, slap it in and realize that the HBA cables aren't long enough to reach the drives, but it's the only slot that it'll work in...so I get a longer HBA cable and oh, the processor only has 24 PCIe lanes, which are taken up by the x16 slot, the two NVMe drives, and the northbridge...which apparently, needs the bandwidth for the completely empty SATA ports, but I *can* get it to work if I upgrade my processor to one with 28 lanes, which then requires a firmware upgrade to work, but I can't get the machine to boot into BIOS to do the update because I returned the 24-lane processor, and OH FFS.......
NONE of this, of course, is ever described on the box, nor documented in the manual, nor is it configurable in the BIOS so I can at least make some choices and visualize what will and won't work. No, one must search around and hope that some Redditor is in the same boat and took the time to map it out and document it online.
The way around this, of course, is to get a Threadripper CPU that costs $1,500, to power the $1,200 motherboard that has 8 x16 slots that all work, but now the power usage doubles and you need an eATX case to fit it, which doesn't fit on the desk anymore, and OH FFS...
What I would *love* is a motherboard that handles quantity over quality. Do I need PCIe 7? No... but if PCIe 7 has octuple the throughput of PCIe 4, and I've got 24 lanes of PCIe 7 from the CPU, make a motherboard that presents them as 192 lanes of PCIe 4. Give me 8 full-length PCIe 4 slots, 4 NVMe slots, and give the rest to the northbridge. Every single slot and port works, regardless of what else is populated.
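The raw-bandwidth arithmetic behind that wish does check out on paper. A minimal sketch, assuming a hypothetical switch or bridge that trades newer-generation uplink bandwidth for more older-generation downstream lanes (real switch silicon adds cost and latency, which is exactly why it isn't common on desktop boards):

    # Per-lane rate doubles each generation from 4.0 onward, so in raw-bandwidth terms
    # a few fast upstream lanes could fan out into many slower downstream lanes.
    GT_PER_LANE = {4: 16, 5: 32, 6: 64, 7: 128}   # GT/s per lane by PCIe generation

    cpu_lanes_gen7 = 24
    equivalent_gen4_lanes = cpu_lanes_gen7 * GT_PER_LANE[7] // GT_PER_LANE[4]
    print(f"{cpu_lanes_gen7} lanes of PCIe 7.0 carry as much raw bandwidth as "
          f"{equivalent_gen4_lanes} lanes of PCIe 4.0")

    # Carve that budget into the wished-for layout (counted in Gen4 lanes):
    layout = {"8 x16 slots": 8 * 16, "4 NVMe x4 sockets": 4 * 4, "chipset uplink x8": 8}
    used = sum(layout.values())
    print(f"layout consumes {used} of {equivalent_gen4_lanes} Gen4-equivalent lanes")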
I'll take some variants of this - give me a motherboard that's got only one PCIe slot, but fill the rest of the board up with NVMe slots. Can I fit a dozen? Because I want a ludicrously-fast NVMe NAS without having to buy PCIe switches at $300 a pop. An 8-slot board is pretty much guaranteed to be eATX; I'll take a variant that has 6 slots that's standard ATX size with the same principle.
Ultimately, I'd love nothing more than a desktop motherboard that isn't a game of musical slots....
Re: (Score:2)
Re: (Score:2)
I posed a similar question on hardware forums a while ago and got slammed for it.
Basically, I asked why motherboards have everything built in and permanently consuming lanes, with that tech then becoming obsolete, instead of just having more slots with said functionality on add-in cards. That way manufacturers wouldn't need so many different SKUs for different configurations; it'd just be a handful of motherboards with different bundled add-in cards (essentially modern Super-IO cards) for the WI
Re: (Score:2)
Many things that used to require a card can be done cheaply through USB nowadays: Ethernet, audio, Wi-Fi, Bluetooth, HDMI, and more. The need for many PCI slots has become rather niche, and niche is premium.
I understand you want higher end performance but most folks do not need better than 1Gbps networking within the house. I would like a 10Gbps network in my house but I'm not able to justify the premium, just like I don't really *need* a sports car.
Re: (Score:2)
Re: (Score:2)
Back in the PCI days, a motherboard would have 4-8 slots, and they all worked, all the time. Buy card -> add card -> install driver -> use card. Done.
The old days had wildly different requirements. It's been a *LONG* time since I've seen a PC with more than a GPU in it. That's the target market for motherboard vendors. On the flip side, I fondly remember a P200 MMX with a Matrox GPU, a Voodoo 2 video accelerator, a Sound Blaster AWE32, and a 56k modem card. That was the bare minimum required for a gaming PC. I added a 10base2 network card at some point, and then when USB was released I realised I was out of upgrade options and didn't have a spot for the card since I
128 gigatransfers per second (Score:3)
err... wtf is this metric?
Re: (Score:2)
This is a way of representing the raw line rate of a single PCIe lane. At a low level, this metric makes sense for the folks designing the PCIe PHYs themselves. The PCI-SIG goes into great detail on the PHY link, so in that context the metric makes sense.
Re: (Score:2)
Re: (Score:2)
err... wtf is this metric?
It is a clue that you are not getting 128 Gb/s but something less, because some of those transfers are protocol overhead rather than user data. How much less? That is not a fixed value, or they would do the translation for you and advertise Gb/s.
Re: (Score:2)
err... wtf is this metric?
A standard metric used to describe transfers on buses of varying channel width and potentially varying signalling. Things like PCIe vary in how many bits can be sent with each sample based on a number of factors (not just electrical limits in the slot, but dynamically allocated), so the underlying bus speed is described in GT/s. To figure out the actual bandwidth you just multiply the transfers by the bus width and correct for the signalling used. For PCIe 7.0 in an x16 configuration those 128.0 GT/s equa
Re: (Score:2)
The block code used in PCIe 6 and 7 is 242 payload bytes per 256 raw bytes, with 6 bytes of FEC overhead and 8 bytes of CRC. So it's properly written as a 242B/256B code, versus the older 128b/130b code that expanded 128 data bits to 130 transfer bits. PCIe 6 needs that because its signaling scheme is PAM4 instead of binary NRZ, which in turn means (all things considered) a much higher bit error rate.
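Putting the two replies together, here is a short sketch of how GT/s turns into usable bytes per second once the line code is accounted for (TLP/DLLP framing overhead is ignored, so real-world figures land a little lower):

    # Convert raw GT/s into approximate usable bandwidth per direction for an x16 link.
    GENERATIONS = {
        # name: (GT/s per lane, encoding efficiency)
        "PCIe 4.0": (16,  128 / 130),   # 128b/130b line code
        "PCIe 5.0": (32,  128 / 130),
        "PCIe 6.0": (64,  242 / 256),   # 242B/256B FLIT: 6B FEC + 8B CRC per 256B
        "PCIe 7.0": (128, 242 / 256),
    }

    def usable_gb_per_s(gt_s, efficiency, lanes=16):
        raw_gbits = gt_s * lanes           # GT/s numerically equals raw Gbit/s per lane
        return raw_gbits * efficiency / 8  # bits -> bytes

    for name, (gt_s, eff) in GENERATIONS.items():
        print(f"{name}: x16 ~ {usable_gb_per_s(gt_s, eff):.0f} GB/s per direction")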
"stuck"? (Score:2)
There is nothing "stuck" about being on PCIe Gen 5 at home. There is literally NO hardware in the consumer market that uses that much bandwidth, not even remotely close. We have some NVMe drives that can burst at the x4 speed, but who, in their normal usage, is actually reading/writing at well over 10GB/sec sustained? That's literally a 100Gbps link if they're interacting with a NAS on their local network. And gaming? The amount of bandwidth consumed by GPUs is nowhere even remotely close to what a Gen 5 x16
Re: (Score:2)
Truth. I want real world examples of people running into bandwidth limits.