IT Technology

PCI Express 7.0 Standard Provides Eight Times the Bandwidth of Today's Connections (arstechnica.com) 52

The group responsible for developing and updating the PCI Express standard, the PCI-SIG, aims to update that standard roughly every three years. From a report: Version 6.0 was released earlier this year, and the group has announced that PCIe version 7.0 is currently on track to be finalized sometime in 2025. Like all new PCI Express versions, its goal is to double the available bandwidth of its predecessor, which in PCIe 7.0's case means that a single PCIe 7.0 lane will be able to transmit at speeds of up to 32GB per second. That's a doubling of the 16GB per second promised by PCIe 6.0, but it's even more striking when compared to PCIe 4.0, the version of the standard used in high-end GPUs and SSDs today. A single PCIe 4.0 lane provides bandwidth of about 4GB per second, and you need eight of those lanes to offer the same speeds as a single PCIe 7.0 lane.

Increasing speeds opens the door to ever-faster GPUs and storage devices, but bandwidth gains this large would also make it possible to do the same amount of work with fewer PCIe lanes. Today's SSDs normally use four lanes of PCIe bandwidth, and GPUs normally use 16 lanes. You could use the same number of lanes to support more SSDs and GPUs while still providing big increases in bandwidth compared to today's accessories, something that could be especially useful in servers.
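
As a rough back-of-the-envelope sketch of the lane arithmetic above, assuming the article's roughly 4 GB/s per PCIe 4.0 lane and a clean doubling each generation (encoding and protocol overhead are ignored, and the names are purely illustrative):

    # Per-lane bandwidth by PCIe generation, assuming the article's ~4 GB/s
    # figure for a PCIe 4.0 lane and a doubling each generation.
    PCIE4_LANE_GBPS = 4  # GB/s, as quoted in the summary

    def lane_bandwidth(gen: int) -> int:
        """Approximate GB/s for a single lane of the given PCIe generation."""
        return PCIE4_LANE_GBPS * 2 ** (gen - 4)

    for gen in range(4, 8):
        bw = lane_bandwidth(gen)
        lanes = lane_bandwidth(7) // bw  # lanes of this generation per one 7.0 lane
        print(f"PCIe {gen}.0: ~{bw} GB/s per lane; x{lanes} to match one PCIe 7.0 lane")
    # PCIe 4.0: ~4 GB/s per lane; x8 to match one PCIe 7.0 lane
    # ...
    # PCIe 7.0: ~32 GB/s per lane; x1 to match one PCIe 7.0 lane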

This discussion has been archived. No new comments can be posted.

  • Yeah, n.0 x2 == (n+1).0 x1 every generation; why is this news?

  • Maybe they should stop creating a new standard each year and actually try to create devices that can make use of that bandwidth. Even NVMe SSDs are handicapped by old file-system protocols at this point. Very few things show a benefit when moving from PCIe 3.0 to 4.0 - now we've jumped to 7.0? Sort of like opening a McDonald's on the moon today in anticipation of future colonists.
    • by Junta ( 36770 ) on Thursday June 23, 2022 @04:18PM (#62645666)

      For the desktop, this is true, to an extent.

      However, datacenters want to suck it all up in short order. A single port of NDR InfiniBand needs 16 lanes of PCIe 5. They want not only fast NVMe drives, but a lot of them. Ditto for GPUs.

      Even if the appetite for 'faster' doesn't increase, if an SSD only warrants a single lane of PCIe 7, if a GPU only warrants 4 lanes, etc., then there's room for processors to dial PCIe lane counts back to traditional numbers, making them cheaper to implement.

      By the same token, one could imagine a class of desktop processors going to, say, 8 lanes total with no extra lanes from a southbridge. Faster lanes, cheaper by reducing the count.
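
      As a rough sketch of the arithmetic behind that NDR figure, assuming NDR InfiniBand runs at 400 Gb/s (about 50 GB/s per direction), a PCIe 5.0 lane carries roughly 4 GB/s per direction, and each generation doubles that; protocol overhead is ignored and the names are illustrative:

          import math

          NDR_GBPS = 400 / 8       # one NDR port: ~50 GB/s per direction
          PCIE5_LANE_GBPS = 4      # ~4 GB/s per lane per direction at PCIe 5.0

          for gen in (5, 6, 7):
              per_lane = PCIE5_LANE_GBPS * 2 ** (gen - 5)
              lanes = math.ceil(NDR_GBPS / per_lane)
              print(f"PCIe {gen}.0: {lanes} lanes per NDR port")
          # PCIe 5.0: 13 lanes -> in practice a standard x16 link
          # PCIe 6.0: 7 lanes  -> an x8 link
          # PCIe 7.0: 4 lanes  -> an x4 link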

      • Presumably you could devise a PCI-Express switch which would take one lane of 7.0 in, and spit a whole bunch of lanes of 4.0 out.

        • by Mal-2 ( 675116 )

          Considering it is currently possible (though frequently not supported by consumer hardware) to take an x16 slot and use it as four x4 channels, you're probably right. This ability most likely does lie within the spec, and most likely will be implemented in real hardware. Whether consumer grade hardware will support it is another matter. Who knows?

          • There are also switches which take 1 lane in and put multiple lanes out, obviously time-divided somehow. It seems a smaller jump from that to this.
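
            A rough sketch of the fan-out such a switch could offer without oversubscribing its upstream link, assuming the clean doubling-per-generation figures from the summary (a real switch also adds a little latency, and the helper here is purely illustrative):

                # Downstream lanes of an older generation that one upstream lane
                # of a newer generation can feed at full rate.
                def fanout(upstream_gen: int, downstream_gen: int) -> int:
                    return 2 ** (upstream_gen - downstream_gen)

                print(fanout(7, 4))  # 8  -> one 7.0 lane covers a 4.0 x8's worth
                print(fanout(7, 3))  # 16 -> or a 3.0 x16's worth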

      • by mjwx ( 966435 )

        For the desktop, this is true, to an extent.

        However, datacenters want to suck it all up in short order. A single port of NDR InfiniBand needs 16 lanes of PCIe 5. They want not only fast NVMe drives, but a lot of them. Ditto for GPUs.

        Even if the appetite for 'faster' doesn't increase, if an SSD only warrants a single lane of PCIe 7, if a GPU only warrants 4 lanes, etc., then there's room for processors to dial PCIe lane counts back to traditional numbers, making them cheaper to implement.

        By the same token, one could imagine a class of desktop processors going to, say, 8 lanes total with no extra lanes from a southbridge. Faster lanes, cheaper by reducing the count.

        The datacentre is the least likely place to adopt a new standard before it's tested and popular, especially outside of niche uses. When you buy a server you're buying it for years, 3-5 minimum. 7 is not unusual, and I've not been to a DC yet that didn't have a 10+ year old box sitting around because it does something that they haven't figured out how (or more likely, the client just hasn't been arsed) to move to newer hardware. OS virtualization has helped this a lot, but that 15 yr old Solaris box is still there

        • by Junta ( 36770 )

          The datacenter has extremes. AVX512 spent years as datacenter-only. No home is doing QSFP at all. Datacenters are the only places that bother with InfiniBand or Fibre Channel (though neither is common). Nvidia's GPUs nowadays split their microarchitecture to be entirely different for the datacenter, to the point of things like the SXM form factor, which is datacenter-only. On Ethernet, homes are just now starting to see some 2.5G, while datacenters have long been well beyond that.

          More directly, PCIe gen 4

    • Make better use of fewer lanes? Have a switch on the motherboard that can take, say, an x16 5.0 or 6.0 link and spit out two or more full x16 3.0 / 4.0 slots?
      Make the chipset link not be overloaded?

      • Make better use of fewer lanes? Have a switch on the motherboard that can take, say, an x16 5.0 or 6.0 link and spit out two or more full x16 3.0 / 4.0 slots?
        Make the chipset link not be overloaded?

        It exists already; it's called "PCIe bifurcation".

        Infuriatingly, I've yet to find a motherboard that lets me manually budget this. I would love to have a mobo capable of utilizing 12 NVMe SSDs, even if it's only at x1 speed. However, the only options I see for this will split a PCIe x16 slot into four x4 links, which is great, but still makes density problematic.

        It's possible to add a truckload of SSDs to a Threadripper motherboard because those have 64 lanes or something absurd like that, but both the processors and m

        • "PCIe bifurcation" does not let you get 2 full links of X16 3.0 to an 3.0 devices out an 4.0 x16 from cpu.

          A switch, however, can take that 4.0 (or higher) link from the CPU and set up outgoing x16 3.0 links without overloading the CPU link.
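
          As a rough sketch of the difference, assuming roughly 1 GB/s per 3.0 lane per direction and a doubling each generation (a real switch chip adds cost and some latency; names are illustrative):

              # Bifurcation: the CPU's x16 is split into narrower links at the
              # same generation -- no bandwidth gained, just more, narrower links.
              bifurcated = [("x4", "gen4")] * 4      # x16 gen4 -> four x4 gen4 links

              # Switch: upstream runs at the newer generation, downstream at the
              # older one, so total downstream bandwidth can match upstream.
              def x16_gbps(gen: int) -> int:
                  return 16 * 2 ** (gen - 3)         # ~1 GB/s per gen-3 lane per direction

              uplink = x16_gbps(4)                   # 4.0 x16 ~ 32 GB/s per direction
              downlinks = 2 * x16_gbps(3)            # two 3.0 x16 ~ 32 GB/s combined
              print(uplink >= downlinks)             # True: no oversubscription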

        • FWIW, the only motherboard manufacturer that I've seen provide control of PCIe bifurcation is Supermicro, on some of their server motherboards.
    • Maybe they should stop creating a new standard each year and actually try to create devices that can make use of that bandwidth.

      Maybe you should RTFS and see that we have devices which do actively use such bandwidth. There's a reason slots in your motherboard currently have 16x parallel lanes and the complexity that involves.

      now we've jumped to 7.0?

      No we haven't. Read the first quoted sentence in TFS. Unless you're a time traveller from the future, we do not have PCI Express 7.

      Now when you're done talking about the fact that broadband is pointless and 56kbps modems should be enough for everyone simply because you don't see a use case for something many years

      • broadband is pointless and 56kbps modems should be enough for everyone simply because you don't see a use case

        Large Javascript frameworks, ads, and trackers, of course!

        • by bn-7bc ( 909819 )
          The modem/broadband quote was probably made in the late 1990s/early 2000s, when large JS frameworks were not that big of a problem and ads were rather smaller. OK, internet over dialup wasn't ideal, but it was somewhat workable, not like today when anything below 25 Mbps (or maybe even 50) feels horrible. This wasn't meant as a nitpick, but context is important. It's just as with the (possibly fake) comment from Mr Gates, "640K should be enough for everybody": obviously this sounds totally bonkers today, but wh
        • Large Javascript frameworks, ads, and trackers, of course!

          A great example of things that didn't exist at the time.

    • Comment removed based on user account deletion
    • by Rhipf ( 525263 )

      They are taking a break from new standards each year. They are only coming out with new standards every three years so they actually take two years off between new standards.
      We haven't moved to PCIe 7 yet since the standard won't even be finalized for another 3 years.

    • by edwdig ( 47888 )

      Even NVMe SSDs are handicapped by old file-system protocols at this point.

      You should look at Microsoft's DirectStorage. It was originally created for the Xbox Series X and is making its way over to PCs now. The big gains are less CPU overhead and lower latency. You can do things like load a texture and use it in the next frame. Another gain is that by making the loads less CPU-driven, you can load data directly into GPU memory and have the GPU decompress it. If you're just going to transfer data from one PCIe device to another, the extra bandwidth certainly helps.

  • Maybe it's because it's easier to change in smaller steps and test things out, but why create so many standards so closely together? Or are they artificially limiting progress to allow it later (to keep the pace)?

    I'd be surprised if they "invented" technology to use every 3 years like clockwork. I guess they could be piggybacking on silicon lithography improvements like most chip tech is, and just following their curve.

    Part of me would rather have the same tech for much longer, with a bigger jump so it's

    • by Mal-2 ( 675116 )

      With the highest of high-end GPUs currently, say a 3090, you can expect a single-digit percentage performance boost from the latest and greatest PCIe standard. But it will have much more of an effect on your NVMe throughput, if that's not already saturated in the hardware itself.

      When SATA 3 was still the best way to attach an SSD, nobody bothered making anything capable of more than 600 MB/s. It just didn't make sense. Attaching the SSD directly to PCIe lanes was what was required to bust that situation
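
      As a rough sketch of the ceilings in question, assuming SATA 3's 6 Gb/s loses 20% to 8b/10b encoding and an NVMe x4 link gets roughly 1 GB/s per 3.0 lane per direction, doubling each generation:

          SATA3_MBPS = 6000 * 8 / 10 / 8       # 6 Gb/s with 8b/10b encoding -> ~600 MB/s

          def nvme_x4_mbps(gen: int) -> int:
              return 4 * 1000 * 2 ** (gen - 3)  # x4 link, ~1 GB/s per gen-3 lane

          print(f"SATA 3 ceiling: ~{SATA3_MBPS:.0f} MB/s")
          for gen in (3, 4, 5):
              print(f"NVMe x4 gen {gen}: ~{nvme_x4_mbps(gen)} MB/s")
          # SATA 3 ceiling: ~600 MB/s
          # NVMe x4 gen 3: ~4000 MB/s
          # NVMe x4 gen 4: ~8000 MB/s
          # NVMe x4 gen 5: ~16000 MB/s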

      • by Mal-2 ( 675116 )

        Oh, for an example of a product that was too good for its own good, see the IBM T220 and T221 [wikipedia.org]. If today's standards for 4k had existed then, I think two things would be true now:
        1. The T220 and T221 would still work with modern video cards, and
        2. We would have had 4k on the desktop many years sooner.

        Instead what we had were $15,000 displays running on bespoke hardware that weren't 100x better than what they replaced.

    • Why x8 lanes? Running multiple parallel lanes is nothing more than an admission that the system wasn't fast enough in the first place. The real comparison is 16x vs 1x. And while currently there's no meaningful difference between 8x and 16x there definitely is a bottleneck at 4x and below.

  • By the time PCIe 7.0 is in consumer products, GPUs will need the extra bandwidth. PCIe 4.0 is already six years old, so if the pattern holds we're looking at 2031 or so before seeing PCIe 7.0 on consumer desktop motherboards.

    • By the time PCIe 7.0 is in consumer products, GPUs will need the extra bandwidth.

      What do you mean? They need the extra bandwidth now, already. There's a reason the GPU has 16 PCIe lanes going to it: it uses far more bandwidth than a single PCIe 4.0 lane provides. So do NVMe drives.

      It's not always about increasing total bandwidth. Sometimes it's about potential to reduce complexity.

  • This will be great for servers and back-end storage (large back-end storage - think EMC).
  • by jd ( 1658 ) <imipak@ y a hoo.com> on Thursday June 23, 2022 @04:46PM (#62645744) Homepage Journal

    Not bad. You could drive three ports on a 10 gigabit ethernet card simultaneously at full speed off a single lane, or 4 lanes of a quad InfiniBand card. Say you dedicated 32 lanes to InfiniBand: you could drive 12 links of NDR at close to full capacity, or about 75% of 4 lanes of GDR. That's hellishly impressive. So there are networking requirements for a bus this fast.

    There's only one catch: there's bugger all you can do with this much data. True, you could use it to build a very nice NDR InfiniBand switch, but there are probably faster switches out there today, and therefore they can't be using this approach.
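
    As a rough check of those numbers, assuming roughly 16 GB/s per PCIe 7.0 lane per direction (half the summary's 32 GB/s combined figure), NDR at 400 Gb/s, 10GbE at 10 Gb/s, and encoding overhead ignored:

        PCIE7_LANE_GBPS = 16    # ~GB/s per lane, per direction
        TEN_GBE_GBPS = 10 / 8   # one 10GbE port: 1.25 GB/s
        NDR_GBPS = 400 / 8      # one NDR InfiniBand link: ~50 GB/s per direction

        # Three 10GbE ports off a single lane: a small fraction of it.
        print(3 * TEN_GBE_GBPS / PCIE7_LANE_GBPS)      # ~0.23 of one lane

        # Twelve NDR links on 32 lanes: roughly 85% of line rate each.
        print(32 * PCIE7_LANE_GBPS / (12 * NDR_GBPS))  # ~0.85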
