Networking / The Internet

New Internet Standard L4S: the Quiet Plan to Make the Internet Feel Faster (theverge.com) 79

Slow load times? Choppy videos? The real problem is latency, writes the Verge — but the good news is "there's a plan to almost eliminate latency, and big companies like Apple, Google, Comcast, Charter, Nvidia, Valve, Nokia, Ericsson, T-Mobile parent company Deutsche Telekom, and more have shown an interest." It's a new internet standard called L4S that was finalized and published in January, and it could put a serious dent in the amount of time we spend waiting around for webpages or streams to load and cut down on glitches in video calls. It could also help change the way we think about internet speed and help developers create applications that just aren't possible with the current realities of the internet... L4S stands for Low Latency, Low Loss, Scalable Throughput, and its goal is to make sure your packets spend as little time needlessly waiting in line as possible by reducing the need for queuing. To do this, it works on making the latency feedback loop shorter; when congestion starts happening, L4S means your devices find out about it almost immediately and can start doing something to fix the problem. Usually, that means backing off slightly on how much data they're sending... [L4S] makes it easier to maintain a good amount of data throughput without adding latency that increases the amount of time it takes for data to be transferred...
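As a rough illustration of that feedback loop, here is a toy sketch in Python (all numbers and names are invented; loosely in the spirit of the DCTCP-style scalable congestion control that L4S senders build on, not any real stack's implementation). The sender keeps a smoothed estimate of how much of its recent traffic was marked and trims its rate in proportion, rather than waiting for packet loss and then halving:

    # Toy model of a sender reacting to congestion marks (illustration only).
    def updated_rate(rate_mbps, acked, marked, alpha, gain=0.5):
        """Shrink the rate in proportion to the fraction of marked packets."""
        if acked == 0:
            return rate_mbps, alpha
        # Smooth the marking signal over time (an exponentially weighted average).
        alpha = (1 - gain) * alpha + gain * (marked / acked)
        # Back off only as much as the marking fraction asks for.
        return rate_mbps * (1 - alpha / 2), alpha

    rate, alpha = 100.0, 0.0
    for acked, marked in [(50, 0), (50, 5), (50, 20), (50, 0)]:
        rate, alpha = updated_rate(rate, acked, marked, alpha)
        print(f"marked {marked}/{acked} -> send at about {rate:.1f} Mbps")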

If you really want to get into it (and you know a lot about networking), you can read the specification paper on the Internet Engineering Task Force's website... The L4S standard adds an indicator to packets, which says whether they experienced congestion on their journey from one device to another. If they sail right on through, there's no problem, and nothing happens. But if they have to wait in a queue for more than a specified amount of time, they get marked as having experienced congestion. That way, the devices can start making adjustments immediately to keep the congestion from getting worse and to potentially eliminate it altogether... In terms of reducing latency on the internet, L4S or something like it is "a pretty necessary thing," according to Greg White, a technologist at research and development firm CableLabs who helped work on the standard. "This buffering delay typically has been hundreds of milliseconds to even thousands of milliseconds in some cases. Some of the earlier fixes to buffer bloat brought that down into the tens of milliseconds, but L4S brings that down to single-digit milliseconds...."
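Read literally, that marking rule is simple enough to sketch. Below is a toy Python queue that shows the idea only; the actual queue-management algorithms in the spec are more involved, and the 1 ms threshold is an assumption for illustration, not a number taken from the standard:

    import time
    from collections import deque

    MARK_THRESHOLD_S = 0.001   # assumed "specified amount of time" (1 ms)
    queue = deque()            # entries: (packet_dict, enqueue_timestamp)

    def enqueue(packet):
        queue.append((packet, time.monotonic()))

    def dequeue():
        """Pop the next packet; flag it if it waited longer than the threshold,
        so endpoints can react right away instead of waiting for drops."""
        packet, entered = queue.popleft()
        packet["congestion_experienced"] = (time.monotonic() - entered) > MARK_THRESHOLD_S
        return packet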

Here's the bad news: for the most part, L4S isn't in use in the wild yet. However, there are some big names involved with developing it... When we spoke to Greg White from CableLabs, he said there were already around 20 cable modems that support it today and that several ISPs like Comcast, Charter, and Virgin Media have participated in events meant to test how prerelease hardware and software work with L4S. Companies like Nokia, Vodafone, and Google have also attended, so there definitely seems to be some interest. Apple put an even bigger spotlight on L4S at WWDC 2023 after including beta support for it in iOS 16 and macOS Ventura... At around the same time as WWDC, Comcast announced the industry's first L4S field trials in collaboration with Apple, Nvidia, and Valve. That way, content providers can mark their traffic (like Nvidia's GeForce Now game streaming), and customers in the trial markets with compatible hardware like the Xfinity 10G Gateway XB7 / XB8, Arris S33, or Netgear CM1000v2 gateway can experience it right now...

The other factor helping L4S is that it's broadly compatible with the congestion control systems in use today...

This discussion has been archived. No new comments can be posted.


  • by bugs2squash ( 1132591 ) on Sunday December 10, 2023 @09:03PM (#64071927)
    sounds like FECN and BECN. That's why frame relay dominates today.
  • by ArchieBunker ( 132337 ) on Sunday December 10, 2023 @09:37PM (#64071999)

    It's the literal megabytes of scripts embedded in html pages.

    • Yes but no. (Score:4, Informative)

      by Gravis Zero ( 934156 ) on Sunday December 10, 2023 @11:14PM (#64072135)

      You're 100% right when it comes to loading webpages. However, L4S is intended to address issues with streaming data. Video/audio content comes to mind, but it will also help with online gaming.

      • Do people really have problems with streaming content?

        Not that I'd turn down anything that improves network traffic flow, but I never experience streaming issues and don't see others online complaining about it either.

        Is this intended to help end users or smooth out networks on the backbone?

        • I think most people in major metropolitan areas don't have problems with streaming non-interactive video content, but for most people out of range of T1-speed DSL/cable lines (which, believe it or not, is still something like 50% of the US population and 97% of the geographical territory) streaming anything more than a 128kbps music stream is out of the question, and streaming game platforms are basically inoperable unless you're in one of the half-dozen neighborhoods served by Google Fiber or Verizon Fios.

          • The point here is, "L4S" sounds like dynamic bitrate to balance out traffic congestion (roughly the idea sketched below). And rural low-speed internet only needs the streaming bitrate set to a consistent lower quality/resolution, or the hardware upgraded to a better tier. No need for dynamic.

            The only places one may want dynamic bitrate are video conferencing over mobile internet, or some live sports event that people want "real" live, not heavily buffered "live". But then those live sports events should have gone through traditional...
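
            A minimal sketch of the "dynamic bitrate" idea, in Python, assuming a made-up rendition ladder and a throughput estimate measured elsewhere by the player (illustration only, not any particular streaming stack):

                # Pick the highest rendition the measured throughput can sustain.
                RENDITIONS_KBPS = [400, 1200, 3000, 6000]   # e.g. roughly 360p .. 1080p

                def pick_bitrate(measured_kbps, headroom=0.8):
                    usable = measured_kbps * headroom        # leave margin for variation
                    fitting = [r for r in RENDITIONS_KBPS if r <= usable]
                    return fitting[-1] if fitting else RENDITIONS_KBPS[0]

                print(pick_bitrate(5000))   # -> 3000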

        • Re:Yes but no. (Score:4, Informative)

          by phantomfive ( 622387 ) on Monday December 11, 2023 @12:35AM (#64072241) Journal
          Streaming video doesn't depend on latency because you can buffer it for minutes or even hours if you want to.
          • Yes, exactly. So that goes to the heart of my question: isn't this a technology for provider network smoothing? Which is a good thing, but the article's tone sort of implies end users should care. That's my take on it anyway.

          • by short ( 66530 )
            Not when you want to quickly scan through the video. But then that is why it is better to download any video first.
          • by dfghjk ( 711126 )

            Latency isn't a virtue; nothing "depends" on it.

            And the problem isn't any one video stream, it's the network. Buffering for "minutes or even hours" is exactly what adds latency. Network architect you are not.

      • Re:Yes but no. (Score:4, Informative)

        by jonwil ( 467024 ) on Monday December 11, 2023 @01:23AM (#64072295)

        Audio and Video chat seems like it would be one of the biggest beneficiaries of this.

        • Audio and Video chat seems like it would be one of the biggest beneficiaries of this.

          This, and gaming. Basically interactive media consumption. Thus the list of names behind the proposal.

      • Latency != jitter, though.

    • by Waccoon ( 1186667 ) on Monday December 11, 2023 @01:38AM (#64072307)

      Of course latency is the issue. Stupid web developers insist on writing code that relies on 400 round trips.

      Updating the transmission protocol won't fix that.

      • Of course latency is the issue. Stupid web developers insist on writing code that relies on 400 round trips.

        Updating the transmission protocol won't fix that.

        I look at Gen Z listening to vinyl albums in the 21st century and wonder why old-fashioned simple HTML pages haven't made a retro-kewl comeback yet.

        But then I remember how much they actually pay for all those websites they use, and remember how we got here.

        It's like a Catch-22, but you find the victim insisting the noose is the hottest fashion statement.

        • People who grew up in the '90s haven't hit middle age just yet. Right now, the '80s retro computer scene is still going, since my group is now in our late 40s.

          Give it another 5-10 years. Either kitsch web will make a retro comeback, or Web3.0 will totally collapse under its own weight. Either way, it should be a terrific "I told you so" moment for the old farts like me who know how to make a complete web page fit into 50K.

    • by gweihir ( 88907 )

      Indeed. Deliver lots of crap slowly, or the stuff people actually asked for fast.

    • by jd ( 1658 )

      That wouldn't be so bad if the Internet were laced with high-speed caching. But, IIRC, plans to do this were nixed by websites that accused web caches and web proxies of violating copyright.

    • by AmiMoJo ( 196126 )

      For random websites where the user doesn't have a strong reason to go there (say instead of some other site), every second of additional load time reduces your audience. Beyond 2 seconds the drop off is quite dramatic.

    • What I don't understand is why the common scripts can't be downloaded and cached. If a site is using React 9.99, reference it from the appropriate site; if my browser has already downloaded it from the same or a different site, then there's no need to download it again.
    • Scripts? Anyone without an adblocker will tell you it is the multiple video ads per page that all automatically play.
    • I mean, 640K ought to be enough for anybody, right?

  • L4ST (Score:4, Interesting)

    by dohzer ( 867770 ) on Sunday December 10, 2023 @09:38PM (#64072003)

    L4S stands for Low Latency, Low Loss, Scalable Throughput

    Shouldn't it stand for L4ST, or "LAST"?

  • by Gabest ( 852807 ) on Sunday December 10, 2023 @09:49PM (#64072027)

    Accept cookies, no notifications thank you, press a few Xs to close the ads. Now I can read the damn page. Oh, it's one of those AI generated articles again, with a clickbait title and absolutely no content.

    • Ha, that was your first mistake: trying to read something. If it's not a snippet of information embedded near the end of a 10-minute unsearchable streaming video, it's not peak internet.

    • "Oh, it's one of those AI generated articles again"
      Headline / Google search link: "Tim Cook net worth, salary, wife, house EXPLAINED!"
      Relevant part of AI generated article: "With regards to a wife, Tim Cook will not have one, because he identifies as homosexual."

  • Google (Score:5, Insightful)

    by Dan East ( 318230 ) on Sunday December 10, 2023 @09:51PM (#64072031) Journal

    Considering Google is on board, you can be assured that this technology will definitely make ads load faster.

    • Considering Google is on board, you can be assured that this technology will definitely make ads load faster.

      That reminds me of visiting my parents years ago and trying to show them a YouTube video on the 5secondfilms channel. I had a 15-second advert load first, which loaded without delay, but in the middle of the actual content it stopped to buffer. Take note of the name of the channel. The channel name is a bit of a misnomer because each 5-second film has a short title and end card that puts the total time of the video at a whole 8 seconds. Even so, they can't put in the effort to get an 8-second video...

      • For video advertising, the ad is usually served from a different server than the actual video (or webpage). So whoever paid for the ad hosted it on a really nice ad server.
      • by AmiMoJo ( 196126 )

        Probably because the ads are lower quality. When I occasionally don't have ad blocking enabled, I notice that they are often 720p or 1080p30 low bitrate. They play okay, but the 1080p60 or 4k video has problems.

      • by jonadab ( 583620 )
        I have had very much the opposite experience on YouTube. Whenever it loads an advertisement, performance goes right into the toilet.
  • That requires new physics, and you'll have a VERY pissed off Einstein on your hands.

    • by jd ( 1658 )

      ISPs intend to hijack the TARDIS, download the future Internet into Squid proxy servers, and deliver you the content before it's written. If they time it well enough, the positive latency on the connection minus how far in the future they get the content will total zero.

    • I know you are joking but I think that Einstein would be intrigued rather than mad. Like any good scientist, it's the failures that open new doorways into understanding. Being right is boring.

  • by Arnonyrnous Covvard ( 7286638 ) on Sunday December 10, 2023 @10:49PM (#64072105)

    Provide the bandwidth and don't add "slow down" markers to unauthenticated headers.

    • That's a bit like saying that automobile traffic congestion is due to the road not being wide enough.

      While sometimes true, most of the time it's induced demand for the larger pipe that is causing the congestion.

    • Provide the bandwidth and don't add "slow down" markers to unauthenticated headers.

      Hanging a SLOW DOWN sign every other mile above the 8-lane superhighway doesn't necessarily mean there's a highway problem.

      I have 300Mb+ coming to my home. That should be more than enough bandwidth, even for today. Perhaps we just focus on getting rid of the SLOW DOWN signs instead of enabling finger-pointing between road-makers and sign-hangers.

  • Whatever the modern megalithic tech "community" doesn't feel like creating, it just ass-backwards a counterfeit result by lowering expectations.

    Piling crap on crap is cheaper and more amenable to quarterly stock reports than doing things right.
  • Can I set these flags in responses from my server so they get to the customer faster?
  • Seriously. There is no need for any "new" transmission protocols.

    • Non-crappy Internet is about $3500/mo.

      Do you mean "just live with crappy Internet?"

      Or perhaps we should invent around current limitations to reduce the cost of non-crappy Internet?

      • by gweihir ( 88907 )

        Non-crappy Internet is about $3500/mo.

        So you live in the 3rd world? Too bad.
        And no, this protocol will not fix things for you.

  • At least some of the companies involved are known for deliberately charging twice the standard prices for peering and therefore having very bad connections to the rest of the Internet. A router should never have the problem of buffering or throwing away a packet, except for unexpected emergencies.

  • I mean if any packet I send or receive experienced congestion at my ISP.... they are obviously not doing their job. Will I get my money back?

  • What's to keep some ISPs from just setting the "slow down" bit on all the traffic unless you are paying a premium?
    • This is the first step towards officially killing net neutrality: 'My packet has higher priority than yours.'
  • It is overloaded servers and congested wireless networks. You can have multi-second latency and still stream perfectly fine. But if the packets aren't getting there because of bad wireless or an overloaded server you will reach the end of your streaming buffer.

    Use a wired connection and avoid ad-ridden sites and generally you don't get what this article is calling 'latency'.
    • by PPH ( 736903 )

      and congested wireless networks.

      Oh please stop shilling for the wireless operators trying to monopolize all the bandwidth of other services. They can't even give away the channel capacity that they've got now. Which is why the virtual mobile operators are having such an easy time buying it wholesale.

  • L4S promises to enable all kinds of new snazzy stuff, but it really won't happen until version 5 or 6 or 99 comes out, or maybe never.
