The Internet

New Decentralized Routing Protocol Aims To Replace BGP

"The Border Gateway Protocol was first described in 1989...," according to Wikipedia, "and has been in use on the Internet since 1994."

But now long-time Slashdot reader jovius reports that a startup named Syntropy "aims to replace BGP as the default routing method of the internet, by using nodes around the world to constantly gather data on the inefficiencies of the current network." That intelligence is then used to route data over the most efficient paths. Tests with hundreds of servers have shown that latencies can be reduced by tens to hundreds of milliseconds. Connections are encrypted by default, and jitter is also reduced.

Eventually, the company-run servers will be augmented by tens of thousands of nodes run by users and smart devices, whose operators are rewarded for their work.

The team was recently joined by Shawn Hakl, former SVP at Verizon, and Roman Pacewicz, former Chief Product Officer at AT&T. Bill Norton, co-founder of Equinix and NANOG, is one of Syntropy's founders. Syntropy is an Oracle and Microsoft partner, and is transforming into a foundation and DAO to govern the protocol work.

The Decentralized Autonomous Routing Protocol (DARP) has just been opened for community testing, and the system is live at https://darp.syntropystack.com/.
  • by mveloso ( 325617 ) on Sunday March 14, 2021 @03:21PM (#61157814)

    Routing isn't really dynamic due to peering agreements and topology issues.

    This is solving a problem that nobody really has, except maybe for the military and some large enterprises.

    • by fermion ( 181285 )
      Not to mention a few rogue company-run routers to make sure that packets go nowhere, government-run routers to make sure they go through censors, or Google-run routers to make sure ads are included.
    • except maybe for the military and some large enterprises.

      Maybe that's their target market.

    • by Z00L00K ( 682162 )

      On top of that, BGP is maybe not the best protocol, but it does the job.

    • This sounds like a really cool version of TOR, but as a routing overlay network. I'll have to read more, but it sounds like nodes get rewarded based on participation (like Bitcoin), and security-wise it sounds like it would be pretty immune to route hijacking too. If ISPs could financialize their participation, it makes peering agreements and commit rates a moot point, IMO
      • This sounds like a really cool version of TOR

        This, by the description alone can never possibly exist.

  • The connections are by default encrypted ...

    It's gotta use Blockchain ... :-)

  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Sunday March 14, 2021 @03:26PM (#61157832)
    Comment removed based on user account deletion
    • BGP isn't that slow to converge.
      That's more a function of the number of routes it has to transfer and apply rules to.

      But ISPs and Tier 1's will never, ever, give up the direct control on a per-prefix basis that BGP allows us.
      Speaking as someone who manages 5 autonomous systems in 3 states with over 150 individual peering sessions.
      • Comment removed based on user account deletion
        • What like the hold timers?
          So that BGP is slow to... deconverge?
          I'd rate that statement as true.
          As far as convergence though, again, 800k routes converges considerably faster via BGP than 800k LSAs via OSPF or some other IGP.
          Of course, no well-designed IGP is going to be throwing around 800k LSAs.
    • You are correct. What we need is transparency and path control on the Internet. Why should your ISP decide which way your packet routes (and lie to you when you try to find out your packet's path)? You as an end host should be able to decide your packets' paths (or at least influence them).

      SCION [scion-architecture.net] is a BGP-replacement that achieves this. It is an open network and FOSS, not a closed network by a startup. It is driven by researchers, not the latest blockchain hype. I guess that is why it doesn't get as much

    • by ffejie ( 779512 )

      Just what we want is unpredictable behavior constantly changing.

      This is exactly what circuit switched people said about IP. Turns out, dynamically rerouting things using IGPs and EGPs isn't all that hard to figure out, and provides a whole bunch of reliability that circuit switched networks couldn't.

  • by Burdell ( 228580 ) on Sunday March 14, 2021 @03:27PM (#61157840)

    Route "optimizers" are basically devices designed to cost you money and screw up your routing table (and if misconfigurations are found, screw up other people's routing tables too).

  • by Nkwe ( 604125 ) on Sunday March 14, 2021 @03:28PM (#61157848)
    Not sure how virtual routing on top of physical routing is going to help. Oh yeah right, blockchain!
  • by 93 Escort Wagon ( 326346 ) on Sunday March 14, 2021 @03:34PM (#61157860)

    One of it's core functions beholden to a start-up.

    BGP could certainly use improvement - but any changes *have* to be consensus-driven and without encumbrance.

    • Man, I gotta take the time to remove "its -> it's" from my autocorrect list - it's almost always wrong.

    • but any changes *have* to be consensus-driven and without encumbrance.

      Literally.
      The AS isn't an Autonomous System in name only.
      Replacing BGP will go a lot like replacing IPv4, in that it will never fucking happen.

  • I can't be bothered trawling through it all, so my question for anyone that knows about this is - is it an open protocol or another attempt to replace the open internet with a toll?

    • I can't be bothered trawling through it all, so my question for anyone that knows about this is - is it an open protocol or another attempt to replace the open internet with a toll?

      I spent a couple of minutes trying to navigate their web site. I didn't see any reference to an internet RFC or to the availability of source code.

      If this is a proprietary protocol, I see two possibilities: (1) It is no better than BGP, and so will be ignored. (2) It is an improvement over BGP, in which case everyone will wait 20 years for the patents to expire, reverse-engineer it, document it, and implement it.

      • > I see two possibilities: (1) It is no better than BGP, and so will be ignored. (2) It is an improvement over BGP, in which case everyone will wait 20 years for the patents to expire, reverse-engineer it, document it, and implement it.

        Three possibilities because 2b doesn't follow from 2a.
        Actually four because there are other protocols.

        (1) It is no better than BGP, and so will be ignored.
        Seems likely. Without digging too deep into it, it sounds like it has a single point of failure

        • There is also a fifth option, which has happened repeatedly when Cisco introduces a new protocol for whatever:

          5) People see the benefit of the new protocol. They don't use it because it's proprietary. Rather, they design a similar, but open, protocol that provides the same benefit.

          Several protocols used in networking today are open protocols inspired by Cisco proprietary protocols.

          But anything that requires throwing out BGP (and all of the BGP routers) and starting over isn't going to happen.
          BGP *WILL* ge

          • BGP4 was introduced in 1994.

            https://tools.ietf.org/html/rf... [ietf.org]

          • I can also see a 6th option, the new concept provides an improvement but is mooted by a different technology, like CDNs, which negate some of the value of optimized routes since you more or less don't care about longer haul routes as your content is being delivered more physically locally.

            I can also see a long-term trend where the number of organizations who are not large data center operators or other service providers who run BGP-like protocols is in decline because multi-ISP connectivity is a transparent

        • I imagine the transition to any successor to BGP would be like the transition from IPv4 to IPv6; the old and new protocols would both be active for a period of time, until all the old-protocol-only equipment has been replaced.

          • True, yet even the 30-year IPv4-to-IPv6 migration is a transition from one version of the protocol to another.
            It's not starting from scratch and inventing a completely new and entirely different protocol.

            The IPv6 standard came out in 1995. The primary change is that addresses got longer. More than 25 years later, we're not halfway done with the transition.

            A completely different protocol, which requires a totally different frame of mind, would take even longer than an update to the current protocol.

              The problem with IPv6 is that it also changed a lot of other things, so it's not easy to transpose from IPv4 to IPv6, at least in people's minds.

              One example is that 'NAT is bad' with no concerns about the fact that some people/organizations may want to run a private address space for various reasons.

              • > One example is that 'NAT is bad' with no concerns about the fact that some people/organizations may want to run a private address space for various reasons.

                Let's be specific. Some organizations want to run NAT because as an accidental side-effect, NAT provides a really, really crappy simulation of a firewall.

                A crappy simulation of a firewall IS bad. It's bad because it gives them the illusion of some kind of security, while causing various operational problems along with security issues that they

                • by Z00L00K ( 682162 )

                  That's just one reason; another is to ensure that they have full control over the address space.

                  NAT is usually handled by a firewall anyway, and the only reason I see for not having NAT is to make it simpler to perform scanning/sniffing of traffic patterns for future espionage efforts. You really DON'T want to expose your network layout externally, and that's why NAT is still one of the key features. Spy organizations put together maps and lay out a puzzle of important information before making an intrusion an

                  • That's just fud.

                    You can easily request and receive a /48 from your RIR and those numbers are officially assigned to you and only you so long as you pay a small fee.

                    That is control.

                    NAT is a hack to work around the limitations of IPV4 and breaks a lot of things.

                    If you want similar functionality you can simply deny any inbound traffic without valid state. It's the exact same "security" function IPV4 nat provides without the additional step of mangling the tcp/udp headers.

                    I suggest you take some time to learn

                    • NAT is a hack to work around the limitations of IPV4 and breaks a lot of things.

                      In general, that is precisely why and how it is used. That however is not its only use.
                      Non-globally routable networks that still need a way out have very real utility.
                      The fact that you *can* use globally scoped addresses in IPv6 due to the abundance of space doesn't mean the necessity is gone.
                      In IPv6, we have RFC4193 ULA addresses for that purpose, which are the same thing: Non-globally scoped address space. If you wish to give a ULA-only network internet access, you use NAT.

                      If you want similar functionality you can simply deny any inbound traffic without valid state. It's the exact same "security" function IPV4 nat provides without the additional step of mangling the tcp/udp headers.

                      Almost. It provides the same
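For reference, the RFC 4193 ULA prefixes mentioned above are easy to generate. A minimal Python sketch (using `os.urandom` in place of the RFC's timestamp/EUI-64 hash algorithm, which is a simplification, not the RFC's exact procedure):

```python
import ipaddress
import os

def random_ula_prefix():
    """Build an RFC 4193-style ULA /48: the fd00::/8 prefix (fc00::/7
    with the L bit set) plus a 40-bit pseudo-random Global ID."""
    global_id = os.urandom(5)  # 40 pseudo-random bits
    addr = int.from_bytes(bytes([0xFD]) + global_id + bytes(10), "big")
    return ipaddress.IPv6Network((addr, 48))

net = random_ula_prefix()
print(net)  # a random /48 somewhere inside fd00::/8
assert net.subnet_of(ipaddress.IPv6Network("fc00::/7"))
```

Because the Global ID is random, two sites that each generate their own ULA /48 are overwhelmingly unlikely to collide, which is what makes later mergers or VPN interconnects painless.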

                    • by xous ( 1009057 )

                      Non-globally routable networks that still need a way out have very real utility.
                      The fact that you *can* use globally scoped addresses in IPv6 due to the abundance of space doesn't mean the necessity is gone.
                      In IPv6, we have RFC4193 ULA addresses for that purpose, which are the same thing: Non-globally scoped address space. If you wish to give a ULA-only network internet access, you use NAT.

                      No, this is an abuse of ULA. The whole point of ULA is to ensure they never are reachable externally.

                      If you must have limited Internet connectivity best practice would be to use global space behind stateful inspection that only permits establishing traffic to a restricted subset of services.

                      If you want similar functionality you can simply deny any inbound traffic without a valid state. It's the exact same "security" function IPV4 nat provides without the additional step of mangling the tcp/udp headers.

                      Almost. It provides the same connection-tracked forwarding security that NAT provides.

                      It does not, however, provide the 1:N network obfuscation that NAT provides, and that does have real utility in the real world.

                      Which is practically useless. Nearly all protocols leak the local addresses anyway. If you are worried about the mac address inclusion using stateless auto-configuration you can use RFC4941.

                      If you are worried about stati

                    • No, this is an abuse of ULA. The whole point of ULA is to ensure they never are reachable externally.

                      This is absolute idiocy. The RFC makes no such claim, and you are thus not qualified to either.

                      If you must have limited Internet connectivity best practice would be to use global space behind stateful inspection that only permits establishing traffic to a restricted subset of services.

                      Separate goals. Keep up.

                      Which is practically useless. Nearly all protocols leak the local addresses anyway. If you are worried about the mac address inclusion using stateless auto-configuration you can use RFC4941.

                      Demonstrably false.
                      Few protocols leak those, and generally require helpers.

                      If you are worried about statically assigned servers you can obfuscate the IP address at assignment time. I'd argue, by your measure, that NAT provides significantly less "security", as PAT only has an address space of 2^16 whereas a single /64 has an address space of 2^64. If you really wanted to be paranoid you could assign a /64 per server and randomize the address.

                      There is no modern-day distinction between NAT and PAT.

                      NAT66 provides no value and would bring along all the breakage and stupidity that comes from NAT44.

                      And yet it exists.

                      Your opinion is noted, as stupid as it is.
                      Signed, an industry expert and veteran.

                    • by xous ( 1009057 )

                      In order for hosts to autoconfigure Local IPv6 addresses, routers
                      have to be configured to advertise Local IPv6 /64 prefixes in router
                      advertisements, or a DHCPv6 server must have been configured to
                      assign them. In order for a node to learn the Local IPv6 address of
                      another node, the Local IPv6 address must have been installed in a
                      naming system (e.g., DNS, proprietary naming system, etc.) For these
                      reasons, controlling their usage in a site is straightforward.

                      To limit the use of Local IPv6 addresses the following guidelines
                      apply:

                      - Nodes that are to only be reachable inside of a site: The local
                      DNS should be configured to only include the Local IPv6
                      addresses of these nodes. Nodes with only Local IPv6 addresses
                      must not be installed in the global DNS.

                      - Nodes that are to be limited to only communicate with other
                      nodes in the site: These nodes should be set to only
                      autoconfigure Local IPv6 addresses via [ADDAUTO] or to only
                      receive Local IPv6 addresses via [DHCP6]. Note: For the case
                      where both global and Local IPv6 prefixes are being advertised
                      on a subnet, this will require a switch in the devices to only
                      autoconfigure Local IPv6 addresses.

                      - Nodes that are to be reachable from inside of the site and from
                      outside of the site: The DNS should be configured to include
                      the global addresses of these nodes. The local DNS may be
                      configured to also include the Local IPv6 addresses of these
                      nodes.

                      - Nodes that can communicate with other nodes inside of the site
                      and outside of the site: These nodes should autoconfigure global
                      addresses via [ADDAUTO] or receive global address via [DHCP6].

                      They may also obtain Local IPv6 addresses via the same
                      mechanisms.

                      While it doesn't come straight out and say it I'd argue the wording implies that nodes that need access to the Internet should have a second globally scoped address.

                      Demonstrably false.
                      Few protocols leak those, and generally require helpers.

                      Just to name a few: FTP, SIP, STUN, ICE. With a bit of JavaScript your browser can be tricked into accessing internal hosts on your LAN.

                      There is no moder day distinction between NAT and PAT.

                      I use PAT to be specific.

                      Your opinion is noted, as stupid as it is.
                      Signed, an industry expert and veteran.

                      And now we are reduced to name-calling and appeals to authority.

                    • While it doesn't come straight out and say it I'd argue the wording implies that nodes that need access to the Internet should have a second globally scoped address.

                      It comes out quite specifically and says all it needs to say.
                      See the RFC definition for SHOULD [ietf.org]
                      Note its prolific use in the RFC for ULA.

                      Should is accurate, within the RFC definition of that word, but it does *not* mean that doing otherwise is an abuse.

                      Just to name a few: FTP, SIP, STUN, ICE. With a bit of JavaScript your browser can be tricked into accessing internal hosts on your LAN.

                      You didn't just name a few. You named all of the ones of (somewhat) relevance.

                      I use PAT to be specific.

                      That's fair.

                      And now we are reduced to name-calling and appeals to authority.

                      My appeal to authority met your unsubstantiated claim of abuse, a directly incorrect claim, and a conflation of your opinion with best practice.
                      So yes, my appeal to authority

                  • > to ensure that they have full control over the address space.

                    IPv4 NAT lets you control up to 24 bits of the address space.
                    IPv6 gives you 56 bits of address space you control.

                    > put together maps and lays a puzzle of important information before making an intrusion and if they can map geographical location of important servers before doing a physical break.

                    Again, trying to scan / map 56 bits is 2^32 times harder than scanning 24 bits. Four billion times harder.
                    Four billion times harder means we just p
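A quick sanity check of the arithmetic above:

```python
# Comparing the search spaces: scanning a 56-bit space versus a
# 24-bit space, as claimed in the comment above.
ratio = 2**56 // 2**24
print(ratio)           # 4294967296
print(ratio == 2**32)  # True: "four billion times harder"
```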

                    • By the way, you REALLY don't want to argue with me about how I do my job.

                      But I do.

                      You CAN'T scan an IPv6 network. Which was a serious problem for one toolkit I developed.

                      Your toolkit sucks.
                      You can. I manage 4 /32s. Scans are constantly happening.
                      Sure it's not as simple as a 24-bit scan, you have to be smart about it, but that's just the thing: people are smart.
                      RIR allocations are freely visible, and nodes tend to end up in "reasonable" places, on subnet boundaries.
                      The lower /64 of your /56 really has a tiny amount of entropy in real life (It's not like organizations are running servers using SLAAC addresses), so it's actually more like an 8-bit range with maybe

                    • by Z00L00K ( 682162 )

                      You don't have to do a port scan; all you need is to sniff for addresses to see which ranges are in use, and by fine-tuning you can even figure out their geographical location. Perfect for serving "relevant" ads like Facebook and Google do.

                    • by Z00L00K ( 682162 )

                      And if you are running non-NATed then there's no need to scan; you can just sit silently and collect from and to addresses that are passing by on the public net and use that to build a map of the net. So it's going to be relatively easy to see which ranges are used. Facebook and Google have a great opportunity here - they can just log IP addresses and polish the granularity. For a large company with many branches it may be of interest to fine-tune the geographical target area to serve the "relevan

                • NAT provides a really, really crappy simulation of a firewall.

                  NAT provides a quite good baseline firewall with the rule of "only packets related to an established connection are allowed in"
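The "only packets related to an established connection" rule can be sketched as a toy connection tracker (an illustrative model only; the `StatefulFilter` name is invented here, and real implementations such as netfilter conntrack handle protocol state, timeouts, and far more):

```python
class StatefulFilter:
    """Toy stateful filter: inbound packets are allowed only if they
    match the reverse tuple of a connection initiated from inside."""

    def __init__(self):
        # (src_ip, src_port, dst_ip, dst_port) tuples of connections
        # initiated from the inside network.
        self.connections = set()

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        """Record an outbound connection attempt."""
        self.connections.add((src_ip, src_port, dst_ip, dst_port))

    def inbound_allowed(self, src_ip, src_port, dst_ip, dst_port):
        """Allow inbound traffic only for the reverse of a known flow."""
        return (dst_ip, dst_port, src_ip, src_port) in self.connections

fw = StatefulFilter()
fw.outbound("192.168.1.10", 49152, "93.184.216.34", 443)
print(fw.inbound_allowed("93.184.216.34", 443, "192.168.1.10", 49152))  # True
print(fw.inbound_allowed("203.0.113.7", 443, "192.168.1.10", 49152))    # False
```

Note this is exactly the behavior a plain stateful firewall rule gives you without any address translation, which is the point the replies below argue over.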

                  • See, there's the problem. You think you have "quite a good base firewall". You have no idea which packets are actually allowed in, or the mechanisms I can use to let more in, because you didn't explicitly decide what can come in and what can't. It's all just kinda magic that you don't understand. Which is a problem - you don't know what's allowed in to your network or how it actually gets allowed.

                    For example, did you know I can open any hole I want in your "quite a good base firewall" just by you visitin

                    • I should clarify -
                      I think you're smart enough to decide what should be allowed in.
                      I think you're much, much smarter than UPnP.
                      That's why YOU should decide what's allowed in and out. By setting that in your firewall.
                      Not by "allow anyone in if any JavaScript tried to connect it, or if my lightbulb said to allow all traffic on port 736."

                      I think you are much smarter than to allow traffic based on the SOURCE PORT. NAT isn't.

                    • Shut up, ray.

                      You have no idea which packets are actually allowed in, or the mechanisms I can use to let more in, because you didn't explicitly decide what can come in and what can't.

                      This is an absurd statement.
                      A NAT forwards based on connection tracking. If there is no connection, and no pre-existing rule, then there is no inbound forwarding to the private network.
                      If you can guess both 1) IP+port tuples, 2) TCP sequencing, you can potentially inject packets into an established stream, but that's astronomically unlikely.
                      With regard to UDP, there's no TCP sequencing in the tuple, so that's slightly easier, but still astronomically infeasible.

                      And either way, both of thos
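The "astronomically unlikely" odds above can be roughed out. The window size and port range below are assumed typical values, not measurements:

```python
# Rough odds that a single blind guess injects a packet into an
# established, NATed TCP flow. The attacker is assumed to know the
# server-side IP and port and must guess the NAT's external source
# port plus an in-window sequence number.
ephemeral_ports = 2**16   # possible external source ports
seq_space = 2**32         # TCP sequence number space
window = 2**16            # assumed receive window (acceptable sequence numbers)

# Probability that one random guess hits the right port AND an
# in-window sequence number:
p = (1 / ephemeral_ports) * (window / seq_space)
print(p == 2**-32)  # True: about 2.3e-10 per guess
```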

                    • JavaScript does not allow you to specify the source port in a connection attempt. Ephemeral ports are forced.
                      Furthermore, even if JavaScript itself allowed it, you wouldn't be able to bind the socket to a source port that was already in use, making your punched hole quite pointless.
                    • > If you can guess both 1) IP+port tuples, 2) TCP sequencing, you can potentially inject packets into an established stream, but that's astronomically unlikely.

                      A) NAT doesn't track TCP sequence numbers AT ALL. See that's the problem with this faith in NAT, you're putting faith in something you don't understand rather than setting up a few simple rules that you do understand.

                      B) I can open a TCP hole in the NAT without actually establishing any TCP connection at all. SYN on one side that happens to match

                    • A) NAT doesn't track TCP sequence numbers AT ALL.

                      Entirely false.
                      That's implementation dependent.
                      The most common implementation in the universe, netfilter, does.

                      See that's the problem with this faith in NAT, you're putting faith in something you don't understand rather than setting up a few simple rules that you do understand.

                      Well that's just silly.
                      I run a network of 4000 customers on a CGNAT implementation (M:N) via netfilter... that I wrote.

                      B) I can open a TCP hole in the NAT without actually establishing any TCP connection at all.

                      No, you can't.

                      SYN on one side that happens to match one of the thousands of ICMP type 3 packets I'm sending over.

                      This is fucking stupid.
                      Unreachables have no such behavior to the connection tracking table.

                      Your NAT assumes that the type 3 packet is in response to the SYN.

                      All you've done is broken a connection attempt. That's a DOS. That's not a forwarding hole.
                      Now if you meant a type 5, that might actually at least be somewhat fucking point

                    • Ah, I see. You spent a lot of time building something based on NAT, a creation you are proud of. So it's pride; that's why you'd rather defend your idea than secure it.

                      Here's the netfilter connection tracking source code.
                      Let me know where you think you see a sequence number in nf_conn.
                      https://elixir.bootlin.com/lin... [bootlin.com]

                      What's actually tracked is a very short HASH of the source and destination IP and port. Very short as in there are only a few thousand buckets (128-65536 depending on RAM). Send a few thou

                    • Ah, I see. You spent a lot of time building something based on NAT, a creation you are proud of. So it's pride; that's why you'd rather defend your idea than secure it.

                      No, accuracy. In that I demonstrated knowledge of factual workings of a system that you demonstrated incorrect knowledge of.
                      You can call it pride to try to discount it, but that doesn't make you look smart.

                      Sure I'm proud of it. The task was daunting. Have you ever written a NAT implementation in the kernel? Netfilter does most of the work for you, but it requires that you become intimately familiar with it. Do I think it was an ideal deployment? Hell no. But when you're in the business of fiber deploymen

                    • Thanks for the link. I may read that code later if I get time.

                      I know that in some cases, such as active FTP, connection tracking has to change the payload. In those instances it has to deal with the sequence numbers because it's building a new packet, with a new checksum.

                      Just glancing at the link, I don't know if that's the sequence number code you're seeing, or if it's actually being used for NAT connection tracking (outside of FTP etc that changes the payload). Hopefully I'll get time to read it later.

                    • That's the layer-4 handler for TCP. It tracks the entire state of the TCP session. Look in the header [bootlin.com] if you want the exact structure that holds the data, that is associated with the ct.
                      an nf_conn struct holds a union (proto) which contains structures for the different tracked session protocols: DCCP, SCTP, TCP, and GRE.
                      This is in addition to the layer 3 tuples pointed to by nf_conntrack_tuple_hash.

                      I don't know if that's the sequence number code you're seeing, or if it's actually being used for NAT connection tracking (outside of FTP etc that changes the payload)

                      I do not know what this means.
                      It is the sequence code, and it is used for NAT connection tracking- I'm unsu

                    • Thanks for that. I'll probably dig into it more later.
                      Sorry for getting personal earlier.

          • You are 100% correct, as long as you accept that "for a period of time" equates to indefinitely.
        • Or it competes with a Cisco subscription service, so Cisco doesn't implement it.

          I recently tracked an old Cisco 7600 I threw away years ago. It's being used as a route reflector for an African ISP... which is a transit AS.

          Even if we wait 20 years, it will take 20 more to propagate it.

          BGP is highly extensible, no point replacing it.
      • I can't be bothered trawling through it all...

        I spent a couple of minutes trying to navigate their web site...

        Yeh, that's when I gave up :). Thanks for the helpful reply.

    • It has Oracle and Microsoft involved, so there's built in IP trolling.
  • I looked at their site and was trying to figure out who needs this, Syntropy's business model, etc., then realized the company is built to be sold to some megacorp with a huge network, like Akamai or Google, to assist with internal routing.

    This is not meant to be sold as a product/service.

    • That's how Cisco gets a lot of their technology. They don't seem to innovate internally at all.

      • by Luthair ( 847766 )
        If you think about it, it might make business sense not to. R&D is expensive, and you pay whether the avenue is successful or not. Ultimately I guess it depends on your success rate and the cost of buying successful stuff.
      • by TWX ( 665546 )

        That's not entirely correct.

        Cisco funds startups and loans engineers with agreements where Cisco has right of first refusal if the startup pans out. The upside is that Cisco doesn't contend with the frustrated users that Google does when a new thing is launched, adopted by a few, but then withers on the vine.

        The downside is that with the extra degrees of separation, the startups that Cisco fosters end up developing things that aren't quite compatible with Cisco's existing product lines, or of the dozen thi

  • From the link in the summary:

    The next step in creating a truly user-centric network layer is to decentralize and democratize. And that is exactly what Simanavicius and the team are focused on. They’ll be turning their API into a smart contract and they’ll be using tokens to fund a truly decentralized network.

    It turns out that DARP is blockchain-based, and it uses what they call the NOIA token. This is interesting because it facilitates a market for connectivity. That means that applications or other services running on Syntropy can pay for network usage in NOIA. Those payments go to the folks who will operate relay routers (nodes), providing an incentive for third parties to expand the overlay network with additional nodes.

  • Hey I am a longtime slashdot member yet I have never been honored with being granted a press release.

  • How could that possibly go wrong?

  • by PoopMonkey ( 932637 ) on Sunday March 14, 2021 @04:12PM (#61158004)
    Please add the ability to flag an article for spam.
  • by Ungrounded Lightning ( 62228 ) on Sunday March 14, 2021 @10:44PM (#61158974) Journal

    Routing protocols are hard to make work correctly - for a reason analog engineers understand but digital guys don't necessarily encounter.

    A routing protocol in operation has:
      - Negative feedback. (Congestion on a link (and less on others) causes route changes to move traffic from more congested to less congested links, reducing congestion on congested links and increasing it on uncongested links.)
      - Delay. (It takes a while to figure out which links are how congested, what route changes would improve it, send messages around to adjust the routing, and have routers change behavior.)
      - Gain. (A change of routing changes not just the amount of congestion, but its accumulation or dissipation. It will, after a short time, overshoot the target of balanced traffic flow, making the formerly more congested routers the less congested and vice versa.)

    Any analog guy will tell you this is a recipe for oscillation. In routing this is known as "route flapping". There are a lot of ways to try to damp it out.
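    (Editor's sketch.) The feedback-with-delay-and-gain loop described above can be illustrated with a toy two-link model. All numbers are made up for illustration; real routing decisions are far more complex. Routers here react to last interval's observed imbalance and overcorrect (gain > 1), so the traffic split flips forever instead of settling, which is exactly the flapping behavior:

    ```python
    # Toy model of route flapping: two links A and B, routers shift traffic
    # toward whichever link *looked* less congested one interval ago (delay),
    # and overcorrect (gain). With gain > 1 the imbalance flips sign each
    # step; at gain = 2.0 it oscillates forever without damping.
    def simulate(gain=2.0, steps=8):
        load_a, load_b = 70.0, 30.0   # initial traffic split across links
        history = []
        for _ in range(steps):
            # React to the previous interval's imbalance, scaled by gain:
            shift = gain * (load_a - load_b) / 2.0
            load_a, load_b = load_a - shift, load_b + shift
            history.append((round(load_a, 1), round(load_b, 1)))
        return history

    for a, b in simulate():
        print(f"link A: {a:6.1f}   link B: {b:6.1f}")
    ```

    With the gain lowered below 1 in this model, the split converges instead; the point is that the stability margin depends on tuning that no single RFC mandates.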

    One of the things that made actually implementing BGP hard is that the RFC told you what it was legit to do - but did not prescribe WHAT hacks and tuning to do to mitigate route flapping. Instead these were implementation-dependent hacks. Different hacks, even if they played well with other copies of themselves, didn't necessarily play well with each other.
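    (Editor's sketch.) One standardized mitigation that did eventually emerge is route flap damping (RFC 2439): each flap adds a penalty, the penalty decays exponentially with a half-life, and the route is suppressed above one threshold and re-advertised once it decays below a lower one. The thresholds and half-life below are commonly cited router defaults, used here purely as illustrative assumptions:

    ```python
    # Sketch of RFC 2439-style route flap damping. Each flap adds a fixed
    # penalty; the penalty decays exponentially (half-life); the route is
    # suppressed above SUPPRESS_LIMIT and reused below REUSE_LIMIT.
    PENALTY_PER_FLAP = 1000.0
    SUPPRESS_LIMIT = 2000.0
    REUSE_LIMIT = 750.0
    HALF_LIFE_S = 900.0  # 15 minutes

    class DampedRoute:
        def __init__(self):
            self.penalty = 0.0
            self.suppressed = False
            self.last_update = 0.0

        def _decay(self, now):
            # Exponential decay of the accumulated penalty since last event.
            dt = now - self.last_update
            self.penalty *= 0.5 ** (dt / HALF_LIFE_S)
            self.last_update = now

        def flap(self, now):
            self._decay(now)
            self.penalty += PENALTY_PER_FLAP
            if self.penalty > SUPPRESS_LIMIT:
                self.suppressed = True

        def usable(self, now):
            self._decay(now)
            if self.suppressed and self.penalty < REUSE_LIMIT:
                self.suppressed = False  # re-advertise once penalty decays
            return not self.suppressed
    ```

    Three flaps in quick succession push the penalty past the suppress limit; roughly half an hour of stability later the penalty has decayed below the reuse limit and the route is advertised again. Whether a BGP replacement keeps something like this, or avoids needing it, is the interesting question.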

    But the network is a unit when it comes to routing. Adding one player with a different set of hacks or tuning of them can foul up routing for everybody - "breaking the Internet(tm)".

    A decade or so back, when I was at a networking startup, our biggest hurdle to breaking into the Internet market was to get the ISPs to accept that we could do BGP compatibly. At that time there were only TWO BGP implementations that were accepted by the industry: One out of academia, and a proprietary one from Cisco - (our main rival and thus not available for license and a lawsuit magnet if we ended up doing it in a similar way). Coming up with a third one, getting it to work with itself and the other two, and getting the ISPs to trust it, took YEARS. We came very close to bankruptcy before making it happen.

    So these guys have a new routing protocol to replace BGP? I'll be REALLY interested in seeing how well they handled the route flapping issue.

    • - Gain. (A change of routing changes not just the amount of congestion, but its accumulation or dissipation. It will, after a short time, overshoot the target of balanced traffic flow, making the formerly more congested routers the less congested and vice versa.)

      Also: Routing changes are granular, with few options. They tend to either do too little or do a whole lot. The first alternative isn't enough and the second gives gain > 1.

    • A decade or so back, when I was at a networking startup, our biggest hurdle to breaking into the Internet market was to get the ISPs to accept that we could do BGP compatibly. At that time there were only TWO BGP implementations that were accepted by the industry: One out of academia, and a proprietary one from Cisco - (our main rival and thus not available for license and a lawsuit magnet if we ended up doing it in a similar way). Coming up with a third one, getting it to work with itself and the other two, and getting the ISPs to trust it, took YEARS. We came very close to bankruptcy before making it happen.

      I've managed a quite large network with multiple ASNs, and multiple POPs for about 14 years now, and I have no idea what you're talking about.
      I work with BGP implementations from Cisco, Juniper, and Quagga. I've never encountered a single interop issue other than rule differences between the routing OSes regarding what kinds of routes can be modified in what ways, and announced with what IGP synchronization requirements.

      • by Lennie ( 16154 )

        Pretty certain the OP is describing a fictional idea of what BGP could be (what the company is trying to do is improve routing from least-hop to least-congested or lowest-latency, etc.) or a misunderstanding of what BGP does.

      • by jsailor ( 255868 )

        Back in the day ... 15-20 years ago, there were challenges with BGP compatibility between Cisco's implementation and the various open source and commercial BGP stacks that startups would use/buy to accelerate time to market. At this point I can't even remember the code vendors or open source projects. There were some subtle differences between implementing it per RFC and the default behaviors and timers that Cisco and, later, Juniper used. Similarly, both primary vendors had features that weren't in the RF

        • There were some subtle differences between implementing it per RFC and the default behaviors and timers that Cisco and, later, Juniper used.

          Those still exist today. There are very different paradigms in place for how BGP is "looked at" on a Cisco vs. a Juniper. That still doesn't present any problem with interop.
          As I said, I can state for certain that at least 14 years ago, there was no hard barrier to interoperability.
          Timers can be adjusted (or ignored, really).
          As I said, I have been using Ciscos and Junipers, and Quagga (open source routing system) without any problems.
          Sure, you have to figure out weird things like origination paradigms a

    • Routing protocols are hard to make work correctly - for a reason analog engineers understand but digital guys don't necessarily encounter.

      Funnily enough it's the same problem with smart grids too. Talk to a digital person and the problem seems easy and obvious. Lots of talk of encryption and whatnot for security, and networking and even fancy mesh protocols for communication, but then you just turn loads on and off based on average demand or something. And the remaining people with analogue background wonde

    • by amorsen ( 7485 )

      Negative feedback. (congestion on a link (and less on others) causes route changes to move traffic from more congested to less congested links (reducing congestion on congested links and increasing it on uncongested links).

      None of the major routing protocols care one whit about congestion. They do not move traffic in response to congestion. They happily keep routing the traffic through the congested link, even if 99% (or even 100%) is dropped on the floor.

      Any analog guy will tell you this is a recipe for oscillation. In routing this is known as "route flapping". There are a lot of ways to try to damp it out.

      Since congestion does not play a part in routing decisions, route flapping is also not caused by congestion avoidance.

      A decade or so back, when I was at a networking startup, our biggest hurdle to breaking into the Internet market was to get the ISPs to accept that we could do BGP compatibly. At that time there were only TWO BGP implementations that were accepted by the industry: One out of academia, and a proprietary one from Cisco

      This is completely bollocks. Ten years ago there was both Quagga and BIRD and OpenBGPD, plus commercial implementations from Juniper and MikroTik and lots of

  • Heck, the Internet hasn't even managed to globally and uniformly roll out IPv6 yet, after it's been available for 15+ years ... granted, v6 requires support on CPEs, while Internet routing will "only" affect ISPs and the likes ... but still ... unless the advantages are REALLY great (and I don't mean in some fringe cases, but more or less across the board ...), AND they can get the major players like Cisco, Juniper, Huawei, etc. on board with it, it has not much of a chance of succeeding. Plus, even given all th

    • by ledow ( 319597 )

      IPv6 is entirely held up by home ISPs.

      Datacentres have it.
      The global infrastructure has it.
      Routers, OS, etc. are all supporting it.
      4G and DOCSIS3 both mandate it as a prerequisite.

      30% of all Google queries come in over IPv6.

      https://www.google.com/intl/en... [google.com]

      The only barrier, ever, is home ISP support. Who say you don't need it because you can do everything you need on IPv4 and there aren't any limits on using that. Which, to be honest, they're pretty right on as we're all using the Internet today without p

"It's the best thing since professional golfers on 'ludes." -- Rick Obidiah

Working...