
Paul Vixie On the Unevenly Distributed Intelligence of Internet Infrastructure

CowboyRobot writes "Writing for ACM's Queue magazine, Paul Vixie argues, "The edge of the Internet is an unruly place." By design, the Internet core is stupid, and the edge is smart. This design decision has enabled the Internet's wildcat growth, since without complexity the core can grow at the speed of demand. On the downside, the decision to put all smartness at the edge means we're at the mercy of scale when it comes to the quality of the Internet's aggregate traffic load. Not all device and software builders have the skills and budgets that something the size of the Internet deserves. Furthermore, the resiliency of the Internet means that a device or program that gets something importantly wrong about Internet communication stands a pretty good chance of working "well enough" in spite of this. Witness the endless stream of patches and vulnerability announcements from the vendors of literally every smartphone, laptop, or desktop operating system and application. Bad guys have the time, skills, and motivation to study edge devices for weaknesses, and they are finding as many weaknesses as they need to inject malicious code into our precious devices where they can then copy our data, modify our installed software, spy on us, and steal our identities."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • fb (Score:1, Offtopic)

    by hachikyu ( 798080 )
    fb.
  • Classic Slashdot (Score:4, Insightful)

    by dknj ( 441802 ) on Saturday February 08, 2014 @07:23PM (#46199257) Journal

    I'm sorry, this is off topic, but I was getting a warning at the top of Slashdot that classic is going away soon (looks like in 6 months).

    How many readers are going to leave if slashdot classic is cut off completely?

    • by umafuckit ( 2980809 ) on Saturday February 08, 2014 @07:33PM (#46199305)

      How many readers are going to leave if slashdot classic is cut off completely?

      Good question. Maybe Timothy should set up a poll?

      • You'll all tell the poll you're leaving, but come back every day to complain.

      • "Insightful"? "Interesting"? "Off-topic"? Come on, guys! I was hoping for at least one "Funny".
      • by kbahey ( 102895 )

        Dice's management have already made up their mind, and they are determined to kill Classic Slashdot. They may entertain some changes to the beta, but they will not kill it.

        They will not set up a poll, because they have already decided. Done deal.

        The part I am not sure of is: do they know the extent of the revulsion against beta? Or are they just chalking it up to a vocal minority, trolls, and whatnot?

  • It's TCP/IP, baby. (Score:2, Interesting)

    by Anonymous Coward

    It's just the way TCP/IP was designed, back in the ARPANET days, you know.
    Putting all the intelligence in the hosts allows for more resiliency, since it takes a lot to bring the whole infrastructure down this way.
    Mobile networks are quite the opposite, though (smarter infrastructure, dumber terminals).
    Software defined networks are definitely a way to bring some intelligence back in the infrastructure of IP networks.
    We'll see if it will enable a smarter Internet or not.

    • by fuzzyfuzzyfungus ( 1223518 ) on Saturday February 08, 2014 @08:14PM (#46199469) Journal
      Probably more than resilience, moving the intelligence to the edges of the network allowed for innovation. It's not as though POTS is a quagmire of reliability issues (indeed, it stacks up pretty well compared to any internet connection not expensive enough to have a proper SLA); but it's an ossified wasteland because essentially any change had to run the gauntlet of "Is it worth making the necessary modifications and upgrades to the intelligence at the center of the network and will doing it make AT&T more money?" If something new couldn't be squeezed through the network as though it were a voice call, or officially blessed by Ma Bell (as with 1-900 numbers and billing for them), it just didn't happen. Even with the introduction of mobile phones, and the opportunity to hammer out huge swaths of new spec, they added what, SMS? Virtually all the features of today's "phones", with the exception of voice calls and maximum-compatibility SMS snippets have gone IP because that is where the versatility is.

      With intelligence at the edges, if you want something done, all you need is two or more endpoints with the right software and there you are. This goes for malice as well, of course, which is part of why the internet is kind of a rough neighborhood; but it's also why IP-based capabilities have changed so radically, while systems with more centralized intelligence have largely stagnated (even more impressive 'dumb endpoint' arrangements, like Minitel, have been eclipsed).
      • AFAIK SMS was a testing mode; it rides along the signaling path anyway and was essentially free to implement, yet they monetized the crap out of it.
    • by skids ( 119237 ) on Saturday February 08, 2014 @10:51PM (#46200031) Homepage

      Putting all the intelligence in the hosts allows for more resiliency, since it takes a lot to bring the whole infrastructure down this way.

      It's the way to go. Any intelligence added to the core should merely be simple tweaks that enable more intelligence at the edges. For example, one might plausibly argue that having core routers select the second- or third-most-preferred route for a packet based on a modulo of its TTL would allow end systems to experimentally find the fastest-performing route through the internet by trying different values in their TTLs/option fields. One could not reasonably argue for expecting core devices to maintain per-connection or even per-client/netblock state in an attempt to find alternate routes for each client connection.
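      (A rough sketch of that modulo idea, in Python purely for illustration: the function and hop names are hypothetical, and this is the commenter's thought experiment, not current core-router practice.)

        # Hypothetical: pick among candidate next hops using the low bits of a
        # field the sending host controls (here the TTL), so end systems can
        # probe alternate paths while the core keeps no per-flow state.
        def select_next_hop(candidate_hops, ttl):
            # candidate_hops is ordered most-preferred first
            if not candidate_hops:
                raise ValueError("no route to destination")
            return candidate_hops[ttl % len(candidate_hops)]

        # An end system could vary its initial TTL (64, 65, 66, ...) and measure
        # which resulting path performs best.
        print(select_next_hop(["hop-A", "hop-B", "hop-C"], ttl=65))  # -> hop-C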

      Software defined networks are definitely a way to bring some intelligence back in the infrastructure of IP networks. We'll see if it will enable a smarter Internet or not.

      From what I've seen of SDN, it's a bunch of people who think they can abstract network services into a simple model but who have no comprehension of the intrinsic differences in the heterogeneous mixture of devices employed. They haven't even scratched the surface of building a taxonomy/capabilities enumeration for things like, for example, how many CAM entries are available for edge-switch filters on a given switch model. Without that information, SDN applications have no way of doing any serious budgeting before launching a request into the network gear. And since a device might happily take the commands and provision a halfway-functional service that drops 5% of packets rather than reject the request, and SDN has no real provisions for testing services before putting them in production, SDN is doomed to be confined to data centers where equipment has been carefully kept homogeneous.

      Most people using SDN that I've seen are doing so for enterprise (including server farm) LANs, not the core internet.
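      (To make the "budgeting" point above concrete, here is a minimal sketch assuming a controller kept a per-device capability inventory; the device names, counters, and numbers are invented for illustration and are not any real SDN API.)

        # Hypothetical budgeting step: consult an assumed capability inventory
        # before pushing filter rules, instead of letting a switch silently
        # accept more ACL entries than its CAM can actually hold.
        CAPABILITIES = {
            "edge-switch-27": {"acl_cam_total": 1024, "acl_cam_used": 1010},
            "edge-switch-03": {"acl_cam_total": 4096, "acl_cam_used": 512},
        }

        def can_provision_filter(device, rules_needed):
            caps = CAPABILITIES.get(device)
            if caps is None:
                return False  # unknown device: refuse rather than half-provision
            free = caps["acl_cam_total"] - caps["acl_cam_used"]
            return rules_needed <= free

        print(can_provision_filter("edge-switch-27", 20))  # False: reject up front
        print(can_provision_filter("edge-switch-03", 20))  # True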

  • Paul Vixie can pontificate on the Unevenly Distributed Intelligence at Dice that has resulted in this abomination known as Beta Slashdot...

    • Re: (Score:2, Informative)

      by Anonymous Coward

      apparently they have infinite mod points to give everybody -1 for trash talking beta

    • by RR ( 64484 )

      Paul Vixie can pontificate on the Unevenly Distributed Intelligence at Dice that has resulted in this abomination known as Beta Slashdot...

      I don't think so. Beta Slashdot is a consequence of the idiot staff that Dice has hired to run Slashdot, considering that the headline and summary have nothing to do with Paul Vixie's argument. The quotes are taken from the article, but in a stupid way, like CowboyRobot is some sort of robot...

      The article is actually about the need for the addition of minimal state to stateless protocols in order to thwart DDOS amplification techniques.
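      (For readers who want the flavor of that "minimal state": a tiny per-source token bucket, roughly the shape of response rate limiting, is sketched below. The parameters and structure are illustrative only, not the article's exact mechanism.)

        # Illustrative only: a small amount of per-source state that lets an
        # otherwise stateless UDP responder stop amplifying traffic toward a
        # spoofed victim. Rates and bucket sizes here are made up.
        import time

        BUCKETS = {}   # claimed source prefix -> (tokens, last_update)
        RATE = 5.0     # responses allowed per second per source
        BURST = 10.0

        def allow_response(src_prefix):
            tokens, last = BUCKETS.get(src_prefix, (BURST, time.monotonic()))
            now = time.monotonic()
            tokens = min(BURST, tokens + (now - last) * RATE)
            if tokens < 1.0:
                BUCKETS[src_prefix] = (tokens, now)
                return False  # drop or truncate rather than amplify
            BUCKETS[src_prefix] = (tokens - 1.0, now)
            return True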

  • by martin-boundary ( 547041 ) on Saturday February 08, 2014 @07:37PM (#46199321)
    The internet consists of hardware and software and things worth stealing. The first has long development cycles, and is more difficult to modify than the second. The second is extremely varied and full of vulnerabilities that are often easy to patch one instance at a time, but hard to patch simultaneously and comprehensively across the network. The third is things that shouldn't be accessible from the Internet in the first place, like our real names just so we can have a Google account, our credit card numbers just so merchants don't have to ask us when they want to charge us, our activity records just so we can be manipulated through ads, etc.

    We can't change the first two without destroying the Internet, but there's no reason why computers should contain so much valuable information to steal.

  • that are the cause of breaches and insecurities of the Internet. Long ago that was not the case, because simply connecting a computer to the Internet would get it infected with malware. Computer and browser makers have learned how to largely avoid this, but no one has yet figured out a way to prevent trusting or stupid human beings from giving permission to install programs that subsequently are able to do severe damage. This is part of human nature and will never change.

    • by fuzzyfuzzyfungus ( 1223518 ) on Saturday February 08, 2014 @08:20PM (#46199495) Journal
      Some aspects of software security have improved; but the decline in 'just put a computer on the internet and it gets rooted in about 15 seconds' attacks, at a population level, probably owes more to the prolific spread of nasty little plastic NAT boxes.

      Those things are hardly real security (and more than a few have shipped with nasty flaws of their own); but they do tend to eat unsolicited inbound traffic pretty enthusiastically, which has really cut down on the number of totally helpless computers that end up being given a brutal taste of the open internet before they've even had time to patch.
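      (A toy model of why that helps, assuming nothing about any particular box: a stateful middlebox only admits inbound traffic that matches a flow an inside host opened first, so unsolicited probes never reach the unpatched machine.)

        # Toy connection-tracking sketch; real NAT/firewall state is richer.
        outbound_flows = set()  # (inside_ip, inside_port, remote_ip, remote_port)

        def note_outbound(inside_ip, inside_port, remote_ip, remote_port):
            outbound_flows.add((inside_ip, inside_port, remote_ip, remote_port))

        def admit_inbound(inside_ip, inside_port, remote_ip, remote_port):
            # Unsolicited inbound traffic matches nothing and is dropped.
            return (inside_ip, inside_port, remote_ip, remote_port) in outbound_flows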
      • And let's be honest, back in 2002, Microsoft wasn't even trying. Their OS was essentially an open door. Remember Nimda and Code Red?
  • by davecb ( 6526 ) <davecb@spamcop.net> on Saturday February 08, 2014 @08:00PM (#46199397) Homepage Journal

    http://queue.acm.org/detail.cfm?id=2578510

    Complaints about beta go here (;-))

  • by Anonymous Coward

    Yes, but fuck beta?

  • by Karmashock ( 2415832 ) on Saturday February 08, 2014 @08:11PM (#46199459)

    Complexity is a vulnerability. Simplicity is a strength.

    If something is just too simple to be modified or hacked or manipulated by anyone, including the rightful owners, then it's too simple to be perverted by a hostile agent. Simplicity is frequently a virtue.

    • by skids ( 119237 )

      Don't make wide, generalised, sweeping statements, as they are most often wrong. For example, properly implemented SAV (source address validation) would be complexity, yet also a strength.

      • Re: (Score:2, Insightful)

        by Karmashock ( 2415832 )

        Wrong. It isn't impossible to hack it. And therefore it will be hacked.

        Systems too simple to be hacked can't be hacked. They are secure. Everything else is second class.

        People need to stop cutting security corners. This chicken shit security is no longer an option.

        Perfect security is possible. It requires sacrifice. You need to limit complexity. You need to limit what can and cannot be done. Do that and you leave little wiggle room for hackers to exploit. Anything short of that and you're better that you are s

  • Not Just Bad Guys (Score:4, Insightful)

    by Jane Q. Public ( 1010737 ) on Saturday February 08, 2014 @08:16PM (#46199481)
    "Bad guys have the time, skills, and motivation to study edge devices for weaknesses..."

    But you know, it's funny... I would have thought the giant corporations that are behind manufacturing these devices (and in many cases the software for them) have just as much skill to look at these things from the other end.

    Apparently what they have lacked is the motivation to do so. That should change.
  • I'm sorry, what? (Score:2, Informative)

    by whoever57 ( 658626 )

    DNS is an example of a UDP (User Datagram Protocol),

    DNS can use UDP, yes, but it can also use TCP, so as an example of "a UDP", it is quite poor.

    • DNS is an example of a UDP (User Datagram Protocol),

      DNS can use UDP, yes, but it can also use TCP, so as an example of "a UDP", it is quite poor.

      He was talking about DNS reflection attacks, which are done via the primary DNS protocol, which is UDP-based. The attacker puts the victim's IP address in the source field of the packet and requests a large quantity of information so that the DNS server will send it to the victim. Scale this up for a DDoS on the victim. Since the attack is UDP-based, there's no requirement for the actual sender's IP to match the source address claimed in the packet.
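      (The mechanics are easy to see in a toy UDP responder: the reply goes to whatever source address the incoming datagram claims, and UDP has no handshake to prove that address is genuine. This is a generic illustration, not DNS; the port and sizes are arbitrary.)

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 9999))

        while True:
            query, claimed_source = sock.recvfrom(512)
            # If claimed_source was forged, this (larger) reply lands on the
            # victim, not on whoever actually sent the query.
            sock.sendto(b"X" * (10 * len(query)), claimed_source)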

      I spent a lot of time last summer fending off that stuff, since my older machines

      • by sjames ( 1099 )

        That actually could be solved with proper router configuration. For example, don't route traffic sourced from a router that has no route back to the source address. Case by case exceptions if well justified by the source.
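        (One reading of that idea is a reverse-path check. A rough sketch, with an invented routing table: accept a packet on an interface only when the best route back to its claimed source points out that same interface.)

          import ipaddress

          ROUTES = [  # (prefix, interface); longest prefix wins
              (ipaddress.ip_network("203.0.113.0/24"), "cust-7"),
              (ipaddress.ip_network("198.51.100.0/24"), "cust-9"),
              (ipaddress.ip_network("0.0.0.0/0"), "upstream"),
          ]

          def reverse_path_ok(src_ip, ingress_interface):
              src = ipaddress.ip_address(src_ip)
              best = max((r for r in ROUTES if src in r[0]),
                         key=lambda r: r[0].prefixlen)
              return best[1] == ingress_interface

          print(reverse_path_ok("203.0.113.5", "cust-7"))  # True: route back matches
          print(reverse_path_ok("203.0.113.5", "cust-9"))  # False: likely spoofed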

        • That actually could be solved with proper router configuration. For example, don't route traffic sourced from a router that has no route back to the source address. Case by case exceptions if well justified by the source.

          Who says there's no route back? The route back is merely bogus.

          If you mean that the response address doesn't match the source address, well, it wouldn't match the minute it made its first hop. Which means that every router in the world would have to be 100% trustworthy.

          • by sjames ( 1099 )

            Getting a route announced is more difficult than spoofing a source address. Also, if you manage to convince the routers between the multiplying DNS server and you that there IS a route back, you will get the flood, not your victim.

            Note that MOST providers already discard spoofed source packets from their customers.

            • Getting a route announced is more difficult than spoofing a source address. Also, if you manage to convince the routers between the multiplying DNS server and you that there IS a route back, you will get the flood, not your victim.

              Note that MOST providers already discard spoofed source packets from their customers.

              Unfortunately, as my logs amply demonstrated, on a network the size of the Internet, "most" isn't nearly enough. And if the "provider" was a military or rogue ISP installation, they would likely be part of the attack.

              • by sjames ( 1099 )

                They still likely have an upstream or a transit provider. It gets more complicated at major peering points to decide who should be sending packets for what range, but it's not impossible even there.

                As I said, MOST ISPs are conscientious about that, but certainly not all (or this problem wouldn't exist). It may be time to step up the game and deal with the few exceptions.

  • A different view. (Score:3, Interesting)

    by hackus ( 159037 ) on Saturday February 08, 2014 @08:46PM (#46199615) Homepage

    "they need to inject malicious code into our precious devices where they can then copy our data, modify our installed software, spy on us, and steal our identities."

    Not on my networks, which comprise about 1 million people at the moment.

    All of our infrastructure is open source and we don't have those issues. We've been operating a standard 3.x kernel on 25 routers with millions of people accessing them, along with the server software, also Linux-based, running Apache, Tomcat servlets, and Postgres... OpenLDAP and TLS for the internal key management infrastructure.

    So I don't see a problem with the internet as designed; it works very well. It doesn't need change.

    You are trying to change the internet for your own malicious purposes, in my opinion, rather than actually address the problem:

    1) Internet security, as far as functionality is concerned, works extremely well. I travel to many places, and there has been only one time in the past two years that I couldn't access my VPN server due to a real internet outage. I say outage because the local admin at your so-called "smart edge" made a few bad investment decisions: proprietary gear bankrolled with back doors.

    2) Most of the problems you do see with sites and internet infrastructure are not related to the internet as designed per se, but to frustration from governments who don't like what the internet is doing. That is, it is an obstruction to their spheres of power and to the political and industrial espionage they require to gain an edge and stay in power.

    The internet has a nasty habit of revealing to the plebs the existence of two sets of laws that normally can't be seen: the ones that say you have to spend 5 years in prison for 1 ounce of pot, complete with a criminal record so you will never be hired again, versus the ones where, if you're, say, a banker and rob whole countries, you get a pay raise and a pat on the back while the plebs are sent to their doom. For example, when the French found they couldn't get any of their gold back from the Fed, they invaded Mali to stabilize their banks.

    So I don't see any problems with the internet.

    I do see a problem with governments and the internet coexisting together though, but that is not a technology problem.

    As I see it, one or the other has to go and so far the internet is fighting a losing battle.

  • by Anonymous Coward

    "It's not ready," as you say, so can we please stop using it until it is ready?

    PLEASE stop redirecting us to this not-ready thing.
    PLEASE let users choose for themselves whether they want to beta-test this not-ready thing.

  • "the Internet core is stupid, and the edge is smart"

    That is true on SO many levels.
