
A Peek At Google's Software-Defined Network

CowboyRobot writes "At the recent 2013 Open Networking Summit, Google Distinguished Engineer Amin Vahdat presented 'SDN@Google: Why and How', in which he described Google's 'B4' SDN network, one of the few actual implementations of software-defined networking. Google has deployed sets of Network Controller Servers (NCSs) alongside its switches; the switches run an OpenFlow agent providing a 'thin level of control with all of the real smarts running on a set of controllers on an external server but still co-located.' By using SDN, Google hopes to increase efficiency and reduce cost. Unlike computation and storage, which benefit from economies of scale, Google's network is getting much more expensive each year."
  • by Anonymous Coward on Thursday May 16, 2013 @03:26AM (#43738641)

    "it provides logically centralized control that will be more deterministic, more efficient and more fault-tolerant."

    I'll agree with deterministic and efficient, and perhaps even less likely to fault, but more fault-tolerant seems like a stretch. SDN might get you better fault tolerance, but that is not because the control is centralized. I suspect the control has more information about non-local requirements and loads, and that can get you better responses to faults. That happens because the controllers can communicate more complex information more easily, since that is pure software, not because it's centralized. You can have these fault-tolerance gains via non-centralized SDN too.

    • by bbn ( 172659 ) <baldur.norddahl@gmail.com> on Thursday May 16, 2013 @06:53AM (#43739183)

      Compare it to an alternative such as the good old Spanning Tree Protocol. You have a number of independent agents that together have to decide how to react to a fault. This is complex and requires clever algorithms that can deal with timing issues and whatnot.

      With a centralised controller the problem is much easier. One program running on one CPU decides how to reconfigure the network. This can be faster and possibly find a better solution.

      Of course you need redundant controllers and redundant paths to the controllers. Apparently Google decided you need a controller per location.
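
      As a toy illustration of why the centralised approach is simpler, here is a rough Python sketch of that idea: a controller with a global view of the topology recomputes shortest paths when a link dies and pushes new rules to every switch, instead of waiting for a distributed protocol to reconverge. It uses networkx for the graph; push_flow() is a made-up stand-in for whatever southbound API (e.g. OpenFlow) actually installs the rule.

      import networkx as nx

      def push_flow(switch, dst_prefix, out_port):
          # Hypothetical helper: tell 'switch' to forward traffic for
          # dst_prefix out of out_port (in reality, an OpenFlow flow-mod).
          print(f"{switch}: {dst_prefix} -> port {out_port}")

      # The controller's global view of a tiny triangle topology.
      topo = nx.Graph()
      topo.add_edges_from([("s1", "s2"), ("s2", "s3"), ("s1", "s3")])
      ports = {("s1", "s2"): 1, ("s2", "s1"): 1,
               ("s2", "s3"): 2, ("s3", "s2"): 2,
               ("s1", "s3"): 3, ("s3", "s1"): 3}

      def reroute(failed_link, dst_switch, dst_prefix):
          # One program on one CPU: drop the dead link from the graph,
          # recompute paths, and push the new next hop to every switch.
          topo.remove_edge(*failed_link)
          for src in topo.nodes:
              if src == dst_switch:
                  continue
              path = nx.shortest_path(topo, src, dst_switch)
              push_flow(src, dst_prefix, ports[(path[0], path[1])])

      reroute(("s1", "s3"), "s3", "10.0.3.0/24")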

      • by bill_mcgonigle ( 4333 ) * on Thursday May 16, 2013 @07:32AM (#43739337) Homepage Journal

        With a centralised controller the problem is much easier. One program running on one CPU decides how to reconfigure the network. This can be faster and possibly find a better solution.

        I can see how centralizing the control can be easier. But if the history of Internet networking has taught us anything, we should expect somebody to come up with a more clever distributed algorithm (perhaps building on OpenFlow) that will make SDNs a footnote in history while the problem gets distributed out to the network nodes again, making it more resilient.

        That's not to say that trading off resiliency for performance today isn't worthwhile in some applications.

        • by Alomex ( 148003 )

          But if the history of Internet networking has taught us anything, we should expect somebody to come up with a more clever distributed algorithm

          The Internet has moved from centralized to decentralized and back to centralized again. It is not the case that it has moved one-directionally towards a distributed system. Currently big parts of the Internet are centrally managed (e.g. SuperDNS/GoogleDNS, IBGP, MPLS routing, most network provisioning).

          The current view is that centralizing BGP would be a "good thing" (TM).

          • by AK Marc ( 707885 )
            Networks connected to the Internet have always been centrally managed. You are right for non-Internet things (NNTP), but DNS is just as distributed as always, MPLS doesn't cross network boundaries, and BGP *is* somewhat centralized, as it always was. You can't just make up your own AS to use (well, you can, but only from the private range).

            There may be a move to concentrate traffic in fewer large networks, but that's not the same as the Internet getting more central management.
            • by Alomex ( 148003 )

              but DNS is just as distributed as always

              Google DNS is centralized.

              BGP *is* somewhat centralized, as it always was

              The change is that now many organizations drop centrally computed routing tables on the routers, as opposed to the OSPF-plus-manual-tweaks that used to dominate.

              • by AK Marc ( 707885 )

                Google DNS is centralized.

                Well, yes. Every network has "centralized DNS"; that's how DNS operates. That this is a sudden and startling discovery to you indicates nobody should listen to you.

                The change is that now many organizations drop centrally computed routing tables on the routers

                That's always been relatively common. Especially if you have only one or two peers, dynamically learning the entire Internet routing table was a massive waste of resources. Many holders of a single class-C run BGP to advertise their route, not to learn routes. They default out, and advertise, so that their block is reachable if a link goes down.

                • by Alomex ( 148003 )

                  It is clear you do not know what Google DNS is. It is not the DNS that serves the "google network" but a global provider of DNS services for all, and people are encouraged to use it instead of their local DNS. This makes your comment

                  That this is a sudden and startling discovery to you indicates nobody should listen to you

                  rather ironic.

                  That's always been relatively common. Especially if you have only one or two peers, dynamically learning the entire Internet routing table was a massive waste of resources.

                  I'm talking AS level organizations including internal routers as well as border routers.

                  • by AK Marc ( 707885 )

                    It is clear you do not know what Google DNS is. It is not the DNS that serves the "google network" but a global provider of DNS services for all, and people are encouraged to use it instead of their local DNS.

                    Ah yes, the traditional "you must not have all the information, or you'd agree with me" argument. It's proof your logic is flawed, not proof of my ignorance. You do realize that "back in the day" there were people encouraging others to use things like 198.6.1.3, the DNS server for the largest (by volume, not reach) and fastest-growing (by $ per day spent on infrastructure) ISP on the planet, rather than local ones, because local servers were much more prone to failure than the link to 198.6.1.3, right?

                    • by Alomex ( 148003 )

                      You are funny, trying to play the "I'm older and wiser" card. You are likely to lose that one too.

                      And all you prove with your 198.6.1.3 example is what I said in my original posting: there have been waves of centralization (such as that one) and waves of decentralization and back again (e.g. Google DNS).

                    • by AK Marc ( 707885 )
                      I never played the "I'm older and wiser" card. I played the "you're dumb" card.
      • by Anonymous Coward
        When I was asking around about what the main thing about SDN is, the answer I got back was that it's programmable. If there is a bug 10 years from now, it can be fixed. With a regular router, you're stuck hoping for support from the manufacturer.
  • by Viol8 ( 599362 ) on Thursday May 16, 2013 @07:03AM (#43739213) Homepage

    A network is physical infrastructure; software isn't going to be rerouting cables or installing new Wi-Fi nodes anytime soon.

    If all they mean is routing tables are dynamically updated then how is this anything new?

    This isn't a troll, I genuinely don't see where the breakthrough is.

    • You're missing the point. The summary describes it as a 'Software Defined Network Network', a true innovation.

    • by DarkOx ( 621550 ) on Thursday May 16, 2013 @07:23AM (#43739297) Journal

      It's not exactly what they are doing here, but there is no reason you can't have a logical topology on top of a physical one. Actually it's very useful, especially when combined with a virtual machine infrastructure. Perhaps you want two machines in separate data centers to participate in software NLB, which requires network adjacency, yet I doubt you want a continuous layer-two link stretched across the country. Sure, if it's just two DCs, maybe a leased line between them will work, but what if you have sites all over the place and potentially want to migrate the hosts to any of them at any time? That would allow for maintenance at a facility, or perhaps you power on facilities during off-peak local electrical use and migrate your compute there.

      People are doing these things today, but once you get beyond a single VM host cluster it gets pretty manual, with admins doing lots of work to make sure all the networks are available where they need to be: hard-coded GRE tunnels, persistent Ethernet-over-IP bridges, etc. They all tend to be static; minimal overhead when not in use, sure, but overhead and a larger attack surface nonetheless. A really good soup-to-nuts SDN might make the idea of LAN and WAN as separate entities an anachronism. Being able to have layer-two topology appear automatically wherever needed would be very cool.
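
      For anyone wondering what "pretty manual" looks like, this is roughly the kind of thing that gets scripted by hand today: an Ethernet-over-GRE bridge between two sites, built with standard iproute2 commands (Python only shells out here; the addresses and interface names are invented for the example). The point is not the commands themselves but that somebody has to build, track, and tear down one of these per site pair, which is exactly the bookkeeping an SDN controller is supposed to take over.

      import subprocess

      def sh(cmd):
          # Run a shell command and fail loudly if it doesn't work.
          subprocess.run(cmd, shell=True, check=True)

      LOCAL_IP = "198.51.100.10"   # this data center's tunnel endpoint
      REMOTE_IP = "203.0.113.20"   # the other data center's endpoint

      # Stretch a layer-2 segment to the remote site: an ethernet-over-GRE
      # (gretap) tunnel, bridged onto the local VM bridge.
      sh(f"ip link add gretap1 type gretap local {LOCAL_IP} remote {REMOTE_IP}")
      sh("ip link add br-vm type bridge")   # skip if the bridge already exists
      sh("ip link set gretap1 master br-vm")
      sh("ip link set gretap1 up")
      sh("ip link set br-vm up")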

    • by bbn ( 172659 ) <baldur.norddahl@gmail.com> on Thursday May 16, 2013 @07:31AM (#43739333)

      There is no routing as such. For each new "flow" the switch needs to ask a computer (controller) what to do. The controller will then program the switch with instructions for the new flow.

      You claim that the flow table is just a glorified routing table. Maybe it is, but it is much more fine-grained: you can match on almost any field in a packet, across layers 2 through 4, such as MAC addresses, IP addresses, port numbers, and TCP packet types (SYN packets). You can also mangle the packets, for example modifying the MAC or IP address before forwarding the packet.
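
      To make that concrete, here is a minimal, untested sketch of such a rule using the Ryu controller and OpenFlow 1.3: match IPv4 TCP traffic to port 80 and rewrite the destination MAC and IP before forwarding it out a fixed port. The addresses and the output port are invented for the example.

      from ryu.base import app_manager
      from ryu.controller import ofp_event
      from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
      from ryu.ofproto import ofproto_v1_3

      class RewriteExample(app_manager.RyuApp):
          OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

          @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
          def switch_ready(self, ev):
              dp = ev.msg.datapath
              ofp, parser = dp.ofproto, dp.ofproto_parser

              # Match on L3/L4 fields -- far finer-grained than a routing table.
              match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=80)

              # Mangle: rewrite destination MAC and IP, then send out port 2.
              actions = [
                  parser.OFPActionSetField(eth_dst="00:11:22:33:44:55"),
                  parser.OFPActionSetField(ipv4_dst="10.0.0.42"),
                  parser.OFPActionOutput(2),
              ]
              inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
              dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                            match=match, instructions=inst))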

      With this you can build some amazing things. The switch can be really dumb and yet it can do full BGP routing: RouteFlow: https://sites.google.com/site/routeflow/ [google.com]

      The other canonical use case is virtualisation. No, it will not be rerouting physical cables. But it can pretend to do so. Combine it with VMs and you can have a virtual network that can change at any time. If you migrate a VM to another location, the network will automatically adapt. And still the switches are dumb. All the magic is in the controllers.

      Before OpenFlow you would need to make a VLAN (or use MPLS). When moving the VM to a new location, you would need to reconfigure a number of switches to carry this VLAN, and there is no standard protocol to do so.

      Open vSwitch supports OpenFlow, so your virtual network of virtual switches can extend into the VM host itself: http://openvswitch.org/ [openvswitch.org]

    • by swb ( 14022 )

      Sometimes it seems that SDN is just a new dress on an old pig; sometimes it starts to make sense.

      When I'm feeling enlightened or charitable about the concept I envision it as an encapsulation system for layer 2 on layer 3, allowing layer 2 networks to be created independent of the physical constraints of actual layer 1/2 topologies.

      I imagine the goal is to define a layer 2 switching domain (ports, VLANs, etc) and connect systems to it regardless of how the systems are physically connected or even located.
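
      That is roughly what Open vSwitch plus a tunnel gives you today. As a rough sketch (the bridge name, tunnel key, and remote address are made up), two ovs-vsctl calls per hypervisor create a switching domain that spans hosts, and the SDN argument is that a controller should be the one creating and retiring those tunnel ports as machines move.

      import subprocess

      def ovs(args):
          # Thin wrapper around ovs-vsctl; raises if the command fails.
          subprocess.run(["ovs-vsctl"] + args, check=True)

      # On each hypervisor: one integration bridge the VMs plug into, plus a
      # VXLAN tunnel port toward the other site (run the mirror-image command,
      # with this host's address as remote_ip, on the far end).
      ovs(["add-br", "br-int"])
      ovs(["add-port", "br-int", "vxlan0", "--",
           "set", "interface", "vxlan0", "type=vxlan",
           "options:remote_ip=203.0.113.20", "options:key=5001"])

      # VMs attached to br-int on either host now share one layer-2 domain,
      # regardless of where the boxes physically sit.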

      • And then there's my inherent skepticism about the value payoff relative to the level of complexity added, as well as the question: isn't that why we have layer 3 protocols? To define networks above and beyond their layer 2 memberships?

        What once was old is now new again.

      • by Anonymous Coward

        What is the payoff? No Cisco support contracts on gazillions of switches and interconnects (support is really purchased for firmware updates; replacement is, or should be, quite rare). That will pay off very fast, despite the initial complexity curve.

        A lab should be quite cheap for proof-of-concept testing of production changes.

    • A network is physical infrastructure

      No it isn't. Sure, there's one ethernet cable connected from a server to the rack switch, but even there, the packets coming in could have hundreds of different VLAN tags on them.

      Everywhere else, you have multiple redundant links from everything to everything else, and deciding which one to use for each packet is the complex part.

    • by AK Marc ( 707885 )
      Do you know what VLANs are? They're a logical imposition on a physical network. SDN is an extension of that idea. There's no reason you couldn't put every computer in its own VLAN, with ARP and DHCP forwarded to the correct server, or configure a full mesh of connections and disable all but the best route, spanning-tree style, with your own explicit rules and no third-party decisions required.

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...