Intel Announces Avoton Server Architecture and Software Defined Services Effort

MojoKid writes "Intel unveiled a number of new data center initiatives this week as part of a broad product strategy to redefine some of its market goals. Santa Clara has begun focusing on ways to expand the utility of its low-power Atom servers, including the upcoming Avoton Atom products, which are based on the 22nm Bay Trail architecture. Intel isn't just pushing Avoton as a low-power solution that'll compete with products from ARM and AMD, but as the linchpin of a system for software defined networking and software defined storage capabilities. In a typical network, a switch is programmed to send arriving traffic to a particular location. Both the control plane (which decides where traffic goes) and the data plane (the hardware responsible for actually moving the bits) are implemented in hardware and duplicated in every switch. Software defined networking replaces this by using software to manage and monitor traffic from a central controller. Intel is moving towards such a model and talking it up as an option because it moves control away from specialized hardware baked into expensive routers made by companies that aren't Intel, and towards centralized technology Intel can bake into the CPU itself."
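
The control-plane/data-plane split the summary describes can be illustrated with a toy example. The Python sketch below is not Intel's implementation or a real OpenFlow API; the Switch, Controller, and FlowRule names are assumptions made purely for illustration, showing how a single central controller can program the forwarding tables of many dumb switches.

    # Minimal sketch of the control-plane/data-plane split described above.
    # Switch, Controller, and FlowRule are illustrative names, not a real SDN API.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FlowRule:
        dst_prefix: str   # e.g. "10.1.2." matches 10.1.2.0/24
        out_port: int     # interface the switch forwards matching packets to

    class Switch:
        """Data plane: only matches packets against installed rules and forwards them."""
        def __init__(self, name):
            self.name = name
            self.rules = []                  # flow table pushed down by the controller
        def install(self, rule):
            self.rules.append(rule)
        def forward(self, dst_ip):
            for rule in self.rules:
                if dst_ip.startswith(rule.dst_prefix):
                    return rule.out_port
            return None                      # unmatched traffic would be punted to the controller

    class Controller:
        """Control plane: holds the network-wide policy and programs every switch."""
        def __init__(self, switches):
            self.switches = switches
        def set_route(self, dst_prefix, port_by_switch):
            for sw in self.switches:
                sw.install(FlowRule(dst_prefix, port_by_switch[sw.name]))

    # One central decision programs the forwarding tables of both switches.
    s1, s2 = Switch("edge-1"), Switch("edge-2")
    Controller([s1, s2]).set_route("10.1.2.", {"edge-1": 3, "edge-2": 7})
    print(s1.forward("10.1.2.15"), s2.forward("10.1.2.15"))   # -> 3 7

In a traditional network each switch computes its own forwarding decisions; here the only intelligence lives in the Controller object, and centralizing that decision-making is the property SDN is after.
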
  • by WaffleMonster ( 969671 ) on Tuesday July 23, 2013 @07:30PM (#44366509)

    When I think about the management problems we have today, they are almost entirely caused by unaddressed suckage in various layers of the existing stack. Rather than fixing the underlying problem, people insist on adding new layers of complexity to work around it.

    It started with virtualization. Operating systems lacked the management and isolation features users needed. Rather than fixing the operating system, we just virtualize everything and run a whole shitload of images on one machine. Now, instead of one system image to maintain, you have a shitton of them, and you have wasted great sums of storage, memory, management, and compute resources, all because you were too lazy to ask vendors to solve your original problem.

    Next we have CAPWAP/OpenFlow, complex specifications intended to normalize the configuration of all your network gear. A lot of this is driven by IT chasing architectural fallacies such as "network security" and "redundancy": layers upon layers of IDS, firewalls, and god knows what else to "secure the network." The very concepts of an "internal network" or of load balancers used for application redundancy are flawed, stupid, and dangerous. What part of "insider threat" do people not understand?

    Routers should be stupid devices that punt packets between interfaces. The error is placing complexity where it does not belong and then having to mask the repercussions of that poor choice with SDN, because otherwise it is all just too hard to manage.

    What would happen if, for example, rather than an expensive load balancer in front of a web farm, browsers simply implemented a heuristic to pull multiple IPs out of DNS and used a delay timer to make multiple connection attempts, with short-term memory of failed requests and RTT feedback (see the sketch after this comment)? You could effectively mask a failure in the server group with little to no noticeable delay until the failed system can be repaired or yanked from DNS.

    The most detrimental error I see repeated constantly is the notion that the data tier, the network, or the operating system is somehow responsible for the lack of scalability or availability of an application. This is fundamentally bullshit. Systems must be DESIGNED to scale. Smoke and magic from a vendor only delay or mask the underlying problems. We need smarter software, not software-defined gimmicks.
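
The client-side failover idea in the comment above can be sketched roughly as follows. This is an illustration under stated assumptions, not anything a browser actually ships: the connect_any helper, the RETRY_AFTER and STAGGER constants, and the in-memory failure/RTT tables are all invented for the example.

    # Rough sketch of the client-side failover heuristic from the comment above:
    # resolve all A records for a host, prefer addresses with the best remembered
    # RTT, deprioritize recently failed ones, and pause briefly before falling
    # back to the next candidate. Illustrative only.
    import socket
    import time

    failed = {}          # ip -> time of last connection failure
    rtt_history = {}     # ip -> last measured connect RTT in seconds
    RETRY_AFTER = 30.0   # seconds to deprioritize an ip after it fails
    STAGGER = 0.25       # delay timer between successive connection attempts

    def connect_any(hostname, port=80, timeout=2.0):
        infos = socket.getaddrinfo(hostname, port, socket.AF_INET, socket.SOCK_STREAM)
        ips = list(dict.fromkeys(info[4][0] for info in infos))   # dedupe, keep order
        now = time.time()
        # Recently failed addresses sort last; otherwise best known RTT goes first.
        ips.sort(key=lambda ip: (now - failed.get(ip, 0.0) < RETRY_AFTER,
                                 rtt_history.get(ip, float("inf"))))
        last_err = None
        for ip in ips:
            start = time.time()
            try:
                sock = socket.create_connection((ip, port), timeout=timeout)
                rtt_history[ip] = time.time() - start              # RTT feedback
                return sock
            except OSError as err:
                failed[ip] = time.time()                           # remember the failure
                last_err = err
                time.sleep(STAGGER)
        raise last_err if last_err else OSError("no usable addresses for " + hostname)

A real client would race attempts in parallel in the spirit of RFC 6555 ("Happy Eyeballs") rather than strictly in sequence, but the effect is the one the comment describes: a dead backend is masked until it can be repaired or pulled out of DNS.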
