Intel Networking

Intel Announces Avoton Server Architecture and Software Defined Services Effort

MojoKid writes "Intel unveiled a number of new data center initiatives this week as part of a broad product strategy to redefine some of its market goals. Santa Clara has begun focusing on ways to expand the utility of its low-power Atom servers, including the upcoming Avoton parts, which use the same 22nm Silvermont cores as Bay Trail. Intel isn't just pushing Avoton as a low-power solution that'll compete with products from ARM and AMD, but as the linchpin of a push toward software defined networking and software defined storage. In a typical network, a switch is programmed to send arriving traffic to a particular location. Both the control plane (which decides where traffic goes) and the data plane (the hardware responsible for actually moving the bits) are implemented in hardware and duplicated in every switch. Software defined networking replaces this with software that manages and monitors traffic from a central controller. Intel is moving toward such a model, and talking it up as an option, because it moves control away from specialized hardware baked into expensive routers made by companies that aren't Intel, and toward centralized technology Intel can bake into the CPU itself."
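
To make the control-plane/data-plane split described above concrete, here is a deliberately minimal sketch in plain Python. The class names (FlowRule, Switch, Controller) and the match-on-destination rule format are invented for illustration; this is not Intel's software or a real OpenFlow API, only a picture of policy living in one central controller while each switch does nothing but table lookups.

    # Toy model of SDN's split: the controller decides where traffic goes,
    # the switches only look up precomputed rules per packet.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class FlowRule:
        match_dst: str          # destination address this rule matches (illustrative)
        out_port: int           # port to forward matching traffic to

    @dataclass
    class Switch:
        name: str
        flow_table: dict = field(default_factory=dict)   # data plane state

        def install(self, rule: FlowRule):               # invoked by the control plane
            self.flow_table[rule.match_dst] = rule.out_port

        def forward(self, dst: str) -> int:              # data plane: pure table lookup
            return self.flow_table.get(dst, -1)          # -1 = no rule, punt to controller

    class Controller:
        """Central control plane: holds the policy for every switch it manages."""
        def __init__(self, switches):
            self.switches = {s.name: s for s in switches}

        def set_route(self, switch_name: str, dst: str, out_port: int):
            self.switches[switch_name].install(FlowRule(dst, out_port))

    if __name__ == "__main__":
        s1, s2 = Switch("s1"), Switch("s2")
        ctl = Controller([s1, s2])
        ctl.set_route("s1", "10.0.0.2", out_port=3)   # decided once, centrally
        print(s1.forward("10.0.0.2"))                 # the switch just looks it up -> 3

In a traditional network the equivalent of set_route lives inside every switch's own firmware, which is exactly the duplication the summary describes.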

Comments Filter:
  • Who are the morons spreading "software-defined" bullshit when there already is a common, well-understood word that perfectly describes the feature?
    • by vux984 ( 928602 )

      Who are the morons spreading "software-defined" bullshit when there already is a common, well-understood word that perfectly describes the feature?

      Well, I can have a "software phone," which is a phone-as-app that runs on my desktop. Or I can have a "programmable phone," which describes pretty much any non-trivial office phone one can buy.

      I think there is a difference there, don't you?

      • That's a crap analogy, man. Try this: "programmable network" vs. "software-defined network." Which one describes it better, never mind rolls off the tongue more easily?
  • by hackus ( 159037 ) on Tuesday July 23, 2013 @05:54PM (#44366279) Homepage

    Seriously, are these people at Intel stupid, or have they learned nothing in the twenty-some years the Net has been around about why it is incredibly bad to centralize that sort of control and power in a single entity in infrastructure?

    Let alone on a computer network?

    There is no possible use for something like this unless you want to centralize power and control over the entire network mesh from one location.

    Bad for business, bad for citizens of every country that deploys it, just a plain bad idea. I might be inclined to give it serious thought if I were Alice in Wonderland and could rely on the Cheshire Cat to keep things on the up and up.

    But you have to be kidding me. We have governments that would destroy the entire Internet with this crap right now if they had that sort of control.

    Seriously, don't buy that crap, and do not contribute to such an open source project.

    -Hack

    • by Tailhook ( 98486 )

      Creating a ubiquitous network is the first step to placing a government camera in every home. Should the Internet have been precluded to avoid centralization of power and control?

      WRT citizens; SDN can only implement policies established by people. The correct approach (for a start) to dealing with these policies is this [defundthensa.com] and this [sopastrike.com], not banning the tools.

      WRT business; you're going to lose the argument hard on this one. One appeal of SDN is cost savings: cheaper hardware, easier management, less net complexity

    • Of course it is great for Intel!

      Oh shit, these switches are too slow? I guess I have to upgrade them to Core i7 Extremes in 1.5 years instead of waiting a decade. It's the same reason Intel makes very shitty integrated graphics: they want casual users to buy Core i7 Extremes for simple things so they can make more money.

      Little do they know there is this thing called competition.

      My guess is these will be updated all the time for being slow == more revenue! I hope IT managers with a brain wake up and think

    • by Skapare ( 16644 )

      Did anyone say it's good networking? No! This is all about rerouting MONEY to Intel (data going to the NSA is a side effect).

    • SDN makes hacking and covering your tracks so, so, so much more potent, quicker, and easier. Now you don't just have the NSA to be afraid of. As with the entire history of the Internet, they will not worry about security until their baby has grown into a giant, and then they will attempt to tack some kind of loincloth on it and declare it secure.

  • SDN (Score:2, Interesting)

    by cosm ( 1072588 )
    SDN can suck it. As a guy who lives in the trenches, between LAGs, MSTP, VLAN routing, VRRP/HSRP, TRILL, and now big routing protocols showing up in the datacenter (think OSPF/BGP), plus a motley crew of other L2-and-up protocols, we have enough decentralized means for corralling bits to their regularly scheduled programs. SDN is just big content's wet dream, or network ODMs looking to get in on the 'app' craze.
  • Adding 10GbE and a RAID card / PCIe-based SSD can eat up all of the PCIe lanes fast. Even more so if they try to jam a video chip and Thunderbolt on there as well.

    No QPI means you're limited to 1 socket. Also not listed is the number of RAM channels.
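
    A back-of-the-envelope lane budget shows how quickly that happens. The 16-lane figure below is only an assumption for an Avoton-class SoC, and the per-device widths are typical card sizes rather than anything from a datasheet:

        # Rough, assumed PCIe lane budget for a small storage/network node.
        AVAILABLE_LANES = 16                 # assumed Gen2 lane count for the SoC

        devices = {
            "dual-port 10GbE NIC": 8,        # typical x8 card
            "RAID HBA": 8,                   # typical x8 card
            "PCIe SSD": 4,                   # typical x4 device
        }

        used = sum(devices.values())
        print(f"requested {used} of {AVAILABLE_LANES} available lanes")
        if used > AVAILABLE_LANES:
            print("over budget: something runs at reduced width or behind a PCIe switch")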

  • by WaffleMonster ( 969671 ) on Tuesday July 23, 2013 @06:30PM (#44366509)

    When I think about the management problems we have today, they are almost entirely caused by unaddressed suckage in various layers of the existing stack. Rather than fixing the underlying problem, people insist on adding new layers of complexity to work around it.

    It started with virtualization. Operating systems lacked the management and isolation features users needed. Rather than fixing the operating system, just virtualize everything and run a whole shitload of images on one machine. Now instead of one system image to maintain you have a shitton of them, and you have wasted great sums of storage, memory, management and compute resources, all because you were too lazy to ask vendors to solve your original problem.

    Next we have CAPWAP/OpenFlow: complex specifications intended to normalize configuration of all your network things. A lot of this is caused by IT chasing architectural fallacies such as "network security" and "redundancy": layers upon layers of IDS, firewalls, and god knows what else to "secure the network." The very concepts of an "internal network" or of load balancers used for application redundancy are flawed, stupid and dangerous. What part of "insider threat" do people not understand?

    Routers should be stupid devices that punt packets between interfaces. The error is placing complexity where it does not belong and then having to mask the repercussions of that poor choice with SDN, because otherwise it is all just too hard to manage.

    What would happen if, for example, rather than fronting a web farm with an expensive load balancer, browsers simply implemented a heuristic to pull multiple IPs out of DNS and used a delay timer to make multiple connection attempts, with short-term memory of failed requests and RTT feedback? You could effectively mask a failure in the group with little to no noticeable delay until the failed system can be repaired or yanked from DNS.
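
    As a rough sketch of that heuristic, assuming nothing beyond the Python standard library: the host name, timeout and penalty window below are made-up values, and a real client would race staggered connection attempts concurrently (happy-eyeballs style) rather than walking the candidate list one address at a time.

        import socket
        import time

        FAILURE_PENALTY_S = 30.0    # how long to avoid an address after a failed connect (assumed)

        _last_failure = {}          # ip -> monotonic time of last failed connect
        _last_rtt = {}              # ip -> last observed connect RTT in seconds

        def _candidates(host, port):
            """All resolved addresses for host, preferring those with the lowest known RTT."""
            ips = {info[4][0] for info in socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)}
            now = time.monotonic()
            usable = [ip for ip in ips
                      if now - _last_failure.get(ip, float("-inf")) > FAILURE_PENALTY_S]
            return sorted(usable or ips, key=lambda ip: _last_rtt.get(ip, float("inf")))

        def connect(host, port, timeout=2.0):
            """Return a connected socket, masking individual server failures."""
            last_error = None
            for ip in _candidates(host, port):
                start = time.monotonic()
                try:
                    sock = socket.create_connection((ip, port), timeout=timeout)
                except OSError as exc:
                    _last_failure[ip] = time.monotonic()   # remember the failure and move on
                    last_error = exc
                    continue
                _last_rtt[ip] = time.monotonic() - start   # feed RTT back into future ordering
                return sock
            raise last_error or OSError("no usable addresses for %s" % host)

        if __name__ == "__main__":
            s = connect("example.com", 80)   # hypothetical target
            print("connected to", s.getpeername())
            s.close()

    The failure table and RTT memory are per-process here; a browser would keep them per-origin with some expiry, but the point stands that the client can do the masking a load balancer is usually bought for.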

    The most detrimental error I see repeated constantly is the notion that the data tier, the network, or the operating system is somehow responsible for the lack of scalability or availability of an application. This is fundamentally bullshit. Systems must be DESIGNED to scale. Smoke and magic from a vendor only delays or masks the underlying problems. We need smarter software, not software defined gimmicks.

    • by AK Marc ( 707885 )

      It started with virtualization. Operating systems lacked the management and isolation features users needed. Rather than fixing the operating system, just virtualize everything and run a whole shitload of images on one machine. Now instead of one system image to maintain you have a shitton of them, and you have wasted great sums of storage, memory, management and compute resources, all because you were too lazy to ask vendors to solve your original problem.

      I've not seen that. I saw clustering for redundancy getting a foothold in the late 1990s. People would cluster for reliability and end up with more processing power than they needed, but rather than adding services, which could impact reliability, why not have multiple computers on multiple machines, with more redundancy and almost the same power as 1:1? The bonus being you could make sub-computers (DNS servers dedicated to DNS/DHCP with OS-minimum resources for their tiny load; remember I'm talking corp

  • So, here are some random, somewhat connected ideas. This is a long-winded post, but please bear with me. First, take a look at the buzzwords, and you can tell where the money will be flowing. Several years ago the big thing was "Green," right? Then came "Big Data," and the last 2 or 3 years have been all "Cloud." Now, if you've been paying attention, this year's buzzword is "Software Defined $TECHNOLOGY," which of course was kicked off with "Software Defined Networking" (SDN).

    This is my notion of what
  • From what I understand, the whole purpose of what Intel [hp.com] is doing is along the same lines as the HP Moonshot hardware design. READ CAREFULLY WHAT INTEL IS DOING WITH HP AND WHY

    How the hell did the discussion suddenly get sidetracked into blaming Intel and the hardware manufacturers for creating software security issues? Lately, any post about hardware that is not 100 percent Microsoft-friendly seems to get slagged by idiots.

    The highest rated posts are essentially rants, not a whisper about why going al
