Intel Announces Avoton Server Architecture and Software Defined Services Effort 41
MojoKid writes "Intel unveiled a number of new data center initiatives this week as part of its broad product strategy to redefine some of its market goals. Santa Clara has begun focusing on finding ways to expand the utility of its low power Atom servers, including the upcoming Avoton Atom products, which are based on the 22nm Bay Trail architecture. Intel isn't just pushing Avoton as a low-power solution that'll compete with products from ARM and AMD, but as the linchpin of a system for software defined networking and software defined storage capabilities. In a typical network, a switch is programmed to send arriving traffic to a particular location. Both the control plane (where traffic goes) and the data plane (the hardware responsible for actually moving the bits) are implemented in hardware and duplicated in every switch. Software defined networking replaces this by using software to manage and monitor traffic from a central controller. Intel is moving towards such a model and talking it up as an option because it moves control away from specialized hardware baked into expensive routers made by companies that aren't Intel, and towards centralized technology Intel can bake into the CPU itself."
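The control-plane/data-plane split the summary describes can be sketched in a few lines. This is a toy illustration, not Intel's product or any real SDN stack's API (all class and function names here are made up): a central controller holds the forwarding policy, and switches do nothing but match packets against the rules it pushed down.

```python
# Toy sketch of SDN's control/data plane separation (illustrative names,
# not a real API). The controller decides where traffic goes; switches
# only do dumb table lookups against rules the controller installed.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []          # (match_fn, out_port) rules, in install order

    def install_rule(self, match_fn, out_port):
        self.flow_table.append((match_fn, out_port))

    def forward(self, packet):
        # Data plane: first matching rule wins.
        for match_fn, out_port in self.flow_table:
            if match_fn(packet):
                return out_port
        return None                   # no rule: a real switch would punt to the controller


class Controller:
    """Control plane: one central place that decides where traffic goes."""

    def __init__(self, switches):
        self.switches = switches

    def route(self, dst_prefix, out_port):
        # Push one policy to every switch, instead of configuring each
        # box's proprietary firmware separately.
        for sw in self.switches:
            sw.install_rule(lambda p, pre=dst_prefix: p["dst"].startswith(pre), out_port)


sw1, sw2 = Switch("sw1"), Switch("sw2")
ctl = Controller([sw1, sw2])
ctl.route("10.0.", out_port=3)

print(sw1.forward({"dst": "10.0.0.7"}))      # 3
print(sw2.forward({"dst": "192.168.1.1"}))   # None (would be punted upstream)
```

The point of the sketch is only the shape of the design: policy lives in exactly one place, and the per-switch hardware is reduced to a table lookup — which is precisely what makes the centralization both attractive and, as commenters below argue, worrying.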
Re: (Score:2)
You know what would be even cooler? A monthly AI bot that took all of the hot memes in tech and applied them to "prospective Intel tech" that may or not ever appear. That would probably save Intel a billion dollars a year in marketing expense over having it done manually. Seriously Intel, if you're going to do this year after year after year, you may as well automate it and save some money. You're all about automating repetitive stuff, right?
It could enumerate monthly the latest convincing reasons why
Re: (Score:2)
Re: (Score:2)
Do you mean VT-d/AMD-Vi (the i and d being the important bits)?
It does seem AMD hasn't market-segmented this feature to the point of near pointlessness the way Intel has, but I can't find good information on which chips do and don't support it. If the E-350 does, though, that is a good sign.
Re: (Score:2)
"Programmable" (Score:1)
Re: (Score:2)
Who are the morons spreading "software-defined" bullshit when there already is a common, well-understood word that perfectly describes the feature?
Well, I can have a "software phone," which is a phone-as-app that runs on my desktop. Or I can have a "programmable phone," which describes pretty much any non-trivial office phone one can buy.
I think there is a difference there, don't you?
Re: (Score:1)
Incredibly BAD approach to Networking (Score:3)
Seriously, are these people at Intel stupid, or haven't they learned anything in the 20-some years the Net has been around about software and why it is incredibly bad to centralize that sort of control and power in a single entity in infrastructure?
Let alone on a computer network?
There is no possible use for something like this unless you want to centralize power and control over the entire network mesh from one location.
Bad for business, bad for citizens in every country that deploys it, just a plain bad idea. I might be inclined to give it serious thought if I were Alice in Wonderland and could rely on the Cheshire Cat to keep things on the up and up.
But you have to be kidding me? We have governments that would destroy the entire internet with this crap as it is right now if they had that sort of control.
Seriously, don't buy that crap, and do not contribute to such an open source project.
-Hack
Re: (Score:3)
Creating a ubiquitous network is the first step to placing a government camera in every home. Should the Internet have been precluded to avoid centralization of power and control?
WRT citizens; SDN can only implement policies established by people. The correct approach (for a start) to dealing with these policies is this [defundthensa.com] and this [sopastrike.com], not banning the tools.
WRT business; you're going to lose the argument hard on this one. One appeal of SDN is cost savings: cheaper hardware, easier management, less net complexity
Re: (Score:1)
Of course it is great for intel!
Oh shit, these switches are too slow? I guess I have to upgrade them to Core i7 Extremes etc. in 1.5 years instead of waiting a decade to upgrade. It is why Intel makes very shitty integrated graphics. They want casual users to buy Core i7 Extremes for simple things so they can make more money.
Little do they know there is this thing called competition.
My guess is these will be updated all the time for being slow === more revenue! I hope IT managers with a brain wake up and think
Re: (Score:2)
Did anyone say it's good networking? No! This is all about rerouting MONEY to Intel (data going to NSA is a side effect).
Re: (Score:1)
SDN makes hacking and covering tracks so, so, so, much more potent, quicker, and easier. Now you don't just have the NSA to be afraid of. As with the entire history of the Internet, they will not worry about security until their baby has grown into a giant, and then they will attempt to tack some kind of loincloth on it and declare it secure.
SDN (Score:2, Interesting)
No QPI and only 16 PCIe v2 lanes? (Score:2)
Adding 10GbE and a RAID card / PCIe-based SSD can eat up all of the PCIe lanes fast. Even more if they try to jam a video chip and TB on there as well.
No QPI rules out more than one socket. Also not listed is the number of RAM channels.
A whole new level of indirection (Score:4, Insightful)
When I think about the management problems we have today, they are almost entirely caused by unaddressed suckage in the various layers of the existing stack. Rather than fixing the underlying problem, people insist on adding new layers of complexity to work around it.
It started with virtualization. Operating systems lacked the management and isolation features users needed. Rather than fixing the operating system, just virtualize everything and run a whole shitload of images on one machine. Now instead of one system image to maintain you have a shitton of them, and you have wasted great sums of storage, memory, management and compute resources, all because you were too lazy to ask vendors to solve your original problem.
Next we have CAPWAP/OpenFlow, complex specifications intended to normalize configuration of all your network things. A lot of this is caused by IT chasing architectural fallacies such as "network security" and "redundancy": layers upon layers of IDS, firewalls and god knows what to "secure the network". The very concepts of an "internal network" or of load balancers used for application redundancy are flawed, stupid and dangerous. What part of "insider threat" do people not understand?
Routers should be stupid devices which punt packets between interfaces. The error is placing complexity where it does not belong and then having to mask the repercussions of that poor choice with SDN, because otherwise it is all just too hard to manage.
What would happen if, for example, rather than an expensive load balancer for a web farm, browsers simply implemented a heuristic to pull multiple IPs out of DNS and used a delay timer to make multiple connection attempts, with short-term memory of failed requests and RTT feedback? You could effectively mask a failure in the server group with little to no noticeable delay until the failed system can be repaired or yanked from DNS.
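The heuristic proposed above can be sketched in a few dozen lines. This is an illustrative sketch, not a real browser's implementation (the function names and the 30-second failure-memory window are my own assumptions); it resolves all A records, remembers recent failures and measured RTTs, and tries candidates in order of quality with a small stagger between attempts. Real clients race attempts in parallel rather than sequentially, as RFC 8305 ("Happy Eyeballs") later standardized for the IPv6/IPv4 case.

```python
# Hedged sketch of client-side multi-IP failover (illustrative, not any
# real browser's code). Strategy: resolve every address for the host,
# prefer IPs with no recent failure and the lowest known RTT, and step
# through candidates with a short stagger delay.

import socket
import time

failures = {}    # ip -> timestamp of last failed connect
rtts = {}        # ip -> last measured connect RTT in seconds

FAILURE_MEMORY = 30.0   # assumed: ignore failures older than 30 s

def rank(ip):
    # Sort key: non-failed IPs first, then by best known RTT.
    recently_failed = (time.time() - failures.get(ip, 0)) < FAILURE_MEMORY
    return (recently_failed, rtts.get(ip, float("inf")))

def connect_any(host, port, stagger=0.25, timeout=2.0):
    """Try each resolved IP in ranked order; return the first live socket."""
    ips = sorted({ai[4][0] for ai in
                  socket.getaddrinfo(host, port, socket.AF_INET)},
                 key=rank)
    for ip in ips:
        start = time.time()
        try:
            s = socket.create_connection((ip, port), timeout=timeout)
            rtts[ip] = time.time() - start    # RTT feedback for next time
            return s
        except OSError:
            failures[ip] = time.time()        # remember the failure
            time.sleep(stagger)               # delay timer before next candidate
    raise OSError("all addresses for %s failed" % host)
```

A dead server's IP is skipped on subsequent calls for as long as its failure is remembered, which is exactly the "mask a failure until it's repaired or yanked from DNS" behavior the comment describes — no load balancer in the path.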
The most detrimental error I see repeated constantly is this notion that the data tier, the network or the operating system is somehow responsible for the lack of scalability or availability of an application. This is fundamentally bullshit. Systems must be DESIGNED to scale. Smoke and magic from a vendor only delays or masks the underlying problems. We need smarter software, not software defined gimmicks.
Re: (Score:2)
It started with virtualization. Operating systems lacked the management and isolation features users needed. Rather than fixing the operating system, just virtualize everything and run a whole shitload of images on one machine. Now instead of one system image to maintain you have a shitton of them, and you have wasted great sums of storage, memory, management and compute resources, all because you were too lazy to ask vendors to solve your original problem.
I've not seen that. I saw clustering for redundancy getting a foothold in the late 1990s. Then people would cluster for reliability and have more processing power than they needed, but rather than adding services, which could impact reliability, why not have multiple virtual machines on multiple hosts, with more redundancy and almost the same power as a 1:1 setup. The bonus being you could make sub-computers (DNS servers dedicated to DNS/DHCP with OS-minimum resources for their tiny load, remember I'm talking corp
A piece of the pie (Score:2)
This is my notion of what
Reading all the posts here what gives? (Score:1)
How the hell did the discussion suddenly get sidetracked into blaming Intel and the hardware manufacturers for creating software security issues? But lately any post about hardware that is not 100 percent Microsoft friendly seems to get slagged by idiots.
The highest rated posts are essentially rants, not a whisper about why going al