Can Anyone Suggest a Good Switch?
wgadmin asks: "I am a sysadmin for a 500-node Linux cluster. We are doubling the number of compute nodes, so we need a new switch. We currently have a Foundry FastIron 1500; however, (a) it doesn't have enough ports (only 208) and (b) it is extremely unreliable. We want something that's rock solid. We process mostly serial jobs, and we will probably require ~320 ports. What's everyone using for their HPC clusters? There are so many high-performance switches on the market that we hardly know where to start."
Several things left out. (Score:5, Informative)
Or are you looking for something more specialized (HIPPI-compliant or something similarly obscure)?
That said, if you're looking in the Ethernet space, we've been really happy with our recent Extreme Networks chassis. Their Black Diamond 10K line is the newest release, and it looks awesome. It's really dense, they've got crazy levels of backplane bandwidth, and ours have been really reliable (granted, we have the previous generation of the gear). The chassis take blades (just like everyone else's) that can speak 10/100, 10/100/1000 copper, gig fiber, 10-gig fiber, etc.
Re:Several things left out. (Score:2)
The chassis is solid, though. They've even improved reliability on their smaller switches (48-port 100Mb + 2 gig slots), but when we first ordered, we had about a 15% failure rate within 3 months...
Re:Several things left out. (Score:2, Interesting)
One of our sites tried them... and practically kicked them straight out of the window. If you care about throughput, you want to stay well clear.
Re:Several things left out. (Score:2)
With quality comes a price...
Re:Several things left out. (Score:4, Informative)
3Com, HP and Dell? (Score:3, Informative)
3Com offers stackable switches, up to 8 units of 48 ports each (8 x 48 = 384 ports), which should be enough. The stacking bus is something like 10 Gbit or 32 Gbit for the all-gigabit switches.
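A quick back-of-the-envelope check of what that stacking bus means for traffic between stack members; the port and bus figures are the ones quoted above, and the traffic pattern is just a worst-case assumption:

# Rough oversubscription estimate for a stack of gigabit switches.
# Port/bus figures as quoted above; the traffic pattern is a worst-case
# assumption, and giving one unit the whole bus is optimistic.

UNITS = 8
PORTS_PER_UNIT = 48
PORT_SPEED_GBPS = 1                      # gigabit to each node

for bus_gbps in (10, 32):
    # Worst case: every node on one unit talks to nodes on other units,
    # so all 48 ports' worth of traffic has to cross the stacking bus.
    unit_uplink_demand = PORTS_PER_UNIT * PORT_SPEED_GBPS
    oversub = unit_uplink_demand / bus_gbps
    print(f"{UNITS * PORTS_PER_UNIT} ports total, {bus_gbps} Gbit/s bus -> "
          f"{oversub:.1f}:1 oversubscription per unit (worst case)")

For mostly serial jobs that only hit the network for NFS and job staging, a few-to-one ratio like that may be perfectly fine; for communication-heavy parallel work it wouldn't be.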
The switch market by share is really (1) Cisco, (2) 3Com, and (3) HP, and I recommend you go with one of these. Cisco is more than twice as expensive as anyone else for the same stuff, and I've never been impressed with HP, so look at 3Com. Take a look at all those confusing Nortel switches too; they're number 4 in the market. You'll most likely end up choosing between 3Com and Cisco, unless you want to give up reliability.
Re:3Com, HP and Dell? (Score:3, Informative)
Re:3Com, HP and Dell? (Score:2)
Re:3COMs fall apart (Score:3, Informative)
Which explains all the dying Cisco 3524s and 2900s I have. Random dead ports, complete lockups, etc. Not that our 3Coms are doing any better. For now I'm replacing them with Dell gigabit switches, because the price is nice, but I'm looking for something that's built for the data center (the Dells' lack of an effective backplane interconnect is a big strike). Just wanted to poin
Extreme Networks (Score:4, Informative)
The Black Diamond 10808 [extremenetworks.com] would work great for the type of environment you have set up, from the sounds of it. Also, Extreme is usually 20-40% cheaper than Cisco and Foundry for the equivalent appliance.
We currently use an Alpine 3808 [extremenetworks.com] with 192 100 Mbps ports; it's never had a problem with uptime, and configuration is simple and straightforward.
Re:Extreme Networks (Score:5, Informative)
Re:Extreme Networks (Score:3, Interesting)
We have two Linux clusters -- Viking, a 512 processor P4 cluster, and Mars, a 400 processor Opteron cluster (being commissioned). We also host a number of large Sun machines, including SunSite Northern Europe.
The department also hosts a 250+ node teaching lab and several floors of staff and research desktops, each with 100Mbit+ to the desk and a 1Gbit uplink from each switch. At the middle of our network are two Black D
force 10 (Score:4, Informative)
Re:force 10 (Score:3, Insightful)
Re:force 10 (Score:1)
AFAIK, Force10 switches may not be good for clusters because of higher latency compared to others. I have not tested Force10; this info is based on papers.
I recently did some performance testing on switches and routers with a large number of GE ports. What I learned from that was that a switch/router that claims "wirespeed" switching/routing at GE may start dropping packets at 40% load. And listing rfcNNNN in the specs does not imply that the implementation supports even the minimum of the specification, or could sup
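If you want to check "wirespeed" claims yourself rather than trust the datasheet, one rough approach is to pair up nodes across the switch, run an iperf3 client on each left-hand node against a server on its partner, and watch whether per-pair throughput sags as you add pairs. A minimal per-client sketch (hostnames are placeholders; it assumes iperf3 servers are already running on the targets, and you'd launch one copy per client node via pdsh or your scheduler):

#!/usr/bin/env python3
"""Crude per-client throughput probe for an aggregate switch test.

Assumes `iperf3 -s` is already running on the server nodes listed below
(placeholder hostnames). Run one copy of this on each client node,
pairing each client with different servers, and sum the results.
"""
import json
import subprocess

SERVERS = ["node101", "node102"]   # placeholder hostnames
STREAMS = 4                        # parallel TCP streams per server
DURATION_S = 30

procs = [
    subprocess.Popen(
        ["iperf3", "-c", host, "-P", str(STREAMS),
         "-t", str(DURATION_S), "-J"],
        stdout=subprocess.PIPE, text=True)
    for host in SERVERS
]

total_gbps = 0.0
for host, proc in zip(SERVERS, procs):
    out, _ = proc.communicate()
    report = json.loads(out)
    # Field layout per iperf3's -J output; verify against your version.
    bps = report["end"]["sum_received"]["bits_per_second"]
    total_gbps += bps / 1e9
    print(f"{host}: {bps / 1e9:.2f} Gbit/s")

print(f"this client, aggregate: {total_gbps:.2f} Gbit/s")

It's not rigorous (TCP only, one direction, no latency numbers), but it's usually enough to catch a backplane that falls over well short of the claimed aggregate.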
For HPC interconnects (Score:2)
Of course, if cost is an object, I guess you are stuck with plain GbE rather than a faster interconnect. Otherwise, you should look at the 270 [topspin.com] for a high-density interconnect... Throw in some 360s for outside connectivity and you are set.
How much better than Cisco? (Score:3, Interesting)
Cisco is the de facto brand of networking gear for standard stuff like Ethernet. How much better are these high-performance switches people are suggesting in the comments here? This is not a rhetorical question; I just really want to know, and I'm too lazy and uninterested to look into it myself, but not lazy enough to stop typing this Slashdot post. Is it enough to be worth going with non-Cisco for HPC clusters that use Ethernet-based interconnects? I know Cisco isn't infallible, but for all kinds of reasons they're a good bet in networking gear come purchasing time, at least outside this HPC cluster business.
Re:How much better than Cisco? (Score:2)
Re:How much better than Cisco? (Score:2)
When we moved our building, we got to rearchitect our whole network. Great experience, if you can get the company to foot the bill. There was a LOT of political pressure to go Cisco, but it turned out that their lowest-end enterprise-level switching gear was still twice the cost of Extreme Networks' highest-end gear. In the end, the CIO/CFO couldn't get past the price-tag difference, despite the Cisco brand recognition.
That, and the lowest end ex
Re:How much better than Cisco? (Score:1)
Let me tell you, though: if you need pricing, just PM me. I get a 41% discount (that's 1% better than most).
If you need it for the super cluster just let me know.
PS. I'm not a seller. I'm a buyer.
TTYL.
FNN (Score:2, Informative)
Other than that, a Cisco 6513 with 11 10/100/1000 48-port switch cards (11 x 48 = 528 ports) would fit the bill to provide a single-chassis switch for all 500 nodes. Hope you've got a decent budget, because it will cost you.
Re:FNN (Score:2, Informative)
Re:Obvious (Score:1)
Forget ethernet (Score:3, Informative)
Re:Forget ethernet (Score:3, Interesting)
Stackable 48 Ports (Score:4, Informative)
We used to have Foundry ourselves, but their switches were crap; they would suddenly become dumb hubs, lose their IP, etc.
We tried HP, but found their interface cumbersome and unfamiliar, with weird networking-related issues that would pop up.
Cisco's been rock solid, but very expensive.
Your FastIron is Faulty (Score:1)
If your FastIron is unreliable, it's broken and you need to get it fixed; they're not normally like that.
The LINX runs on BigIron 8000s and Extreme Networks Black Diamonds, both of which are pretty damn good.
Re:Your FastIron is Faulty (Score:1)
Whatever he's running is broken.
Call your local supercomputing center (Score:5, Informative)
To get you started:
http://www.ncne.org
http://www.psc.edu
http://www.sdsc.edu
http://www.ncsa.edu
Yeah, it's Pittsburgh-centric. Guess where I'm posting from. There's probably somewhere closer to you.
The things you want to figure out before calling:
-What's your budget? (Nice stuff tends to be more expensive)
-How much does latency matter? (Usually, lots. Sometimes, not so much. Put numbers here.)
-What's your architecture (at several levels of technical detail)? Can you use 64-bit PCI? Do you have to work with a proprietary bus? Can you use full-height, full-length cards? What OS -exactly- are you using? (Hint: "Linux" ain't close enough.) What version and vendor of PVM/MPI/whatever are you using, and can you switch?
Cisco 65XX (Score:5, Informative)
The 6513 is basically the same thing but with four extra slots.
The 6509 chassis lists at $9.5K and the 6513 $15.25K. That's completely bare bones. The supervisor modules run anywhere from $6K to $28K at list. The 48 port 10/100/1000 modules list at $7.5K while a 24 port SFP fiber blade lists for $15K. You'll need two power supplies at $2K-5K each.
On the cheap end, to get the port density you're looking for out of Cisco, you'll pay about $70K list. But if you find the right reseller, you can see a discount of 30-40%.
All numbers in this post should be considered best guess, based on quotes I've gotten. They may be out of date. They are not official prices from Cisco. Take with the appropriate grain of salt.
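To make the arithmetic behind that "about $70K" explicit, here's a rough tally for a ~320-port build using the same ballpark list prices (same caveats as above; slot layout, supervisor choice, and power supplies will move the number around):

# Ballpark list-price tally for a bare-bones ~320-port 6509 build,
# using the rough figures quoted above. Not official Cisco pricing.

chassis_6509      = 9_500
supervisor        = 6_000          # cheapest supervisor module quoted
linecard_48x_gige = 7_500
linecards         = 7              # 7 x 48 = 336 ports >= ~320 needed
power_supplies    = 2 * 2_000      # two supplies, low end of the range

list_price = (chassis_6509 + supervisor
              + linecards * linecard_48x_gige
              + power_supplies)

print(f"list price:    ${list_price:,}")            # ~ $72,000
print(f"with 35% off:  ${list_price * 0.65:,.0f}")  # ~ $46,800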
Re:Cisco 65XX (Score:1)
Re:Cisco 65XX (Score:2, Interesting)
obvious, maybe? (Score:1, Informative)
Our 3Coms have served us well, and between them they handle anything from 10 Mbit Ethernet to 2 Gbit fibre optic.
I worked with a major brewer for a while, and their Cisco kit was very reliable, but it never had to handle much of a load. It did survive being kicked about, dropped, and my boss driving it several hundred miles unsecured in the back of a van. I doubt our 3Com kit would have survived that!
Basically, if you can afford Cisco, go for it. If not, us
Extreme Networks (Score:1)
HP Switches are very reliable, but run HOT! (Score:4, Informative)
One thing: get a switch that's modular (most good ones are), so if something goes out, you'll only lose 8 or 32 nodes instead of the whole switch.
Nortel Passport (Score:3, Informative)
Of course, if you are looking for the typical Ask Slashdot free-solutions answer, you can forget it. These puppies cost a bundle.
For a good switch (Score:5, Funny)
Re:For a good switch (Score:1, Funny)
Um... (Score:2, Funny)
but are you hiring?
build a fat tree (Score:3, Interesting)
tree with 12 roots for $8000. Spend more to get more cross-section bandwidth, less for less. It scales with your budget.
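For what it's worth, here's a toy model of how such a commodity fat tree scales; the port counts and prices below are placeholders, not figures from the post, so plug in whatever your vendor actually quotes:

# Toy two-level fat tree built from commodity gigabit switches.
# Port counts and prices are placeholders, not real quotes.

LEAF_PORTS   = 48      # ports per leaf switch
UPLINKS      = 12      # leaf ports used as uplinks, one per root switch
LEAF_PRICE   = 600     # $ per leaf switch (placeholder)
ROOT_PRICE   = 600     # $ per root switch (placeholder)
NODES_NEEDED = 320

down_per_leaf = LEAF_PORTS - UPLINKS            # node-facing ports per leaf
leaves = -(-NODES_NEEDED // down_per_leaf)      # ceiling division
roots  = UPLINKS                                # one root per uplink

uplink_capacity_gbps = leaves * UPLINKS * 1     # 1 Gbit/s links
oversub = down_per_leaf / UPLINKS               # per-leaf oversubscription
cost = leaves * LEAF_PRICE + roots * ROOT_PRICE

print(f"{leaves} leaves + {roots} roots: {leaves * down_per_leaf} node ports, "
      f"{uplink_capacity_gbps} Gbit/s total leaf-to-root capacity, "
      f"{oversub:.1f}:1 per-leaf oversubscription, ~${cost:,}")

Spending more buys you more uplinks per leaf (lower oversubscription) or faster roots; spending less does the opposite, which is the scaling the parent describes.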
My Network Experience Says... (Score:1)
More info needed (Score:2)
Selecting a switch fabric calls for more information than that. That your jobs are primarily serial implies that latency isn't a high priority, and bisection bandwidth might even be of minimal importance, but other factors come into play.
For example, are you using NFS over the network? How large are your data sets? Do you tend to just queue jobs and have them start/finish whenever, or do you tend to launch a lot of jobs at once? Do you have to transfer a data set with each job, or is it more a matter of ru