Can Anyone Suggest a Good Switch? 54
wgadmin asks: "I am a sysadmin for a 500-node Linux cluster. We are doubling the size of our compute nodes, so, as a consequence, we need a new switch. We currently have a Foundry FastIron 1500 -- however, a) it doesn't have enough ports (only 208) and b) it is extremely unreliable. We want something that's solid as a rock. We process mostly serial jobs. And we will probably require ~320 ports. What's everyone using for their HPC clusters? There's so many high performance switches on the market, we hardly know where to start."
Several things left out. (Score:5, Informative)
Or are you looking for something more specialized (HIPPI-compliant or something similarly obscure)?
That said, if you're looking in the Ethernet space, we've been really happy with our recent Extreme Networks chassis. Their Black Diamond 10K line is the newest release, and it looks awesome. It's really dense, they've got crazy levels of backplane bandwidth, and ours have been really reliable (granted, we have the previous generation of the gear). The chassis have blades (just like everyone else's) that can speak 10/100, 10/100/1000 copper, gig fiber, 10 gig fiber, etc.
3Com, HP and Dell? (Score:3, Informative)
3Com offers stackable switches, up to 8 units of 48 ports each, which should be enough. The stacking bus is something like 10 Gbit or 32 Gbit for all-gigabit switches.
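Taking the figures above at face value (48 ports per unit, stacks of up to 8 -- these are the poster's numbers, not vendor specs), a quick back-of-envelope check shows a full stack comfortably covers the ~320 ports the question asks for:

```python
# Stackable-switch port math, using the figures quoted above
# (48-port units, up to 8 per stack) -- not official vendor specs.
PORTS_PER_UNIT = 48
MAX_UNITS = 8
PORTS_NEEDED = 320  # from the original question

max_ports = PORTS_PER_UNIT * MAX_UNITS             # full stack: 384 ports
units_needed = -(-PORTS_NEEDED // PORTS_PER_UNIT)  # ceiling division: 7 units

print(max_ports, units_needed)  # 384 7
```

So 7 of the 8 possible units already cover the requirement, leaving one slot of headroom for growth or a spare.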
The switch market by share is really (1) Cisco, (2) 3Com, and (3) HP, and I recommend you go with one of these. Cisco is more than twice as expensive as anyone else for the same stuff, and I've never been impressed with HP, so look at 3Com. Take a look at all those confusing Nortel switches too; they're number 4 in the market. You'll most likely find your switch between 3Com and Cisco, unless you want to give up reliability.
Extreme Networks (Score:4, Informative)
The Black Diamond 10808 [extremenetworks.com] would work great for the type of environment you have set up, from the sounds of it. Also, Extreme is usually 20-40% cheaper than Cisco and Foundry for the equivalent appliance.
We currently use an Alpine 3808 [extremenetworks.com] with 192 100 Mbps ports; it's never had an uptime problem, and configuration is simple and straightforward.
force 10 (Score:4, Informative)
FNN (Score:2, Informative)
Other than that, a Cisco 6513 with 11 48-port 10/100/1000 switch cards would fit the bill to provide a single-chassis switch for all 500 nodes. Hope you've got a decent budget, because it will cost you.
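As a sanity check on that configuration (11 cards of 48 ports each, with the remaining chassis slots assumed to go to supervisor modules), the port math for a 500-node cluster works out with a little headroom:

```python
# Port capacity of a single chassis with 11 48-port line cards,
# per the configuration suggested above.
CARDS = 11
PORTS_PER_CARD = 48
NODES = 500

capacity = CARDS * PORTS_PER_CARD  # 528 ports
headroom = capacity - NODES        # 28 spare ports

print(capacity, headroom)  # 528 28
```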
Forget ethernet (Score:3, Informative)
Stackable 48 Ports (Score:4, Informative)
We used to have Foundry ourselves, but their switches were crap: they would suddenly become dumb hubs and lose their IP, etc.
We tried HP, but found their interface cumbersome and unfamiliar, with weird networking-related issues that would pop up.
Cisco's been rock solid, but very expensive.
Re:Extreme Networks (Score:5, Informative)
Call your local supercomputing center (Score:5, Informative)
To get you started:
http://www.ncne.org
http://www.psc.edu
http://www.sdsc.edu
http://www.ncsa.edu
Yeah, it's Pittsburgh-centric. Guess where I'm posting from. There's probably somewhere closer to you.
The things you want to figure out before calling:
-What's your budget? (Nice stuff tends to be more expensive)
-How much does latency matter? (Usually, lots. Sometimes, not so much. Put numbers here.)
-What's your architecture (at several levels of technical detail)? Can you use 64-bit PCI? Do you have to work with a proprietary bus? Can you use full-height, full-length cards? What OS -exactly- are you using? (Hint: "Linux" ain't close enough.) What version and vendor of PVM/MPI/whatever are you using, and can you switch?
Cisco 65XX (Score:5, Informative)
The 6513 is basically the same thing but with four extra slots.
The 6509 chassis lists at $9.5K and the 6513 $15.25K. That's completely bare bones. The supervisor modules run anywhere from $6K to $28K at list. The 48 port 10/100/1000 modules list at $7.5K while a 24 port SFP fiber blade lists for $15K. You'll need two power supplies at $2K-5K each.
On the cheap end, to get the port density you're looking for out of Cisco, you'll pay about $70K list. But if you find the right reseller, you can see a discount of 30-40%.
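Summing the list prices quoted above (which, per the poster, are unofficial quotes and may be stale), a minimal ~320-port 6509 build lands right around that $70K figure:

```python
# Rough list-price build-out for ~320 gigabit ports in a Cisco 6509,
# using the unofficial figures quoted above -- a sketch, not a quote.
chassis      = 9_500   # 6509 chassis, list
supervisor   = 6_000   # cheapest supervisor module quoted
line_card    = 7_500   # 48-port 10/100/1000 module
power_supply = 2_000   # low end of the quoted PSU range

ports_needed = 320
cards = -(-ports_needed // 48)  # ceiling division: 7 cards

total = chassis + supervisor + cards * line_card + 2 * power_supply
print(cards, total)  # 7 72000
```

A 30-40% reseller discount on that $72K list would bring it down to roughly $43K-$50K.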
All numbers in this post should be considered best guess, based on quotes I've gotten. They may be out of date. They are not official prices from Cisco. Take with the appropriate grain of salt.
obvious, maybe? (Score:1, Informative)
Our 3Coms have served us well, and between them, they work with anything from 10 Mbit Ethernet to 2 Gbit fibre optic.
I worked with a major brewer for a while, and their Cisco kit was very reliable, but it never had to handle much of a load. It did survive being kicked about, dropped, and my boss driving it several hundred miles unsecured in the back of a van. I doubt our 3Com kit would have survived that!
Basically, if you can afford Cisco, go for it. If not, use 3Com.
Incidentally, if you want your server room to look cool, go for Black Diamond
Re:Several things left out. (Score:4, Informative)
HP Switches are very reliable, but run HOT! (Score:4, Informative)
One thing: get a switch that's modular (most good ones are), so if a module goes out, you'll only lose 8 or 32 nodes instead of the whole switch.
Nortel Passport (Score:3, Informative)
Of course, if you're looking for the typical Ask Slashdot free-solutions answer, you can forget it. These puppies cost a bundle.
Re:3Com, HP and Dell? (Score:3, Informative)
Re:FNN (Score:2, Informative)
That said, if cost is a factor and you really only need layer 2, we found the Nortel 5510 stackable to be extremely impressive: 48 ports of 10/100/1000, stackable to 8 and managed as a single unit, tons of capacity between switches, great redundancy features, and a cost literally less than a quarter of Cisco's.
Re:3COMs fall apart (Score:3, Informative)
Which explains all the dying Cisco 3524s and 2900s I have: random dead ports, complete lockups, etc. Not that our 3Coms are doing any better. For now I'm replacing them with Dell gigabit switches, because the price is nice, but I'm looking for something that's built for the data center (the Dells' lack of an effective backplane interconnect is a big strike). Just wanted to point out that Ciscos fail too.