
Can Anyone Suggest a Good Switch?

wgadmin asks: "I am a sysadmin for a 500-node Linux cluster. We are doubling the size of our compute nodes, so we need a new switch. We currently have a Foundry FastIron 1500 -- however, a) it doesn't have enough ports (only 208) and b) it is extremely unreliable. We want something that's solid as a rock. We process mostly serial jobs, and we will probably require ~320 ports. What's everyone using for their HPC clusters? There are so many high-performance switches on the market that we hardly know where to start."
  • by Zapman ( 2662 ) on Monday September 20, 2004 @04:42PM (#10301221)
    What level of interconnect do you want? (gig copper? gig fiber? 10/100?)

    Or are you looking for something more specialized (HIPPI compliant or something similarly obscure?)

    That said, if you're looking in the Ethernet space, we've been really happy with our recent Extreme Networks chassis. Their Black Diamond 10K line is the newest release, and it looks awesome. It's really dense, they've got crazy levels of backplane bandwidth, and ours have been really reliable (granted, we have the previous generation of the gear). The chassis have blades (just like everyone else) that can speak 10/100, 10/100/1000 copper, gig fiber, 10-gig fiber, etc.
  • 3Com, HP and Dell? (Score:3, Informative)

    by mnmn ( 145599 ) on Monday September 20, 2004 @04:44PM (#10301241) Homepage
    We've been using 3Com switches and they're rock solid. I was rooting for Cisco a while ago because I'm studying for some certs, but the price difference is huge.

    3Com offers stackable switches, up to 8 units of 48 ports each, which should be enough (rough port and bandwidth math sketched below). The stacking bus is something like 10 Gbit or 32 Gbit for all-gigabit switches.

    The switch market by share is really (1) Cisco, (2) 3Com and (3) HP, and I recommend you go with one of these. Cisco is more than twice as expensive as anyone else for the same stuff, and I've never been impressed with HP, so look at 3Com. Take a look at all those confusing Nortel switches too; they're number four in the market. You'll most likely find your switch between 3Com and Cisco, unless you want to give up reliability.
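    A back-of-the-envelope sketch of that stacking math, using the ballpark figures quoted above (48 ports per unit, 8-unit stacks, a 32 Gbit stacking bus); these are the rough numbers from this comment, not official specs for any particular 3Com model:

        # Rough stack-sizing sketch; figures are the ballpark numbers quoted above,
        # not official specs for any particular 3Com model.
        ports_per_switch = 48
        max_stack_units = 8
        stack_bus_gbps = 32            # quoted as "10 Gbit or 32 Gbit"; using the higher figure
        ports_needed = 320

        units_needed = -(-ports_needed // ports_per_switch)   # ceiling division -> 7 units
        total_ports = units_needed * ports_per_switch          # 336 ports, fits in an 8-unit stack

        # Worst case: every port on one unit talks across the stack at full gigabit.
        worst_case_cross_traffic_gbps = ports_per_switch * 1
        oversubscription = worst_case_cross_traffic_gbps / stack_bus_gbps   # 1.5:1

        print(units_needed, total_ports, round(oversubscription, 2))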
  • Extreme Networks (Score:4, Informative)

    by Plake ( 568139 ) <rlclark@gmail.com> on Monday September 20, 2004 @04:46PM (#10301272) Homepage
    Extreme Networks [extremenetworks.com] has a great line of switches.

    The Black Diamond 10808 [extremenetworks.com] would work great for the type of environment you have set up, from the sounds of it. Also, Extreme is usually 20-40% cheaper than Cisco and Foundry for the equivalent appliance.

    We currently use an Alpine 3808 [extremenetworks.com] with 192 100 Mbps ports; it's never had a problem with uptime, and configuration is simple and straightforward.
  • force 10 (Score:4, Informative)

    by complex ( 18458 ) <complex@nOsPAM.split.org> on Monday September 20, 2004 @04:51PM (#10301340) Homepage
    http://www.force10networks.com/ [force10networks.com] -- they claim to have the highest port density.
  • FNN (Score:2, Informative)

    by bofkentucky ( 555107 ) <bofkentucky.gmail@com> on Monday September 20, 2004 @04:55PM (#10301404) Homepage Journal
    Flat neighborhood networks [aggregate.org]: basically you put several "cheap" NICs in each node and wire them to cheap switches in a web configuration, so every pair of nodes shares at least one switch and gets a fast interconnect.

    Other than that, a Cisco 6513 with eleven 48-port 10/100/1000 line cards would fit the bill to provide a single-chassis switch for all 500 nodes (quick port math below). Hope you've got a decent budget, because it will cost you.
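    A quick sanity check on that port count, just arithmetic (the oversubscription caveats raised in a reply further down still apply):

        # Single-chassis port-count check for the configuration suggested above.
        line_cards = 11
        ports_per_card = 48
        nodes = 500

        total_ports = line_cards * ports_per_card          # 528 ports
        print(total_ports >= nodes, total_ports - nodes)   # True, 28 spare ports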
  • Forget ethernet (Score:3, Informative)

    by keesh ( 202812 ) * on Monday September 20, 2004 @04:58PM (#10301440) Homepage
    Give serious thought to FCIP and director-class Fibre Channel kit. Performance-wise it'll thrash Ethernet, and there are various clever tricks you can do with directors clustered together via Open Trunking, meaning that a bunch of 160-port boxes (a McData 6140 is your best bet here) will do as well as a larger single box.
  • Stackable 48 Ports (Score:4, Informative)

    by DA-MAN ( 17442 ) on Monday September 20, 2004 @05:06PM (#10301534) Homepage
    I'm a sysadmin for 3 large clusters in the same league; we use stackable 48-port Nortel switches. Each switch is 1U, and the interconnects don't use a separate port. The switches have wildly expensive support options; however, because they just work, we've never had to pay for support on them.

    We used to have Foundry ourselves, but their switches were crap: they would suddenly become dumb hubs, lose their IP, etc.

    We tried HP, but found their interface cumbersome and unfamiliar, with weird networking-related issues that would pop up.

    Cisco's been rock solid, but very expensive.
  • Re:Extreme Networks (Score:5, Informative)

    by unixbob ( 523657 ) on Monday September 20, 2004 @05:11PM (#10301587)
    We also use Extreme switches and I can vouch for their reliability and performance. Instead of going for the "one big switch" approach, though, we've got a pair of Black Diamond 6808s with 1U 48-port Summit 400 edge switches uplinked back to the core switch (excuse the marketing terminology). This makes cabling much tidier when you have a high number of servers: you can locate the edge switches around the server room and just run the uplinks from the Summits back to the rack with the Black Diamonds. It makes deploying new kit much easier, and tracing cables much easier as well; you don't end up with the switch rack being a massive mess of untraceable patch cables. The only servers patched directly into the Black Diamonds are those using the NAS, because they need as much bandwidth as possible. (A rough uplink oversubscription sketch follows.)
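    One thing to check in a core/edge design like this is how oversubscribed the uplinks are. A minimal sketch, assuming for illustration 48 gigabit node-facing ports per edge switch and two 10-gig uplinks to the core (illustrative assumptions, not a statement of the Summit 400's actual uplink configuration):

        # Hypothetical edge-to-core oversubscription check for a core/edge layout.
        # The uplink count and speeds are illustrative assumptions, not vendor specs.
        edge_ports = 48        # gigabit ports facing compute nodes
        edge_port_gbps = 1
        uplinks = 2            # uplinks from each edge switch to the core
        uplink_gbps = 10

        edge_capacity = edge_ports * edge_port_gbps    # 48 Gbit/s of node-facing capacity
        uplink_capacity = uplinks * uplink_gbps         # 20 Gbit/s back to the core
        print(f"{edge_capacity / uplink_capacity:.1f}:1 oversubscription")   # 2.4:1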
  • by beegle ( 9689 ) * on Monday September 20, 2004 @05:25PM (#10301745) Homepage
    Send email to a few supercomputing centers. These places have tons of clusters, with lots of vendors throwing hardware at them. They're also often associated with schools, so they're not competitors and they actually -want- people to learn from what they've done.

    To get you started:
    http://www.ncne.org
    http://www.psc.edu
    http://www.sdsc.edu
    http://www.ncsa.edu

    Yeah, it's Pittsburgh-centric. Guess where I'm posting from. There's probably somewhere closer to you.

    The things you want to figure out before calling:

    -What's your budget? (Nice stuff tends to be more expensive)

    -How much does latency matter? (Usually, lots. Sometimes, not so much. Put numbers here.)

    -What's your architecture (at several levels of technical detail)? Can you use 64-bit PCI? Do you have to work with a proprietary bus? Can you use full-height, full-length cards? What OS -exactly- are you using? (Hint: "Linux" ain't close enough.) What version and vendor of PVM/MPI/whatever are you using, and can you switch?
  • Cisco 65XX (Score:5, Informative)

    by arnie_apesacrappin ( 200185 ) on Monday September 20, 2004 @05:34PM (#10301838)
    If you're looking for Gig over copper, the 6509 will probably give you the density you want in a single device. It has 9 slots, one of which is filled by the supervisor module. If you want to upgrade to the 720 Gbps switch fabric, I think that takes another slot, but could very well be wrong. But with 7 available slots at 48 ports per 10/100/1000 blade you would have 336 connections.

    The 6513 is basically the same thing but with four extra slots.

    The 6509 chassis lists at $9.5K and the 6513 at $15.25K. That's completely bare bones. The supervisor modules run anywhere from $6K to $28K at list. The 48-port 10/100/1000 modules list at $7.5K, while a 24-port SFP fiber blade lists for $15K. You'll need two power supplies at $2K-5K each.

    On the cheap end, to get the port density you're looking for out of Cisco, you'll pay about $70K list (a rough tally is sketched below). But if you find the right reseller, you can see a discount of 30-40%.

    All numbers in this post should be considered best guess, based on quotes I've gotten. They may be out of date. They are not official prices from Cisco. Take with the appropriate grain of salt.
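    A rough tally of that low-end build, using only the ballpark figures quoted above (the poster's best-guess quotes, not official Cisco pricing):

        # Rough list-price tally from the ballpark figures quoted in this comment.
        chassis_6509 = 9_500
        supervisor = 6_000              # low end of the quoted supervisor range
        blades_needed = -(-320 // 48)   # ceiling division: 7 blades for ~320 ports
        blade_48x_gige = 7_500
        power_supplies = 2 * 2_000      # two supplies at the low-end figure

        total = chassis_6509 + supervisor + blades_needed * blade_48x_gige + power_supplies
        print(f"~${total:,} list before reseller discount")   # roughly $72,000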

  • obvious, maybe? (Score:1, Informative)

    by IWX222 ( 591258 ) <rob@ro b r e d p a t h . c o.uk> on Monday September 20, 2004 @05:35PM (#10301857)
    I'd just say go for the most that you can afford.
    Our 3Coms have served us well, and between them, they work with anything from 10M ethernet to 2gig fibre optic.

    I worked with a major brewer for a while, and their Cisco kit was very reliable, but it never had to handle much of a load. It did survive being kicked about, dropped, and my boss driving it several hundred miles unsecured in the back of a van. I doubt our 3Com kit would have survived that!

    Basically, if you can afford Cisco, go for it. If not, use 3Com.

    Incidentally, if you want your server room to look cool, go for Black Diamond :D
  • by wgadmin ( 814923 ) on Monday September 20, 2004 @05:48PM (#10302019)
    Sorry, I forgot to mention that we are interested in gig copper. We are exclusively interested in gig copper. And, as far as anyone has told me, we don't care about HIPPI compliance.
  • by scum-o ( 3946 ) <bigwebb@@@gmail...com> on Monday September 20, 2004 @05:49PM (#10302036) Homepage Journal
    We're using the unmanaged HP procurve modular 1Gbps switches in our clusters, but they run VERY HOT when utilized (our switches get hammered 24/7 - like most clusters probably do) and we had some overheating issues with them. Our clusters aren't as large as yours, but I'd suggest going with a major manufacturer (IBM, HP, Cisco) if you're putting all of your eggs in one basket (switch-wise).

    One other thing: get a switch that's modular (most good ones are), so if something goes out, you'll only lose 8 or 32 nodes instead of the whole switch.
  • Nortel Passport (Score:3, Informative)

    by FreeLinux ( 555387 ) on Monday September 20, 2004 @05:51PM (#10302064)
    Nortel's Passport 8600 [nortelnetworks.com]: 384 ports per chassis, true wire speed, redundant everything, layer 2-7 switching. Also, if you need more ports, simply add another 8600 and use Multi-Link Trunking (MLT) between the switches. Wash, rinse, repeat. Networks that use these are smokin'!

    Of course, if you are looking for the typical Ask Slashdot for free solutions answer you can forget it. These puppies cost a bundle.
  • by DetrimentalFiend ( 233753 ) * on Tuesday September 21, 2004 @12:56AM (#10305393)
    At work, we have a couple of 3Com switches and a bunch of HPs. The 3Com switches are absolutely horrible, but the HP switches are awesome. For much less than a Cisco switch, the HP switches have pretty much identical features but are easier to configure. (They provide a Cisco-compatible command line interface, a web interface, and a menu-driven interface.) I'd really recommend HP switches. From what I hear, 3Com switches are better now, but I thought I'd throw in my $0.02 about them.
  • Re:FNN (Score:2, Informative)

    by rnxrx ( 813533 ) on Tuesday September 21, 2004 @10:57AM (#10308402)
    The 6513 won't support 11 48-port 10/100/1000 line cards if you expect anything close to wire speed. The only blades that would even come into consideration for that application are the 6148-GE-TX or 6548-GE-TX, and in both cases the interconnect between the blade and the backplane is heavily oversubscribed (rough arithmetic below). If you want to run at full speed, only a 6509 with Supervisor 720s and the 6748 line cards will approach line rate at high density. The 6513 has serious limitations on how many high-speed blades can be employed, so I would strongly recommend using a 6509 instead. The practical limit would then be 384 10/100/1000 ports.

    That said, if cost is a factor and you really only need layer 2, we found the Nortel 5510 stackable to be extremely impressive: 48 ports of 10/100/1000, stackable to 8 units and managed as a single unit, tons of capacity between switches, great redundancy features, and a cost literally less than a quarter of the Cisco's.
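    To make the oversubscription point concrete, a minimal sketch of the arithmetic; the fabric-uplink bandwidth figures here are illustrative assumptions, not official specs for any particular Catalyst line card:

        # Illustrative oversubscription arithmetic for a 48-port gigabit blade.
        # The fabric-uplink figures are assumptions for illustration, not vendor specs.
        def oversubscription(ports, port_gbps, fabric_gbps):
            return (ports * port_gbps) / fabric_gbps

        # A blade sharing a few Gbit/s to a classic shared bus:
        print(oversubscription(48, 1, 8))    # 6.0 -- nowhere near line rate
        # A fabric-attached blade with dual high-speed channels:
        print(oversubscription(48, 1, 40))   # 1.2 -- close to line rate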

  • Re:3COMs fall apart (Score:3, Informative)

    by ePhil_One ( 634771 ) on Wednesday September 22, 2004 @05:36PM (#10323335) Journal
    Ciscos last forever, even long after their technology is obsolete; their hardware is built like army tanks... but you do pay for it, too.

    Which explains all the dying Cisco 3524s and 2900s I have. Random dead ports, complete lockups, etc. Not that our 3Coms are doing any better. For now I'm replacing them with Dell gigabit switches because the price is nice, but I'm looking for something that's built for the data center (the Dells' lack of an effective backplane interconnect is a big strike). Just wanted to point out that Ciscos fail too.
