
Cisco Barges Into the Server Market

mikesd81 was one of several readers to write in about Cisco's announcement of what has been called Project California — a system comprising servers made from 64-bit Intel Nehalem EP Xeon processors, storage, and networking in a single rack, glued together with software from VMware and BMC. Coverage of this announcement is everywhere. Business Week said: "The new device, dubbed Project California, takes servers into new territory by cramming computer power into the very box that contains storage capacity and the networking tools that are Cisco's specialty. Cisco's approach could help companies use fewer machines — saving money not only on hardware, but also on power and IT staffing — in building data centers. ... Cisco is well-girded to take this step. It has more than $30 billion in cash, more than any other tech company. The company is moving into no fewer than 28 different markets, including digital music in the home and public surveillance systems." The Register provides more analysis: "Microsoft is, of course, a partner on the California system, since you can't ignore Windows in the data center, and presumably, Hyper-V will be supported alongside ESX Server on the hypervisors. (No one at the Cisco launch answered that and many other questions seeking details). ... The one thing that Cisco is clear on is who is signing off on these deals: the CIO. Cisco and its partners are going right to the top to push the California systems, right over the heads of server, storage, and network managers who want to protect their own fiefdoms."
  • by ShooterNeo ( 555040 ) on Monday March 16, 2009 @08:00PM (#27219269)

    I have to ask: why Nehalem EP Xeons? Those are the absolute bleeding-edge chips that Intel manufactures, and as such are the most expensive by a significant margin. Newegg doesn't even have the chip listed on their website, yet carries 91 different server CPU models. While space inside the data center does cost money, and so does electricity, are they really so expensive as to be worth paying for a chip that is probably 10 times as expensive per MIP as cheaper alternatives? The motherboards are more expensive as well, especially when you factor in the huge markup for server-grade parts.

    The only advantage of the Nehalem is that it is SLIGHTLY faster per processing thread, but networking is usually an "embarrassingly parallel" problem.

  • by SethJohnson ( 112166 ) on Monday March 16, 2009 @08:07PM (#27219347) Homepage Journal


    It has more than $30 billion in cash, more than any other tech company.

    Not sure that Cisco is such a lone cash giant as suggested. Apple had $28 billion in reserves as of Jan 22, 2009 [wsj.com]. With the recent economic fiasco, both Cisco and Apple might be in different positions.

    Seth
  • A good move (Score:4, Interesting)

    by Jjeff1 ( 636051 ) on Monday March 16, 2009 @08:40PM (#27219713)
    Cisco has been quietly working towards this for a while. You can get a server module for the lowly 1800 series router.

    For large networks and satellite offices, you have a server or two, a phone system, network gear, maybe some video surveillance gear. They'll walk into the CIO's office and say:
    "You have all this gear from different vendors, with different support contracts, and different departments finger-pointing when problems arise."

    "Now here is the Cisco way: one box, one department, one vendor to call. Stick it in a closet and forget about it. Let us show you our management tools, which show everything in a single pane of glass."

    If they do it right, it'll make for a very slick demo.

    This is their attempt to do the same in the datacenter.
  • by ShooterNeo ( 555040 ) on Monday March 16, 2009 @08:41PM (#27219731)

    Well, the part was only released a month ago. There's a significant speed boost per core versus the older Core 2/Core Quad line, plus a new socket type and a new RAM type. As of right now, there's nothing faster that money can buy in the x86 architecture. Heck, for generalized processing it's probably the fastest chip money can buy. A new supercomputer would likely run best with a massive array of thousands of these things.

  • Re:Blah Blah Blah (Score:3, Interesting)

    by MightyMartian ( 840721 ) on Monday March 16, 2009 @08:53PM (#27219841) Journal

    Not everything they do is perfect, but they broke into the Fibre Channel switching business quite effectively. They can, and do, break into new markets. Servers are a logical step for them since there's a huge advantage to providing a vertical stack of networking, servers, and whatever else they can muster.

    Yes, at prices that will make high-end Server 2008 enterprise installs look cheap.

  • by dweller_below ( 136040 ) on Monday March 16, 2009 @08:53PM (#27219845)

    Yep. That's the Cisco I know and loathe. If you can't convince the literate, just move up the org chart.

    Years ago, at my institution (150+ buildings, about 15K active IP addresses), we did a cost analysis of our Cisco addition and decided that it was unnecessary. We could do everything we needed with cheaper, commodity devices.

    So, for the next couple years, all upgrades/replacements were to simpler structures. To non-proprietary protocols. And to non-Cisco equipment. We have been Cisco-Free for about 4 years.

    The hardest part was beating off the attacks from Cisco Sales. These attacks were vicious. They lied (even more than usual for Cisco sales droids). They tried their best to discredit us. First they approached the head of IT. Then the VP for Business. Then the president.

    Finally, they went to the Board of Regents. They said we were incompetent. They said our actions were endangering the future of our institution. Fortunately, the Regents decided to let us try it.

    It has worked out great for us. Our capability is up. Our reliability is way up. Our security is up. Our costs are down (about half the price of equivalent Cisco).

    But, it only happened because upper management was willing to trust us. I get the impression that most management would fold under the pressure we saw.

    Miles

  • by Zeio ( 325157 ) on Monday March 16, 2009 @08:54PM (#27219857)

    A few things would make Nehalem attractive for blade servers. The 45nm process, moving on to 32nm later, provides the smallest footprint without giving up any CPU power. Also, the Nehalems (Bloomfield and Core i7) that I have tested don't seem to offer much in terms of better performance, but the power usage is considerably lower. FB-DIMMs (and DDR2) were a bit too power-hungry; the newer memory technology is an attempt to reduce power consumption. And QPI (formerly CSI) gives the Intel CPUs a HyperTransport-like interconnect, allowing system builders to scale. The footprint of 16+ core systems with Nehalem will be far smaller than the previous generations of Xeon MP processors.

    I think the idea is more memory, more CPU processing power, less power and heat and scalability with a given architecture.

    I noticed in my testing that the L1 data and instruction caches (32KB per core) in the Core i7 are one cycle more latent than in the Core 2 (4 cycles vs. 3), while the L2 cache (256KB per core, rather than the 4-6MB per two cores in the Core 2) is faster, down from 17 cycles to 12. With this boost in L2 speed came a cut of 3.75-5.75MB in size; the way they mitigated that loss was to add a "large" 8MB L3 cache that runs on a slower clock. This new hierarchy, along with a hyper-threading implementation that, unlike the previous one, seems to genuinely enhance performance in nearly every test, allows Intel to make top-performing chips (see CPU2006 at spec.org for the latest results) that scale better via QPI, use less power, and fit into smaller spaces than previous chips.
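    (For the curious: per-level latencies like those are typically measured with a dependent pointer-chasing loop, where each load's address comes from the previous load. Here's a minimal C sketch of that technique; the working-set size and step count are arbitrary assumptions, and you would vary the working set, e.g. 16KB, 128KB, 4MB, 64MB, to step through L1, L2, L3, and DRAM.)

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        int main(void) {
            size_t ws = 4u << 20;                 /* working set: 4 MB (assumed) */
            size_t n = ws / sizeof(void *);
            void **buf = malloc(n * sizeof(void *));
            size_t *idx = malloc(n * sizeof(size_t));
            if (!buf || !idx) return 1;

            /* Shuffle indices (Fisher-Yates) and link the slots into one big
               random cycle, so the hardware prefetcher can't guess the next load. */
            for (size_t i = 0; i < n; i++) idx[i] = i;
            srand(1);
            for (size_t i = n - 1; i > 0; i--) {
                size_t j = (size_t)rand() % (i + 1);
                size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
            }
            for (size_t i = 0; i < n; i++)
                buf[idx[i]] = &buf[idx[(i + 1) % n]];

            /* Chase pointers: every load depends on the previous one, so the
               average time per step approximates the load-to-use latency. */
            long steps = 50 * 1000 * 1000;
            void **p = &buf[idx[0]];
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (long i = 0; i < steps; i++)
                p = (void **)*p;
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
            /* Print p so the compiler can't optimize the loop away. */
            printf("%.2f ns/load (last=%p)\n", ns / steps, (void *)p);
            free(buf); free(idx);
            return 0;
        }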

    See:
    Sorted SPEC CPU2006 Integer and Floating Point [spec.org]

    CINT2006 (result / baseline / # cores / # chips / cores per chip / published):
    1) YOYOtech Fi7EPOWER MLK1610 (Intel Core i7-965): 36.0 / 32.5 / 4 / 1 / 4 / Jan-2009
    2) ASUSTeK Computer Inc. ASUS P6T WS PRO workstation motherboard (Intel Core i7-965 Extreme Edition): 35.2 / 31.5 / 4 / 1 / 4
    3) ASUSTeK Computer Inc. Asus P6T Deluxe (Intel Core i7-965 Extreme Edition): 33.6 / 30.2 / 4 / 1 / 4 / Nov-2008
    4) ASUSTeK Computer Inc. Asus P6T Deluxe (Intel Core i7-940): 30.8 / 27.8 / 4 / 1 / 4
    5) Dell Inc. Dell Precision T7400 (Intel Xeon X5492, 3.40 GHz): 30.2 / 27.6 / 8 / 2 / 4

    CFP2006 (result / baseline / # cores / # chips / cores per chip / published):
    1) ASUSTeK Computer Inc. ASUS P6T WS PRO workstation motherboard (Intel Core i7-965 Extreme Edition): 39.3 / 37.4 / 4 / 1 / 4 / Feb-2009
    2) YOYOtech Fi7EPOWER MLK1610 (Intel Core i7-965): 35.7 / 33.6 / 4 / 1 / 4 / Jan-2009
    3) ASUSTeK Computer Inc. Asus P6T Deluxe (Intel Core i7-965 Extreme Edition): 33.6

  • by MightyMartian ( 840721 ) on Monday March 16, 2009 @09:03PM (#27219947) Journal

    At the small ISP I worked at, we pretty much bought into Cisco for several years. We had an AS5200 for our full-56k PRI lines, and a 3000-series model (can't recall which one) as our gateway router. This worked fine until we started rolling out some more advanced networking, such as proprietary 900MHz and 2.4GHz wireless. Suddenly we were faced with having to upgrade this equipment (some of it not so young), and the costs were not insignificant.

    I asked my boss to give me a couple of weeks to see what I could put together with some of our old Pentium II boxes. Now I fully realize that software routing just isn't as good as Cisco's hardware routing, but damn it all, the price was cheap. Even buying new mini-ATX boxes for up on the towers was considerably cheaper than anything Cisco would offer. Whatever performance boost we'd get from Cisco hardware (or Nortel or whatever) simply couldn't justify the vast difference in pricing.

  • Not a surprise (Score:2, Interesting)

    by monschein ( 1232572 ) on Monday March 16, 2009 @09:16PM (#27220067)
    Cisco might have a good shot at this. Project California might look appetizing to a lot of IT departments. Virtualization and consolidation are on the agenda for a lot of datacenters at the moment. All of these functions in one box, in one rack, AND easily manageable would appeal to a lot of CIOs. Deal with one vendor instead of three - and a reputable vendor at that. Knowing Cisco, it will most likely be a bit pricey. But hey, no one ever got fired for buying Cisco, right? On the other hand, a lot of people have been fired for blowing the budget.
  • by OddlyMoving ( 1103849 ) on Monday March 16, 2009 @09:22PM (#27220131)

    This is very true. I am currently evaluating a forklift upgrade of one of my POPs, and we're looking at the Cisco vs. Juniper proposition.

    While I'm a VP-level operational head at an ISP, the Cisco rep told me straight out that he doesn't typically engage technical people like me when he comes in. He typically talks to the C-level people, and it shows, because he's not keeping up with the Juniper rep. The Juniper team has already put me in front of many technical product-development people, and the depth of the conversations has been truly refreshing. I'm feeling more and more comfortable with going Juniper as the days go by.

  • by trims ( 10010 ) on Monday March 16, 2009 @09:26PM (#27220193) Homepage

    OK. My bias up front - I work for Sun.

    That said, there were several announcements from HP, IBM, and Sun ahead of Cisco's about how the California system is a no-go. Admittedly, they're Cisco's competitors, but after having looked at the existing rack blade/switch systems from those three vendors, I really don't see any difference worth mentioning from current product lines.

    Here's some thoughts:

    • IBM and Sun make much more open systems, able to run a wide variety of VMware, Linux, Solaris, and even AIX on all sorts of hardware (SPARC, POWER, PPC, all sorts of AMD and Intel x64). Their systems are much more flexible and, honestly, much more powerful overall in what can be accomplished.
    • HP has much of the HW flexibility of Sun and IBM, plus the leading management tools.
    • Cisco has no clue as to how to run a systems support organization, which, frankly, is considerably different than running a network hardware support organization. The other big three have decades each in doing this kind of thing.
    • Sun in particular has extremely competitive pricing. HP and IBM are slightly more expensive, but nothing compared to the margins Cisco charges. So, exactly WHAT are people going to get for the 20-40% premium Cisco is charging over IBM?
    • Even for the Virtualization craze, building a completely proprietary solution flies in the face of what everyone else in the industry is doing: commoditization.
    • Cisco doesn't have integrated solutions. All the others provide storage, network, and compute integration with large, well-trained Professional Services orgs. Cisco has CCIEs in piles, but what do they know about anything but network gear?

    Overall, this looks like a stupid move. I realize that Cisco needs to look for more revenue streams in the face of the commoditizing of most network gear, but this seems like an '80s solution to a 2010 problem.

    -Erik

  • Re:Blah Blah Blah (Score:1, Interesting)

    by Anonymous Coward on Monday March 16, 2009 @09:42PM (#27220297)
    If the Cisco Video Surveillance Media Server [cisco.com] is anything to go by, the statement "They suck at everything else." is 100% correct.

    We bought one of these at work and spent WAY too much money on it. I can't even begin to tell you how much this system sucks ass and was a HUGE waste of money.

  • Because.. (Score:4, Interesting)

    by Junta ( 36770 ) on Tuesday March 17, 2009 @12:34AM (#27221497)

    There are exceedingly fat margins in storage, *even* by Cisco standards, and that's a high benchmark. The SAN market is a vendor's dream, where nickel-and-diming every little feature, and even every little port on every switch, is the status quo, except the nickels and dimes are more like 5 and 10 thousand dollars.

    As far as technical reasons go, they are mostly not there. One exception is that Cisco is pushing to replace FC with Ethernet, presumably with the promise of an escape from the painful FC market practices. Though they will assuredly bring some of those market behaviors over with them, it will make the sale somewhat easier. They tried simply releasing a product into that market against the likes of Brocade and QLogic, but I think Cisco has realized their only substantial chance to stay vital is to pull storage infrastructure into the fabric they have some reputation in: Ethernet.

    People are already starting more and more to consider other vendors 'good enough' for traditional networking needs. Cisco wants to own the whole mess so that people will be more afraid to move off.

  • by scientus ( 1357317 ) <instigatorircNO@SPAMgmail.com> on Tuesday March 17, 2009 @03:23AM (#27222217)

    Yeah, both AMD and Intel have direct I/O for virtual machines in their newer chips (AMD in Phenom, though you need an expensive motherboard). Intel, I guess, was later, with these Nehalems.
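    (If you want to check what your own chip exposes, the CPU-side virtualization extensions show up as CPUID flags. A minimal C sketch using GCC's cpuid.h follows; note that the IOMMU needed for direct device I/O, Intel VT-d or AMD-Vi, is a chipset/board feature that does not appear in these flags, which is exactly why the motherboard matters.)

        #include <stdio.h>
        #include <cpuid.h>

        int main(void) {
            unsigned int eax, ebx, ecx, edx;

            /* Intel VT-x: CPUID leaf 1, ECX bit 5 (VMX) */
            if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                printf("VT-x (VMX): %s\n", (ecx & (1u << 5)) ? "yes" : "no");

            /* AMD-V: CPUID leaf 0x80000001, ECX bit 2 (SVM) */
            if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
                printf("AMD-V (SVM): %s\n", (ecx & (1u << 2)) ? "yes" : "no");

            /* Direct device assignment also needs an IOMMU (VT-d / AMD-Vi),
               which is reported via ACPI tables, not via these CPUID bits. */
            return 0;
        }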

  • by NotBornYesterday ( 1093817 ) * on Tuesday March 17, 2009 @09:31AM (#27224191) Journal

    I have to ask: why Nehalem EP Xeons?

    Because if you are a networking hardware vendor looking to crash the server hardware vendors' party, you don't engineer your product with yesterday's technology. Cisco is in a cash position where they could sell this as a loss leader to get their foot in the door, at least until the prices on the components come down as the Nehalem becomes more competitive.

    Not to mention, the cost of the CPU components themselves is a relatively small portion of the overall cost of a system this size. Maybe in a low-end, sub-$2k 1RU server the Nehalem won't make sense right away, but I think it's probably a good fit here. (Rough numbers below.)
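    (To put rough numbers on that claim: all prices here are hypothetical, purely for illustration, not actual Cisco or Intel figures.)

        #include <stdio.h>

        int main(void) {
            /* All figures are assumptions for illustration, not real pricing. */
            double cpu_price    = 1400.0;   /* hypothetical per-socket Nehalem Xeon */
            double cheap_1u     = 2000.0;   /* hypothetical low-end 1RU server */
            double loaded_blade = 20000.0;  /* hypothetical blade + fabric + storage share */

            printf("CPU share of a $%.0f 1RU box:      %.0f%%\n",
                   cheap_1u, 100.0 * cpu_price / cheap_1u);
            printf("2 CPUs' share of a $%.0f system: %.0f%%\n",
                   loaded_blade, 100.0 * 2.0 * cpu_price / loaded_blade);
            return 0;
        }

    With those assumed numbers, the CPU dominates the cheap box (70%) but is a small slice (14%) of a loaded system, which is the point.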

  • by faedle ( 114018 ) on Tuesday March 17, 2009 @10:51AM (#27225225) Homepage Journal

    This.

    I work for a company that resells an application that recommends DoubleTake for hot failover. While it does indeed work, it is an administrative nightmare and very difficult to set up PROPERLY. Plus, failback never works: it's much easier to just fail "forward".

    Now, a lot of the fault for this lies with the application, not DoubleTake. However, the solution always appears brittle, and the cost of "false failovers" is very high.

  • by medelliadegray ( 705137 ) on Tuesday March 17, 2009 @11:29AM (#27225831)

    You don't seem to be too familiar with VMware and its lack of single points of failure when implemented correctly. Sure, something can fail, but everything else should be able to pick up the slack.

    Also, when you're paying $3K per CPU for VMware licenses, another $3K for MS Datacenter licenses, and who knows how much for each license on each virtual server instance... that extra 30 watts you're worried about is NOTHING if you can cram two more virtual servers onto a CPU.
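    (The arithmetic behind that is easy to sketch. The electricity rate and duty cycle below are assumptions; the license figures are the rough per-CPU numbers quoted above.)

        #include <stdio.h>

        int main(void) {
            /* Assumptions: $0.10/kWh, 24x7 operation, 3-year service life. */
            double extra_watts  = 30.0;
            double rate_per_kwh = 0.10;
            double hours        = 24.0 * 365.0 * 3.0;
            double power_cost   = extra_watts / 1000.0 * hours * rate_per_kwh;

            /* Per-CPU licensing from the comment above: ~$3K VMware + ~$3K MS. */
            double license_per_cpu = 3000.0 + 3000.0;

            printf("3-year cost of +30 W:            $%.0f\n", power_cost);
            printf("one per-CPU license stack saved: $%.0f\n", license_per_cpu);
            return 0;
        }

    Under those assumptions the extra power costs about $79 over three years, against thousands of dollars in per-CPU licensing, so the power draw really is lost in the noise.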

  • by JAlexoi ( 1085785 ) on Tuesday March 17, 2009 @02:45PM (#27229637) Homepage
    As they have done before, they will just BUY their way in. They will have competent people. They will have good products. The question is whether they can beat the giants there. Cisco is big and strong, but it is not a mega-giant like HP or IBM or even Fujitsu. Cisco has basically only one revenue stream: networking equipment. For HP and IBM, servers are not their only product, but they are centerpiece product lines. Will Cisco be able to separate their main networking division and its goals from their new server division? Their success depends on that.
  • by Chase_09 ( 1502989 ) on Tuesday March 17, 2009 @08:09PM (#27235191)

    Bias up front: I work for Cisco.

    A couple of comments on your points.

    "IBM and Sun make much more Open systems, able to run a wide variety of vmWare, Linux, Solaris, and even AIX on all sorts of hardware (SPARC, POWER, PPC, all sorts of AMD and Intel x64). Their systems are much more flexible and honestly, much more powerful overall in what can be accomplished."

    - Sun/IBM have more support for various platforms than Cisco at present - yes. Cisco is fully supported by VMware, Microsoft & Red Hat at present. We're in the process of validation for additional platforms (Solaris & other flavors of Linux). This is Cisco's debut - give it some time. IBM & Sun didn't reach the level of open support they have today overnight.

    "HP has much of the HW flexibility of Sun and IBM, plus the leading management tools."

    - A single management instance of UCS can manage 40 chassis, and up to 320 servers. HP supports 4 chassis (64 servers) per management instance. IBM supports only 1 chassis (14 servers). We've partnered with BladeLogic to make a very powerful management system. Our management system handles switching, storage & chassis configuration; IBM/HP have separate points of management for each.

    "Cisco has no clue as to how to run a systems support organization, which, frankly, is considerably different than running a network hardware support organization. The other big three have decades each in doing this kind of thing."

    - True. But we're pretty bright and we learn fast :) That being said, there are areas where the "Big 3" fall short, which we can improve on. If nothing else, it will raise the bar for competition & features among all vendors. Competition should be welcomed into the market; it only drives improvements to technology.

    "Sun in particular has extremely competitive pricing. HP and IBM are slightly more expensive, but nothing compared to the margins Cisco charges. So, exactly WHAT are people going to get for the 20-40% premium Cisco is charging over IBM?"

    - Initial cost is higher than some of the competition. If you make your decision solely on that fact, it would appear that we're inflated. Take into account the consolidation savings, the simplified & increased scope of management, and the ease of server deployment using stateless computing & profiling, and the ROI easily outweighs the initial premium.

    "Overall, this looks like a stupid move. I realize that Cisco needs to look for more revenue streams in the face of the commoditizing of most network gear, but this seems like an '80s solution to a 2010 problem."

    - It probably looked like a stupid move when Cisco entered the switching market too. Understand that Cisco isn't looking to take over the blade market here. The services-oriented design of this system is targeted at specific areas of the market - service providers, financial, etc. We're taking a new approach to deployment, virtualization & stateless computing.

    Cheers,

    Chase

  • by Antique Geekmeister ( 740220 ) on Tuesday March 17, 2009 @09:30PM (#27236053)
    I'm unfortunately familiar with VMware, both Workstation and ESX. ESX has real limitations on what hardware it runs on, and is usually sold as part of a package with that hardware. So what was a modest virtualization environment (a pizza box with a built-in SATA drive and maybe a terabyte of external USB storage, hosting half a dozen OSes on a single server with a dumb SCSI array) suddenly becomes a much more expensive dual-supply system, with a Fibre Channel back end and another server to run the management tools. So yes, it draws more like 100 or even 300 watts more. You have a point that, next to such an expensive license, the power draw may be lost in the noise. But I've been forced to upgrade power supplies, cooling, and other facilities to deal with VMware installations when lighter-weight servers running Xen were cheaper for licenses _and_ a lot cheaper for the physical facilities, especially where I've successfully talked facilities out of using very expensive dual-homed SCSI, Fibre Channel, or dedicated iSCSI hardware and simply provided 500GB external USB drives for large-scale storage, moving the connectors when needed.
