Technology

RLX Gets Denser

A reader writes: "There's a story about RLX Technologies shrinking their "blade" servers on Linuxgram." Knowing how much we pay for our "floor space" at the colo, the notion of having multiple blade machines is pretty cool - and shrinking this to a 1U form factor with 6 blades of the Transmeta Crusoe 5800 line of chips is pretty cool.
This discussion has been archived. No new comments can be posted.

  • 404.... (Score:2, Funny)

    by decaying ( 227107 )

    there is a missing link...... goodbye.

  • Comment removed based on user account deletion
  • the page is broken... link verification would be a good thing
  • by ekrout ( 139379 ) on Thursday November 08, 2001 @09:20PM (#2541390) Journal
    Hemos: ...the Transmeta Crusoe 5800 chip is pretty cool...

    No, it really isn't. As much as we all wanted Transmeta to kick some royal ass in the chip market, they haven't, and may throw in the towel very soon. Here's a quote from an article posted here on Slashdot less than a week ago, I believe:

    Meanwhile, Transmeta was courting Taiwan Semiconductor Manufacturing Co. to produce its next chip, the Crusoe 5800. IBM had been making the chips, but Transmeta wanted a lower-cost manufacturer. In February, Transmeta struck an exclusive deal with TSMC. But the switch didn't end the delays. Samples of the 5800 chip that Toshiba received had problems, which seemed destined to push the project to November and prompted Toshiba to kill the notebook for the U.S. market. "We'd get products and then find an anomaly. You can put in a workaround but the only way to fix it is through silicon," said Steve Andler, Toshiba's vice president of marketing. Before he was forced out last month, Transmeta CEO Mark Allen said the company was still completing "long-term operating life" tests on the 5800. Sources familiar with the situation said that some of the problems stemmed from the complex design of the chip as well as from Transmeta's testing procedures, which were not weeding out inadequate chips but were giving the company an early, erroneous impression of success. Others, however, blamed TSMC's manufacturing processes. Early on, many of the faulty chips consistently came from the same section of the wafer, which sources said indicated a manufacturing flaw. Normally tight-lipped TSMC blames the 5800's design.

    So, Hemos, I'm not sure if you have actual experience with a 5X00 Transmeta chip, but from what I read, they're nothing to brag about.

    I'm as pissed-off about this as any of you, but the truth is the truth.

    • He didn't say that the Transmeta chip was cool; he said it was cool that you could pack 6 machines in 1 rack unit.
    • by jandrese ( 485 ) <kensama@vt.edu> on Thursday November 08, 2001 @11:24PM (#2541832) Homepage Journal
      One thing that is overlooked in rackmount (especially 1U) servers is power drain. Computer centers don't have unlimited power (in fact, since you have to both power the equipment AND power the refrigeration units that draw the heat from your machines out of the CC, the power requirements of your servers become something of an issue). The ability to run dozens of servers in a rack without creating a power situation is a big bonus for computing centers.

      One of the big trends in computing recently has been for servers to grow smaller and consume lots more power. Just look at your average P4 or Athlon compared to the old Pentiums and Pentium Pros. It's getting so bad that PCs are drawing more power than even energy-hogging Alpha-based machines in some cases. This problem is compounded by everybody putting servers in the smallest boxes possible, 1U frequently, so that these energy hogs can be stacked all the way to the top of the rack and draw an enormous amount of power from the circuit designated for that tile.
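      To put rough numbers on that (a back-of-the-envelope sketch; the circuit rating, derating factor and per-box wattage here are illustrative ballparks, not figures from the article):

        #!/usr/bin/perl -w
        # rackpower.pl -- compare a rack's power budget to the space available in it
        use strict;

        my $volts        = 120;   # a typical US colo circuit
        my $amps         = 20;    # a common per-rack breaker
        my $derate       = 0.8;   # 80% continuous-load rule of thumb
        my $watts_per_1u = 200;   # rough draw for a 1U P4/Athlon box

        my $budget = $volts * $amps * $derate;   # usable watts on the circuit
        printf "usable budget: %dW -- roughly %d 1U boxes, not the 42 the rack can physically hold\n",
               $budget, int($budget / $watts_per_1u);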
    • please (Score:3, Interesting)

      by ddt ( 14627 )
      Oh, stop it. You sound like a gossiping fishwife, piecing incomplete data from speculative press into silly conclusions. "The truth is the truth." Give me a break.

      The 5800 is the 5600 at TSMC's 0.13u fab instead of IBM's 0.18u fab. I agree with Hemos that the 5800 is cool. And if the 5800 is cool, it's only because it's identical to the 5600, and the 5500 and 5400 are the exact same design with half the L2 disabled (less L2 => less die area => better yield => cheaper parts). It's all the same chip design. Any claims that the 5800's design is unmanufacturable just don't wash with me. IBM manufactured the same parts just fine at 0.18u.

      Transmeta's mistakes are myriad, but Transmeta's accomplishments are unbelievable. The press keeps covering them because they have an incredible technology and it's a David vs. Goliath story.

      Yeah, the 5x00, at full bore, is probably only 60% as fast (depends very much on the load) as an Intel part at that clock-speed, but Transmeta with its 418 employees beat out AMD with its 14,400 employees in the ultra-light mobile space, and Intel with the bloat and burn of 86,100 employees may be next.

      Transmeta has only had product in the channel for just over a year. 60% isn't half bad, and they are learning at an absurd rate. Intel has been at this over 20 years, and with all that experience, even they make some terrific blunders, like RAMBUS and the Itanium.

      I am an avid fan of x86 microarchitectures. There are only two out there which I consider interesting, bold, and the kinds of designs which will take us to the next stage of microprocessor development, the P4 and Crusoe. Which is more advanced and has a more promising future for growth? Crusoe, definitely.

      The P4 with its unscheduled trace cache is like Crusoe's inbred second cousin. The P4 lacks a sensible place to put its translations, has to reschedule its translations on every execution, has not the brains to apply even a fraction of the advanced CMS optimization techniques, and all that comes at a big cost in transistor count, yield, and heat.

      It is clear that virtually every Intel engineer knew the P4 architecture was a big gamble with its much lower decoder bandwidth, puny L1 cache, huge branch penalties, and deeply retarded choice of RAMBUS. When the P4 came out, it was slower than the P3. The Itanium was way slower. That has never been true of a single release, internal or external, at Transmeta.

      Even though it is changing radically, no one at Transmeta thinks the next-generation Crusoe architecture is much of a gamble. Viewed from virtually every angle, it's an almost undeniable improvement over the p95 architecture.

      The challenge for Transmeta will be finding a manufacturing partner that can keep up with Intel's undeniably strong track record in advancing process technology. Intel is pushing the process and MHz hard, but their microarchitectures are going nowhere fast.
  • Some RLX caveats (Score:5, Informative)

    by Wee ( 17189 ) on Thursday November 08, 2001 @09:28PM (#2541417)
    We played with an eval of the RLX deal in April. They were very nice. There were a couple of issues with them. Not show-stoppers, but definitely things to think about before deploying them, and which make them inappropriate in some situations. You'd have to do some working around if you intended to replace "real" servers. Imagine an IBM Thinkpad as a server and you get an idea of what you'd need to do. Things we found:
    1. They "slept". I had a server monitoring process which listened on a TCP port. If you didn't connect in a while (seemed like more than a couple hours, but we never timed it), a connection would take like 30 seconds. We figured out that the blade went into a suspend-like mode during periods of inactivity. Since we were looking at these as some sort of low-end round robin-ish thing, that wasn't a desireable feature. Probably could have been adjusted/conf'ed not to happen, but something to think about.
    2. And that brings me to the other caveat: they have weaselly little IBM microdrives. They're IDE, slow, and completely ill-suited for use in a server. However, you could easily boot off the network (or even the tiny drives; it has slots for two of them, so you could set up failover or something, I guess) and then attach a NAS deal to it. But you could probably never get by with just the stock drives in anything but an extremely low-traffic situation.

    If you had a shared server web hosting company, could spring for the net storage, and didn't mind a tweak or three, then you could probably host quite a few customers in a quarter rack as opposed to a full rack (the power savings alone would let you pack them in). Throw in another full-powered app/db server and you'd be golden. All in all, the RLX is very interesting in the right applications. Clustering is another possibly appropriate role, now that I think about it (no, I did not say the "B" word).

    One more probably uninteresting note: The eval unit we had came with Debian pre-installed. The newer ones have Red Hat (and Win2K, I think). So if you only do Debian, you could probably get the older images and stick 'em on there. Might have to waive support or something though...

    -B

    • no, I did not say the "B" word

      You didn't say it, but you did imagine it.

    • We are the borg. You will be assimilated. We will add your technological and biological uniqueness to our own. We are what you would call the ultimate Beowulf Cluster. We will mount your puny internet into our collective. Except for anything that deals with France. Bad things come from France. Words with too many vowels, bad accents, the coneheads, and that big metal phallic symbol.

      We are the borg. You will be assimilated. Unless you are French.
    • The problem with the IDE drives is what really turned me off of the whole RLX idea. Until they come up with something better I won't consider them. Maybe something like the option for an external scsi/fibrechannel drive bay?
    • Re:Some RLX caveats (Score:4, Informative)

      by taliver ( 174409 ) on Thursday November 08, 2001 @10:46PM (#2541701)
      They "slept".

      We also just worked up a cluster of these, and I dealt with the people there (also very positive experience). We asked about the ability to power cycle and send the blades into and out of sleep mode.

      The guy there told me that they had fixed a problem with the sleep state recently (I was talking to him July/August), and that there were some other issues with power cycling they were working with.
      • Re:Some RLX caveats (Score:3, Interesting)

        by Wee ( 17189 )
        I dealt with the people there (also very positive experience).

        Yes, I forgot to mention that part. The rep (who was actually a director of engineering or something, I think) who we dealt with was extremely refreshing. He answered all our idiotic questions, listened to our suggestions, and worked with our guys. RLX were very nice to deal with. I was sorta sad we never hooked up with them business-wise (our business was just starting to move away from smaller web hosting type customers -- which is who we were eval'ing for in the first place).

        The guy there told me that they had fixed a problem with the sleep state recently

        Ahhhh, so it wasn't a setting so much as an unintended feature. We had wondered about that. My bet was on setting. :-) I bodged up a couple of perl scripts which kept them all alive (and sent back host data while they were at it). Kind of a hack, but it worked (something like the sketch at the end of this comment).

        There were some other issues with power cycling they were working with.

        Hmmmm. That we didn't see. Not so good to have some of the blades bounce themselves...

        Actually, I really want a set of blades for home. I work with lots of OSes, and one RLX and a decent KVM would make working from home very nice. -B
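        The keepalive sketch mentioned above, for the curious -- a minimal illustration only, with invented blade hostnames and an invented monitor port (the real scripts also pushed host stats back):

          #!/usr/bin/perl -w
          # keepalive.pl -- poke each blade's monitor port so nothing idles into suspend
          use strict;
          use IO::Socket::INET;

          my @blades = map { sprintf "blade%02d", $_ } 1 .. 24;   # invented hostnames
          my $port   = 8888;                                      # invented monitor port

          while (1) {
              for my $host (@blades) {
                  my $sock = IO::Socket::INET->new(PeerAddr => $host, PeerPort => $port, Timeout => 5);
                  if ($sock) { close $sock } else { warn "no answer from $host: $!\n" }
              }
              sleep 600;   # a poke every ten minutes is plenty to keep them awake
          }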

        • I was thinking that a little rack of RLX's would make a nice "load generator" for stress-testing a given cluster configuration. If you have 24 clients pumping requests as fast as they can, then depending on the service you should be able to flood a few machines with that (a rough sketch of such a client follows below).

          Maybe RLX should look into providing this as an alternative service. "Need 1000 clients to pound your system on a local network? Call us and we'll drive over with a van and a cable to hook into your system."
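          A rough sketch of that kind of flood client (the target URL and client count are placeholders; Perl and LWP used purely for illustration):

            #!/usr/bin/perl -w
            # flood.pl -- fork N clients that each request a URL as fast as possible
            use strict;
            use LWP::Simple;

            my $url     = shift || 'http://target.example.com/';   # placeholder target
            my $clients = shift || 24;                             # e.g. one client per blade

            for (1 .. $clients) {
                next if fork;          # parent keeps forking
                get($url) while 1;     # each child hammers the target until killed
            }
            sleep;                     # parent hangs around; Ctrl-C takes the whole lot down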
    • It'd be nice if they had fiber channel HBAs built into the blades. That would fix the IDE problem.
    • They use 2.5" notebook drives, not Microdrives.
  • by dave-fu ( 86011 ) on Thursday November 08, 2001 @09:29PM (#2541418) Homepage Journal
    Nifty that Transmeta's finally (belatedly?) living up to its hype by showing that they can stack mo' CPUs into a smaller space as they run so much cooler... but how much computing power (bus bandwidth and everything) does a Transmeta have when compared to an Intel/Sun/whoever solution? Does it stack (no pun intended) up?
  • Tandy RLX? (Score:4, Funny)

    by mmol_6453 ( 231450 ) <short DOT circui ... OT grnet DOT com> on Thursday November 08, 2001 @09:32PM (#2541432) Homepage Journal
    Is this RLX related to my old Tandy RLX 1000 I got from Radio Shack ten years ago?

    The RS sales guy made my parents bring it back to prove I had installed Win3.1 on it. They kept saying it was impossible.

    Well, it did take up 16MB out of a 20MB hard drive...

    That was also the same technician that told my mom that I'd "never, ever need more than a meg of RAM."
    • Whoops...didn't catch it in 'preview' ... When I said 'technician', I meant 'salesman'
    • I remember getting a visit from a local Microsoft sales guy, because I'd managed to get Windows 3.1 (maybe it was 3.0 - not sure) to run on the school's 286s. Just to add insult to injury, I'd also had to run DoubleSpace on the hard drive to have enough room for apps ... slow as hell, but it ran ... well, crawled.
  • by man_ls ( 248470 ) on Thursday November 08, 2001 @10:02PM (#2541525)
    Maybe this would be a good market for the metaclustering described a few days ago: splitting the Linux OS into a user-runnable process, and having many "virtual" servers on one piece of physical hardware. For a hosting company, the lower power requirements of the RLX, the lower space, and the lower cost of the Transmeta hardware might make this an attractive option, especially if they are inclined to do metaclustering.
  • Why are these guys depending on a Transmeta emulation of an Intel processor? If you want to run an NT/2K OS then maybe this is a good reason, but Linux/FreeBSD/??? are a better solution for a server farm, and especially with rlogin capabilities they would be that much easier to maintain. Take a StrongARM or other processor, maybe one of the newer network processors (hopefully from one of those companies that hasn't died in the past few months) that is a fast MIPS (it doesn't have to be MIPS, but networking folks seem to prefer, or at least be used to, the MIPS architecture), and build a server around that. These network processors include on-chip DRAM support (no northbridge/southbridge chips required), have on-chip 10/100 support (often at least 2 ports), boot from ROM, have low-power modes, etc... everything that you want in a server that doesn't exist in the standard PC architecture.

    PCs are useful as commodity platforms, and that makes life easy, but it also brings a lot of headaches and overhead that aren't appropriate to the dense server market. PMC-Sierra has a single-chip device (RM9000x2) that has 2, count 'em, 2 MIPS cores running at 1GHz each. I think each is dual-issue superscalar (IIRC), so something like 4 billion instructions/sec, and not the lame old x86 architecture. They support DDR SDRAM on chip and have L2 cache on chip, so with not much more than a couple of DDR DIMMs, this chip, ethernet and a few other doodads, you have kick-ass performance well beyond what Transmeta could do. Maybe even this single chip could replace the 6 server boards, still have better system performance, and be considerably smaller than the PC-compatible boards discussed here.

    If you want a dense server, build a dense server; don't make small PCs and hope for the best. It seems incredible to me that people seem to have rolled over and died and accepted the status quo of Wintel, and don't seem interested in pushing the envelope even in competitive markets with the economy down. The processor only uses about 5W and you could easily put 10 of them in a 1U chassis; the disk drives would take significantly more power anyway, which is one of the problems that I see with the Transmeta solution. They do have a great technology, but in general the DRAM, disk and backlighting (if a laptop) each use more power than the CPU, so the end benefit is lost or not noticeable. You need to rethink the architecture and move away from the legacy PC to a streamlined solution.
    • A reason which is blatantly obvious to those of us who use non-x86 processors in our computers is that software originally written for x86 processors takes a bit of work to port to other processors, and the ports aren't always as functional as the original piece of software. RLX boxes run a slightly customized Linux installation, but for the most part are using off-the-shelf GNU software. Were a company to come out with boxes using another architecture, they'd have to invest a lot of money into making sure their software was up to snuff on their hardware. Transmeta and those using Transmeta chips rounded the corners by making their product compatible with a shitload of software already existing.
  • I saw a KVM over IP switch at an enterprise vendor expo.

    Imagine having a KVM switch running something like VNC on it that would let you control the switch by software and also allow you to configure the BIOS and preboot stuff over the internet.

    Hell, you could even do it wirelessly! I was blown away by the whole thing.

    The one problem with it, though, is that they're giving away a free flat panel monitor with it. Something expensive enough that a flat panel becomes schwag is too expensive for me!
  • RLX is solid gear (Score:5, Informative)

    by asah ( 268024 ) on Thursday November 08, 2001 @10:13PM (#2541560) Homepage


    I work at an ISV building an MPP application, and we started eval'ing the RLX 324 back in the summer, and have had 100% success with them: in a nutshell, each blade is about half the performance and half the price of our 1U servers. Overall, the blades are nicely "balanced" in terms of performance.

    The claims about density, manageability etc. are all true (divided by 2, i.e. a comfortable margin). Beyond sheer density, with bladed servers you can deliver scalable apps in a single box, which removes the big objection that they're "hard to manage".

    We wrote our own cluster admin tools (perl) and run on redhat 7.1, which they'll pre-install, so it's been cake -- but this also means that we didn't try out their management tools. (For the general flavor, see the sketch after this comment.)

    Having been burned by other vendors on 1U boxes, esp. heat/vibration causing reliability problems, I've been doubly pleased that the RLX gear hasn't had any problems -- exactly the sort of stuff you'd expect from former Compaq execs... but without all the proprietary crap. As a test, we reinstalled redhat from the retail CDs, and it just worked.
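    For the general flavor, a trivial run-everywhere wrapper along those lines (a sketch only -- the blade hostnames are invented, and the poster's real tools presumably do a lot more):

      #!/usr/bin/perl -w
      # bladerun.pl -- run one command on every blade over ssh
      use strict;

      my @blades = map { sprintf "blade%02d", $_ } 1 .. 24;   # invented hostnames
      my $cmd = join ' ', @ARGV;
      die "usage: bladerun.pl <command>\n" unless $cmd;

      for my $host (@blades) {
          print "== $host ==\n";
          system('ssh', $host, $cmd) == 0 or warn "$host: command failed\n";
      }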
  • RLX Gets Denser

    And Leon is getting laaaaaaaaaaaarger! [freeservers.com]

  • Knowing how much we pay for our "floor space" at the colo, the notion of having multiple blade machines is pretty cool.

    But as soon as everyone starts using less floor space, the colo will need to increase the price per unit of floor space - floor space is often used as an accounting allocation unit, with the tough-to-measure costs, like power, AC use, security, and some personnel costs, split by floor and rack space. Those costs aren't going away - they will need to be split among a smaller number of units. (Not to mention that in the non-expansion dot.bomb era, the capital used for the building is pretty much a sunk cost.)
  • If your colocation space requirements drop, they will need to remain profitable somehow... my guess is simply that colocation prices will increase.
  • Their older model had 24 blades in 3U and now they have 6 blades in 1U. How is this denser?
  • The old format was 24 blades in a 3U chassis, which works out to 336 blades per standard 42U rack. The new format is 6 blades per 1U chassis, which works out to 252 blades per rack. Maybe it's a more appealing form factor, but it's not denser.

  • Can you imagine a rack's worth of these running virtual Beowulf clusters?

    Seriously:

    200 servers per rack. Let's say 2 virtual nodes per box, that's a 400 node cluster in a rack! Bammmmmm!!!!

    And with a NAS on the backend you could use 200 small unique root partitions, and share just about everything else for content and common binaries (roughly the layout sketched at the end of this comment).

    "quote from article" Richeson said the architecture provides for clustering - Racemi expects to be popular in Beowulf circles - and fault tolerance - a spoiled blade falls over to its mates and can be hot-swapped out - and reliability - there are no moving parts except the two fans per blade. "/quote from article"

    "off topic, but I hope you agree"

    I have always wished for something different, though. I use a standard desktop environment: GUIs, an OS, storage, connectivity, input and output devices... But as I upgrade my machine I am tied to the hardware that I can fit into one box. I wish I could just add some new stuff to my desktop without being limited to one machine.

    I know client/server does all this, but I want to just add a machine to my "desktop" and get better performance and capacity, etc.

    What I really want is this: A desktop environment which behaves like a single machine. A desktop which uses local and remote resources seamlessly. A desktop which I can add processing-power/storage/etc to by simply turning up another box and adding it to my environment.

    [#mydesktopconfig
    groupmembership=yes
    headless=yes
    stealconfighost=10.10.10.3
    stealhostauth=""
    #cpu-power - auto will try to find cpu speed automatically and guess rating (1-100000)
    cpu-power=233
    #netcon - auto will try to rate network speed automatically
    netcon=125
    continent=africa
    locale=921
    #remotestatehost=
    maintainlocalstate=true
    #insertfakesettinghere
    all_your_cpu_are_belong_to_us=true
    ]


    Then I would be happy. I could always buy another machine and ADD it to my current config, gaining everything I bought...

    "/end off topic"

  • "RLX Shrinks Dense Server" --- Headline

    They aren't trying to make it denser; they are trying to sell more servers...

    "Blade pioneer RLX Technologies, saying that its initial product was too big for most people, has cut its 3U 24-blade dense server down to 1U that houses only six blades.

    It figures the new second-generation product, called simply the RLX System 1U and a more popular form factor than the 3U, will move in greater volumes than its predecessor and sales of course are exactly what the struggling start-up that has gone through two layoffs in the last few months needs right now."
  • How is 6 servers in 1U denser than their previous 24 servers in 3U? Anyone? Anyone?
  • But now even Hemos doesn't read the articles anymore to check up on them?

    geez.
  • They charge you a specific rate: by rack, by foot if you require the caged locations, and finally per rack-U if you are only using a few rack units.

    If I can fit 72 servers in a regular floor rack then I would buy 1 rack... and if I need, say, 60 webservers, two switches and 2 load balancers, I really couldn't care less about how much co-lo floor space costs... I care about being raped on bandwidth, as I am probably paying for a DS3.

    Floor space cost is the lowest expense; it's like switching toilet paper brands to save money - you feel good when you save money, but bitch about it when you actually have to use it.

    Besides, IDE-based servers are not for mission-critical uses, and I guess that a website really isn't mission critical, but why aren't there 1U racks using 2.5" SCSI hard drives? Why don't we have 20GB U160 SCSI drives?

    Oh well, I'll stick with my custom built 4U servers.
  • Is shrinking equipment like this actually useful to most companies? I'm curious, because my prior employer had lots of rack space, but the colo site only had ~20 amps per rack available. With 9' racks, it was very easy to run out of power BEFORE space was an issue. They (above.net) claimed their colo was designed for 6-8U "servers", not 1U "pizza boxes".

    Aside from power, how about heat dissipation? After all, a bunch of laptops theoretically would make EXCELLENT servers, since they even have 1-2 hour battery backups built right in, are extra small, and don't even require a KVM (they could just be pulled out as needed). BUT, a laptop runs waaaaaaay too hot to be used as a server, and their price/performance ratio is awful, even if you do save on rack space.

    Finally, do these have any redundancy built in? Is there anything else special about them? Personally, I'd rather have a 1U rack unit which is dual CPU, dual NIC, and dual power supply than two of these. It would probably be cheaper, last longer, and be less hassle.

    Just my $0.02 from my past colo experience.
