RLX Gets Denser 74
A reader writes: "There's a story on Linuxgram about RLX Technologies shrinking their "blade" servers." Knowing how much we pay for our "floor space" at the colo, the notion of packing multiple blade machines into one box is pretty cool - and shrinking this to a 1U form factor with six blades of the Transmeta Crusoe 5800 line of chips is even better.
Re: (Score:1)
404.... (Score:2, Funny)
there is a missing link...... goodbye.
Correct link (Score:5, Informative)
Re:Correct link (Score:3, Informative)
(with kh being karma whore, of course!)
Re:Correct link (Score:2)
JOhn
Re: (Score:1)
404 error (Score:1)
Crusoe Schmuso (Yeah, I know I'll get flamed...) (Score:4, Redundant)
No, it really isn't. As much as we all wanted Transmeta to kick some royal ass in the chip market, they haven't, and may throw in the towel very soon. Here's a quote from an article posted here on Slashdot less than a week ago, I believe:
Meanwhile, Transmeta was courting Taiwan Semiconductor Manufacturing Co. to produce its next chip, the Crusoe 5800. IBM had been making the chips, but Transmeta wanted a lower-cost manufacturer. In February, Transmeta struck an exclusive deal with TSMC. But the switch didn't end the delays. Samples of the 5800 chip that Toshiba received had problems, which seemed destined to push the project to November and prompted Toshiba to kill the notebook for the U.S. market. "We'd get products and then find an anomaly. You can put in a workaround but the only way to fix it is through silicon," said Steve Andler, Toshiba's vice president of marketing. Before he was forced out last month, Transmeta CEO Mark Allen said the company was still completing "long-term operating life" tests on the 5800. Sources familiar with the situation said that some of the problems stemmed from the complex design of the chip as well as from Transmeta's testing procedures, which were not weeding out inadequate chips but were giving the company an early, erroneous impression of success. Others, however, blamed TSMC's manufacturing processes. Early on, many of the faulty chips consistently came from the same section of the wafer, which sources said indicated a manufacturing flaw. Normally tight-lipped TSMC blames the 5800's design.
So, Hemos, I'm not sure if you have actual experience with a 5X00 Transmeta chip, but from what I read, they're nothing to brag about.
I'm as pissed-off about this as any of you, but the truth is the truth.
Re:Crusoe Schmuso (Yeah, I know I'll get flamed... (Score:1)
Re:Crusoe Schmuso (Yeah, I know I'll get flamed... (Score:4, Informative)
One of the big trends in computing recently has been for servers to grow smaller while consuming lots more power. Just look at your average P4 or Athlon compared to the old Pentiums and Pentium Pros. It's getting so bad that PCs are drawing more power than even energy-hogging Alpha-based machines in some cases. The problem is compounded by everybody putting servers in the smallest boxes possible, frequently 1U, so that these energy hogs can be stacked all the way to the top of the rack and draw an enormous amount of power from the circuit designated for that floor tile.
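To see why that matters, here is a rough back-of-the-envelope comparison. Every figure below is an assumption for illustration - not an RLX or Intel spec.

# Back-of-the-envelope rack power comparison. All figures are assumed
# for illustration, not taken from RLX or Intel data sheets.
PC_SERVERS_PER_RACK = 42          # one 1U box per U in a full rack
WATTS_PER_PC_SERVER = 250         # assumed draw for a P4/Athlon 1U server
BLADES_PER_RACK = 42 * 6          # six Crusoe blades per 1U chassis
WATTS_PER_BLADE = 15              # assumed draw per blade
CIRCUIT_WATTS = 20 * 110          # a nominal 20A / 110V circuit

pc_watts = PC_SERVERS_PER_RACK * WATTS_PER_PC_SERVER
blade_watts = BLADES_PER_RACK * WATTS_PER_BLADE

print("Rack of 1U PCs: %5d W (%.1f circuits)" % (pc_watts, pc_watts / CIRCUIT_WATTS))
print("Rack of blades: %5d W (%.1f circuits)" % (blade_watts, blade_watts / CIRCUIT_WATTS))

With those assumed numbers a rack of 1U PCs wants several circuits while a rack of blades fits comfortably on one or two - which is the whole point of the low-power Crusoe approach.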
please (Score:3, Interesting)
The 5800 is the 5600 at TSMC's 0.13u fab instead of IBM's 0.18u fab. I agree with Hemos that the 5800 is cool. And if the 5800 is cool, it's only because it's identical to the 5600, and the 5500 and 5400 are the exact same design with half the L2 disabled (less L2 => less die area => better yield => cheaper parts). It's all the same chip design. Any claims that the 5800's design is unmanufacturable just don't wash with me. IBM manufactured the same parts just fine at 0.18u.
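To put rough numbers on the "less die area => better yield" chain: with the standard exponential defect model, even a modest reduction in die area buys a noticeable yield bump. The defect density and die areas below are made-up illustrative figures, not Transmeta or TSMC numbers.

import math

# Simple exponential (Poisson) yield model: yield = exp(-defect_density * area).
# Defect density and die areas are illustrative assumptions only.
DEFECT_DENSITY = 0.5          # defects per cm^2

dies = {"full-L2 die": 0.80,  # cm^2
        "half-L2 die": 0.60}  # cm^2

for name, area in dies.items():
    y = math.exp(-DEFECT_DENSITY * area)
    print("%s: %.2f cm^2 -> %.0f%% yield" % (name, area, y * 100))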
Transmeta's mistakes are myriad, but Transmeta's accomplishments are unbelievable. The press keeps covering them because they have an incredible technology and it's a David vs. Goliath story.
Yeah, the 5x00, at full bore, is probably only 60% as fast (depends very much on the load) as an Intel part at that clock-speed, but Transmeta with its 418 employees beat out AMD with its 14,400 employees in the ultra-light mobile space, and Intel with the bloat and burn of 86,100 employees may be next.
Transmeta has only had product in the channel for just over a year. 60% isn't half bad, and they are learning at an absurd rate. Intel has been at this over 20 years, and with all that experience, even they make some terrific blunders, like RAMBUS and the Itanium.
I am an avid fan of x86 microarchitectures. There are only two out there which I consider interesting, bold, and the kinds of designs which will take us to the next stage of microprocessor development, the P4 and Crusoe. Which is more advanced and has a more promising future for growth? Crusoe, definitely.
The P4 with its unscheduled trace cache is like Crusoe's inbred second cousin. The P4 lacks a sensible place to put its translations, has to reschedule its translations on every execution, has not the brains to apply even a fraction of the advanced CMS optimization techniques, and all that comes at a big cost in transistor count, yield, and heat.
It is clear that virtually every Intel engineer knew the P4 architecture was a big gamble with its much lower decoder bandwidth, puny L1 cache, huge branch penalties, and deeply retarded choice of RAMBUS. When the P4 came out, it was slower than the P3. The Itanium was way slower. That has never been true of a single release, internal or external, at Transmeta.
Even though it is changing radically, no one at Transmeta thinks the next-generation Crusoe architecture is much of a gamble. Viewed from virtually every angle, it's an almost undeniable improvement over the p95 architecture.
The challenge for Transmeta will be finding a manufacturing partner that can keep up with Intel's undeniably strong track record in advancing process technology. Intel is pushing the process and MHz hard, but their microarchitectures are going nowhere fast.
Some RLX caveats (Score:5, Informative)
If you had a shared server web hosting company, could spring for the net storage, and didn't mind a tweak or three, then you could probably host quite a few customers in a quarter rack as opposed to a full rack (the power savings alone would let you pack them in). Throw in another full-powered app/db server and you'd be golden. All in all, the RLX is very interesting in the right applications. Clustering is another possibly appropriate role, now that I think about it (no, I did not say the "B" word).
One more probably uninteresting note: The eval unit we had came with Debian pre-installed. The newer ones have Red Hat (and Win2K, I think). So if you only do Debian, you could probably get the older images and stick 'em on there. Might have to waive support or something though...
-B
Re:Some RLX caveats (Score:3, Funny)
You didn't say it, but you did imagine it.
Re:Some RLX caveats -- bastard Frenchmen (Score:1)
We are the borg. You will be assimilated. Unless you are French.
Re:Some RLX caveats (Score:1)
Re:Some RLX caveats (Score:4, Informative)
We also just worked up a cluster of these, and I dealt with the people there (also very positive experience). We asked about the ability to power cycle and send the blades into and out of sleep mode.
The guy there told me that they had fixed a problem with the sleep state recently (I was talking to him July/August), and that there were some other issues with power cycling they were working on.
Re:Some RLX caveats (Score:3, Interesting)
Yes, I forgot to mention that part. The rep we dealt with (who was actually a director of engineering or something, I think) was extremely refreshing. He answered all our idiotic questions, listened to our suggestions, and worked with our guys. RLX were very nice to deal with. I was sorta sad we never hooked up with them business-wise (our business was just starting to move away from smaller web hosting type customers -- which is who we were eval'ing for in the first place).
The guy there told me that they had fixed a problem with the sleep state recently
Ahhhh, so it wasn't a setting so much as an unintended feature. We had wondered about that. My bet was on a setting. :-) I bodged up a couple of perl scripts which kept them all alive (and sent back host data while they were at it). Kind of a hack, but it worked.
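For the curious, the keep-alive hack was roughly this shape -- sketched here in Python rather than the original perl, with the hostnames and interval made up for illustration:

#!/usr/bin/env python
# Rough keep-alive poller: sweep every blade with a ping so none of them
# drifts into the flaky sleep state, and note which ones answer. The real
# (perl) version also pulled back a line of host data per blade.
# Hostnames and the sweep interval are invented for illustration.
import subprocess
import time

BLADES = ["blade%02d" % n for n in range(1, 25)]   # invented hostnames
INTERVAL = 60                                       # seconds between sweeps

while True:
    for host in BLADES:
        alive = subprocess.call(
            ["ping", "-c", "1", "-W", "2", host],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0
        print("%s %s" % (host, "alive" if alive else "NO RESPONSE"))
    time.sleep(INTERVAL)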
There were some other issues with power cycling they were working on.
Hmmmm. That we didn't see. Not so good to have some of the blades bounce themselves...
Actually, I really want a set of blades for home. I work with lots of OSes, and one RLX and a decent KVM would make working from home very nice. -B
Re:Some RLX caveats (Score:1)
Maybe RLX should look into providing this as an alternative service. "Need 1000 clients to pound your system on a local network? Call us and we'll drive over with a van and a cable to hook into your system."
Re:Some RLX caveats (Score:2)
Not Microdrives (Score:1, Troll)
What about total power? (Score:4, Insightful)
Tandy RLX? (Score:4, Funny)
The RS sales guy made my parents bring it back to prove I had installed Win3.1 on it. They kept saying it was impossible.
Well, it did take up 16MB out of a 20MB hard drive...
That was also the same technician that told my mom that I'd "never, ever need more than a meg of RAM."
Re:Tandy RLX? (Score:1)
Ahh ... those were the days. (Score:1)
Re:Ahh ... those were the days. (Score:1)
The coolest thing of all remains Yggdrasil (albeit on an SX16 with a Mitsumi 1X CD drive)
Re:Ahh ... those were the days. (Score:1)
Re:Ahh ... those were the days. (Score:1)
Small wonder, then, that HP shut down their calculator division in protest!
MetaBeowulf on RLXes? (Score:3, Interesting)
Why use a PC like architecture??? (Score:1)
Re:Why use a PC like architecture??? (Score:3, Interesting)
Re:Why use a PC like architecture??? (Score:1)
Cool thing I saw today: (Score:1)
Imagine having a KVM switch running something like VNC on it that would let you control the switch by software and also allow you to configure the BIOS and preboot stuff over the internet.
Hell, you could even do it wirelessly! I was blown away by the whole thing.
The one problem with it, though, is that they're giving away a free flat panel monitor with it. Something expensive enough that a flat panel becomes schwag is too expensive for me!
RLX is solid gear (Score:5, Informative)
I work at an ISV building an MPP application. We started eval'ing the RLX 324 back in the summer and have had 100% success with them: in a nutshell, each blade is about half the performance and half the price of our 1U servers. Overall, the blades are nicely "balanced" in terms of performance.

The claims about density, manageability, etc. are all true (divided by 2, i.e. a comfortable margin). Beyond sheer density, with bladed servers you can deliver scalable apps in a single box, which removes the big objection that they're "hard to manage".

We wrote our own cluster admin tools (perl) and run on redhat 7.1, which they'll pre-install, so it's been cake -- but this also means that we didn't try out their management tools.

Having been burned by other vendors on 1U boxes, esp. heat/vibration causing reliability problems, I've been doubly pleased that the RLX gear hasn't had any problems -- exactly the sort of stuff you'd expect from former Compaq execs... but without all the proprietary crap. As a test, we reinstalled redhat from the retail CDs, and it just worked.
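The admin tools themselves are nothing exotic -- essentially "run this command on every blade and collect the output." A minimal sketch of that pattern (Python here rather than our perl; the blade names are invented):

#!/usr/bin/env python
# Minimal "run a command on every blade and collect the output" helper,
# the core of most home-grown cluster admin tools. Blade hostnames are
# invented for illustration.
import subprocess
import sys

BLADES = ["rlx%d-%02d" % (c, s) for c in (1, 2) for s in range(1, 25)]

def run_everywhere(command):
    """Run a shell command on each blade over ssh and report the result."""
    for host in BLADES:
        result = subprocess.run(["ssh", host, command],
                                capture_output=True, text=True)
        status = "ok" if result.returncode == 0 else "rc=%d" % result.returncode
        print("[%s] %s: %s" % (host, status, result.stdout.strip()))

if __name__ == "__main__":
    run_everywhere(" ".join(sys.argv[1:]) or "uname -r")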
RLX Gets Denser (Score:1)
And Leon is getting laaaaaaaaaaaarger! [freeservers.com]
Saving floor space won't necessarily save money (Score:2, Insightful)
But as soon as everyone starts using less floor space, the colo will need to increase the price per unit of floor space - floor space is often used as an accounting allocation unit, with the tough-to-measure costs, like power, A/C use, security, and some personnel costs, split by floor and rack space. Those costs aren't going away - they will need to be split among a smaller number of units. (Not to mention that in the non-expansion dot.bomb era, the capital used for the building is pretty much a sunk cost.)
Yeah, but... will prices really drop? (Score:1)
Denser? (Score:2)
Do the math (Score:2)
The old format was 24 blades in a 3U chassis, which works out to 336 blades per standard 42U rack. The new format is 6 blades per 1U chassis, which works out to 252 blades per rack. Maybe it's a more appealing form factor, but it's not denser.
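Spelled out (assuming the standard 42U rack):

# Blades per standard 42U rack, old vs. new RLX form factor.
RACK_UNITS = 42

old_per_rack = (RACK_UNITS // 3) * 24   # 3U chassis, 24 blades -> 14 * 24 = 336
new_per_rack = (RACK_UNITS // 1) * 6    # 1U chassis,  6 blades -> 42 *  6 = 252

print("3U/24-blade chassis: %d blades per rack" % old_per_rack)
print("1U/6-blade chassis:  %d blades per rack" % new_per_rack)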
Can you imagine? (Score:1)
Seriously:
200 servers per rack. Let's say 2 virtual nodes per box, that's a 400 node cluster in a rack! Bammmmmm!!!!
And with a NAS on the backend you could use 200 small unique root partitions, and share just about everything else for content and common binaries.
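A rough sketch of how the NAS side of that might look: one small read-write root export per node, plus shared read-only exports for content and common binaries. Hostnames and paths are invented for illustration.

# Generate an illustrative /etc/exports for the NAS: per-node roots plus
# shared read-only exports. All names and paths are made up.
NODES = 200

print("# /etc/exports on the NAS")
for n in range(1, NODES + 1):
    print("/exports/roots/node%03d  node%03d(rw,no_root_squash,sync)" % (n, n))
print("/exports/shared   node*(ro,sync)   # common binaries")
print("/exports/content  node*(ro,sync)   # web content")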
"quote from article" Richeson said the architecture provides for clustering - Racemi expects to be popular in Beowulf circles - and fault tolerance - a spoiled blade falls over to its mates and can be hot-swapped out - and reliability - there are no moving parts except the two fans per blade. "/quote from article"
"off topic, but I hope you agree"
I have always wished for something different though. I use a standard desktop environment: GUIs, an OS, storage, connectivity, input and output devices... But as I upgrade my machine I am tied to the hardware that I can fit into one box. I wish I could just add some new stuff to my desktop without being limited to one machine.
I know client/server does all this, but I want to just add a machine to my "desktop" and get better performance, capacity, etc.
What I really want is this: A desktop environment which behaves like a single machine. A desktop which uses local and remote resources seamlessly. A desktop which I can add processing-power/storage/etc to by simply turning up another box and adding it to my environment.
[#mydesktopconfig
groupmembership=yes
headless=yes
stealconfighost=10.10.10.3
stealhostauth=""
#cpu-power - auto will try to find cpu speed automatically and guess rating (1-100000)
cpu-power=233
#netcon - auto will try to rate network speed automatically
netcon=125
continent=africa
locale=921
#remotestatehost=
maintainlocalstate=true
#insertfakesettinghere
all_your_cpu_are_belong_to_us=true
]
Then I would be happy. I could always buy another machine and ADD it to my current config, gaining everything I bought...
"/end off topic"
Who ever said they were making it denser? (Score:2, Informative)
They aren't trying to make it denser; they are trying to sell more servers...
"Blade pioneer RLX Technologies, saying that its initial product was too big for most people, has cut its 3U 24-blade dense server down to 1U that houses only six blades.
It figures the new second-generation product, called simply the RLX System 1U and a more popular form factor than the 3U, will move in greater volumes than its predecessor and sales of course are exactly what the struggling start-up that has gone through two layoffs in the last few months needs right now."
Re:Who ever said they were making it denser? (Score:1)
*confused look*
This is not denser (Score:1)
I know it's bad on /. (Score:1)
geez.
not much of a change for co-lo costs (Score:2)
If I can fit 72 servers in a regular floor rack, then I would buy one rack... and if I need, say, 60 webservers...
Floor space is the lowest expense; it's like switching toilet paper brands to save money: you feel good when you save, but bitch about it when you actually have to use it.
Besides, IDE-based servers are not for mission-critical uses, and I guess a website really isn't mission critical, but why aren't there 1U servers using 2.5" SCSI hard drives? Why don't we have 20GB U160 SCSI drives?
Oh well, I'll stick with my custom built 4U servers.
Is this actually useful? (Score:2, Insightful)
Aside from power, how about heat dissipation? After all, a bunch of laptops theoretically would make EXCELLENT servers, since they even have 1-2 hour battery backups built right in, are extra small, and don't even require a KVM (they could just be pulled out as needed). BUT, a laptop runs waaaaaaay too hot to be used as a server, and their price/performance ratio is awful, even if you do save on rack space.
Finally, do these have any redundancy built in? Is there anything else special about them? Personally, I'd rather have a 1U rack unit which is dual CPU, dual NIC, and dual power supply than two of these. It would probably be cheaper, last longer, and be less hassle.
Just my $0.02 from my past colo experience.