Serial ATA Technology Explained 468
Mike Parsons writes "Explosive Labs has an interesting article on Serial ATA . Here is a quote: 'In the rapidly moving computer industry, there are rarely the kinds of revolutionary changes like what is about to take place in secondary storage segment. Soon the hard drives and configuration methods that have existed since the origins of the personal computer will change forever. The basic IDE technology has been around for nearly twenty years. When the lifetimes of other computer components like CPUs and video are measured in months, twenty years ago seems like prehistory.'"
Prehistory? Depends on context (Score:5, Insightful)
Re:Prehistory? Depends on context (Score:5, Interesting)
Re:Prehistory? Depends on context (Score:2)
Three modifications. Most recent in 1991
Does anyone still use this? In the past 6 years, any printing I've done has either been off a network printer or at Kinko's
Here comes DVI!
USB!
Re:Prehistory? Depends on context (Score:5, Informative)
Re:Prehistory? Depends on context (Score:3, Informative)
The (16-bit) IDE physical interface is an extension of the AT bus (1984).
Re:Prehistory? Depends on context (Score:5, Funny)
Re:Prehistory? Depends on context (Score:2, Funny)
Re:Prehistory? Depends on context (Score:2)
SCSI? (Score:4, Insightful)
All of IDE's shortcomings are fixed by SCSI (except for a small degree of added complexity). SCSI hardware is more expensive, and rarely does it come built-in to motherboards.
If more people used it, it would be a cheaper solution, and would fix all of IDE's problems without re-inventing the wheel--it's a solution that, right now, works.
15k rpm SCSI drives get seek times in the low three-millisecond range--that's three times faster than your average 5400 rpm IDE hdd.
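Rough numbers behind that claim (a Python sketch; the seek figures are illustrative assumptions, and average rotational latency is taken as half a revolution):

    # Rough comparison of 15k rpm vs 5400 rpm drives (sketch).
    # Seek times below are assumed ballpark figures, not measured specs.
    def avg_rotational_latency_ms(rpm):
        # Average rotational latency = time for half a revolution.
        return (60_000 / rpm) / 2

    for rpm, seek_ms in [(15_000, 3.5), (5_400, 9.5)]:
        total = seek_ms + avg_rotational_latency_ms(rpm)
        print(f"{rpm:>6} rpm: ~{avg_rotational_latency_ms(rpm):.1f} ms latency "
              f"+ ~{seek_ms} ms seek = ~{total:.1f} ms per random access")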
Re:SCSI? (Score:5, Funny)
You mean the oversized 40-conductor ribbon cables are solved by 68 conductor ones?
> If more people used it, it would be a cheaper solution
Heh, sure it would.
> 15k rpm SCSI drives get seek times in the low three-millisecond range--that's three times faster than your average 5400 rpm IDE hdd.
And 10x more expensive.
You are the first SCSI fanboy I've ever seen.
Now bring on the cheap!
Re:SCSI? (Score:2, Informative)
Nope, they're solved by using high-density cable connectors, which IDE still hasn't figured out with its 80-conductor cables (instead IDE just cheats).
>And 10x more expensive.
Because... why?
Nobody is buying, that's why. Lower speed SCSI drives are still available, but are still expensive because IDE is stuck in everyone's heads as the only storage method for a PC.
>You are the first SCSI fanboy I've ever seen.
You haven't been here very long. Let me be the first to say welcome!
Re:SCSI? (Score:5, Insightful)
In short, it's expensive because they like it that way. There's no shortage of scsi drives, they're not particularly more expensive to produce.
Re:SCSI? (Score:5, Insightful)
In other words, they are expensive because people want to keep them that way!
Re:SCSI? (Score:4, Insightful)
Not at all. Diamonds are a controlled cartel and price-fixing is par for the course. Producing SCSI in anywhere near the volumes IDE gear gets made in would drop the price to the point where it would be affordable to sell it to the home market. Or so the theory goes.
If diamonds weren't price-controlled they would be incredibly cheap. Read about it here: http://www.professionaljeweler.com/archives/news/
Re:SCSI? (Score:5, Interesting)
That was true in 1980. It isn't now. I mean, it costs no more to build a RAID card (witness the Promise Ultra hack) than it does to build an IDE card, and a RAID card is massively more complicated than a plain SCSI controller.
In fact, since all the smarts are moved onto the controller (which is dead simple to make today -- I bought a cheap one for $35 CDN two years ago -- it was cheaper than a cheap IDE card!) the drive itself is less complicated.
>Also, the performance hit you take going from 7200rpm SCSI to 7200rpm IDE is not noticeable at most times, but I suppose I am tolerant because I can wait more than 3 ms for a seek.
Agreed, but there's more to it than that. IDE requires a new interface for every drive (unless you want to take the horrible performance hit master/slave arrangements incur). SCSI doesn't. IDE cables can only be 18". SCSI can be quite a bit longer. IDE only works for hard drives and CD-ROMs (and one or two other things). SCSI is for anything. There are more reasons than these to support SCSI over IDE (at the same price), but I think these three would be enough to sway users at the right price.
>SCSI is loud and hot and expensive, just like all performance computing components, and thus will never be a consumer standard.
Only because no manufacturer thinks there's a good market for slapping a SCSI controller on their current consumer hard drives. I think there is, and I'd be game to buy one, if they existed, for my next upgrade.
Re:SCSI? (Score:3, Insightful)
On a single-user computer, you'd be unusual if you needed the performance of a SCSI disk. You may like the reliability (but there is always IDE-based RAID). For a central server, however, the SCSI difference matters.
Re:SCSI? (Score:4, Interesting)
No, I'd guess the 80 pin ones that include power and configure the drive's ID, and allow you to just slap the drive in and let it go. IDE has NOTHING with that configurability.
> You are the first SCSI fanboy I've ever seen.
Now you've seen two.
Don't get me wrong - for home use, SCSI is overpowered. But if you're talking anything bigger than a desktop, make mine SCSI.
Re:SCSI? (Score:2, Insightful)
But my new desktop (as I've already posted) has an onboard SATA RAID controller.
Personally I can't wait to get some workstation-level performance at desktop-level prices.
And no more big ugly cables uglying up my side window. These little cables that came in the mobo box are going to look pretty good in there.
It's an improvement over PATA no matter how you slice it. It promises to be cheaper, faster, easier, hot-swappable, self-powered.
I dunno why people are complaining. Well, yes I do. This is slashdot, and any new technology is equated with a bid for world dominance from [CORPORATION].
Re:SCSI? (Score:2)
Re:SCSI? (Score:5, Funny)
SCSI devices last forever, or until they break,
I can top that:
I will live forever, until I die.
What was your point again?
Re:SCSI? (Score:2, Interesting)
Yeah, even if your statement were true, price is a huge shortcoming in technology today, especially when most people can't use the performance hardware they already own.
I strongly support the development of IDE standards. There are many situations where you need lots of hard drive space without bleeding-edge performance. Try to find a solution for building a small (350GB) backup server to supplement a tape backup system, and you'll find 200GB IDE drives for the price of an 18GB SCSI drive. A thousand people will try to explain why the SCSI is a better deal because it's more reliable and faster, but backing up 350GB on IDE costs about $650, and on SCSI costs about $4000. I don't believe demand is the reason for the premium price of SCSI, but the hardware... It's just more expensive to make.
Re:SCSI? (Score:2)
Here's why your point is moot: First, IDE drives are only cheaper because they're more widely used. Secondly, SCSI can scale to low-end needs--you're not locked into having to be fast. What has happened is that SCSI has found its niche in the server market. SCSI could downscale very well.
That's only true because SCSI hard drive makers have to make fast server-oriented drives. If they had a desktop niche, they could sell slower drives for similar prices--as it stands they couldn't compete.
SCSI can serve both low-end and high-end needs, if folks would give it a chance. The article's mention of SCSI being too loud is ridiculous--internally they all work the same. 15k rpm hard drives by nature make a lot of noise.
Re:SCSI? (Score:4, Insightful)
SCSI does have faster spindle speeds, and is in general much faster, but for small, random chunks of data it can be slower (a small penalty in a rare case). The width of the SCSI bus is much better (320 MB/s for SCSI 320), assuming you have a lot of drives in big striped arrays.
Don't forget - most IDE drives (except for a few premium models) just had their warranty chopped from 3 years to 1 year. Not a reflection on declining quality of IDE drives, but rather the economics of the marketplace. SCSI still has decent warranties, and the drives last longer regardless.
SCSI is (much) better; could they be as cheap as IDE if everyone used them? Probably pretty close. But it's a chicken-and-egg thing. Home users don't buy them because they are expensive, and they are expensive because consumers won't buy them.
Ironically, SCSI stands for Small Computer System Interface, but SCSI is most frequently found in servers (large, not small computers), in large RAID arrays. And more ironically, the SCSI drives usually used in RAID (Redundant Array of Inexpensive Disks) are not that inexpensive at 3x the cost of IDE.
Re:SCSI? (Score:3, Informative)
You have to look at when these terms were invented.
The following definitions applied back then:
Small Computer = a computer that did not require an entire building to house it.
Inexpensive = cheaper than solid state.
Re:SCSI? (Score:3, Informative)
The problem with SCSI is heat and noise, things that work fine in a server room, but are bad for the home user.
I had SCSI for a while in college. It was cool to show off, but having to turn up my TV to hear over the jet engine in my PC was annoying.
Re:SCSI? (Score:4, Insightful)
Huh? (Score:4, Insightful)
Re:SCSI? (Score:3, Informative)
Heh. Here's a list of IDE's shortcomings SCSI makes worse:
The bandwidth-per-drive thing is one of the great things that SATA brings to the table. With a modern large SCSI setup it seems like you have a lot of bandwidth, but on a per-drive basis you really don't. 160MB/s divided by 12 drives = about 13MB/s (1980's speed). To contrast that, look at a 12-drive 3Ware SATA controller. That has a full 150 MBytes/sec to each of the 12 drives.
To see the usefulness of this, take the example of a 12-drive RAID 5 array doing a rebuild while the server is trying to read from the drives. The controller has at its disposal 1800 MB/s worth of bandwidth that it can use. It can run those drives as fast as they can go, keeping the write buffer full on the drive it's rebuilding and using the leftover bandwidth to service the server's requests. Modern ATA drives [storagereview.com] can read at up to 56 MB/s. With 12 drives you get a total of 672 MB/s throughput, which is far more than even the new Ultra320 SCSI is capable of. With newer, faster drives and 16-drive RAID controllers the gap gets even wider.
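The arithmetic behind that, for anyone who wants to poke at it (a sketch; numbers taken from the post above):

    # Per-drive bandwidth: shared SCSI bus vs point-to-point SATA (sketch).
    drives = 12
    scsi_bus = 160          # MB/s shared across the whole Ultra160 bus
    sata_link = 150         # MB/s dedicated to each SATA drive
    media_rate = 56         # MB/s a fast ATA drive can actually sustain

    print(f"SCSI: {scsi_bus / drives:.1f} MB/s available per drive")
    print(f"SATA: {min(sata_link, media_rate)} MB/s usable per drive "
          f"({drives * media_rate} MB/s aggregate at the controller)")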
> If more people used it, it would be a cheaper solution
SCSI is quite widely used. There is a lot more SCSI out there than SATA and yet a motherboard with a SATA raid controller costs about the same as one without it whereas a motherboard with a SCSI raid controller on it costs about 3 times as much. SCSI is simply an expensive, complicated technology to implement.
>15k rpm SCSI drives get seek times in the low three-millisecond range--that's three times faster than your average 5400 rpm IDE hdd.
The low seek times are a result of expensive server class drive technology, not of the interface. Seagate could just as easily drop a SATA interface on those 15K Cheetah drives and I suspect in the near future they will because:
All of SCSI's shortcomings are fixed by Serial ATA
Yeah, I know, it's a cheap shot, but really SATA is poised to replace SCSI in most of the markets SCSI still occupies. SCSI was mostly popular in server systems because of its hot-swappability and plug-and-play operation (no jumpers to set on 80-pin SCA drives). These are advantages that Serial ATA shares. Motherboard-integrated SATA RAID will take over for SCSI RAID in server-class systems because of cost, size, power and bandwidth issues. 8-16 drive SATA RAID arrays will take over the low- to mid-size storage array market. (If you can count 4.8 terabytes as mid-size.) Fibre Channel will be left for SANs and large storage arrays. SCSI will be relegated to connecting external drive systems, but I imagine Fibre Channel will eventually take most of that market too.
People who like SCSI will probably like SATA even more. It will be faster, much cheaper, more reliable, more compatible, and easier to maintain and troubleshoot. True, you won't be able to run a printer or scanner off it but I doubt there will be a lot of people missing that particular piece of SCSI functionality.
Re:SCSI? (Score:4, Informative)
Which is another point:
They botched SATA by not including power on the bus, having no standardized connector locations, and (at least so far) having no facility for connection of external devices.
The 1-device-per-wire rule of SATA is another detriment: sure, the cables are fairly small, but can you imagine the rat's nest that would be a 12-device SATA system?
SCSI daisy-chaining is an easy fix for that, and now-common LVD SCSI is quite able to support this number of devices with a single ribbon. And LVD, by its differential nature, is quite resistant to the electrical problems introduced by the slice-and-jacket cable-rounding techniques that are all the rage these days.
Oh. And you've got your math wrong, Son:
It doesn't matter if 3ware makes a controller with 12 150MBps ports. The 12-port Escalade 8500 you speak of has a 64-bit 33MHz PCI interface, topping out at no more than a paltry 264MBps to be shared by all connected devices.
If you're serious about throughput, try something like this [adaptec.com]: two 320MBps channels on a 64-bit, 133MHz PCI bus. Good for real-world transfer rates in the realm of 640MBps, more than twice that of the 3ware product, while keeping a good portion of the PCI bus free for the -other- 30 SCSI devices you've got plugged into it.
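For reference, the peak-bandwidth arithmetic (a sketch; theoretical peaks only, real-world PCI efficiency is lower):

    # Peak PCI/PCI-X bandwidth = bus width (in bytes) * clock rate (sketch).
    def pci_peak_mb_s(width_bits, clock_mhz):
        return width_bits // 8 * clock_mhz

    print("64-bit/33 MHz PCI  :", pci_peak_mb_s(64, 33), "MB/s shared by all ports")
    print("64-bit/133 MHz PCI-X:", pci_peak_mb_s(64, 133), "MB/s for the whole card")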
And none of this says anything about the benefit of SCSI for the home user:
Just bought a new DVD-R, but don't want to toss your old but dead-solid Plextor? SATA requires you to buy another port. SCSI just requires you to plug it in.
Serial Vs. Parallel.... (Score:2, Funny)
Shame it took the original IDE developers 20 years to cotton on. I'm sure this has been on the verge of becoming mainstream tech for about three years now. Some people must really love their ribbon cables.
The Tab! You forgot the Tab!
Is transfer rate that important? (Score:2, Informative)
Only if... (Score:2, Insightful)
Re:Only if... (Score:2)
even cheaper (Score:3, Interesting)
yeah, yeah, yeah enough with the sales pitch (Score:3, Funny)
I got a shiny new SATA RAID controller on my new motherboard, now when the hell am I gonna get a couple of 80 gig cheap, fast SATA drives to put into a striped set?
huh? huh?
Re:yeah, yeah, yeah enough with the sales pitch (Score:3, Informative)
FIREWIRE? (Score:4, Interesting)
Re:FIREWIRE? (Score:5, Informative)
Re:FIREWIRE? (Score:4, Informative)
Re:FIREWIRE? (Score:3, Insightful)
Also, signalling bits that are thrown away are often counted. Otherwise we would have 98 Mb/s Ethernet instead of 100.
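SATA itself is a concrete example of this (a sketch; assumes the first-generation 1.5 Gbit/s line rate and the standard 8b/10b coding):

    # SATA quotes its raw line rate; 8b/10b coding means only 8 of every
    # 10 bits on the wire are payload (sketch).
    line_rate_gbit = 1.5
    payload_gbit = line_rate_gbit * 8 / 10      # strip the coding overhead
    payload_mb_s = payload_gbit * 1000 / 8      # bits -> bytes

    print(f"{line_rate_gbit} Gbit/s on the wire -> {payload_mb_s:.0f} MB/s of data")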
Firewire Drive != Pure Firewire (Score:4, Insightful)
This just looks expensive. (Score:5, Interesting)
The author then goes on to note that the 'roadmap' calls for the 2006 version to run at 600MB/s, which fits nicely with my roadmap to world domination in 2005. ...Ummmm, yeah, we'll see.
Although looking at the list of upcoming products and the manufacturers making them, I don't doubt we'll all be using this in a few years.
Re:This just looks expensive. (Score:4, Informative)
I was a bit confused by this article. They talk as if this thing is the Second Coming of Christ, but then they talk about how desktop PCs are just going to keep taking baby steps. Also, at the beginning of the article they say that serial seems to be a step back from parallel (ya think?) but it is faster and better and Oh! Look! An elephant!
Re:This just looks expensive. (Score:5, Informative)
The battle between serial and parallel communications is neverending. Show me a Serial WAN connection like a DS3, and I can say "Well, since you never send partial bytes, we could strap 8 of these side by side, send one byte at a time with the bits split up over the 8 DS3s in parallel frames, and we get an 8x speed improvement that's usable by a single connection and no additional latency".
Or show me a parallel bus like IDE, and I can say "Look, having all those data lines next to each other causes additional interference we have to account for, and they're bulky, cost more, overly complex, blah blah. If we just put a serial bitstream on a pair of wires, it would be so much simpler that for the same cost we can turn up the bitrate more than enough to make up for the lost parallelism."
It's all the same. Various communication technologies tend to rise and fall, serial replacing parallel replacing serial replacing parallel ad infinitum. In some cases (like PCI buses) parallel just makes a lot more sense, but in a lot of cases (network stuff, storage stuff especially) there's a tradeoff where both are better and worse than the other in different ways. You could just pick one and stick with it and do your incremental improvements, but the occasional switcheroo provides upgrade revenue and more user "wow" factor and buzzwords.
Re:This just looks expensive. (Score:3, Insightful)
Re:This just looks expensive. (Score:3, Insightful)
Another interesting thing about the technology is that drives that are currently using the parallel SCSI interface will be moving to either SAS (Serial-Attached-SCSI) or Fibre Channel. SAS will use the SCSI protocol over the Serial ATA cables, so you can get rid of those nice giant ribbons.
Re:This just looks expensive. (Score:3, Insightful)
Does it matter? At all? No.
Frankly it could be 150 GB/s and it wouldn't matter in the least.
Go look at the manufacturer specs. Read the line that says "drive to host, sustained throughput". Note that no manufacturer claims more than 52 MB/s. Reality is closer to 48 MB/s for the fastest IDE drive. That's right! We're not even exceeding ATA/66 bandwidth yet. And still people are talking about 600 MB/s in a few years. Who cares? You can't reach that throughput anyway. Not to mention that the PCI bus is limited to 133 MB/s.
Ok, the bus speed does make a bit of difference. If the data you need is in cache then you can use the maximum theoretical bandwidth while reading from cache. So dumping an 8 MB cache via ATA-133 saves you roughly sixty milliseconds over ATA-66. You noticed that, right?
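The arithmetic, for the curious (a sketch; assumes the full 8 MB streams out of cache at each interface's peak rate):

    # Time to dump an 8 MB drive cache at different ATA interface speeds (sketch).
    cache_mb = 8
    for name, rate_mb_s in [("ATA-66", 66), ("ATA-133", 133)]:
        print(f"{name}: {cache_mb / rate_mb_s * 1000:.0f} ms")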
The advantages of SATA have nothing to do with the bus speed. The longer cable is useful in a select few tower cases. The hot swap will be nice for a small percentage of enthusiasts and idiot admins (I'm not a SCSI fan boy, but if you're running a server you really should be running SCSI). The small, thin cable is useful for everyone though -- the air traps created by ribbon cables are causing more and more problems as everything runs hotter. Most drives fail due to poor cooling. Want to bet that SATA drives have a significantly lower failure rate?
Personally, I can't wait until it comes out... (Score:5, Insightful)
*cringes* (Score:4, Funny)
too bad that (Score:3, Interesting)
Ever read the actual throughput specs on a drive? The media throughput is not much more than 40MB/sec!!! Read the data sheets, people!
Add this all up and what do you get? Ripped off is about it!
Re:too bad that (Score:2)
Re:too bad that (Score:2)
2. That 40MB/s is sustained throughput; HDs can burst from cache much faster than that. This is going to become more of an issue as HD manufacturers have finally started putting more cache on their drives.
Re:too bad that (Score:5, Informative)
No it doesn't. Data only goes through the PCI bus if the address is not claimed by something else along the way. That means that everything from the southbridge up is not limited by the PCI bus bandwidth, and that integrated SATA controllers (not available until next year) are only limited by the bandwidth between the northbridge and southbridge.
Ever read the actual throughput specs on a drive?
Drive throughput has been steadily increasing, and it is predicted to pass PATA's limits within a few years, and that's not counting RAID striping or the 8 MB drive caches. It's always desirable for the bottleneck to be the drive rather than the controller.
Re:too bad that (Score:5, Insightful)
That being said, it is entirely possible to reach throughputs in excess of 133 MB/s using a PCI bus... though currently most desktop motherboards do not support anything faster than 133 MB/s. In time this will change as NICs, hard disks, and other gear require it.
And your hard disk performance is barely par by today's standards. IDE drives are currently topping 50MB/s, while SCSI gear is hitting > 70MB/s. Though I am a SCSI man, I can see the future need for SATA. Right now it may be mainly a marketing ploy... but in a couple of years it will be a necessity. Parallel cabling is nearing the end of the road: all those wires in a cable allow for too much signal interference. Serial is the answer. Though it has fewer wires, the dramatic improvement in signal integrity allows for insane transfer rates.
Anyhoo... personally I don't see any reason to go out and buy a new system just to have SATA. At the moment it offers few advantages, but in the not-so-distant future it will be a necessity for desktop systems. As for me, I plan on going Fibre Channel SCSI.
Re:too bad that (Score:5, Insightful)
This won't be an issue since SATA is strictly point-to-point; every drive gets its own 150MB/s link.
More Information (Score:5, Informative)
Cnet [com.com]
SATA and ISCSI [infoworld.com]
Intel Dev Paper [intel.com]
Maxtor White Paper [maxtor.com]
Re:More Information (Score:3, Insightful)
I really love how the Maxtor paper compares SATA to parallel ATA, USB (?!) and Firewire...but not to SCSI or FC. I wonder why that is. Actually, no I don't. ;-)
Not important yet (Score:3, Insightful)
RDRAM Redux (Score:5, Interesting)
This sounds remarkably like the plugs we got for Rambus RDRAM: serial interface is better than parallel, first gen won't see real performance gains, stick with us kids, this is gonna be really good.
I see a decided lack of Sun, IBM, AMD, or HP listed in the adopters, which leads me to believe that this is much like the above. Sorry guys, I'm not riding the first wave of any new tech on my salary. I'll sit on the sidelines for a while and see how this pans out.
Re:RDRAM Redux (Score:2)
SATA == Future (Score:3, Informative)
SCSI TROLLS: READ THIS (Score:4, Interesting)
For the time being, IDE isn't going anywhere.
NOISE & HEAT will tend to outweigh (relatively) minor performance gains in consumer systems. (Enterprise hardware is another matter entirely)
sigh....we need to start using those annoying javascripts that make people read the article BEFORE posting.
Re:SCSI TROLLS: READ THIS (Score:3, Insightful)
IDE TROLLS: READ THIS (Score:5, Insightful)
Absolutely. But this has nothing to do with SCSI; it has to do with the high spindle speeds at the bleeding edge. The card on the underside of the drive is not making that ear-shattering racket. They even acknowledge that in your quote.
SCSI is better than ATA. Even SATA. ATA has been trying to catch up by stealing some of the best parts of SCSI (like TCQ). But it just isn't quite as good yet. Quite frankly, I agree with the majority of SCSI zealots: if the damn PC makers would embrace SCSI, then the cost of SCSI would come down to near parity through sheer sales volume.
Now, is SCSI better for your average Joe? Maybe not significantly. Neither is 7200 vs 5400, 2MB vs 8MB buffers, or 8.9 vs 9.1 ms access times.
However, if they could use one cable to connect 15 devices in their tower, they'd be a lot happier than having the 8 cables they'd need to do it with current IDE tech (let alone IDE's relative inability to be used externally).
Re:SCSI TROLLS: READ THIS (Score:5, Insightful)
Is IDE appropriate for the desktop? absolutely.
Will retards continue using IDE in applications where SCSI is far more appropriate? definitely.
Does your post make any fucking sense at all? nope.
command queueing (Score:2)
Re:command queueing (Score:4, Informative)
whats new? (Score:2, Funny)
on the other hand: installing more RAM or a new CPU wouldn't be such a pain
I want the connector! (Score:4, Informative)
Is it worth upgrading for? No, probably not. But it'd damn sure be worth waiting an extra few months for that next machine to save the hassle of those f'ing ribbon cables.
ridiculous (Score:5, Funny)
This is a GOOD thing. FINALLY (Score:2, Interesting)
NO MORE RIBBON CABLE. My favorite Linux configuration is one whatever IDE drive for the OS, one IDE CDROM, and two (RAID-1) large IDE drives for data and configurations. Quick and cheap for non-critical functions/services. I rolled through a complete failure on the core OS drive, the CD died -- and while trying to roll up in size on the RAID-1 I hit *FOUR* defective WD drives...while never losing data _and_ configurations. IBM sits in there right now...
High end servers and workstations? Yeah, Serial-ATA is nice with the coming 40M/sec IDE type drives...but I'm also going to go after that 320M/sec SCSI technology too. Same IDE game, just a different connector basically.
NO MORE RIBBON CABLE.
Try stuffing four drives in a case. Not only is the IDE chain full, but the cabling is a complete joke. Not anymore. Kind of like Firewire in the box, if you will. Except I think they're screwing it up by keeping power separate, where Firewire _can_ carry power to the devices.
So instead of tiny IDE connectors in the current Firewire and external type drives there will be tiny Serial-ATA hookups. So what. Now get inside a PC (and/or Mac) and do a little work.
With this and pricing for LARGE amounts of data
I could record so many hours of anything I wanted and never worry about losing it
Of course when I have a few extra thousand lying around (not likely any time soon with the current economy outlook) I'd love to try SCSI-320.
Now, IDE is rolling into ~40M/sec. Firewire *has* been ready for those speeds for a while. At least USB2 can keep up for a bit as well. Even faster drives are a must though. Firewire-2 is just around the corner (either 800Mbit or 1.6Gbit).
It's sad that your typical/standard Mac-type network (1Gbit) is faster than the typical drive being hosted. Your typical Windows network at 100Mbit is pretty much capped by the current typical drive's top performance at 10M/sec.
Serial-ATA, oh yeah. One card (1Gbit) in the Linux box and I could saturate their bandwidth. Why not?
Microwho?
Re:This is a GOOD thing. FINALLY (Score:5, Funny)
SATA Linux Support (Score:2, Informative)
And this made me wonder... how long will it take until Linux (and the *BSDs) support this new standard? Will it happen after Longhorn's release? Or has it already been done?
Re:SATA Linux Support (Score:3, Informative)
Bandwidth (Score:4, Insightful)
I don't get it ... I quite agree that, as a serial bus, it'll be clocked a lot faster than IDE ... but a simple back-of-the-envelope calculation tells us that it has to be clocked at least 8 times as fast as the current parallel devices just to break even (it'd have to run at 533 MHz to be on par with ATA-66).
It looks like a technology whose main purpose is to make things incompatible, and thus require people to upgrade more stuff. And anyway, it's not the speed of the bus that's the limiting factor (for the vast majority of users), but the mechanics of the hard drive (SCSI hard drives are faster than IDE ones because they almost always are top-of-the-line products with higher rotational speeds - anybody seen a 15000 RPM IDE drive?)
The Raven
Am I missing something here??? (Score:5, Insightful)
The article seems immensely biased and lacking in technical detail. It also raises some "dubious" points IMHO. Let's see:
- P-ATA cables cannot be longer than 40cm. S-ATA cables can be up to 1m long:
Granted, those cables are annoying. But really, how many times have you felt the need for a cable much longer than 40cm? People with full-sized cases may benefit, but then the author says that the current trend is "small footprint machines". So why do I need a cable that is longer than my server?
Also, if you dislike flat cables, buy "rounded" P-ATA cables (available today, just google for it).
- P-ATA connectors are big!
Yes, they are! But you'll require at least twice as many S-ATA connectors, as only one device is supported per connector... In the end, the real estate on the mobo is going to be similar.
- One device per controller is an "Advantage".
C'mon... This guy must be joking. I couldn't believe my eyes when I read it! One device per controller is an *advantage*??? Why??? I wish I could add more devices (like SCSI and Firewire allow) to my current P-ATA setup. And then he says ONE is good for me? Don't think so...
- High transfer rates are useful for multi-disk RAIDS.
What kind of RAID? RAID 5 is slow on writes due to the computational power needed to calculate the XOR parity (there's a small sketch of that calculation after this post). Adding bandwidth won't help. And I can't see why or how only RAIDs would benefit from higher throughput.
- Speed:
Granted. It may be faster than P-ATA. But what about established technologies like SCSI and Firewire? I *think* (not sure) Firewire can go much faster than S-ATA in its initial version.
I'm disappointed...
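For anyone wondering what "calculating the XOR" actually involves, here is a minimal parity sketch (toy 4-byte blocks, purely illustrative):

    # RAID 5 parity sketch: the parity block is the XOR of the data blocks
    # in a stripe. Toy 4-byte blocks for readability only.
    from functools import reduce

    data_blocks = [b"\x11\x22\x33\x44", b"\xaa\xbb\xcc\xdd", b"\x01\x02\x03\x04"]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data_blocks))

    # Losing any one block is recoverable: XOR the parity with the survivors.
    recovered = bytes(p ^ b ^ c for p, b, c in zip(parity, *data_blocks[1:]))
    assert recovered == data_blocks[0]
    print("parity:", parity.hex())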
A never-ending game of leapfrog (Score:3, Insightful)
Of course, no discussion of Serial ATA would be complete without mentioning the answer from the SCSI camp - Serial Attached SCSI [serialattachedscsi.com]. SAS will use the same connector as SATA, but will support longer cable lengths, multiple initiators (if you don't know what an initiator is you don't even belong in this discussion), full SCSI semantics instead of lame-o ATA semantics, etc. Even so, the SAS folks are still ceding the high end to Fibre Channel and talking about three coexisting technologies for the low-end/midrange/enterprise market segments. Sorry, kiddies, but SATA is still low-end.
If there's one mistake you should try not to make more than once in this business, it's assuming that competitors have been standing still since their previous generation. Announcing something brand new and having it be less than half a generation ahead of the competitor's last version is a failure.
SATA propagates all the crap of PATA (Score:5, Insightful)
Firewire (1394) was killed by Apple's licensing fees and Intel's sudden backstabbing policy change on building it into the southbridge, along with their NIH attitude. There existed working 1394 Device Bay drives over 6 years ago, with OS support from m-soft. 1394 was an attempt to keep the good parts of the SCSI protocol while leaving out as much of the useless stuff as possible (MODE SELECT).
Fibre Channel is still Real Pricey, for the same reason that SCSI is -- "just because". Or, as the hardware vendors say, "harrumph, well, it's all about volume".
Re:SATA propagates all the crap of PATA (Score:3, Informative)
Yes. We should run out of space in the latest incarnation in roughly 50 or 60 years.
Unless, of course, you're expecting to implement a single drive with more than 144,115,188,075,855,872 bytes (that's 128 petabytes or 131072 terabytes) anytime soon.
Yes, previous extensions have been poor. Maxtor got it right with ATAPI-6, which has been adopted by the industry: 48-bit addressing of 512-byte sectors.
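The figure above is easy to check (a sketch; assumes the 48-bit LBA and 512-byte sectors from ATAPI-6):

    # Maximum addressable capacity with 48-bit LBA and 512-byte sectors (sketch).
    sectors = 2 ** 48
    capacity_bytes = sectors * 512

    print(capacity_bytes, "bytes")                         # 144115188075855872
    print(capacity_bytes // 2 ** 50, "PiB,",               # 128 petabytes
          capacity_bytes // 2 ** 40, "TiB")                # 131072 terabytes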
S-ATA is the Ghetto FC (Score:5, Interesting)
So, here's how it is...
Fibre Channel - 2Gb/s (10Gb coming very soon), 126 drives, 10+ mile range, better than SCSI.
S-ATA - 1.2Gb/s effective (2.4Gb in 2004), 1m range, IDE protocols for all your write-only data needs.
S-ATA is the ghetto Fibre Channel, just like IDE is crappy SCSI; expect similar suckiness and low quality to go with the low price and cheaper cables (cheaper to make -- to buy they will cost more, I'm sure).
But again, this is all about the cheaper cables, since let's face it, 95%+ of the machines out there only have one drive anyway.
S-ATA exists for one reason... (Score:3, Interesting)
To get rid of those damn ribbon cables.
Don't believe the marketing hype. SATA isn't about faster speeds, or more advanced features, or any of that crap. S-ATA is about cables.
IDE is crippleware. At some point in the past there was probably a need for a simpler, less expensive counterpart to SCSI for desktop systems, but frankly that need is gone. The price distinction between IDE and SCSI has long been totally artificial. Drive manufacturers make a drive, and then slap on whatever control board they need, IDE or SCSI. Makes no difference to them, except that they get to mark up the SCSI version. Pure marketing: they need to stratify their technology so the enterprise guys don't feel like they're sullying their hands with the same tech as those Walmart PC-consumer lusers.
Frankly I wish SCSI had those neat little connectors (and it soon will, with Serial Attached SCSI), and I hate ribbon cables as much as the next guy, but I'm not going to be fooled into thinking this is any real improvement over IDE.
But even as little as this is, it's long overdue. Those ribbon cables are the enemy of all that is good and just and true in the world.
Remember folks, SATA is only one letter away from SATAN. Q.E.D. Evil.
Serial vs Parallel (Score:3, Insightful)
This is all fine and good, but why not just treat the wires in a parallel cable as individual serial wires? Sure, if you increase the signal frequency, it becomes next to impossible to guarantee that all the signals arrive at exactly the same time, but I don't see the need for bit-level synchronization. If each wire has its own protocol, its own synchronization, and its own buffers, then as long as there is synchronization at the packet level, there should be no need to worry about synchronizing at the bit level. This would allow both high frequencies, and lots of wires.
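A toy sketch of that idea (hypothetical framing: each lane carries sequence-tagged chunks and the receiver reassembles at the packet level, so lane-to-lane skew doesn't matter):

    # Toy model of striping one data stream across independent serial lanes
    # with packet-level (not bit-level) reassembly. Purely illustrative.
    LANES = 4
    CHUNK = 3  # bytes per chunk, tiny for readability

    def send(data: bytes):
        chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
        lanes = [[] for _ in range(LANES)]
        for seq, chunk in enumerate(chunks):
            lanes[seq % LANES].append((seq, chunk))   # round-robin, tagged with seq
        return lanes

    def receive(lanes):
        # Lanes may deliver at different times; sequence numbers restore order.
        tagged = [item for lane in lanes for item in lane]
        return b"".join(chunk for _, chunk in sorted(tagged))

    msg = b"parallel lanes, serial wires"
    assert receive(send(msg)) == msg
    print("reassembled OK")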
buyer beware (Score:3, Interesting)
This seems to say something that I've never seen admitted about Serial ATA: that it has DRM built in! If you want to buy hard drives that get to decide what you can and can't store on them, go ahead, but I'm not going to buy into any DRM technology. Extra speed and a smaller cable will not tempt me into doing it; I'll stock up on the last of the regular ATA drives as Serial ATA replaces them.
More info (Score:3, Informative)
It, um, reads less like a press release than does the Explosive Labs piece :-).
A compromise between SCSI and IDE (Score:3, Informative)
http://www.acard.com/eng/product/scside.html
Microland sells them in the US:
http://www.microlandusa.com/microland/
Some downsides:
- The hard disk has to be formatted while cabled to the SCSI-IDE bridge. You can't move a drive from a regular IDE controller to the SCSI-IDE bridge without getting geometry errors.
- The interface is ATAPI only, so not all commands for the device may work. For example, firmware updaters and vendor utilities designed for the hard disk probably won't work through the bridge.
- The utility to update the bridge's firmware is only for DOS/Windows.
There will probably be LVD-SATA bridges too in the future, if SATA truly catches on.
Re:SCSI? (Score:2)
Re:SCSI? (Score:3, Insightful)
Actually, even Macs don't use SCSI anymore. SCSI is strictly a "build-to-order" option on their PowerMac line. Even their Xserve server and storage products use multiple IDE channels instead of a SCSI bus.
</nitpicking>
You're right, though. SCSI only really survives in servers because it's just too darn expensive for everything else. Shame.
Give me Firewire! (Score:2)
And haven't we discussed this before [slashdot.org]?
Xix.
Re:Give me Firewire! (Score:3, Informative)
150 MB/sec?
Re:Give me Firewire!-here we go again! (Score:3, Informative)
400Mb/sec vs 150MB/sec
Pay attention to case, it does matter. As for what's in development, call me when it's actually available.
Matt
Re:Give me Firewire! (Score:2, Informative)
Bingo, it is exactly what Serial ATA does not have over firewire that makes it so desirable.
Namely the $50 or so price premium. . . .
Actually, no. (Score:3, Insightful)
Second, the reason why Betamax died (well, it didn't actually die, but it didn't take off, either) was that Sony kept it a proprietary format, while JVC let pretty much everyone make VHS products.
Serial ATA is one of the most unrevolutionary evolutions ever made. Basically it just changes the cables. The drives stay pretty much the same, the controllers stay pretty much the same, the drivers can stay exactly the same. Instead of wide, flat cables and two disks per channel you now get thin round cables and just one disk per channel (but since the connectors are so much smaller, you can have many on the same board). It's a good thing.
There are basically three reasons for having multiple standards. The first is a purely commercial one. Brand A invents the A-link and patents it, and Brand B decides to create B-link so they don't have to pay a fee to Brand A. The second is evolution. Sometimes a standard needs to be replaced or updated to cope with new demands (e.g., ATA-33 becomes ATA-66). The third is that some standards are specifically suited to some situations (e.g., SCSI lets you connect a lot of drives, and has support for other kinds of peripherals, but IDE is cheaper to make, and enough for most people).
RMN
~~~
Actually, no. (Score:3)
You are probably thinking of the situation where the motherboard has a (p)ATA controller and a converter is used to connect it to SATA cables. In this situation, one SATA channel is assigned to the (p)ATA master and another to the (p)ATA slave. But from a SATA point of view, the two channels are completely independent, and only support one device each.
RMN
~~~
Re:More "standards" (Score:2)
This is the one to watch (Score:3)
The only thing I haven't seen is any noise about chipsets that support it on the system side. As soon as these are available, you'll see MBs and systems. SCSI will probably stay important for larger, faster arrays, but scaling bandwidth seems to look pretty good for this as well.
As soon as mainstream MBs are there, these will quickly become the commodity drives for all the manufacturers, and they will phase out Parallel ATA stuff.
Re:Parallel vs Serial? (Score:3, Insightful)
Yes, but just like with memory, serial [digikey.com] is cheap, parallel [digikey.com] costs. Those extra wires just ain't free.
Re:Parallel vs Serial? (Score:3, Informative)
On paper, parallel can be made to be faster than serial. However, in the practical world it is very difficult to make a high-bandwidth parallel bus, and even more difficult to run that bus over any considerable length. By using a serial bus with an embedded clock you only need 2 signal wires. If those signal wires are a differential pair (preferably low-voltage) then you can run them a considerable length at an extremely fast rate. If you have a parallel bus, you don't have the option of embedding the clock. If you are running without an embedded clock you must send a clock synchronous with the data. Now you have to deal with skew issues between each individual channel as well as all channels relative to the clock, not to mention other aspects such as crosstalk between the channels. If all you have to worry about is two signals (which, if they are differential, can be considered one) then many of those issues go away.
There's a lot of physics behind why it's extremely difficult to run long lengths of parallel lines. Yes, a parallel bus is faster, but it is almost impossible to implement a reliable parallel bus running at 1.5Gb/s through cables and connectors. Take a look at a bus like HyperTransport. It can be up to 16 bits wide and run at 1.6Gb/s. However, it is a point-to-point protocol, is run over a controlled impedance, and is run over very short distances.
I hope that helps.
Re:i dont see a huge rush for people to upgrade (Score:2, Funny)
Don't look now, but your momma don't drive the hi-tech industry.
I do.
Re:what the fuck? (Score:3, Informative)
Try plugging your 7200RPM 120GB IDE drive into a 386-era IDE controller and see what sort of performance you get. You'll probably only be able to access 8GB of its capacity, too.
IDE hasn't "just been there" it has been constantly evolving.
Re:IDE Technology -- What really needs to be fixed (Score:4, Informative)
1. You can't HOTSWAP an IDE drive without risking blowing your drive or controller, or upsetting the power supply.
With SATA you can.
2. You can't WARMSWAP an IDE drive without risking blowing your drive or controller, or upsetting your power supply.
With SATA you can.
3. IDE still only supports 2, yes 2, drives per controller, which makes it impossible to do hardware RAID-5. That leaves us with software RAID-5 as our only option.
Who cares when you can get hardware RAID-controllers with 12 ports on one card? What is the great advantage of having the cable be the single point of failure for your whole RAID, like SCSI does?
4. IDE cables can only stretch so far, so even if you could somehow manage to get 8 IDE controllers into a box, for a total of 16 drives, there would still be cable length issues. I think 1 m is max. We need differential IDE :)
Ok, 1m can be a problem for some people. However most people do not have cases larger than 1m.
5. IDE drives are just now able to verify data integrity, but that's good since we can start using IDE drives in servers that don't need 100% uptime.
Err, why is it a problem when it is already fixed as you say?
6. ATA/100 Round IDE cables are already available. In fact I just ordered some that have a UV reflective coating for my next case mod which features a black light. Airflow isn't a big issue, in fact Compaq has been slicing up IDE cables for a long time now to increase airflow.
Round IDE-cables are expensive to produce and still large and inflexible. SATA solves it.
7. The SUSTAINED TRANSFER WRITE RATE of IDE drives is still not fast enough to store uncompressed NTSC video at 60 frames per second, or store high bandwidth Satellite streams.
So get the hardware RAID controller and start streaming away. Oh wait, hardware RAID for SATA doesn't exist. 3ware [3ware.com] is a figment of my imagination. (Rough numbers for the video claim are sketched after this list.)
8a. Size increases (GBs) are not keeping pace with read/write access speeds, and simply adding cache RAM and tweaking seek algorithms isn't going to remedy this problem.
You can't blame the interface for that. 150MB/s per drive for 12 drives on one card is way more than any SCSI solution supports -- and way more than current drives need.
8b. As internal volatile write caches grow larger, the risk of uncommitted writes being lost in a power outage or crash increases.
So turn off the write cache. ATA supports Tagged Command Queuing, although not all drives support it yet. By the time SATA drives become available, TCQ should be common.
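Rough numbers for the video claim in point 7 (a sketch; assumes 720x480 at 8-bit 4:2:2, i.e. 2 bytes per pixel, and shows both 30 and 60 frames/s for comparison):

    # Uncompressed NTSC-ish video bandwidth (sketch; 4:2:2 sampling, 2 bytes/pixel).
    width, height, bytes_per_pixel = 720, 480, 2
    for fps in (30, 60):
        mb_s = width * height * bytes_per_pixel * fps / 1_000_000
        print(f"{fps} frames/s: ~{mb_s:.0f} MB/s sustained")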