
The End Of The Road For Magnetic Hard Drives? 111
Phase Shifter wrote to us about the limits of conventional hard drives, which Scientific American is discussing. The article talks about the history of hard drives, and why sometime soon, due to the limitations of the superparamagnetic effect, we will need to find a new storage type. It's a cool background read on hard drives and what goes into them.
maybe. (Score:1)
Its luck has to run out sometime, but I am willing to bet that it will still give the best value for money for some years to come.
Re:Ummm. . .. (Score:1)
Physical Size of disk is not important! (Score:2)
To double storage capacity it is not essential to double density - you could double the physical size of disk (or add another).
Just because they cannot get smaller doesn't mean they get ditched - Want double the storage, get double the number of disks...
Bubble Memory (Score:2)
Just wondering... (Score:1)
2) Do non-logarithmic graphs that don't start at zero suck, or what?
3) "kilometers per square inch" is a sweet unit.
What's next in storage? (Score:2)
Same old same old... (Score:5)
(a la Book-A-Minute [rinkworks.com]).
Scientists: OH NO! Hard drives can't get any better!
Engineers: Wait! Your science is WRONG! (Writes some new equations).
Computer industry: You have SAVED us!
Geeks: YAY!
Re:Physical Size of disk is not important! (Score:2)
The point is how to get the most storage for the lowest cost.
But last week... (Score:2)
Unlike other computer technologies, the hard disk market consistently finds some revolutionary way to make their products faster, bigger, and cheaper, while staying in business. With that kind of competition, I don't think the hard drive is going away...
--
Re:Bubble Memory (Score:1)
It'll be years (Score:1)
Re:Bubble Memory (Score:2)
Hmmmmm... (Score:1)
Either that or...
Bigger hard drives = more pr0n and mp3s.
Man, I miss my C64...who needs hard drives when I've got my trusty cassette drive
Re:Physical Size of disk is not important! (Score:2)
Optical 3D harddrives (Score:3)
Running out of fingers. (Score:1)
And we're rapidly running out of limbs.
Next we'll have probabilistic memory based on quantum theory. (Such as the latest secure communication proposals)
And to think I used to send email out continuously onto the internet back to myself in the days when my university limited us to 200K network space...
I had dreams to write a virtual disk driver using mail servers across the world.
(I have better sense now though...)
-grin-
It is not a practical limitation. (Score:1)
I mean, can you imagine attempting to eat a burger that size ? It is quite simply ridiculous.
thank you
dmg
Re:Ummm. . .. (Score:1)
argh (Score:1)
Re:Bubble Memory (Score:1)
All you need is RAM connected to a battery and Bob's your uncle. Of course, RAID is a lot cheaper. It would be cheaper to daisychain eight 20-gig drives to get a cheap terabyte and then RAID those bad boys. If one goes bad, plug in another and keep moving.
All this so some secretary can gangbang our inboxes with promises of GAP clothing. Why are we doing this again?
Performance issues. (Score:2)
This is EVIL EVIL EVIL idea HAHAHAHAHHAAHAHAAAAA!
Re:What's next in storage? (Score:1)
The problem is that RAID (at level 3 and/or 5) just lets you use more space for one big volume without taking as much of a risk as you otherwise would. Instead, you take a performance hit relative to "RAID 0" (striping--which, not being redundant, should really be called AID :-).
The catch is that the larger the disks in your RAID 3/5 RAIDset, the longer your window of vulnerability when (not if) one fails. Remember, the idea of RAID is that if one disk goes bad, you can reconstruct the data using the parity blocks from all the other drives. That was fine in the days of 2GB drives, but it takes a long time to reconstruct 36GB or more of data, even if you have a dedicated hardware RAID controller. (Even an 18GB disk takes a while.)
If a second disk fails during this time (and it's more likely to, since you're now hitting it heavily to read off all the blocks you need to recalculate the missing disk with), you're hosed.
Also, RAID doesn't protect you from software. Directory corruption or an accidental rm/newfs will result in you having a nicely protected, redundant copy of your useless or empty filesystem.
"RAID 10" or RAID 1+0 or whatever the marketers are calling it is just striping across mirror pairs; that requires twice as many disks as you'd otherwise need for the same amount of storage, but it does give you reliability without the same level of speed hit.
(Yes, folks, the faster/better/cheaper trio is still pick two. Just ask the Mars Polar Lander team.)
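The parity trick the parent describes is just XOR, so a rebuild really is "read everything else and XOR it together". A minimal Python sketch (nobody's actual firmware, obviously):

```python
# Toy RAID 5 rebuild: parity is the XOR of the data blocks in a stripe,
# so any single missing block equals the XOR of all surviving blocks.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data disks (one stripe)
parity = xor_blocks(data)            # what the array stores as parity

# Disk 1 dies; reconstruct it from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Note the rebuild touches every surviving disk for every stripe, which is exactly why the window of vulnerability grows with disk size.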
IBM's 70 gig is plenty (Score:1)
Not to mention all my AppleScript letters. The biggest problem is getting UDMA/SCSI drives working on the Apple ][. The C64 is a whole different interface issue....
(Gonna run out of space? Bah, go on a data diet. Do you NEED to be storing HDTV DVD's on your hard drive?)
Hard drives technology doing just fine, thank you (Score:3)
Processor, 4.77 MHz -> 600 MHz: 126 times
(let's say 1000 times, because the P III does a lot more with each MHz than the 8088)
RAM, 64 KB -> 64 MB: 1024 times
Modems, 9600 baud -> 56K: 6 times
(even 1.5M for cable modem is only 156 times)
Hard disk drives, 10 MB -> 20 GB: 2000 times
Hmmmm, seems like the much-poo-poo'ed electro-mechanical technology has easily kept pace with the straight electronic technologies, including the breathtaking advances in chip density.
Now, when it looks like CPU speed and RAM density really ARE about to reach a plateau for a while, or at least slow their rate of advance, hard disk technology is poised to really rocket ahead. Look at the news from IBM research, foretelling VAST advances in the fairly near future.
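The ratios above check out; a quick sketch (figures copied from the comment, so treat them as round circa-2000 numbers):

```python
# Rough growth factors from the figures above ("RAM" uses binary KB/MB).
milestones = {
    "CPU clock": (4.77e6, 600e6),            # 8088 -> Pentium III, Hz
    "RAM":       (64 * 1024, 64 * 1024**2),  # 64 KB -> 64 MB, bytes
    "Modem":     (9600, 56_000),             # bits/s
    "Hard disk": (10e6, 20e9),               # 10 MB -> 20 GB, bytes
}
for name, (old, new) in milestones.items():
    print(f"{name:10s} {new / old:8,.0f}x")
```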
Re:Performance issues. (Score:1)
Liquid bearings? How about spinning on a magnetic field like one of them Japanese trains? Of course, your data would suffer slightly.
Why Bubble Memory never went anywhere (Score:3)
It seems to be a bit of a trend in this industry that whatever works early on gets a lot of resources put into incrementally improving it and making it cheap, such that competing technologies have to be _hugely_ better to have any chance of taking over.
That is (IMO) partly why:
- we still use hard drives,
- CPU's still use CMOS rather than one of the faster switching methods,
- the x86 architecture is still dominant,
- the UNIX model is the base of nearly all operating systems.
There may be potentially 'better' technologies than these out there, but there has been so much engineering and optimisation gone into these technologies that it is really hard for anything to compete.
The case of the Exponential PowerPC is an example of that - it used ECL rather than CMOS to get substantially higher clock speeds, but before it had really got up to speed, the incremental improvements in CMOS had caught up and made it look less attractive, and Exponential was dead.
I expect someone to reply to this and say how much better CMOS (or whatever) is than anything else
Re:Just wondering... (Score:1)
2) Yes.
3) I thought I was the only one to notice this! "Miles per square centimeter," anyone? Gigajoules per cubic Celsius? Huh?
Tape supply. (Score:3)
Since everyone will be replacing their hard drive with rolls of scotch tape, I'll corner the market!
Hard Drives, Modems, Palimpsests and other trivia (Score:5)
Then, one day, someone realised that - hey! If you throw away the assumption that baud == bps, you can actually drive up speeds to 56Kb/s!
Then, as modems went up in speed, the same engineers moaned and groaned. The 56Kb/s limit was near, and without a total rewiring of the phone network, an act of Congress in the US (an act of God elsewhere in the world), and more money than anyone had, the 56K barrier would never be breached! Calamity!
Then, one day, another bright spark realised that if you had modems at the junctions, you could shove REALLY high-speeds down the wires without either Congress -or- God having to do anything. (Much to the relief of both.)
The Doomsday Crowd, defeated once more, lurked on the fringes. Until, one day, redemption! Hard Disks can't pass a certain density!
This, of course, is as bogus as all the other claims. If it's possible to read the past ten writes on a given sector, then you can increase the density of the disk by AT LEAST an order of magnitude. You just have to remember to read/write all ten layers at one time, and you're fine.
Then, of course, there's no rule which says you have to use 2-state logic. It's easy, but it's not mandatory. Magnetic fields can have any orientation and any strength. So long as the maximum strength isn't so high that you get bleeding, you're fine. Recognise 256 possible states (using any combination you like of orientation and strength), and you've "encoded" a whole byte into a single cell - an 8x gain in disk capacity.
Combine the two, and you've increased the capacity by over 80 times! This can be increased still further, by increasing your ability to scan over-written layers, and by increasing your ability to distinguish magnitudes and orientations. You have two degrees of freedom for rotating the magnetic field, which means that by doubling the ability to distinguish, you quadruple the number of possibilities available.
The scientists may be correct about the density, but the density is NOT the only variable open to hard drive manufacturers. In the future, it may become one of the least significant, as others are explored.
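The multipliers the parent claims follow from a one-line formula; here's a hypothetical back-of-envelope (the layer/state counts are the comment's own assumptions, not real drive parameters):

```python
import math

# With `layers` readable overwritten layers per spot and `states`
# distinguishable field orientations/strengths, the gain over plain
# 1-bit cells is layers x bits-per-cell:
def capacity_gain(layers, states):
    return layers * math.log2(states)

print(capacity_gain(10, 256))   # 10 layers x 8 bits per cell = 80x
```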
Imminent death of the hard drive predicted! (Score:2)
I'm much more concerned about two other relevant factors:
1: The I/O bottleneck inherent to IDE and SCSI interfaces. All this horsepower, and all this storage, and we can't transfer it fast enough.
2: In case nobody's noticed, tape drive technology has gotten faster, but it has not kept up with hard drives from a capacity standpoint. In a network server setting, this can be a real problem! The data sizes and drive sizes are growing, tape speeds have increased somewhat, but network speeds are still mostly at 100 Mbps or slower, and the backup window times are shrinking quickly. That's a bigger problem. We need faster interfaces and bigger tapes - or cheaper jukeboxes.
- -Josh Turiel
New storage tech could kill software companies (Score:2)
Durable storage without moving parts could easily be three orders of magnitude faster than magnetic disk tech.
With permanent storage that fast, PostgreSQL 7.0 would perform on a par with, if not faster than, Oracle 8i. All the work Oracle has done to optimise around magnetic disks would be rendered worthless or worse-- imagine how annoying it could be for a newly hired developer to slog through all of that newly-obsolete disk "wizardry" just to fix a bug...
always been ten to hundred times cheaper (Score:1)
I see that commodity retail core memory is running about $0.75 a megabyte today, and commodity disk about $8 a gigabyte. That is a factor of about 100. The "limits" of both have been decried for decades without much effect.
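The "factor of about 100" is easy to verify from the quoted prices:

```python
# Checking the figures above: RAM at $0.75/MB versus disk at $8/GB.
ram_per_mb = 0.75
disk_per_mb = 8 / 1024                    # dollars per MB
print(round(ram_per_mb / disk_per_mb))    # 96, i.e. roughly a factor of 100
```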
lead shielding (Score:1)
Speed, size, reliability (Score:3)
Size is the only dead end in sight for hard drives.
Re:But last week... (Score:1)
--
Capacity isn't the issue any more (Score:2)
The thing which would be valuable to consumers would be a sharp increase in data throughput. It's true that disk drive capacity has grown faster than CPU speed over the past few decades -- but data transfer rate has not. The result? The CPU is data-starved, both by the bus and the swap speed.
Re:Imminent death of the hard drive predicted! (Score:2)
If you look at expensive server systems, let alone at mainframes, they already have solutions for these problems -- for I/O you do pretty well with U160W SCSI and 66MHz/64bit PCI; for backups, you put your storage and your backup device on a SAN, for which Gigabit ethernet is pretty much entry-level, and your backup device is a tape jukebox (or you just mirror your disks heavily and forget conventional tape backups).
The anomaly is that top-end disk technology has come out very cheap, thanks I guess to the huge volumes that are shipping, and so you have 2000 dollar PCs with disks that really "belong" in 10000 dollar servers.
Re:Imminent death of the hard drive predicted! (Score:1)
Don't remember 3.5s, but I do remember 5.25s. In fact, I have a friend who has a couple of them whirring away in his room....
backup window times are shrinking quickly
QAD solution: Don't backup. Use RAID-5 or some other RAID that gives you redundancy with minimal cost. You could even do RAID-5 with a hot backup, so if one disk does die, another comes to life and takes its place. Giving you double redundancy! Of course, if BOTH disks die, or if a second disk dies before all the information is copied to the backup, then you're SOL!
Re:Tape supply. (Score:1)
Long road ahead (Score:2)
Null results have a place in engineering too. (Score:3)
Mmm hmm. And do you think anyone would have gotten around to that realization had someone not observed that the "baud == bps" approach would not work forever?
Right, but would anyone have bothered to do this had someone not pointed out that you couldn't get higher speeds using the conventional approach?
The moral of the story is that there is value to pointing out the limitations of current technology because that is what allows us to avoid wasting effort by developing new technologies to replace existing technologies that don't need replacing. Conversely, it helps to anticipate problems in existing technology before they start to limit progress, so that new technologies will be ready by the time those limits are reached. This is not "doomsaying", it is simply having a good understanding of current technology. You have to have a thorough understanding of existing technologies, including their limitations, before you can hope to improve on them.
-rpl
"fractions of microinches" (Score:3)
Re:Hard Drives, Modems, Palimpsests and other trivia (Score:2)
> you throw away the assumption that baud == bps,
> you can actually drive up speeds to 56Kb/s!
Excellent comment; too bad it is wrong.
Baud has not been the same as bps since the debut of 1200 bps modems in the early 80s. For instance, the good ol' Bell 212A standard for 1200 bps modems uses 600 baud with 2 bits per baud.
John
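The relationship the thread keeps circling is just symbols-per-second times bits-per-symbol; a sketch with illustrative figures (these are not a catalogue of the real modem standards):

```python
# bps = baud (symbols/sec) x bits per symbol.
def bps(baud, bits_per_symbol):
    return baud * bits_per_symbol

print(bps(600, 2))      # 1200 bps out of only 600 symbols/sec
print(bps(3200, 10))    # richer constellations leave the baud rate far behind
```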
73GB now available (Score:2)
73GB, Ultra-160 SCSI (160 MB/s), 10K RPM. About $1650, available almost anywhere (except in Seagate's online store. Go figure.). Quantum's got essentially the same drives now, though I didn't notice them for sale.
Do the math: Put, say, 7 of these drives in a $300 external enclosure and you've got over 400GB usable RAID-5 for < $12000! That's $0.03US / MB.
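Spelling out that math (prices as quoted, mid-2000):

```python
# 7 x 73 GB drives in RAID 5: one disk's worth of space goes to parity.
drives, price_each, size_gb = 7, 1650, 73
enclosure = 300
usable_gb = (drives - 1) * size_gb
total_cost = drives * price_each + enclosure
print(usable_gb)                                   # 438 GB usable
print(round(total_cost / (usable_gb * 1024), 3))   # ~$0.026/MB, about 3 cents
```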
Re:Imminent death of the hard drive predicted! (Score:3)
"The axiom 'An honest man has nothing to fear from the police'
Re:Same old same old... (Score:1)
100000 b.c. Early Engineers construct Earth
1000 b.c. Greek Engineers invent Mathematics
1600 a.d. English Engineers invent Calculus and legislate gravitational law.
1940 a.d. American Engineers write some equations and invent Atomic Bomb and once again prove their superiority to theoretical physicists
1960 a.d. Engineers take time out from inventing rock music and invent vaccine for polio
Re:Imminent death of the hard drive predicted! (Score:1)
What *IS* preventing tapes from reaching the capacity that hard drives reach? Is it because the HDs need to be in a sealed environment? Otherwise, I can't see why you don't just "pull" at the end of a track on a HD to make a long tape (logically, not physically). Of course, that'd be one hell of a long tape. But if you cut it into 32, 64 or 128 parts, you could lay them side by side and be able to read/write 1, 2 or 4 words at a time.
Yeah yeah, I know I'm oversimplifying the case. Can anybody else give an explanation of why tapes suck so much compared to HDs?
Re:"fractions of microinches" (Score:1)
Clarke's Law in Action (Score:2)
I collect Scientific American, and one of the most fascinating aspects of my collection is the series of articles on why this or that technology won't work or has reached its limits. The authors that SciAm gets to write its articles usually fit the definition in Clarke's Law above, and they have invariably been wrong, usually quickly.
Two examples:
SciAm published an article in 1947 on why long range ballistic missiles wouldn't work, mostly based on the inability to make the guidance systems accurate enough. About 5 years later we were deploying them.
They also published an article in the 1980s on why space-based lasers for strategic defense wouldn't work. I was working in that area at the time, and the problems they raised had already been solved, we just couldn't talk about it because it was classified.
Here's an approach for increasing magnetic storage capacity I haven't seen elsewhere: Current tape drives are high capacity but slow. They work just like ancient scrolls, unrolling and rolling up on a spool. Think instead like a codex (i.e. a modern book with pages). Have a stack of magnetic sheets arranged like the mess of catalogs at an auto parts place (spines down, pages held to +- 45 degrees of vertical by end holders). Use a static charge to fan out the leaves at the place you want to read, then slip in the read head from above. This gives you 3-D magnetic storage with fast (at least compared to tape) access time.
Daniel
Re:What's next in storage? (Score:1)
Are you trolling? Ah, well. I'll answer you anyway.
The disks don't have to be completely redundant. For that matter, there are raid definitions that don't have any redundancy at all--they just utilize the ability to stripe across multiple disks. (Unfortunately, I can't remember which RAID levels correspond to which features.) The point, however, is that you can aggregate multiple drives for more storage. (That's the point for the home user, at least.)
This line makes me think that you're either trolling or genuinely don't understand RAID. You don't have to lose half your storage capacity to a RAID. (Although mirroring does provide maximum redundancy in case of failures.) Most home users will probably use N+1 redundancy at most, where the data is just redundant enough that you can lose a single drive without problems. This costs you only one drive beyond your actual storage capacity. And with plain striping, you don't lose any capacity (and, consequently, don't get any redundancy).
Finally, slow? RAID is certainly not slow. With striping, it can end up being faster than a single drive, especially for multiple parallel data accesses. This is because each drive acts independently from the others in retrieving data (whereas the multiple heads in a single drive do not), allowing multiple different files to be read simultaneously. This is most noticeable in large file servers, and would probably provide little, if any, speedup for the average home user, but it's certainly not slow.
Still, I don't think RAID in its current incarnation will catch on in the desktop market. RAIDing multiple disks requires that all of the disks have the same capacity. (You can often use drives of differing sizes, but they all get treated as if they were only as large as the smallest drive in the group.) You also cannot, to my knowledge, dynamically add disks to a RAID. (That is, you cannot dynamically grow a RAID. Replacing dead disks is certainly possible.)
--Phil (I don't think I used enough parentheses in my post. (And no, I don't know LISP (at all...)))
Optical Drives (Score:2)
...and Rambus vs other memory... (Score:1)
With all the $$$ Intel is dumping into it, they seem to have forgotten what you've outlined. So, what do we see? DDR memory that can match and pass Rambus throughput for about 1/2 or less of the cost.
Re:Running out of fingers. (Score:1)
Heh. A few months ago, I was thinking the same thing (until I got that 20 gigger that's now full): to find a way to use all the "free disk space" various websites make available to people (web email, little text boxes used in describing yourself, etc). The only problem would be the redundancy needed to store this information such that one wouldn't lose it, as well as encryption.
MORE SPACE NOW!!!!! (Score:1)
---
Buckylube (Score:1)
Re:Capacity isn't the issue any more (Score:1)
How about a door camera that records your visitors, and remembers who was at the door yesterday, or what the sleazy salesman looked like. You KNOW you didn't authorize THAT large a credit card bill. Or editing home movies. Or a door that recognizes the people who live there. No keys needed! Voice recognition can also use a trifle of space, with big gobs for each new voice, and lots of vocabulary space. Or programs that watch and listen to you to "sense" your emotional tone, and respond accordingly.
This stuff isn't out yet, and will require huge amounts of permanent storage in some form.
Then there's the AI interfaces, that remember the thoughts that they've had in the past (can't be intelligent without a memory!) Etc.
And I've almost certainly left out whatever will turn out to be most significant.
Re:or even (Score:1)
1 lightyear=5.88 trillion miles
speed of light=700 million mph
1 fortnight=14 days (336 hours)
If you travel 1 lightyear in a fortnight, you will travel 5.88 trillion/336 = 17.5 billion mph: 25 times the speed of light.
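Reproducing that arithmetic with the round numbers as given (the 700-million-mph figure for c is the parent's approximation):

```python
lightyear_miles = 5.88e12
fortnight_hours = 14 * 24              # 336 hours
speed_of_light_mph = 7e8               # ~700 million mph, as quoted

mph = lightyear_miles / fortnight_hours
print(mph / 1e9)                        # ~17.5 (billion mph)
print(mph / speed_of_light_mph)         # ~25x the speed of light
```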
Seeking: Magnetic Storage Cost Chart (Score:1)
Think about how much the first CD-ROM drives cost. The first writers. Now, $300 gets you a rewriter from Best Buy:)
To show the price drop, you have to keep dropping the appropriate scale, from "thousands of dollars per byte" to "hundreds of dollars per megabyte" to "dollars per gigabyte." That, or deal with figures so far in the decimals that you have to count on your fingers to make sure it's really
I think there used to be one at hatless.com, but it seems to have slipped into 404dom.
Anyone know of a good replacement?
timothy
Re:Performance issues. (Score:2)
-B
Re:Hmmmmm... (Score:1)
Non-mechanical drives (and where to get them) (Score:1)
Re:Physical Size of disk is not important! (Score:2)
So, who really needs 1 TB of storage? (At current rate increases of doubling every 9 mos, this should be in desktops in 4.5 years, another 7 for 1,000 TB) I mean, there is only so much recorded music. 1 TB is about 250,000 MP3s. At 4 minutes a tune, that's about 2 years of solid music (no sleeping).
Digital images? That should be about 10M in 1 TB. If you spend 1 minute on average on each one, that's 20 yrs of uninterrupted viewing.
Movies? About 2-4GB each; 1TB is around 250 movies, 10TB is 2500 movies. (That's about 1 yr of solid movie enjoyment.)
So with just 12 TB, you have 23 solid years of entertainment (assuming you have a job and sleep, that translates to 69 years). Further, this assumes that data compression and storage models do not advance. So, in 6 years a PC off the shelf may have the ability to store everything to entertain you for a lifetime.
As for the future of entertainment, it may change to use the full capacity of vast hd space. Until CAVE technology is in mass production, I don't see it happening.
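The parent's estimates, spelled out (the sizes are the comment's own rough assumptions: 4 MB per MP3, 4 GB per movie, 1 TB of disk):

```python
tb_in_mb = 1024**2                           # megabytes in a terabyte
mp3s = tb_in_mb // 4                         # ~4 MB per song
music_years = mp3s * 4 / (60 * 24 * 365)     # 4 minutes per song, no sleeping
movies = tb_in_mb // 4096                    # ~4 GB per movie
print(mp3s)                     # ~250,000 songs
print(movies)                   # ~250 movies
print(round(music_years, 1))    # ~2 years of continuous music
```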
We forget so soon... (Score:2)
By the time we've hit the capacity limits of magneto-resistive drives, we'll be moving on to something else. As the article said, thin-film didn't last forever; who or what says MR has to last forever?
There will be no storage shortage in the future. Who cares about the death of MR... bring on the next generation.
PS: Imagine how long a surface scan is going to take on one of these babies. Pack a three course meal, and a good book.
-- kwashiorkor --
Pure speculation gets you nowhere.
Re:Speed, size, reliability (Score:1)
I have invented a way to massively reduce access times while reducing redundancy and increasing portability: Make the hard-drive double as the system's power supply by turning it into a flywheel. If you spin that sucker at 100,000 RPM, it'll run for weeks on a single charge and cut the average latency down to 0.3 ms. Think of the potential for flywheel-powered laptops. It's just a matter of time before someone figures out how to capture the energy from all the random jostling all laptops undergo to generate all the power it'll ever need.
If anyone actually ever does this, I wanna royalty!
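For what it's worth, the 0.3 ms figure in the joke is real rotational-latency arithmetic:

```python
# Average rotational latency is half a revolution. At 100,000 RPM:
rpm = 100_000
ms_per_rev = 60_000 / rpm       # 60,000 ms per minute / revs per minute
avg_latency_ms = ms_per_rev / 2
print(avg_latency_ms)           # 0.3 ms, matching the parent's figure
```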
Re:IBM's 70 gig is plenty (Score:1)
I remember connecting to BBS's on my fast 1200-baud modem. I can still remember the text scrolling so quickly across the screen compared to the 300-baud modem I had just replaced.
I can't wait until my connection to the GII (TLA I got from school a few years ago: Global Information Infrastructure) is as quick as it is in the movies. I would almost swear watching particular techie movies that they have a T1 for each packet coming across the line.
Moral of the story: Whatever you have, it is not enough.
Re:Physical Size of disk is not important! (Score:1)
<sarcasm>Or, alternatively, you can just barely install Windows 2005</sarcasm>
Seriously, though, you have a point, but how about the bloody sods who want more resolution on sound/video/images? Of course, I shudder to think of downloading all those images on a dialup connection *ducks and runs for cover*
Re:Null results have a place in engineering too. (Score:1)
Canada vs. USA (Score:1)
Sweden and Norway had a war around the same time, yet we don't hate each other (that badly); we just tell really bad, not-so-funny stories about each other's stupidity.
Re:Physical Size of disk is not important! (Score:1)
-----------------------------------------------
Re:IBM's 70 gig is plenty - not quite (Score:1)
No, but I would like to record, oh, say the next Woodstock 20XX Weekend Marathon of 48 or so hours on my Tivo++ while I'm out of town at some work function.
After that, we'll be sure to think up other uses for 70+GB drives.
Hmmm, how about scaling those matchbox 340MB PCMCIA drives up to a few GB so that I can record a decent length (home or otherwise) video on one? Would that be nice or what? Forget DVDs, carry a couple of the videos(packaging and all) in your pocket. How about being able to backup, copy, and carry your whole MP3 collection processed at 256Kbps over to your friend's house for a party? Those matchbox drives are just barely on the threshold of usefulness today. Put 4-8-12-24 GB on them and suddenly they become very handy indeed.
Re:Speed, size, reliability (Score:1)
Heh. But the angular momentum would be a bitch. Imagine trying to maneuver a 7 pound gyroscope, spinning at 100,000 RPM! Now, imagine an airplane full of these, going into a bank curve...
Re:Physical Size of disk is not important! (Score:1)
So that's
dirk
Re:Hard Drives, Modems, Palimpsests and other trivia (Score:2)
Re:Back Seat Nukes (Score:1)
For additional nightmares, consider that a liquefied natural gas tanker carries a nuclear bomb worth of energy. If you can figure out a way to make it go boom in a city harbor, it would be as bad as using a nuke. Or consider a truck bomb attack directed against a nuclear power plant. The containment buildings are pretty tough, but the control rooms aren't.
Daniel
Step back for a minute and think (and a rant too) (Score:1)
Re:Just wondering... (Score:1)
Re:What's next in storage? (Score:1)
As for multiple disk failures... a normal RAID 5 array will have parity information for a single drive failure. You can set up more than one parity segment. [There is a distinction most vendors ignore: RAID 5 doesn't have a "parity drive"; the parity information is distributed throughout the array to avoid the write penalty of a single drive.]
You are correct: RAID is not a substitute for backups. RAID only limits your exposure to downtime due to drive failures. There are many other things that can, and do, fail.
Re:No worries. Just go to washing machine sized dr (Score:1)
NO END IN SIGHT FOR CONVENTIONAL HARD DRIVES!!! (Score:1)
Now, take that thinking and apply it to hard drives. Instead of just buying a faster and bigger disk every two years or so, you'll start getting disks with new features that make using them better. Features like plush leather seats, rear data connector defroster, and a tiny little windshield wiper on the activity LED. Those are features that would vastly improve the life of the hard drive user, but have been ignored in the past in the mindless quest for bigger and faster.
Re:Levels of Raid (Score:2)
Raid 0 - disk striping/no parity
Raid 1 - disk mirroring
Raid 4 - hair striping?...no wait, wrong list *hehe*
Raid 5 - disk striping with distributed parity
Do you ever feel like you're diagonally parked in a parallel universe?
Re:Speed, size, reliability (Score:1)
Re:Performance issues. (Score:2)
The limitation isn't the ability to read the data off the platter, it is the ability of the platter to not break into shrapnel.
John Carmack
Re:Speed, size, reliability (Score:1)
Re:Performance issues. (Score:2)
The fastest HD that can be bought today has no better than a 5 millisecond access time.
5 microseconds is 1000 times faster than 5 milliseconds.
That's three orders of magnitude, not six.
In fact, until about 4 years ago the difference between RAM and HD access time was not that dramatic, no more than 40 times.
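The size of the gap depends entirely on what you compare the disk against; a quick unit check, with assumed round figures:

```python
# Access-time gap, all figures in seconds (5 ms disk, and assumed
# 5 us solid-state vs 5 ns RAM for comparison):
hd, solid_state, ram = 5e-3, 5e-6, 5e-9
print(hd / solid_state)   # ~1000x   -> three orders of magnitude
print(hd / ram)           # ~1e6x    -> six orders of magnitude
```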
Re:Null results have a place in engineering too. (Score:1)
Alternatively, we could just paraphrase a pithier expression: I don't know what devices we'll be storing our data on in 5 years but they'll be called "hard disks".
-rpl
Re:Performance issues. (Score:2)
Here's some random freshman physics class notes I found if you don't believe me:
http://feynman.physics.lsa.umich.edu/~myers/126
-B
Re:New storage tech could kill software companies (Score:1)
Re:Performance issues. (Score:1)
----------------------------
Re:Physical Size of disk is not important! (Score:1)
Re:Long road ahead (Score:1)
(in a nutshell: OAW is essentially a little laser at the end of the servo assembly which can heat up a specific area, changing the coercivity of just that spot, rather than the huge area that a magnetic pulse would have changed)
I do. (Score:2)
----------------------------
Re:New storage tech could kill software companies (Score:2)
No, sir.
That only speeds things up if your database is read-only! Every db write must be written to disk immediately to satisfy the "Durability" requirement of RDBMS design. Combine Durability with the problem of Concurrency, which Oracle solves with separate rollback segments (PostgreSQL now uses versioned records), and Oracle is even more disk-dependent (i.e. if you want speed, you need your rollback segment(s) on a separate disk).
If you've got a pile of RAM and a bunch of data in Oracle that you're only interested in reading, then the best way to do it is to take a snapshot of your data out of Oracle, stuff it into a Berkeley DB, and then keep that in RAM-- no RDBMS will ever be as quick as a Berkeley DB if all you're interested in doing is reading a bunch of static data.
Much of Oracle's success has been in areas that they share with OS designers-- filesystem design, memory management, process control. When Larry Ellison spouts off on one topic or another and implies that Oracle should be thought of as an OS, he's not engaging in hubris-- he's just reflecting the problems that his engineers have to face.
If you're a CS grad student, and you want to do an interesting open source project, try designing a generic database filesystem for Linux/BSD-- (sqlfs, perhaps?). An fs with so many constraints (typed data, stored in records, flushed to disk before returning a successful write, presenting consistent views to concurrent access, etc.) would be more difficult to implement than a traditional fs, but it would also present many more avenues for optimization. At the end of the project, you'd have a pretty useful abstraction layer, and the free RDBMS folks could potentially spend their time implementing new features, instead of putting so much work into reinventing the wheel.
None of this ever occurred to me until I had to install Oracle one day-- I'd been used to using free dbs on debian, where installation is essentially transparent, and you can just start hacking away on SQL immediately. Installing Oracle, on the other hand, was a lot like the first time I installed Linux back in '95-- it was ridiculously time-consuming, but when I was done, I understood many of the design principles of the system, not just how to use it.
Remember the 1-micron limit? (Score:2)
Of course, we know what happened after that -- they quit using visible light, and started using shorter-wavelength beams.
A friend of mine asks, "Is tape storage on the way out? It's not keeping up with disk storage!"
Seagate and HP just introduced a tape drive with 100 GB (UNCOMPRESSED) capacity, and they say that they can take that same technology to 250 GB native. These LTO drives do this by having oodles of tracks on a tape, so that a stretch of tape may have hundreds of tracks in parallel, and by using new tape materials that allow them to make the tape thinner so they can pack more tape into a cartridge. People said linear tape was dead, that helical scan would always be faster and higher capacity, but it appears that conventional wisdom is foiled again...
I don't know what hard drives are going to be like five years from now, but I do know they're not going to stall, capacity-wise, due to some "inherent" limit. Too many smart people are looking for ways to bypass those limits, either by using some other technology altogether (hmm... photo-sensitive materials??) or by figuring some way around the "limit" using clever application of the underlying physics.
-E
Big tape drives (Score:2)
I've been looking at the data sheets on some of the big enterprise-class storage systems. We're talking about boxes that have 5 to 15 drives, and attach via fibre channel loop to multiple servers that need to be backed up, and that have hundreds of tapes that they manage via robotics. Yes, I'm working on enhancing the Linux 'mtx' tape library control software to drive these things, though I'll never be able to personally see or test one :-}. There are some interesting challenges to handle with fibre-attached storage, specifically, the one of "who has the robotic arm now?!", but none that are unsolvable. I am confident that no matter how big hard drives get, we'll be able to back them up -- albeit for a price!
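The "who has the robotic arm now?!" problem boils down to a reservation protocol: only one initiator may command the changer at a time, and everyone else waits or backs off. Here's a toy arbiter in that spirit -- it mimics the flavor of SCSI reserve/release, but it is NOT how mtx or any real library firmware does it.

```python
import threading

class RobotArm:
    """Toy arbiter for the "who has the robotic arm now?" problem in
    a multi-initiator tape library. Mimics the spirit of a SCSI-style
    reserve/release; not the actual mtx mechanism."""

    def __init__(self):
        self._lock = threading.Lock()
        self.owner = None

    def reserve(self, host, timeout=5.0):
        # Try to claim the arm; give up after `timeout` seconds.
        if self._lock.acquire(timeout=timeout):
            self.owner = host
            return True
        return False      # somebody else is moving cartridges

    def release(self, host):
        if self.owner != host:
            raise RuntimeError("release by non-owner")
        self.owner = None
        self._lock.release()

arm = RobotArm()
assert arm.reserve("server-a")                    # claim succeeds
assert not arm.reserve("server-b", timeout=0.1)   # busy, backs off
arm.release("server-a")
assert arm.reserve("server-b")                    # now it's free
```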
-E
Fanned tapes (Score:2)
Tape drive manufacturers are raising capacities via a variety of methods. They are coming up with thinner tape materials so that they can cram more tape into a cartridge (I understand there will be a DDS5 that crams over 200 meters of tape into a tiny 4mm tape cartridge!). They are coming up with new heads that either store data more densely linearly, or that store data more densely vertically (i.e. put more tracks on a tape). They've also been playing with the speed at which data is recorded, and perhaps varying that to adjust to tape quality etc. There are also experiments ongoing with multiple heads and serpentine tapes, though I haven't heard that this is buying anything (easier to have a smaller cartridge and multiple simple drives rather than big complex cartridges and one complex drive). Having seen these guys do so many "impossible" things (they said that DDS4 was impossible!), I've given up on figuring out where it's all going to end, but I do know that traditional tape drives are nowhere near their limits as far as speed and capacity go.
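The capacity levers above multiply together, which is why the numbers keep jumping. A back-of-envelope version, with every figure below being an illustrative guess of mine rather than any vendor's datasheet value:

```python
# Back-of-envelope: how tape length, track count, and linear density
# multiply out. All numbers are illustrative guesses, not datasheet
# values for any real drive.
tape_length_m = 600      # meters of tape packed into the cartridge
tracks        = 384      # parallel tracks across the tape width
bits_per_mm   = 4000     # linear recording density per track

capacity_bytes = tape_length_m * 1000 * bits_per_mm * tracks / 8
capacity_gb = capacity_bytes / 1e9
print(f"{capacity_gb:.0f} GB native")
```

Double any one factor -- thinner tape (more length), more tracks, or denser recording -- and capacity doubles, which is why predicting "where it all ends" keeps failing.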
-E
This is why FibreChannel is growing (Score:2)
1. I/O Bottleneck - Our card does 190 MB per second with 23,000 I/Os per second in non-RAID mode (direct connect). Another good thing to do is offload as much as you can to storage processors. This saves the main CPU. Relying on system DMA is a big part of what kills IDE performance. Both SCSI and FibreChannel adapters are DMA busmasters, meaning they can read/write to host memory on their own, without using the host processor. Always use hardware RAID (adapter or external/cabinet based) instead of software based. Software RAID kills the processor.
2. Backup - Various forms of RAID can help here. You can configure things so that there are always at least two copies of your data. This doesn't help for real backup where people need things that were overwritten, like tax records from five years ago. Using RAID arrays of FibreChannel tapes speeds things up quite a bit.
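The throughput and IOPS figures quoted in point 1 imply an average transfer size, which is a quick sanity check worth doing on any such spec:

```python
# Sanity check on the quoted figures: throughput / IOPS gives the
# average I/O size those two numbers jointly imply.
throughput = 190e6       # 190 MB per second, in bytes
iops = 23_000            # I/O operations per second

avg_io_bytes = throughput / iops
print(f"average I/O ~ {avg_io_bytes / 1024:.1f} KB")
```

That works out to roughly 8 KB per operation -- a plausible database-page-sized transfer, so the two numbers are at least consistent with each other.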
As for network speeds...you are right about 100 Mbps being too slow. Heck, 1 Gbps (or 128 MBps) is still too slow. That is why you use FC arrays that support multi-initiator. Here, multiple hosts are connected to a set of storage. In this model there is no server front-end to the storage share. No network latencies.
SAN technology is really just starting. Target mode systems (like EMC's storage cabinets) have great possibilities. Simple FC-Adapters can run in this mode as well. In a raw format, they can avoid the OS almost entirely, using it only for initialization and configuration. Backup can be done without any OS interaction.
What about multi-TB databases? (Score:2)
As for a data-starved CPU...not with IDE. IDE controllers use the host CPU for DMA, so your CPU is quite busy. SCSI and FibreChannel adapters are busmasters, but they are also faster. It is true that today's CPUs can push much harder than today's storage. It is also true that even a 66 MHz, 64-bit bus is too slow. Interrupt sharing doesn't help either. That is why PCI is on its last legs. PCI-X will not last too long either, with InfiniBand on the way.
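The bus math behind "too slow" is simple: width times clock gives the theoretical peak, and everything on the bus shares it.

```python
# Peak bandwidth of the bus mentioned above: 64 bits wide at 66 MHz.
bus_width_bits = 64
clock_hz = 66e6

peak_bytes_per_s = bus_width_bits / 8 * clock_hz
print(f"peak ~ {peak_bytes_per_s / 1e6:.0f} MB/s")
```

That's 528 MB/s theoretical, shared by every device on the bus and eroded further by arbitration and interrupt overhead -- so a couple of fast FC adapters can saturate it on their own.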
Re:Why Bubble Memory never went anywhere (Score:2)
It's interesting to note, though, that CMOS had almost exactly the opposite problem when it first came out -- it was slow, but had extremely low power consumption. It was also dreadfully static-sensitive. But CMOS itself managed to displace the older NMOS technology in the early '80s, so these things can happen.
Re:Physical Size of disk is not important! (Score:2)
Well, my comment is coming in late, but better late than never. I think you're definitely on the right track (heh, heh). Raw storage capacity, just like raw processor speed, is quickly becoming a much less important issue for personal uses.
I mean, if you think those numbers are huge, consider text: without compression, and with almost 100% formatting overhead, 1 TB would store hundreds of years of reading material. In other words, textbooks for any field of endeavor ever, all the classics, tons of science fiction (if everything were printed out)...
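The rough arithmetic behind that claim, with every assumption being mine (reading speed, bytes per word, overhead) rather than anything from the post:

```python
# Rough arithmetic behind "hundreds of years of reading material".
# Assumptions (all mine): 250 words/minute, 8 hours of reading a
# day, ~6 bytes per word of plain text, and half the terabyte lost
# to the ~100% formatting overhead mentioned above.
words_per_year = 250 * 60 * 8 * 365   # ~43.8 million words
bytes_per_year = words_per_year * 6   # ~263 MB of text
usable_bytes = 1e12 / 2               # 1 TB minus the overhead

years = usable_bytes / bytes_per_year
print(f"~ {years:.0f} years of nonstop reading")
```

Even with those conservative numbers it comes out to well over a thousand years -- "hundreds of years" is actually an understatement.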
So the problem, as we already realize with that puny artifact called the World Wide Web, is what the heck are you going to do about indexing, querying, and searching. Advances in those domains will very quickly dwarf the contributions of merely higher capacity or performance. Unfortunately, these are very, very hard problems.
Re:New storage tech could kill software companies (Score:2)
An interesting side light of something like this is that the project would eventually probably end unix text processing as we know it. The power of unix utilities to treat normal text files as quick and dirty databases is legendary. If you're just warped enough, you can see the translation of many unix utilities and pipelines into the project/restrict/join framework of relational database theory (which, alas, is not quite the same thing as any RDBMS).
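That correspondence can be made concrete. Here's a sketch with the classic unix tool named in each comment and Python's comprehensions doing the relational work; the passwd/groups data is made up for illustration.

```python
# Unix text tools as relational algebra, sketched in Python:
#   grep ~ restrict (selection), cut ~ project, join ~ join.
passwd = [
    {"user": "root",  "uid": 0,    "shell": "/bin/sh"},
    {"user": "alice", "uid": 1000, "shell": "/bin/bash"},
    {"user": "bob",   "uid": 1001, "shell": "/bin/bash"},
]
groups = [
    {"user": "alice", "group": "wheel"},
    {"user": "bob",   "group": "users"},
]

# grep bash passwd          -> restrict: keep matching rows
bash_users = [r for r in passwd if r["shell"] == "/bin/bash"]

# cut -f1 passwd            -> project: keep only some columns
names = [{"user": r["user"]} for r in bash_users]

# join passwd groups        -> join on the shared field
joined = [{**p, **g} for p in bash_users for g in groups
          if p["user"] == g["user"]]
```

The pipe-separated quick-and-dirty version (`grep bash /etc/passwd | cut -d: -f1`) is doing exactly the restrict-then-project above, just on byte streams instead of typed records -- which is both its power and what a real database layer would take away.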
Another interesting point is that I could swear that I read about a project to bring a persistent (and ultra-secure) computing environment to Linux, based on a research project done at Penn. But, of course, now I can't recover the URL of the project or what it really did. :-(