Technology

The End Of The Road For Magnetic Hard Drives? 111

Phase Shifter wrote to us about the limits of conventional hard drives, which Scientific American is discussing. The article talks about the history of hard drives, and why sometime soon, due to the limitations of the superparamagnetic effect, we will need to find a new storage type. It's a cool background read on hard drives and what goes into them.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    But then again people have been predicting the demise of the hard drive for nearly twenty years.

    Its luck has to run out sometime, but I am willing to bet that it will still give the best value for money for some years to come.
  • ungodly huge doesn't mean ungodly fast. can you say bottleneck...
  • OK, we are going to have to find a new technology because they're going to hit a limit on increasing drive density... Ummm, no.

    To double storage capacity it is not essential to double density - you could double the physical size of disk (or add another).

    Just because they cannot get smaller doesn't mean they get ditched - Want double the storage, get double the number of disks...
  • I did a report in college on bubble memory. Wave of the future. Got a B. Bubble memory got the boot.
  • 1) What is the largest capacity hard drive money can buy today?
    2) Do non-logarithmic graphs that don't start at zero suck, or what?
    3) "kilometers per square inch" is a sweet unit.
  • Without the common "thin film" type data storage we've been with for so long (MFM started major production around '83?), what will we be switching to next? There have been a few /. articles about such stuff; I'm too lazy to go and get some links, but they're there. Either way, optical storage is the future. For instance, there was a /. article talking about being able to have a CD that held 140 GB of data, and a roll of scotch tape being able to hold 10 GB of data. Another possibility is that RAID will become common, allowing the easy use of multiple drives. I imagine this will be the temporary solution, as the above two optical solutions are a long way away from being seen at your local PC store.
  • by dillon_rinker ( 17944 ) on Tuesday April 11, 2000 @02:57AM (#1140711) Homepage
    A Brief History of Hard Drives
    (a la Book-A-Minute [rinkworks.com]).

    Scientists: OH NO! Hard drives can't get any better!

    Engineers: Wait! Your science is WRONG! (Writes some new equations).

    Computer industry: You have SAVED us!

    Geeks: YAY!

  • By adding disks or increasing size instead of increasing data density, the cost per MB goes up considerably.

    The point is how to get the most storage for the lowest cost.
  • we were reading promises of one TB per square cm! OK, I might be exaggerating, but the point is the same.

    Unlike other computer technologies, the hard disk market consistently finds some revolutionary way to make their products faster, bigger, and cheaper, while staying in business. With that kind of competition, I don't think the hard drive is going away...

    --

  • I was just about to ask whatever happened to bubble memory. It was supposed to be the big thing, they even made good use of it on a Doctor Who episode(Logopolis) once.
  • til magnetic hard drives are moved out of the market. There is still much research to be done before new types can even start to be available to consumers. Then, for the first 2 or more years, they will be buggy and extremely expensive. After enough people start buying them and they drop in price, there will most likely be a huge surge in development for a better way to make these hard drives, and the ones that all the people bought will be obsolete. Then after about 7-10 years or more they will become commonplace in the market. The thing is that this isn't new news at all; computer hardware and software have and will continually replace themselves with newer and "better" versions. Chances are that the system you run now won't even have the opportunity to try one of the new types of hard drives that will be made available, and even if it could, it would be too slow to handle them. So in conclusion, you have to roll with the market and upgrade as you see fit. Unless you have an interest in design, don't worry yourself about products that won't be available for many years.
  • There is still a need for large capacity, non-volatile memory devices that do not have moving parts. Disk drives are too sensitive to shock, temperature extremes and low air pressure. I've read that mountaintop astronomical observatories (higher than 10,000 feet) have problems with disk drive reliability.
  • Bigger hard drives = more bloated code.
    Either that or...
    Bigger hard drives = more pr0n and mp3s.
    Man, I miss my C64...who needs hard drives when I've got my trusty cassette drive :-)?

  • Increasing the diameter of the disk makes it more difficult for the head positioning servo-mechanisms to keep the head over the track. You would have to reduce the track density and/or the spindle speed.
  • by roman_mir ( 125474 ) on Tuesday April 11, 2000 @03:16AM (#1140719) Homepage Journal
    Xerox is sponsoring research into 3D storage devices that can manage tens of thousands of gigabits in the volume of a sugar cube [geocities.com] at the University of Toronto
  • It's like tying ribbons round your fingers.
    And we're rapidly running out of limbs.

    Next we'll have probabilistic memory based on quantum theory. (Such as the latest secure communication proposals)

    And to think I used to send email out continuously onto the internet back to myself in the days when my university limited us to 200K network space...

    I had dreams to write a virtual disk driver using mail servers across the world.

    (I have better sense now though...)

    zUE65db/j/nDUFJYb6i88bhwJz26I1SMdr78iB6VjqA+tp6q PE
    p7L98Z/QCg/7JD

    -grin-
  • by Anonymous Coward
    If I may draw an analogy. It is like McDonalds announcing that they cannot make their burgers any bigger than 7 feet in diameter. It may well be a practical limitation, and indeed the physics behind it may be sound, but the fact is, that the consumer will have rejected the product as over-engineered long before it reaches the limitations imposed by the laws of nature.

    I mean, can you imagine attempting to eat a burger that size ? It is quite simply ridiculous.

    thank you

    dmg

  • bottleneck
  • by jrs ( 27486 )
    Just bring one of these non-mechanical drives to the market and stop teasing us! :(
  • Bubble memory was expensive while Winchester (remember those) drives were becoming cheaper in the early 80's. Along came the RAM shortage in the mid 80's and the rest is history.

    All you need is RAM connected to a battery and Bob's your uncle. Of course, RAID is a lot cheaper. It would be cheaper to daisychain fifty 20 gig drives to get a cheap terabyte and then RAID those bad boys. If one goes bad, plug in another and keep moving.

    All this so some secretary can gangbang our inboxes with promises of GAP clothing. Why are we doing this again?
  • What to do? I'd want a RAID system with the access time of RAM or even faster. In fact, it should be possible even with current technology: increase the rotation speed of a hard drive by about 100 times, and decrease the number of tracks by 10 times (making the bits on the platter much larger, say by 10 times); the distance between the tracks should stay almost the same (less radial travel time for the head), maybe increased by 2-5 times. I want this sucker to have up to 50 reader heads on each platter, and let's make the platters larger while we're at it. This should bring the speed up to match the performance of today's RAM. Of course, the capacity of one such hard drive will go down by about 10 times (so instead of 9 GB it would be about 900 MB). Then put something like 50 of those together into a RAID system and we have about 45 GB of data with the access time of RAM. The cost of one drive would go up due to the better mechanical parts (liquid bearings instead of ball bearings, higher rotational speeds, and the technology to combat air turbulence), but it would also go down due to the lower resolution of the platter and larger bit sizes.

    This is EVIL EVIL EVIL idea HAHAHAHAHHAAHAHAAAAA!
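    A quick check of the back-of-envelope numbers above, using the poster's own figures (9 GB base drive, 10x fewer/larger bits, 50 drives in the array):

```python
# Back-of-envelope check of the scheme above (the poster's figures).
todays_drive_gb = 9
per_drive_gb = todays_drive_gb / 10   # 10x fewer, larger bits: ~900 MB/drive
n_drives = 50
total_gb = n_drives * per_drive_gb
print(per_drive_gb * 1000, total_gb)  # MB per drive, GB in the whole array
```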
  • Another possibility is RAID will become common, allowing the easy use of multiple drives.

    The problem is that RAID (at level 3 and/or 5) just lets you use more space for one big volume without taking as much of a risk as you otherwise would. Instead, you take a performance hit relative to "RAID 0" (striping--which, not being redundant, should really be called AID :-).

    The catch is that the larger the disks in your RAID 3/5 RAIDset, the longer your window of vulnerability when (not if) one fails. Remember, the idea of RAID is that if one disk goes bad, you can reconstruct the data using the parity blocks from all the other drives. That was fine in the days of 2GB drives, but it takes a long time to reconstruct 36GB or more of data, even if you have a dedicated hardware RAID controller. (Even an 18GB disk takes a while.)

    If a second disk fails during this time (and it's more likely to, since you're now hitting it heavily to read off all the blocks you need to recalculate the missing disk with), you're hosed.

    Also, RAID doesn't protect you from software. Directory corruption or an accidental rm/newfs will result in you having a nicely protected, redundant copy of your useless or empty filesystem.

    "RAID 10" or RAID 1+0 or whatever the marketers are calling it is just striping across mirror pairs; that requires twice as many disks as you'd otherwise need for the same amount of storage, but it does give you reliability without the same level of speed hit.

    (Yes, folks, the faster/better/cheaper trio is still pick two. Just ask the Mars Polar Lander team.)
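    The capacity trade-offs among the RAID levels mentioned above are easy to sketch. A minimal illustration (the drive count and sizes here are hypothetical):

```python
def usable_capacity(n_disks, disk_gb, level):
    """Rough usable capacity (GB) for a few common RAID levels.

    "0"  - striping: no redundancy, all space usable
    "10" - striping across mirror pairs: half the space goes to mirrors
    "5"  - distributed parity: one disk's worth of space goes to parity
    """
    if level == "0":
        return n_disks * disk_gb
    if level == "10":
        return n_disks * disk_gb // 2
    if level == "5":
        return (n_disks - 1) * disk_gb
    raise ValueError("unsupported RAID level")

# Hypothetical array: six 36 GB disks.
for level in ("0", "10", "5"):
    print(f"RAID {level}: {usable_capacity(6, 36, level)} GB usable")
```

    The same numbers show why a big RAID 5 set has a long rebuild window: losing one disk means re-reading every block on all the survivors to reconstruct it.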

  • I can get all my 16-64K Apple ][ programs on that 70 gig drive. And still have space for the C64 code.

    Not to mention all my AppleScript letters. The biggest problem is getting UDMA/SCSI drives working on the Apple ][. The C64 is a whole different interface issue....

    (Gonna run out of space? Bah, go on a data diet. Do you NEED to be storing HDTV DVD's on your hard drive?)
  • by fnj ( 64210 ) on Tuesday April 11, 2000 @03:48AM (#1140728)
    Hard drive technology has progressed at least as fast as other computer technologies. Let's compare the present day to the day of the original IBM PC XT, some 17 years ago.

    Processor, 4.77 MHz -> 600 MHz: 126 times
    (let's say 1000 times, because the P III does a lot more with each MHz than the 8088)

    RAM, 64 KB -> 64 MB: 1024 times

    Modems, 9600 baud -> 56K: 6 times
    (even 1.5M for cable modem is only 156 times)

    Hard disk drives, 10 MB -> 20 GB: 2000 times

    Hmmmm, seems like the much-poo-poo'ed electro-mechanical technology has easily kept pace with the straight electronic technologies, including the breathtaking advances in chip density.

    Now, when it looks like CPU speed and RAM density really ARE about to reach a plateau for a while, or at least slow their rate of advance, hard disk technology is poised to really rocket ahead. Look at the news from IBM research, foretelling VAST advances in the fairly near future.
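    The ratios above roughly check out; a quick sketch using the poster's round numbers:

```python
# Growth factors cited above, using the poster's round numbers.
ratios = {
    "CPU clock": 600e6 / 4.77e6,    # 4.77 MHz XT -> 600 MHz: ~126x
    "RAM":       (64 * 1024) / 64,  # 64 KB -> 64 MB: 1024x
    "modem":     56_000 / 9_600,    # 9600 bps -> 56 Kbps: ~6x
    "hard disk": 20e9 / 10e6,       # 10 MB -> 20 GB: 2000x
}
for name, r in ratios.items():
    print(f"{name}: {r:.0f}x")
```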
  • Sounds like lustful coveting to me. Give me 20 Hail Mary's, a dozen situps and your root password.

    Liquid bearings? How about spinning on a magnetic field like one of them Japanese trains? Of course, your data would suffer slightly.
  • I don't have a link to this info handy, but my recollection is that bubble memory was _way_ too slow - and hard drives just kept getting better ...

    It seems to be a bit of a trend in this industry that whatever works early on gets a lot of resources put into incrementally improving it and making it cheap, such that competing technologies have to be _hugely_ better to have any chance of taking over.

    That is (IMO) partly why:
    - we still use hard drives,
    - CPU's still use CMOS rather than one of the faster switching methods,
    - the x86 architecture is still dominant,
    - the UNIX model is the base of nearly all operating systems.

    There may be potentially 'better' technologies than these out there, but so much engineering and optimisation has gone into the current ones that it is really hard for anything to compete.

    The case of the Exponential PowerPC is an example of that - it used ECL rather than CMOS to get substantially higher clock speeds, but before it had really got up to speed, the incremental improvements in CMOS had made it look less attractive, and Exponential was dead.

    I expect someone to reply to this and say how much better CMOS (or whatever) is than anything else ... but at least some of that will be due to the massive research that has gone into making the current technology work well.
  • 1) The largest drive I've seen retail (Future Shop) was 40 GB (for C$500). There was a /. article a while back about IBM producing a 73 GB drive which should be available now.

    2) Yes.

    3) I thought I was the only one to notice this! "Miles per square centimeter," anyone? Gigajoules per cubic Celsius? Huh?

  • by www.sorehands.com ( 142825 ) on Tuesday April 11, 2000 @04:01AM (#1140732) Homepage
    I'm off to the store. I have to stock up on my scotch tape [slashdot.org] supply.

    Since everyone will be replacing their hard drive with rolls of scotch tape, I'll corner the market!

  • Once upon a time, it was said that modems could never exceed 9,600 bps, as the phone lines couldn't cope with higher than 9,600 baud.

    Then, one day, someone realised that - hey! If you throw away the assumption that baud == bps, you can actually drive up speeds to 56Kb/s!

    Then, as modems went up in speed, the same engineers moaned and groaned. The 56Kb/s limit was near, and without a total rewiring of the phone network, an act of Congress in the US (an act of God elsewhere in the world), and more money than anyone had, the 56K barrier would never be breached! Calamity!

    Then, one day, another bright spark realised that if you had modems at the junctions, you could shove REALLY high-speeds down the wires without either Congress -or- God having to do anything. (Much to the relief of both.)

    The Doomsday Crowd, defeated once more, lurked on the fringes. Until, one day, redemption! Hard Disks can't pass a certain density!

    This, of course, is as bogus as all the other claims. If it's possible to read the past ten writes on a given sector, then you can increase the density of the disk by AT LEAST an order of magnitude. You just have to remember to read/write all ten layers at one time, and you're fine.

    Then, of course, there's no rule which says you have to use 2-state logic. It's easy, but it's not mandatory. Magnetic fields can have any orientation and any strength. So long as the maximum strength isn't so high that you get bleeding, you're fine. Recognise 256 possible states (using any combination you like of orientation and strength), and you've "encoded" a whole byte where a single bit used to go - an 8x gain in disk capacity.

    Combine the two, and you've increased the capacity 80 times! This can be increased still further, by improving your ability to scan over-written layers, and by improving your ability to distinguish magnitudes and orientations. You have two degrees of freedom for rotating the magnetic field, which means that by doubling the ability to distinguish, you quadruple the number of possibilities available.

    The scientists may be correct about the density, but the density is NOT the only variable open to hard drive manufacturers. In the future, it may become one of the least significant, as others are explored.
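    A quick sketch of the arithmetic in this scheme (the layer and state counts are the poster's hypotheticals, not anything shipping):

```python
import math

def capacity_multiplier(layers, states_per_cell):
    """Gain over today's 1-bit-per-spot recording: `layers` stacked
    writes read back per spot, each spot holding log2(states) bits."""
    return layers * math.log2(states_per_cell)

# Ten readable layers x 256-state cells (8 bits each) -> the 80x above.
print(capacity_multiplier(10, 256))
```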

  • Yet again. Densities may not increase forever, but when you can cram 40+ GB on a single platter, just add a couple of platters and make the drive a little thicker. Anyone remember full-height 3.5" drives? Maybe they'll make a brief comeback once density plateaus.

    I'm much more concerned about two other relevant factors:

    1: The I/O bottleneck inherent to IDE and SCSI interfaces. All this horsepower, and all this storage, and we can't transfer it fast enough.

    2: In case nobody's noticed, tape drive technology has gotten faster, but from a capacity standpoint it has not kept up with hard drives at all. In a network server setting, this can be a real problem! The data sizes and drive sizes are growing, tape speeds have increased somewhat, but network speeds are still mostly at 100 Mbps or slower, and the backup window times are shrinking quickly. That's a bigger problem. We need faster interfaces and bigger tapes - or cheaper jukeboxes.

    - -Josh Turiel
  • If cheap+fast permanent storage arrives quickly, then Oracle, for one, will be in deep trouble.

    Durable storage without moving parts could easily be three orders of magnitude faster than magnetic disk tech.

    With permanent storage that fast, PostgreSQL 7.0 would perform on a par with, if not faster than, Oracle 8i. All the work Oracle has done to optimise around magnetic disks would be rendered worthless or worse-- imagine how annoying it could be for a newly hired developer to slog through all of that newly-obsolete disk "wizardry" just to fix a bug...
  • And more permanent than semiconductor memory. I see that commodity retail core memory is running about $0.75 a megabyte today, and commodity disk about $8 a gigabyte. That is a factor of about 100. The "limits" of both have been decried for decades without much effect.

  • This is right in line with "processors will never break 100 MHz," as they would require lead shielding to keep the RF interference from ionizing all the air in your house.
  • by garver ( 30881 ) on Tuesday April 11, 2000 @04:19AM (#1140738)

    Size is the only dead end in sight for hard drives.

    • Speed. The average hard drive is spinning at 7200 RPM nowadays. At this speed, there is an average latency of 4.6ms just to spin the track under the head. You can't do much about this except spin the disk faster. At 10000 and 15000 (thanks Seagate), you still have 3ms and 2ms, respectively. This is on top of any time needed to move the head itself. With most access times <8ms in the low end and <5ms in the high end, this ceiling isn't too far off. Sure you can spin the disk faster, but this gets expensive (money, energy, and heat).
    • Size. I think the article addresses this quite nicely. If we hit the ceiling here, we can increase the surface area. But this again gets expensive in all ways and precludes a 100 GB laptop drive in a 1.5 in width and 1 watt power consumption. You know you want it.
    • Reliability. To me, this is the biggest problem. Hard drives are the most relied-upon moving part in a computer and yet are the first thing to go in most systems (followed closely by the power supply, whose death is usually hastened by a power-hungry hard drive spinning up and down). RAID (or similar) can tackle this, but is expensive in all ways and requires the user's attention.
    Finally, I won't argue that hard drives will meet their doom in 5 years; hell, we don't even have a suitable replacement yet (only stuff in research). I just figure they will be a story I tell my grandchildren.
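    The latency figures quoted above are just half a revolution's worth of waiting. A minimal check (a half-revolution works out to about 4.2 ms at 7200 RPM, a touch under the 4.6 ms quoted, and matches the 3 ms and 2 ms figures):

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: time for half a revolution, in ms."""
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

for rpm in (7200, 10_000, 15_000):
    print(f"{rpm} RPM: {avg_rotational_latency_ms(rpm):.2f} ms")
```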
  • In 1995, I could buy a half-GB drive for $200. Today, that buys over 40 times more storage in the same amount of space, with four times the performance. CPUs are maybe 20 times faster since 1995, but $1/MHz is still cheap -- plus, they're getting bigger!

    --
  • Realistically, how many people need vast increases in capacity? Sure, there are uses for larger persistent random access storage devices, but the number of people who actually keep (legal) video on their machines is fairly small. Who else has a use for -- much less a need for -- 2^37 Byte platters? (2^40 bits = 2^37 bytes ~ 100 GBytes effective capacity.)

    The thing which would be valuable to consumers would be a sharp increase in data throughput. It's true that disk drive capacity has grown faster than CPU speed over the past few decades -- but data transfer rate has not. The result? The CPU is data-starved, both by the bus and the swap speed.

  • Both these problems are really economic, not technological, in the sense that solutions are out there, they just cost a lot more than the big disks.

    If you look at expensive server systems, let alone at mainframes, they already have solutions for these problems -- for I/O you do pretty well with U160W SCSI and 66MHz/64bit PCI; for backups, you put your storage and your backup device on a SAN, for which Gigabit ethernet is pretty much entry-level, and your backup device is a tape jukebox (or you just mirror your disks heavily and forget conventional tape backups).

    The anomaly is that top-end disk technology has come out very cheap, thanks I guess to the huge volumes that are shipping, and so you have 2000 dollar PCs with disks that really "belong" in 10000 dollar servers.
  • Anyone remember full-height 3.5" drives? Maybe they'll make a brief comeback once density plateaus.

    Don't remember 3.5s, but I do remember 5.25s. In fact, I have a friend who has a couple of them whirring away in his room....

    backup window times are shrinking quickly

    QAD solution: Don't backup. Use RAID-5 or some other RAID that gives you redundancy with minimal cost. You could even do RAID-5 with a hot backup, so if one disk does die, another comes to life and takes its place. Giving you double redundancy! Of course, if BOTH disks die, or if a second disk dies before all the information is copied to the backup, then you're SOL!

  • Tape is for storage, the hot bowls of steaming grits is fun.

  • According to a friend of mine who works in the industry, the leading limitation on density is seeking from track to track and remaining locked to the track. There is some new head technology coming down the pipe which should vastly improve hard drive densities. One of the most difficult things to do is servo the heads. This new technology should eliminate this limitation (sorry, I won't go into details).

  • Once upon a time, it was said that modems could never exceed 9,600 bps, as the phone lines couldn't cope with higher than 9,600 baud. Then, one day, someone realised that - hey! If you throw away the assumption that baud == bps, you can actually drive up speeds to 56Kb/s!

    Mmm hmm. And do you think anyone would have gotten around to that realization had someone not observed that the "baud == bps" approach would not work forever?

    Then, as modems went up in speed, the same engineers moaned and groaned. The 56Kb/s limit was near, and without a total rewiring of the phone network ... the 56K barrier would never be breached! Calamity! Then, one day, another bright spark realised that if you had modems at the junctions, you could shove REALLY high-speeds down the wires ...

    Right, but would anyone have bothered to do this had someone not pointed out that you couldn't get higher speeds using the conventional approach?


    The moral of the story is that there is value to pointing out the limitations of current technology because that is what allows us to avoid wasting effort by developing new technologies to replace existing technologies that don't need replacing. Conversely, it helps to anticipate problems in existing technology before they start to limit progress, so that new technologies will be ready by the time those limits are reached. This is not "doomsaying", it is simply having a good understanding of current technology. You have to have a thorough understanding of existing technologies, including their limitations, before you can hope to improve on them.


    -rpl

  • by superdoo ( 13097 ) on Tuesday April 11, 2000 @04:47AM (#1140746) Homepage
    Wow, it really is Scientific American. Down with metric!
  • > Then, one day, someone realised that - hey! If
    > you throw away the assumption that baud == bps,
    > you can actually drive up speeds to 56Kb/s!

    Excellent comment; too bad it is wrong.

    Baud has not been the same as bps since the debut of 1200 bps modems in the early 80s. For instance, the good ol' Bell 212A standard for 1200 bps modems uses 600 baud with 2 bits per baud.
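    The relationship the thread keeps circling is simply: bit rate = symbol rate (baud) x bits per symbol, where bits per symbol is log2 of the number of distinguishable line states. A quick sketch (the example rates are illustrative):

```python
import math

def bps(baud, line_states):
    """Bit rate = symbol rate (baud) x bits per symbol (log2 of states)."""
    return baud * math.log2(line_states)

# 600 baud with 4 distinguishable states (2 bits/symbol) -> 1200 bps;
# 600 baud with 16 states (4 bits/symbol) -> 2400 bps.
print(bps(600, 4), bps(600, 16))
```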

    John
  • Just a month ago I complained on /. that the 73GB drives Seagate talked about in October/November 1999 were still not out. Now they are.

    73GB, Ultra-160 SCSI (160 MB/s), 10K RPM. About $1650, available almost anywhere (except in Seagate's online store. Go figure.). Quantum's got essentially the same drives now, tho I didn't notice them for sale.

    Do the math: Put, say, 7 of these drives in a $300 external enclosure and you've got over 400GB usable RAID-5 for < $12000! That's $0.03US / MB.
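    The arithmetic behind those figures, using the prices quoted in the post:

```python
# The math behind the figures above (prices as quoted in the post).
n_drives, drive_gb = 7, 73
drive_cost, enclosure_cost = 1650, 300          # US$, as quoted
usable_gb = (n_drives - 1) * drive_gb           # RAID-5: one disk of parity
total_cost = n_drives * drive_cost + enclosure_cost
cost_per_mb = total_cost / (usable_gb * 1000)
print(usable_gb, total_cost, round(cost_per_mb, 3))
```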

  • by Tassach ( 137772 ) on Tuesday April 11, 2000 @04:58AM (#1140749)
    It's kind of hard to have off-site storage of your RAID-5 array. Relying on RAID-5 will protect you against a drive dying - but that is not the only failure mode you have to worry about. Someone could accidentally blow away or corrupt the file system; now you have a nice redundant copy of a blank filesystem. A secretary could accidentally delete a month's worth of files - relying on RAID, you'd have no way to recover from this. Your building could burn down - without off-site backups, your business could go under. The data stored on the machine can be FAR more valuable than the machine itself. The value of the data determines how paranoid your backup strategy has to be.


    "The axiom 'An honest man has nothing to fear from the police'

  • Brief History according to Engineers (well not _ALL_ Engineers, but certain ones who shall remain nameless but are posting above this)

    100000 b.c. Early Engineers construct Earth

    1000 b.c. Greek Engineers invent Mathematics

    1600 a.d. English Engineers invent Calculus and legislate gravitational law.

    1940 a.d. American Engineers write some equations and invent Atomic Bomb and once again prove their superiority to theoretical physicists

    1960 a.d. Engineers take time out from inventing rock music and invent vaccine for polio

  • Good call. Forgot about failure modes other than purely mechanical ones.

    What *IS* preventing tapes from reaching the capacity that hard drives reach? Is it because the HDs need to be in a sealed environment? Otherwise, I can't see why you just don't "pull" at the end of a track on a HD to make a long tape (logically, not physically). Of course, that'd be one hell of a long tape. But if you cut it into 32, 64 or 128 parts, you could lay them side by side and be able to read/write 1, 2 or 4 words at a time.

    Yeah yeah, I know I'm oversimplifying the case. Can anybody else give an explanation of why tapes suck so much compared to HDs?

  • That is hilarious! When I read that article I noticed that line didn't flow very smoothly - but I never made the comparison to Scientific AMERICAN... Funny as hell!
  • "When an elderly but distinguished scientist says something is impossible, he is almost surely wrong" - Arthur C. Clarke

    I collect Scientific American, and one of the most fascinating aspects of my collection is the series of articles on why this or that technology won't work or has reached its limits. The authors that SciAm gets to write its articles usually fit the definition in Clarke's Law above, and they have invariably been wrong, usually quickly.

    Two examples:
    SciAm published an article in 1947 on why long range ballistic missiles wouldn't work, mostly based on the inability to make the guidance systems accurate enough. About 5 years later we were deploying them.

    They also published an article in the 1980s on why space-based lasers for strategic defense wouldn't work. I was working in that area at the time, and the problems they raised had already been solved; we just couldn't talk about it because it was classified.

    Here's an approach for increasing magnetic storage capacity I haven't seen elsewhere: Current tape drives are high capacity but slow. They work just like ancient scrolls, unrolling and rolling up on a spool. Think instead like a codex (i.e. a modern book with pages). Have a stack of magnetic sheets arranged like the mess of catalogs at an auto parts place (spines down, pages held to +- 45 degrees of vertical by end holders). Use a static charge to fan out the leaves at the place you want to read, then slip in the read head from above. This gives you 3-D magnetic storage with fast (at least compared to tape) access time.

    Daniel
  • Are you trolling? Ah, well. I'll answer you anyway.

    umm..._Redundant_ Array of Inexpensive Disks each being used to back up the data of the other...how does this help with storage capacity?

    The disks don't have to be completely redundant. For that matter, there are raid definitions that don't have any redundancy at all--they just utilize the ability to stripe across multiple disks. (Unfortunately, I can't remember which RAID levels correspond to which features.) The point, however, is that you can aggregate multiple drives for more storage. (That's the point for the home user, at least.)

    RAID is damn slow and only something like half the combined storage capacity of the drives is available.

    This line makes me think that you're either trolling or genuinely don't understand RAID. You don't have to lose half your storage capacity to a RAID. (Although mirroring does provide maximum redundancy in case of failures.) Most home users will probably use N+1 redundancy at most, where the data is just redundant enough that you can lose a single drive without problems. This costs you only one drive beyond your actual storage capacity. And with plain striping, you don't lose any capacity (and, consequently, don't get any redundancy).

    Finally, slow? RAID is certainly not slow. With striping, it can end up being faster than a single drive, especially for multiple parallel data accesses. This is because each drive acts independently from the others in retrieving data (whereas the multiple heads in a single drive do not), allowing multiple different files to be read simultaneously. This is most noticeable in large file servers, and would probably provide little, if any, speedup for the average home user, but it's certainly not slow.

    Still, I don't think RAID in its current incarnation will catch on in the desktop market. RAIDing multiple disks requires that all of the disks have the same capacity. (You can often use drives of differing sizes, but they all get treated as if they were only as large as the smallest drive in the group.) You also cannot, to my knowledge, dynamically add disks to a RAID. (That is, you cannot dynamically grow a RAID. Replacing dead disks is certainly possible.)


    --Phil (I don't think I used enough parentheses in my post. (And no, I don't know LISP (at all...)))
  • Magnetic drives have been predicted to fade out long ago. Ever since the late 80's, with the optical drives that held ~2 gigs, magnetic HD companies have struggled against the beliefs of the physicists. Engineers developed newer ways of making magnetic media more efficient; while the optical theories were more effective in the long run, the drive for magnetic media was more prominent, as were the funds. Let's face it: how many people own magnetic media as opposed to optical media? In the hardware industry, if there is funding and money, there is progress. MORAL: Support the optical media foundations around you. They will give you terabytes to lose files in instead of gigabytes :-)
  • Great post...I'd only add one thing to it; Rambus memory.

    With all the $$$ Intel is dumping into it, they seem to have forgotten what you've outlined. So, what do we see? DDR memory that can match and pass Rambus throughput for about 1/2 or less of the cost.
  • I had dreams to write a virtual disk driver using mail servers across the world.

    Heh. A few months ago, I was thinking the same thing (until I got that 20 gigger that's now full) - to find a way to use all the "free disk space" various websites made available to people (web email, little text boxes used in describing yourself, etc). The only problems would be the redundancy needed to store this information such that one wouldn't lose it, as well as encryption.

  • Science schmince! Quickly...I need more space to store MP3s!! NOW!
    ---
  • I know a guy who has been working on nanotech for real products. Apparently the first realistic uses of the stuff is as thin films.. the soonest apparently may be a carbon tubule based lubricant for hard drives, which lets the head skate over the medium. Problems include toxicity of the stuff plus difficulty in quality control but it ain't impossible. Covered in the article? dunno, slashdotted.
  • Many uses. The uses just haven't developed yet, however...
    How about a door camera that records your visitors, and remembers who was at the door yesterday, or what the sleazy salesman looked like. You KNOW you didn't authorize THAT large a credit card bill. Or editing home movies. Or a door that recognizes the people who live there. No keys needed! Voice recognition can also use more than a trifle of space, with big gobs for each new voice, and lots of vocabulary space. Or programs that watch and listen to you to "sense" your emotional tone, and respond accordingly.
    This stuff isn't out yet, and will require huge amounts of permanent storage in some form.
    Then there's the AI interfaces, that remember the thoughts that they've had in the past (can't be intelligent without a memory!) Etc.

    And I've almost certainly left out whatever will turn out to be most significant.
  • lightyears per fortnight works, you just wouldn't be able to travel it without turning into energy, according to Einstein's theories of relativity.

    1 lightyear=5.88 trillion miles
    speed of light=about 670 million mph
    1 fortnight=14 days (336 hours)

    If you travel 1 lightyear in a fortnight, you will travel 5.88 trillion/336 = 17.5 billion mph: about 26 times the speed of light.
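    The arithmetic is easy to check; using a less-rounded speed of light (about 670.6 million mph), the answer comes out near 26 c:

```python
# Sanity check: one light-year per fortnight, in multiples of c
LIGHT_YEAR_MILES = 5.88e12   # ~5.88 trillion miles
C_MPH = 670.6e6              # speed of light, ~670.6 million mph
FORTNIGHT_HOURS = 14 * 24    # 336 hours

speed_mph = LIGHT_YEAR_MILES / FORTNIGHT_HOURS
print(f"{speed_mph:.3g} mph, or {speed_mph / C_MPH:.1f} times c")
# 1.75e+10 mph, or 26.1 times c
```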

  • Does anyone know of a site which graphically shows (pie graphs or whatnot) the dropping price of magnetic (or any other) storage?

    Think about how much the first CD-ROM drives cost. The first writers. Now, $300 gets you a rewriter from Best Buy:)

    To show the price drop, you have to keep dropping the appropriate scale, from "thousands of dollars per byte" to "hundreds of dollars per megabyte" to "dollars per gigabyte." That, or deal with figures so many places past the decimal point that you have to count on your fingers to make sure it's really /that/ much cheaper.

    I think there used to be one at hatless.com, but it seems to have slipped into 404dom.

    Anyone know of a good replacement?

    timothy
  • The access time of RAM is about A MILLION times faster than any hard drive (that's 6 orders of magnitude, not a figure of speech). Any device based on moving parts is not going to see that kind of improvement, no matter how you trick it out.

    -B
  • If pictures and audio (your "pr0n and .mp3") are what you want to store, they make removable hard disk drives for that. Does Iomega still sell the Jaz® brand? I know Iomega is selling an internal CD writer called ZipCD, which would be perfect for burning MP3 collections for a Mambo-X [mambox.com] portable layer 3 CD player.
  • Currently, HD size is going up exponentially (a percentage increase). Adding more HDs is a linear increase. In the short run, you get a better return, but in the long run you get a bad return on investment.

    So, who really needs 1 TB of storage? (At current rate increases of doubling every 9 mos, this should be in desktops in 4.5 years, another 7 for 1,000 TB) I mean, there is only so much recorded music. 1 TB is about 250,000 MP3s. At 4 minutes a tune, that's about 2 years of solid music (no sleeping).

    Digital images? That should be about 10 million images in 1 TB. If you spend 1 minute average on each one, that's 20 yrs of uninterrupted viewing.

    Movies? About 2-4GB each; 1 TB is around 250 movies, 10 TB is 2500 movies. (That's about 1 yr of solid movie enjoyment.)

    So with just 12 TB, you have 23 solid years of entertainment (assuming you have a job and sleep--that translates to 69 years). Further, this assumes that data compression and storage models do not advance. So, in 6 years a PC off the shelf may have the ability to store everything to entertain you for a lifetime.

    As for the future of entertainment, it may change to use the full capacity of vast hd space. Until CAVE technology is in mass production, I don't see it happening.
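    The back-of-the-envelope figures above can be reproduced from the parent's assumed sizes (~4 MB per MP3, ~100 KB per image, ~4 GB per movie -- assumptions, not measurements):

```python
TB = 1e12  # decimal terabyte, as drive vendors count

mp3s = TB / 4e6                               # ~4 MB per MP3
music_years = mp3s * 4 / (60 * 24 * 365)      # 4 minutes per tune
images = TB / 1e5                             # ~100 KB per image
viewing_years = images * 1 / (60 * 24 * 365)  # 1 minute per image
movies = TB / 4e9                             # ~4 GB per movie

print(f"{mp3s:,.0f} MP3s  = {music_years:.1f} years of solid music")
print(f"{images:,.0f} images = {viewing_years:.1f} years of viewing")
print(f"{movies:,.0f} movies per TB")
```

    That works out to 250,000 MP3s (about 1.9 years of music), 10 million images (about 19 years of viewing), and 250 movies per terabyte, matching the rounded figures above.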

  • Does anyone even remember this article [slashdot.org] posted by CmdrTaco sometime yesterday (or maybe even the day before?)... about the naval research institute achieving 400GB/sq. inch ?

    By the time we've met with the capacity of magneto-resistive drives, we'll be moving on to something else. As the article said, thin-film didn't last forever, who/what is saying that MR has to last forever?

    There will be no storage shortage in the future. Who cares about the death of MR... bring on the next generation.

    PS: Imagine how long a surface scan is going to take on one of these babies. Pack a three course meal, and a good book.

    -- kwashiorkor --
    Pure speculation gets you nowhere.

  • Speed. The average hard drive is spinning at 7200 RPM nowadays. At this speed, there is an average latency of about 4.2ms just to spin the track under the head. You can't do much about this except spin the disk faster. At 10000 and 15000 (thanks Seagate), you still have 3ms and 2ms, respectively. This is on top of any time needed to move the head itself. With most access times less than 8ms in the low end and less than 5ms in the high end, this ceiling isn't too far off. Sure you can spin the disk faster, but this gets expensive (money, energy, and heat).

    I have invented a way to massively reduce access times while reducing redundancy and increasing portability: Make the hard-drive double as the system's power supply by turning it into a flywheel. If you spin that sucker at 100,000 RPM, it'll run for weeks on a single charge and cut the average latency down to 0.3 ms. Think of the potential for flywheel-powered laptops. It's just a matter of time before someone figures out how to capture the energy from all the random jostling all laptops undergo to generate all the power it'll ever need.

    If anyone actually ever does this, I wanna royalty!
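    The latency numbers above follow from a one-line formula: on average the platter turns half a revolution before the target sector arrives, so at 7200 RPM the figure is closer to 4.2 ms. A quick check (the 100,000 RPM entry being, of course, the flywheel fantasy):

```python
def avg_rotational_latency_ms(rpm):
    """Average latency = time for half a revolution:
    (60,000 ms per minute / rpm) / 2."""
    return 60_000 / rpm / 2

for rpm in (7200, 10_000, 15_000, 100_000):
    print(f"{rpm:>7} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms average latency")
```

    This prints 4.17 ms, 3.00 ms, 2.00 ms, and 0.30 ms respectively, matching the thread's figures.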

  • And Gates thought 640K would be enough.

    I remember connecting to BBS's on my fast 1200-baud modem. I can still remember the text scrolling so quickly across the screen compared to that 300-baud I had just replaced.

    I can't wait until my connection to the GII (TLA I got from school a few years ago: Global Information Infrastructure) is as quick as it is in the movies. I would almost swear watching particular techie movies that they have a T1 for each packet coming across the line.

    Motto of the story: Whatever you have, it is not enough.
  • So with just 12 TB, you have 23 solid years of entertainment

    <sarcasm>Or, alternatively, you can just barely install Windows 2005</sarcasm>

    Seriously, though, you have a point, but how about the bloody sods who want more resolution on sound/video/images ? Of course, I shudder to think downloading all those images on a dialup connection *ducks and runs for cover*

  • Not quite... it's good to know the limitations of the technology, sure, but the problem is that every so often, the doomsayers come out and say "This technology is going to die soon! It's reached its limit, it can't go any further!"... Yet, interestingly enough, someone always manages to figure out some way to extend its life, improve it, even though its death has been predicted. It's just stupid that a supposedly professional, respectable publication like SciAm would manage to consistently predict the death of many different technologies, while there's still life in them.
  • It's so fscking lame of you Americans to keep flaming each other. Just because you had a war 186 years ago, you don't have to be enemies.
    Sweden and Norway had a war around the same time, yet we don't hate each other (that badly); we just tell really bad, not-so-funny stories about each other's stupidity.
    /Rovfrukt (peace@america.now)
  • You dumbass! All of these file types that you've mentioned only fit those sizes because they are compressed... with huge drives in the TBs, compression won't be needed. Movies that you say take up 2-4 gigs are only like 720x480 (I think that's what DVD is, I don't remember), so imagine a movie with a resolution even 2 times higher! Way better quality, and takes up more space! And that's still compressed... with bigger drives it's not the ability to store MORE files, it's the ability to store HIGHER QUALITY files!
    -----------------------------------------------------
  • >Do you NEED to be storing HDTV DVD's on your hard drive?

    No, but I would like to record, oh, say the next Woodstock 20XX Weekend Marathon of 48 or so hours on my Tivo++ while I'm out of town at some work function.

    After that, we'll be sure to think up other uses for 70+GB drives.

    Hmmm, how about scaling those matchbox 340MB PCMCIA drives up to a few GB so that I can record a decent length (home or otherwise) video on one? Would that be nice or what? Forget DVDs, carry a couple of the videos(packaging and all) in your pocket. How about being able to backup, copy, and carry your whole MP3 collection processed at 256Kbps over to your friend's house for a party? Those matchbox drives are just barely on the threshold of usefulness today. Put 4-8-12-24 GB on them and suddenly they become very handy indeed.

  • Heh. But the angular momentum would be a bitch. Imagine trying to maneuver a 7 pound gyroscope, spinning at 100,000 RPM! Now, imagine an airplane full of these, going into a bank curve...

  • Well you forget that Windows 2005 will be out by then.

    So that's .25TB of storage gone right there...

    dirk
  • The Bell 212A modem used differential PSK with four phases, encoding 2 bits per symbol at 600 baud and 1200 BPS.
  • They don't, but then that wasn't a design requirement. Shooting down ballistic missiles in flight was. The way you stop back-seat nukes is by keeping really close tabs on who has the critical technologies (like isotope separation and making nice, symmetrical trigger implosions), and where the inventory is (fissionable materials, live bombs). Your other defense is that old standby, MAD. Any organization that uses nukes on the US knows that there won't be anything left of them after we are through.

    For additional nightmares, consider that a liquefied natural gas tanker carries a nuclear bomb worth of energy. If you can figure out a way to make it go boom in a city harbor, it would be as bad as using a nuke. Or consider an truck bomb attack directed against a nuclear power plant. The containment buildings are pretty tough, but the control rooms aren't.

    Daniel
  • Ok, don't mean to beat a dead horse, but WHY DO YOU NEED ALL THAT DAMN SPACE!!! Exactly how many mp3's can you listen to??? Do you NEED to have The Matrix in some insanely high res. ON your harddrive?!?!? Hell, if you love it that much go out and *gasp* BUY it!!! Then play it from the damn disc. And how many games do you need to have on your harddrive AT ONE TIME. Why not work more on improving access times for the drives, rather than size. "Yeah, but back when i had my 5 meg hard card, i didn't think i could fill that either" One does need a certain amount of space for some things, such as word processing/entertainment and whatnot. But really, let's think about this. Do you NEED or WANT for that matter a bigger word processing program??? Can't 70 gigs ON ONE DRIVE satisfy you?!?!? *and the rant begins* Hell, for the price (of Office) one would be better off going out and buying a SEPARATE word processor complete with monitor and mouse and ALL THE SAME F***ING FUNCTIONS as Office, except that LITTLE FUCKING OFFICE ASSISTANT NO YOU MOTHER F***ER, I AM NOT WRITING A F***ING LETTER JUST BECAUSE I WROTE THE F***ING WORD "DEAR" THEN HIT "ENTER" Plus, one can transfer their info from a WP to a PC using utilities that come with the WP. HA! Anyhow, with the current permanent storage available (and on the horizon (i.e. DVD burners)) the need for insanely large hard discs is just dumb. It'll die one day and take lots of data with it. I think i am going to go lie down now.
  • Maybe I'm a perfectionist, but that was the first thing I noticed. One would think SciAm would have enough of a clue to spot that.
  • Actually, it doesn't take "a long time". I measure "long time" in days. RAID reconstruction doesn't take more than a few hours -- usually less than 4hrs even on 70G drives.

    As for multiple disk failures... a normal RAID 5 array will have parity information for a single drive failure. You can setup more than one parity segment. [There is a distinction most vendors ignore: RAID 5 doesn't have a "parity drive"; the parity information is distributed throughout the array to avoid the write penalty of a single drive.]

    You are correct: RAID is not a substitute for backups. RAID only limits your exposure to downtime due to drive failures. There are many other things that can, and do, fail.
  • Doesn't work. Huge discs require huge heads, which, together with larger tracks, increases seek times. And of course, there is simply no way anymore to get a consumer to buy anything that doesn't fit into his 3.5"-normed box.
  • Sure, they may not be getting that much faster or larger in the future, but that's not all there is to a hard drive. Take, for example, cars. Cars aren't getting faster or bigger, but other important car features are constantly being improved or invented. Things such as safety, navigation, and ergonomic features that make driving much more enjoyable and safer.

    Now, take that thinking and apply it to hard drives. Instead of just buying a faster and bigger disk every two years or so, you'll start getting disks with new features that make using them better. Features like plush leather seats, rear data connector defroster, and a tiny little windshield wiper on the activity LED. Those are features that would vastly improve the life of the hard drive user, but have been ignored in the past in the mindless quest for bigger and faster.

  • FYI, here are the general RAID levels:

    Raid 0 - disk striping/no parity
    Raid 1 - disk mirroring
    Raid 4 - hair striping?...no wait, wrong list *hehe*
    Raid 5 - disk striping with parity

    Do you ever feel like you're diagonally parked in a parallel universe?
  • That could be a selling point: stability. "So stable, you can't even tip 'em over!"
  • You can't just choose to rotate a drive platter 100 times faster. It may be remotely possible to make a 50,000 rpm drive in the near term, but 100x faster (and larger diameter) is WAY beyond the limits of known materials. It could also double as a dandy KE tank killer or space launch system if it could actually be built...

    The limitation isn't the ability to read the data off the platter, it is the ability of the platter to not break into shrapnel.

    John Carmack
    IIRC, they actually do tap into the rotation of the platters as a source of power already, specifically to move the heads to the locked position when you power down the disk.

    The fastest RAM that can be bought today has no faster than 5 nanoseconds access time.
    The fastest HD that can be bought today has no faster than 5 milliseconds access time.

    5 nanoseconds is 1000 X 5 milliseconds

    That's order of magnitude 3 not 6.

    In fact until about 4 years ago the difference between RAM and HD access time was not that dramatic, no more than 40 times.
  • I suppose that is a fair assessment. When I hear the "doomsayers" prophesy the end of a technology, I generally interpret it as a hyperbolic way of saying, "we're going to have to do something more clever than the incremental refinements that we've been getting by with so far." Whether the end result of the more clever improvements is a "new technology" is open to debate. I think some of the refinements to HDDs mentioned in the article are significant enough that modern HDDs could be called a "different technology" from previous HDDs, and in that sense the people who predicted the "death" of the old-style disks were right. In any case, even if they overstate their claims, the doomsayers are still useful because they neatly outline (and help motivate people to overcome) the challenges looming on the horizon.


    Alternatively, we could just paraphrase a pithier expression: I don't know what devices we'll be storing our data on in 5 years but they'll be called "hard disks".


    -rpl

  • Ummm...no. Milli is 10^-3, Nano is 10^-9. The difference being 10^6, or ONE MILLION (pinky touching corner of mouth).

    Here's some random freshman physics class notes I found if you don't believe me:
    http://feynman.physics.lsa.umich.edu/~myers/126/notes/Metric.html

    -B
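    Spelled out, with the hypothetical 5 ns RAM and 5 ms disk access times used in this subthread:

```python
import math

ram_s = 5e-9    # 5 nanoseconds
disk_s = 5e-3   # 5 milliseconds

ratio = disk_s / ram_s
print(f"disk is {ratio:,.0f}x slower than RAM "
      f"({math.log10(ratio):.0f} orders of magnitude)")
# disk is 1,000,000x slower than RAM (6 orders of magnitude)
```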

  • Of course, if you want your Oracle Database to go fast, you jam as much of it as you can into RAM... Downside: when you buy Gigabytes worth of RAM for a DEC/Compaq Alpha running Oracle it's a wee bit pricey :)
  • You're right, but 3 orders of magnitude still isn't worth dismissing. Hell, I'll always take even a single order of magnitude of performance improvement if it's available.
    ----------------------------
  • Eh, who cares about how many years of continuous entertainment one can get out of 1 TB of storage? We should be more worried about how many years it will be before Micro$oft comes out with a version of Windows that requires 1 TB of disk space for a minimum install. At the current rate, that'll probably happen within the next two years.
  • If I had to guess, this is referencing Quinta/Seagate's optically assisted Winchester technology. Supposedly it's not far from market, but regular servo tech is doing just fine; ergo, no need for it *yet*, but it is waiting in the wings

    (in a nutshell: OAW is essentially a little laser at the end of the servo assembly which can heat up a specific area, changing the coercivity of just that spot, rather than the huge area that a magnetic pulse would have changed)
  • by mosch ( 204 )
    Reasons I'd love small, supermassive-capacity drives. Imagine buying books, and having them all available on your palm pilot (every o'reilly, right there at your fingertips, with of course every RFC, the entire acm digital library and a few others). Imagine having every movie you own in your pocket, also on your palm pilot, with an adapter so you could watch any movie you already own while you're travelling. Every movie, every cd, uncompressed, full quality. No need for the distortion of mp3 when you have a terabyte in compactflash format :-) The reason there aren't any current applications like this is that they aren't currently feasible. At home I have around 150gigs of storage on my network, most of which is in one RAID cluster. Useless? It seemed so, but I keep finding handy ways to use it. My latest project is storing all my live concert recordings (dat and cd) (yes they're legal), to .shn files so that I can easily spin off copies of them on demand. If you use your imagination, you'll realize that the number of people who keep legal video on their machines is small because it's not generally practical. Consumer-ized special-purpose computers such as the TiVo are changing this. A super-high-capacity TiVo-like device combined with broadband access, and you could start selling movies legally, on-line.
    ----------------------------
  • "if you want your Oracle Database to go fast, you jam as much of it as you can into RAM..."

    No, sir.

    That only speeds things up if your database is read-only! Every db write must be written to disk immediately to satisfy the "Durability" requirement of RDBMS design. Combine Durability with the problem of Concurrency, which Oracle solves with separate rollback segments (PostgreSQL now uses versioned records), and Oracle is even more disk-dependent (i.e. if you want speed, you need your rollback segment(s) on a separate disk).

    If you've got a pile of RAM and a bunch of data in Oracle that you're only interested in reading, then the best way to do it is to take a snapshot of your data out of Oracle, stuff it into a Berkeley DB, and then keep that in RAM-- no RDBMS will ever be as quick as a Berkeley DB if all you're interested in doing is reading a bunch of static data.

    Much of Oracle's success has been in areas that they share with OS designers-- filesystem design, memory management, process control. When Larry Ellison spouts off on one topic or another and implies that Oracle should be thought of as an OS, he's not engaging in hubris-- he's just reflecting the problems that his engineers have to face.

    If you're a CS grad student, and you want to do an interesting open source project, try designing a generic database filesystem for Linux/BSD-- (sqlfs, perhaps?). An fs with so many constraints (typed data, stored in records, flushed to disk before returning a successful write, presenting consistent views to concurrent access, etc.) would be more difficult to implement than a traditional fs, but it would also present many more avenues for optimization. At the end of the project, you'd have a pretty useful abstraction layer, and the free RDBMS folks could potentially spend their time implementing new features, instead of putting so much work into reinventing the wheel.

    None of this ever occurred to me until I had to install Oracle one day-- I'd been used to using free dbs on debian, where installation is essentially transparent, and you can just start hacking away on SQL immediately. Installing Oracle, on the other hand, was a lot like the first time I installed Linux back in '95-- it was ridiculously time-consuming, but when I was done, I understood many of the design principles of the system, not just how to use it.
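    The "snapshot into a Berkeley DB" idea above can be sketched with Python's standard dbm module standing in for Berkeley DB (the file name and records here are made up for illustration):

```python
import dbm

# One-time snapshot: dump the read-only data into an on-disk hash table.
with dbm.open("users_snapshot", "n") as db:   # "n": always create anew
    db["alice"] = "alice@example.com"
    db["bob"] = "bob@example.com"

# Serving path: plain key lookups -- no SQL parsing, no locking, no
# rollback segments -- and the OS page cache keeps hot pages in RAM.
with dbm.open("users_snapshot", "r") as db:
    print(db["alice"].decode())   # alice@example.com
```

    The win comes precisely from dropping the RDBMS guarantees (durability, concurrency) that the read-only workload doesn't need.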

  • Didn't they used to say that you couldn't make a silicon chip with a less than 1micron feature size because the wavelength of light was too wide to properly expose the die? (Note to newbies -- a silicon chip of that era was basically photo-etched, i.e. a mask was made that was the inverse of the features on the chip, the chip was painted with resist, resist is exposed to light and then after the mask and light are removed an etchant eats away the unexposed portions in order to form features).

    Of course, we know what happened after that -- they quit using visible light, and started using shorter-wavelength beams.

    A friend of mine says "Is tape storage on the way out? It's not keeping up with disk storage!".

    Seagate and HP just introduced a tape drive with 100gb (UNCOMPRESSED) capacity, and they say that they can take that same technology to 250gb native. These LTO drives do this by having oodles of tracks on a tape so that a stretch of tape may have hundreds of tracks in parallel, and by using new tape materials that allow them to make the tape thinner so they can pack more tape into a cartridge. People said linear tape was dead, that helical scan would always be faster and higher capacity, but it appears that conventional wisdom is foiled again...

    I don't know what hard drives are going to be like five years from now, but I do know they're not going to stall, capacity-wise, due to some "inherent" limit. Too many smart people are looking for ways to bypass those limits, either by using some other technology altogether (hmm... photo-sensitive materials??) or by figuring some way around the "limit" using clever application of the underlying physics.

    -E

  • Seagate and HP just introduced their LTO technology that holds 100gb native. Tandberg's SLR-100 holds 50gb native, as does the Exabyte Mammoth II (or does the Mammoth II have 66gb? I'm away from my office, alas, so don't have the specs in front of me). Granted, we're talking about $3,000 tape drives here, in an era where 83gb hard drives cost half as much. But (shrug) fast tape storage devices have always been more expensive than the hard drives they back up, at least in recent memory (I understand it was different back in the 60's).

    I've been looking at the data sheets on some of the big enterprise-class storage systems. We're talking about boxes that have 5 to 15 drives, and attach via fibre channel loop to multiple servers that need to be backed up, and that have hundreds of tapes that they manage via robotics. Yes, I'm working on enhancing the Linux 'mtx' tape library control software to drive these things, though I'll never be able to personally see or test one :-}. There are some interesting challenges to handle with fibre-attached storage, specifically, the one of "who has the robotic arm now?!", but none that are unsolvable. I am confident that no matter how big hard drives get, we'll be able to back them up -- albeit for a price!

    -E

  • I think you underestimate just how much tape is in today's cartridges. There are 150 meters of tape in a DDS4 cartridge, for example (or is it 180? Doesn't matter, still a lot). That's about 500 feet of tape for the metrically impaired, crammed into a tiny cartridge barely larger than the mini-cassettes occasionally used to record meetings. This tape is ultra-thin and very tightly spooled in order to cram it all into that teeny cartridge. The whole point of tape is storage density -- fitting the most data into the smallest space for the least per-byte cost -- and there's just no room there to slide heads in between layers.

    Tape drive manufacturers are raising capacities via a variety of methods. They are coming up with thinner tape materials so that they can cram more tape into a cartridge (I understand there will be a DDS5 that crams over 200 meters of tape into a tiny 4mm tape cartridge!). They are coming up with new heads that either store data more densely linearly, or that store data more densely vertically (i.e. put more tracks on a tape). They've also been playing with the speed at which data is recorded, and perhaps varying that to adjust to tape quality etc. There are also experiments ongoing with multiple heads and serpentine tapes, though I haven't heard that this is buying anything (easier to have a smaller cartridge and multiple simple drives rather than big complex cartridges and one complex drive). Having seen these guys do so many "impossible" things (they said that DDS4 was impossible!), I've given up on figuring out where it's all going to end, but I do know that traditional tape drives are nowhere near their limits as far as speed and capacity go.

    -E

  • I am working on a FibreChannel-RAID adapter at my job. FibreChannel-RAID answers these questions pretty well... for now.
    1. I/O Bottleneck - Our card does 190 MB per second with 23,000 I/Os per second in non-RAID mode (direct connect). Another good thing to do is offload as much as you can to storage processors. This saves the main CPU. Relying on System DMA is a big part of what kills IDE performance. Both SCSI and FibreChannel adapters are DMA Busmasters, meaning they can read/write to host memory on their own, without using the host processor. Always use hardware RAID (adapter or external/cabinet based) instead of software based. Software based RAID kills the processor.
    2. Backup - Various forms of RAID can help here. You can configure things so that there are always at least two copies of your data. This doesn't help for real backup where people need things that were overwritten, like tax records from five years ago. Using RAID arrays of FibreChannel tapes speeds things up quite a bit.
    As for network speeds...you are right about 100 Mbps being too slow. Heck, even 1 Gbps (or 125 MBps) is still too slow. That is why you use FC arrays that support multi-initiator. Here, multiple hosts are connected to a set of storage. In this model there is no server front-end to the storage share. No network latencies.

    SAN technology is really just starting. Target mode systems (like EMC's storage cabinets) have great possibilities. Simple FC-Adapters can run in this mode as well. In a raw format, they can avoid the OS almost entirely, using it only for initialization and configuration. Backup can be done without any OS interaction.
  • These things are becoming more common each day. Data warehousing and SANs will create even more demand. Then you have video-on-demand servers that have full screen digital movies on tap. I am talking about full control video with all your VCR functions of stop, pause, fast forward, rewind, etc. stored remotely and sent to a simple cable box. These kinds of applications take up lots of space.

    As for a data-starved CPU...not with IDE. IDE controllers use the host CPU for DMA so your CPU is quite busy. SCSI and FibreChannel adapters are busmasters but they are also faster. It is true that today's CPUs can push much harder than today's storage. It is also true that even a 66 Mhz, 64-bit bus is too slow. Interrupt sharing doesn't help either. That is why PCI is on its last legs. PCI-X will not last too long either with InfiniBand on the way.
  • IIRC, ECL logic has problems with high power consumption. So it's not at all clear that ECL is an improvement rather than a differing tradeoff.

    It's interesting to note, though, that CMOS had almost exactly the opposite problem when it first came out -- it was slow, but had extremely low power consumption. It also was dreadfully static-sensitive. But, CMOS itself managed to displace the older NMOS technology in the early 80's, so these things can happen.
  • So with just 12 TB, you have 23 solid years of entertainment (assuming you have a job and sleep--that xlates to 69 years). Further, this assumes that data compression and storage models do not advance. So, in 6 years a PC off the shelf may have the ability to store everything to entertain you for a lifetime.

    Well, my comment is coming in late, but better late than never. I think you're definitely on the right track (heh, heh). Raw storage capacity, just like raw processor speed, is quickly becoming a much less important issue for personal uses.

    I mean, if you think those numbers are huge, consider text: without compression, and with almost 100% formatting overhead, 1 TB would store hundreds of years of reading material. In other words, textbooks for any field of endeavor ever, all the classics, tons of science fiction (if everything were printed out)...

    So the problem, as we already realize with that puny artifact called the World Wide Web, is what the heck are you going to do about indexing, querying, and searching. Advances in those domains will very quickly dwarf the contributions of merely higher capacity or performance. Unfortunately, these are very, very hard problems.

  • If you're a CS grad student, and you want to do an interesting open source project, try designing a generic database filesystem for Linux/BSD-- (sqlfs, perhaps?). An fs with so many constraints (typed data, stored in records, flushed to disk before returning a successful write, presenting consistent views to concurrent access, etc.) would be more difficult to implement than a traditional fs, but it would also present many more avenues for optimization. At the end of the project, you'd have a pretty useful abstraction layer, and the free RDBMS folks could potentially spend their time implementing new features, instead of putting so much work into reinventing the wheel.

    An interesting side light of something like this is that the project would eventually probably end unix text processing as we know it. The power of unix utilities to treat normal text files as quick and dirty databases is legendary. If you're just warped enough, you can see the translation of many unix utilities and pipelines into the project/restrict/join framework of relational database theory (which, alas, is not quite the same thing as any RDBMS).

    Another interesting point is that I could swear that I read about a project to bring a persistent (and ultra-secure) computing environment to Linux, based on a research project done at Penn. But, of course, now I can't recover the URL of the project or what it really did. :-(
