Posted by Hemos from the and-it-has-HAL-inside dept.
Nate Fox writes "Finally all the mp3 storage I want: TechWeb has a lil story on 2,300GB solid state drives available within 2 years." Interesting stuff, but as always, I'll wait to see it in the proverbial flesh.
This discussion has been archived.
No new comments can be posted.
Shameless BeOS plug: not only does BeOS have a 64-bit fs which would have no problem addressing 2.3 TB, it's a journaling FS that doesn't need fsck.
Seriously tho, by the time this technology is available Linux might well have SGI's XFS available, which is a journalling fs also. In any case the access times of a solid state drive could well be orders of magnitude faster than a conventional HD, so the fsck time need not be any longer than it is now for your 3.2 Gig HD.
I'm surprised anyone would ask this question. If, in two years, 20tb drives are available for reasonable prices, then they will be half-filled by a full load of Microsoft Windows 2002 and MS-Word. And, sadly, probably by Linux 4.0 plus KnomeDE and SunOffice 8. The Peter Principle applies to software, too.
This is going to be one hellish invention for the music industry...
I can fit about 350 songs onto a gigabyte at a reasonable quality. but let's just go with 200 assuming a little better quality. 200 * 2300 = 460,000 songs on a credit card. so then the problem for the music industry is that a copy of everything they own could get out and it might fit in your wallet. whoops!
only problem is that at 100Mbit/sec or 12.5Mbytes/sec one of these puppies would take a *while* to copy. 12.5Mbytes/sec is "only" around 45 gigs an hour. so 2300 gigs would take 51 hours (a bit more than 2 days) to copy!! (that is, if i did my math right...;-))
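For anyone who wants to check that arithmetic, here's a quick Python sketch; the 200-songs-per-gig figure and the 100 Mbit/s rate are the assumptions from the comment above, not anything stated in the article:

CAPACITY_GB = 2300
SONGS_PER_GB = 200                              # commenter's assumption
MBIT_PER_SEC = 100                              # the article's quoted rate

songs = CAPACITY_GB * SONGS_PER_GB              # 460,000 songs
mbytes_per_sec = MBIT_PER_SEC / 8               # 12.5 MB/s
gb_per_hour = mbytes_per_sec * 3600 / 1000      # ~45 GB/hour
hours_to_copy = CAPACITY_GB / gb_per_hour       # ~51 hours
print(songs, round(gb_per_hour), round(hours_to_copy, 1))   # 460000 45 51.1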
and what the heck kind of file system is this going to run? i can just see it now...
C:\>dir
Volume in drive C is BOOT
Volume Serial Number is C09E-B641
Directory of C:\
08/25/99 10:54p 6 foo
1 File(s) 6 bytes
2,300,000,000,000 bytes free
C:\>
sheesh!
i can just see the error messages now...
please free up some disk space. NT workstation requires 1.5 terabytes to install. thank you.
Okay, we'll narrow the search on the windows system a bit. I'll just peer at \Windows\System and \Windows\System32
Well, well, still almost 200 megs. (well, okay, to be fair it's only 156).
Now I'll try and even it up a little more, and include /dev in the above equation. That still only adds another 18 megs, bringing it to a total of under 25 megs on the nix box.
Of course the comparison will never be completely fair; they are two completely different operating systems. The point I was trying to prove, though, was that this person obviously isn't doing their homework if they think a base install of Linux is more space-intensive than that of a Windows box.
According to the MIT research below... Each fringe pattern (frame) is about 6MB, and that's in a highly *compressed* form. When you uncompress it, it's about 36MB per frame. It'll be a couple of years still, but this *certainly* should suck up most of that space! i always mess up the math, but here goes...
6MB * 30 f/sec = 180MB/sec
180MB/sec * 3600 = 648GB/hr
so your 2.3TB disk will only hold about 3.5 hours of holographic video. so this miracle disk is already too small to hold the holovideo of branagh's hamlet...
guess you'll just have to switch disks... ;-)
http://www.media.mit.edu/groups/spi/holovideo.html
http://www.media.mit.edu/people/lucente/holo/holovideo-timeline.html
http://lucente.www.media.mit.edu/people/lucente/holo/PhDthesis/contents.html
http://lucente.www.media.mit.edu/people/lucente/holo/papers.html#SIGGRAPH95
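For reference, the same math in a small Python sketch, using the 6MB compressed-frame figure quoted from the MIT pages above:

frame_mb = 6                                   # compressed fringe pattern per frame
fps = 30
mb_per_sec = frame_mb * fps                    # 180 MB/s
gb_per_hour = mb_per_sec * 3600 / 1000         # 648 GB/hour
hours_on_disk = 2300 / gb_per_hour
print(mb_per_sec, gb_per_hour, round(hours_on_disk, 1))   # 180 648.0 3.5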
i mean when slashdot posts things like this it's not always news.. just 'this could be cool'... and again about the price: of course i'd like to only pay $50, but i'd be really willing to pay whatever they would ask for it.. even $1000 or more... i mean.. when will ya ever need to buy a new one... altho a 100G watch would be cool... save carrying all those zipdiscs around...
$50 for the disk, that's how much it costs them; it costs about $70 to manufacture a P3 600, yet it sells for many hundreds. Regardless of retail price, I would love to have 2 terabytes of raw storage. But what I wonder about is the overhead for such a huge disk capacity. Not only do you need space for ECC, but then there's the question of access time, not to mention a file system for one of these. Would it have a super low latency and high bandwidth where I could use it as a giant boot ROM, or would it have a much higher latency similar to a CD-ROM or M/O disk? If it is commercially viable I would hope that it is fast enough and manageable enough to use for more than an archive backup system.
At that size and cost (yeah I know there's no way they'd only charge $50 for 2.3TB *if* they're for real (they've already got a cadre of VC sharks ready for IPO)), you could quadruple mirror on the drive. Surface scan? who cares. If it's bad go get another copy and throw this one away.
I for one would like to be able to remember everything I see, hear, or touch. At least with 2.3TB I could remember everything I hear for about 4 years (add speech recognition to make it searchable).
Imagine how great that would be in arguments: "You said this software would be completed on time" "No, I said it was a complete waste of time - I'll play it back for you"
And it would be really handy with the wife too...
Bring on the implants.. preferably before I go senile.
FAT32 can address up to 2 terabytes i believe, and NTFS can go up to like 2 or 3 zettabytes (or something like that), some absurdly large number, so i don't see it being a problem. if i recall correctly, ext2 can only address up to 1 terabyte...
One thing that stands out in this article - the "data access time" is quoted as 100Mbps. However, data access times aren't measured in megabits/sec.. they are measured in milliseconds, or hopefully for this invention, microseconds (do microseconds come after milliseconds?). 100Mbps could be the data transfer rate. If that's correct, this device is actually really slow - 12.5 MB/s, much slower than both IDE's and SCSI's current speeds.
This means it probably wouldn't find its way into servers until its speed problems were corrected.. It'd even be a little slow for PCs..
and about what we'd do with it. Back around 10 years ago when 30 meg hard drives were roomy, nobody could think of what we'd do with PCs that had 20,000 megs of storage like today's PCs do.. I'm sure we'll find ways to use the extra space.
I agree that 12.5 MB/s is pretty slow for a drive of any interface. The thing to remember is that for a prototype that's screaming. This drive definitely has uses even at its slow speed. It's faster than a CD-ROM drive, and if you look at the price on the drive (which will be highly inflated at first because it's new and cool, but will settle down) it far outweighs 7/14/21 drive tower assemblies. If you store all your CD-ROMs on this thing you will have a REALLY SUPER FAST CD-ROM drive. And because it's a hot new prototype, many companies will try to grab their technology and create similar products with a lower cost, more space, and higher speeds. If nothing else, this drive creates competition in the drive market and that's really where this drive has its place. (for now at least)
right on brother (or sister). the register is a much more informative site, and i don't have to worry about morons posting crap, either, though said crap is fun to read.
First of all, a CRC (Cyclic Redundancy Check) is not ECC (Error Correcting Code). A CRC is some kind of checksum that allows you to check whether a chunk of data (typically a single sector for a hard disk) is correct or not.
ECC adds more functionality to this: it provides information about which bits are incorrect. (And, since there are only two possible values for a bit, it can replace them with the correct value.) I'm not an expert, but I can believe a decent ECC algorithm requires 20-40% extra.
How many ECC bits are used depends not only on the algorithm used, but mostly on the expected frequency of errors.
Interleave is something completely unrelated: it refers to the bits in between the sectors that the drive uses to position and synchronize itself to the data on the surface.
I think that we all know that any company that releases this technology will be charging far more than $50 for 2.3TB of storage. It's not that they have to, but that they can to maximise their profit.
And that's not necessarily a bad thing either.
I predict that 2.3TB will be above $400 in two years IF this technology holds water. IMHO of course. :)
The article said 100 Mbps (megabits per second), which works out to about 12.5 MB/s. Currently the high end SCSI drives can sustain throughputs of around 18 to 23 MB/s. To get the throughput up to a reasonable rate you would have to stripe it across multiple devices.
As for current technology handling the bandwidth, Fibre Channel can currently handle 100 MB/s and 200 MB/s isn't that far out on the horizon. The biggest problem is that your standard 33 MHz, 32-bit PCI bus only provides 132 MB/s of bandwidth, and the implementation on a lot of systems provides much less. Some systems (I think a Sun Ultra 60, for example) do have a 64-bit PCI bus, but it's hardly the norm.
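The 132 MB/s figure is just the bus clock times the bus width; a quick Python sketch (peak theoretical numbers only, ignoring protocol overhead, which is why real systems come in lower):

def pci_peak_mb_per_s(mhz, bus_bits):
    # peak transfer rate = clock rate * bus width in bytes
    return mhz * (bus_bits / 8)

print(pci_peak_mb_per_s(33, 32))   # 132.0 MB/s -- standard 32-bit/33 MHz PCI
print(pci_peak_mb_per_s(33, 64))   # 264.0 MB/s -- 64-bit slot at 33 MHz
print(pci_peak_mb_per_s(66, 64))   # 528.0 MB/s -- 64-bit/66 MHz PCI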
mithy:/dev> ls wd0*
/dev/wd0s0
/dev/wd0s0a
/dev/wd0s0c
/dev/wd0s1
.
.
.
/dev/wd0s976
I think a rethink of partition tables might be necessary at that point. Unless people have a LOT of nested extended partitions or something.... which is pretty bad, as an extended partition wastes at least one cylinder IIRC.
"What do you want to boot today?"
You know, I have a feeling this isn't total vapor-nonsense. If I were a scientist who had discovered a storage technology with these kinds of metrics (3,600 GB, 100Mbs!!), I'd be awfully worried about blathering these kinds of astronomical numbers unless I was fairly certain I could do it.
Curiously enough, I work at a company that develops medical imaging software. We have a product that is bundled by a large supplier of MRI machines with their machines. The connection being that the scientist in question here also led the team that invented the MRI machine.
As some writers have pointed out, some things that are now widely available and cheap would have seemed like an impossible pipe dream just a few years ago... even given that storage space has gotten cheaper, it always seems that the curve has to level out soon.
I wonder if there is a site (or if I can intrigue someone into creating one) that shows a curve representing the falling cost of storage space, as in
"X: Time
Y: Cost of 1 MB of (hard drive or equivalent) storage, in constant dollars (how about 1999 just for current easy-ness)"
Similar charts would be great / neat / mind-blowing for both RAM and 'processing power' (though deciding on the unit to measure might be tricky, since processors are not a strict 'x amount of processing'...
Maybe this should have been an Ask Slashdot question instead, but it's this topic which reminded me of this idea which has been brewing a few years.
No, that's a double-sided, double (or is it triple?) layered disc that holds that much. I believe the typical 2 hour movie, not including all the little extras you get, is around 1.5GB (THIS IS A VERY ROUGH ESTIMATE, so don't get on my case about the numbers; I don't really have the time now to look them up)
100Mbps could be the data transfer rate. If that's correct, this device is actually really slow - 12.5 MB/s. much slower than both IDE and SCSI's current speeds.
So plunk down $450 for nine of them and build a raid-5. Sheesh.
This has big implications for TiVo and ReplayTV (guess their respective URLs). They can only record a few hours on their current HDDs. This tech, if real, could explode that market.
An example of why this is still not a good comparison:
Under Windows, the C runtime dynamic link libraries are in \windows\system32
Under Linux, the equivalent libraries are in /usr/lib, which you did not list.
\Windows\system32 usually ends up being a dumping ground for every third party dll. Again, not a good design, but this means that you should at least include /usr/lib in your comparison.
...if you are trying to do anything like perusing a database using a field which isn't indexed, it's going to be glacial.
Glacial? This is a RANDOM access system, not linear. There is no need to scan the entire 2.3 TBs unless your database is that big, and if your database is that big you'd better index anything you want to search for.
It would make much more sense to have 100 23 GB drives than 1 2300 GB drive for a great many purposes...
I would rather have one 2,300 GB drive than 100 23 GB drives... I can partition the 2.3 TB drive any way I need (say to store that 24 GB movie file) without worrying about RAID configurations. With your 100 drives you've just increased the likelihood of a drive failure by a factor of 100. I'll gladly take two 2.3 TB drives and use one as a backup (mirrored) over trying to find room on my desk for 100 drives...
No thats a double sided, double (or is it triple?) layered disk that hold that much. I believe the typical 2 hour movie not including all the little extras you get is around 1.5GB
A single-sided single-layered DVD holds 4.6 GBs, enough space for a "normal" two hour movie. If the movie is less than two hours, or the compression ratio is better than "normal" there will be room left over for extras.
Ignoring flamage, I'm just going to straight out tell you that Gnome will not be incorporated into KDE: it will die.
Sorry to say, you're wrong. I don't see KDE or GNOME dying. Becoming more and more alike, yes. But enough of us hate and fear C++ to refuse to use it, and when your base toolkit requires C++, you lose a lot of potential talent. I don't mind using KDE, though it feels to me to have less coherence and depth than GNOME in general, but I certainly won't be coding for it.
I also dispute your assertion that "Open-source projects are winner-take-all." Where did you get this? Actually, I take that back--the winner does take all, and with open source and free software projects, the users are the winners. But the zero-sum competition models that have prevailed so far in the commercial software world are outmoded and are dying.
I agree that StarOffice is in trouble. I haven't tried KOffice, so cannot speak to its potential. I wish it all possible success. The more choices I have, the better.
You say, "I can see the future...". I don't know what it is that you're seeing, but it isn't my future. Lessee... does it have a lot of lint on it? Are you sure it isn't the inside of your belly-button?
I'm related to the scientist who discovered the technology that makes this possible. Actually the real storage size possible is much much greater. The trouble is that there are rather large companies like Seagate, Quantum, et al who, like oil companies with superior car propulsion systems, have vested interests in making this technology never see the light of day.
I don't know about you but I'm growing tired of watching the advancement of the human race being held back by greedy, selfish corporations.
It's certainly time that Linux got a journalling file system. fscks are not only much faster, they're unnecessary. In the event of a crash the file system checks for any writes that were open but not committed, eliminating the necessity to check the entire volume. XFS also has support for 9 million terabytes, which should suffice for the next few years. Check out the white paper [sgi.com].
Did you read the article? Access speed of 100 mega-something per second; it was not immediately clear to me if this was bits or bytes. Assuming this is bytes, the 2300 GB drive would take over 6 HOURS to be read from one end to the other.
The conclusion I draw from this is that such devices are either
Going to require large increases in access speed, such as multiple read/write heads, or
Going to require serious application of database technology with comprehensive indexing
to store things usefully. The days of grepping your hard drive for things you lost will be gone, gone, gone.
Think of what the government and all the corporations could do with this amount of memory. And you thought data-warehousing was a big thing now. With storage this cheap and small (one of the biggest problems that the government has is where to store millions of CDs and reels of tape) you'll have your whole life recorded and indexed whether you like it or not.
Revelations talks of a beast that will know the whereabouts of every person on the planet. Is that where all of this technology is headed?
Or $400 for a mirror and four stripes. Bet that's pretty peppy. I don't know why, but I'm of the opinion that $400 is a reasonable amount to spend on 9.2TB of 200ns-seek-time storage that reads at 100MB/s and writes at 50MB/s. Well, I'd probably want a / partition of 200MB, and maybe splurge for a GB or so of swap on there, too. I wonder how much a card-changer for "tape" backups will go for, 'cause the DAT drive's going out the window.
Similar charts would be great / neat / mind-blowing for both RAM and 'processing power' (though deciding on the unit to measure might be tricky, since processors are not a strict 'x amount of processing'...
Why not MIPS per $? That sounds like it would be fair enough...
According to the company, an additional advantage over existing data-storage systems is that only 20 percent of the total capacity is needed for error correction, significantly less than the 40 percent now needed for hard disks and the 30 percent needed for optical storage.
There is no way hard drives have 40% overhead for EDAC. More like 4%. Say four bytes per interleave, four interleaves, and a word for CRC. So 4*4 + 2 = 18 bytes added to a 512 byte sector for only 3.4% overhead. It can get more or less efficient, depending on the sector size.
ok MST3K == Mystery Science Theater 3000, right? I love that show!! I even found the poster for the movie in the reject bin at my local movie store and i now have it on my wall. I wonder if it's worth anything?? I'd be willing to trade it for one of those hard drives.
Right now, my physical devices are up to N: (That 7-disc CD changer really did it, not that the Zip, Sparq, and Cd-rw were helping things)
I can just imagine partitioning this thing. Screw that! I'd rather see it as one of those "plug in and forget" network-ready storage toasters. Just NFS-mount the damn thing and forget about the partitioning limitations of our current OS's.
Personally, I think it'll be significantly more expensive than that, but the prices will eventually come down. I think it can be done; this guy has one hell of a reputation that he has to uphold. Consider that he said production costs were $50 a drive. The hard drive companies will likely start by charging thousands for the devices, because people will pay for that much storage.
You sound like Bill Gates... "640K will be more than enough for anyone." Also, i'm sure those of us running a Dual Boot Winblows and Linux will need the space, cause just think: 3 years = 2 releases of Windows later, and if you think Windows 95/98 is Bloatware... just wait..... you will be out spending $100 to get two of those little suckers.
nuff said
I think they're about 4gigs for a standard movie. Since most discs are double sided, w/ widescreen on one and fullscreen on the other (cuz companies are too dumb to use the built in widescreen to pan&scan features of dvd) you have about 8gigs per disc.
Now, the solid state bit is an interesting spin, but think about it: 1. How much faster than 10K RPM can we spin drives?
Actually, according to the article they'll still need actuators to move the read/write head over the material... which is starting to sound suspiciously like an ordinary hard drive (actuators move on one axis and the disk medium spins on the other). Solid state starts looking like a bit of a misnomer here.
AFAICT from the article this is just a device working much like a hard drive with multiple layers per platter that uses a magneto-optical system to do layer selection (much as DVDs can focus on different layers). Where they get their size, cost, and capacity numbers from I'm not sure.
BeOS can access something like 18 Petabytes on a single volume! Someday I'm sure that number will sound so small, but for now, I'm sure that should be enough storage for any computer on the planet.
The reason I'd like to see a chart that shows the price curve on hard drive space cost is just this... so I could tell how linear the line is and what tendency it has. Even if in 5 years we have hard drive space that is 25 times cheaper, that would mean your USD 180 could get you 200 gigs-plus... and that's room for... well, for a lot.
Though I hope we really do see that kind of increase, from the middle of 1999 this still seems like a crazy one. It wasn't long ago that I was stunned that my computer had an entire gig (!) of hard disk at all.
Assuming this is bytes, the 2300 Gb drive would take over 6 HOURS to be read from one end to the other.
For a system that would hold 500 (SSSL) DVD's worth of data this is bad how? Besides, if 12.5 MBps (MegaBytes) isn't fast enough, stick six together in a striped array... 13.8 TB at 75 MBps... That should be fast enough for most anybody...
Frankly, latency and seek times are more of a concern than raw throughput.
This isn't exactly fast, but it's about the same speed as many of today's 10K rpm drives (probably the smaller ones - lower densities). A SUSTAINED transfer rate like that is pretty respectable. I believe that you're comparing it to the IDE/SCSI controller/protocol rates... (generally 66MB/s IDE and 80 (maybe higher now?) MB/s for SCSI are the current limits). However, disks don't spin that fast... (They can burst it from their caches... woooo... that's useful...)
You will get higher rates out of current-day disks since they have such high density... but you still have several ms of access time... THIS is what kills us... 99% of the time, I'm not reading huge sequential sections off of the disk! Solid state would virtually eliminate the access time...
I think that a lot of the assumptions that go into today's filesystems make them something that wouldn't work too well on this drive. If the seek time is negligible (it isn't on disks), then you could use a really braindead allocation algorithm that wouldn't bother to keep files contiguous.
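To illustrate the point, here's a minimal, purely hypothetical Python sketch of such a brain-dead allocator; no real filesystem works exactly like this, it just shows that when any free block is as good as any other, allocation stops being interesting:

class ToyAllocator:
    """Toy block allocator for a medium where seek time is negligible:
    no attempt at contiguity, no cylinder groups, any free block will do."""
    def __init__(self, total_blocks):
        self.free = set(range(total_blocks))

    def allocate(self, n_blocks):
        if n_blocks > len(self.free):
            raise OSError("out of space")
        # grab whatever blocks happen to be free; fragmentation is irrelevant
        return [self.free.pop() for _ in range(n_blocks)]

    def release(self, blocks):
        self.free.update(blocks)

alloc = ToyAllocator(total_blocks=1024)
file_a = alloc.allocate(10)     # blocks may be scattered all over the device
alloc.release(file_a)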
100Mbs isn't really a good data transfer rate. Assuming I'm reading it right (100 megabits per second.. I think megabytes per second would be MBps). Current EIDE/UDMA drives get about 11 MBps. And this is only 12.5 MBps? (Note: hdparm -t /dev/hda3 tells me my drive is currently doing 11.35 MBps.) If it is 100MBps, well... then.. bathe it and bring it before me.
As a slight side note, I gotta say I think all the talk of this being vaporware is kinda silly. Firstly, so what if it is? You'll go on living in your day to day 6.3GB world and be as happy as the proverbial clam. Secondly, why would they do that? Do they have any competitors to crush by doing it? Do they really stand to gain anything with it? No. Investors would want to see a prototype in action before they'd give it any money, and all the big tech companies have bigtime badass doctors of engineering to go over the reports they've been given and see if it's feasible.
I bought my 8.4 gig this year for $60 including s&h.. and prices don't go down at a constant rate, they go down at an exponential rate, because technology is advancing. Everyone is thinking in terms of cost per megabyte, but that's completely linear thinking. Technology advances, it doesn't just get bigger! For instance, a DSL connection right now would cost this household less than our two internet connections, three phone lines, and two 14.4k(yeah, 14.4, it would still cost less) modems! It would also support a hell of a lot more.
Well you see, that's the trick. Technology has this weird way of following what science fiction predicts.
Case in point, on some episodes of the original Star Trek series, Spock had little data units that were, basically, 3.5 inch multicolored thin squares. That episode was made in the 60s (I think, I could be wrong) but the little data units or whatever they called them were identical to modern 3.5" floppies, and the slot he put them in looked a lot like the corresponding floppy drive. It even had an eject button.
I'm not saying fall on your knees before star trek or anything, just using them as an example of how technology tends to follow what science fiction authors predict/make up. Life imitating art and all that.
The question was about data _access_ times, not transfer rates. The article states "The data-access time for the new storage technology is predicted to be around 100 Mbps." Access times are not measured in Mbps; I'd like to see an access time in milliseconds. I'd guess that the mechanical parts in this design would limit access times to pretty much the same as current HDs, or even greater, as there is a need to focus something to read the multiple layers.
I wonder if there is a site... that shows a curve representing the falling cost of storage space,
Not a curve, but I have two relevant bookmarks, both found while looking for something else: Historical Cost-of-Storage Data [littletechshoppe.com], and a great article [deja.com] about trends and such by a storage engineer at SGI. cheers, mike
Lay off the fumes man, you're starting to see cross eyed.
If I wander over to the windows 98 machine I have in this house, and click properties for both the System and Windows folders, I get a combined size of just over 600 megabytes. The Windows folder is such a mess because of a poor set up on their part that I don't even want to bother trying to clean it up anymore.
If I however run a couple quick ``du -sh'' commands on my nix box, I come up with a combined size of under 5 megs for /boot and /etc. The kernel's less than a meg of that, around 550kb to be exact.
So what kind of weird hallucinogens have you been taking today? It looks an awful lot to me like the core operating system sizes are dramatically different, in favour of Linux being the more compact and less bloated.
I think it refers to the physical layout of the bits in the material, and the amount of "empty space" between them. There are also things to help locate the beginning of each block as it passes under the head, and generally to reduce positional error, which may be all that they mean here.
It's nice to see so many people don't have a clue what they're talking about...
From what I see, my Linux installation is using quite a bit more hard drive space than my Windows 2000 installation, and my Windows 98 installation doesn't come anywhere near the size of Linux. This would seem to imply that Linux would be the one to take advantage of the extra terabytes, wouldn't it?
There are people who have almost all the episodes on tape, and trade to get more. Unfortunately I don't have any Comedy Central ones on tape, and they won't trade for other things. There's also an MST3K VCD project, but I'd rather have tapes.
If anyone knows a good place with someone nice enough to just take maybe two or three tapes and then send one back w/ episodes on it, I'd love to find it. Let's start a Slashdot MST3K tape trading club.
You just have to pick the right code. With ideal block codes (which exist, for certain block lengths), every two parity symbols gives you the ability to correct one symbol error. So if you've got a medium with a low enough error rate (and aren't hard drives less than 1e-6 error probability?), 4% overhead can be more than enough.
So anyway, it's possible; as to whether that's how much is actually used, your guess is as good as mine.
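To put numbers on the "two parity symbols per correctable symbol error" rule, a small Python sketch assuming an idealized MDS code such as Reed-Solomon (real drives also interleave, since a Reed-Solomon codeword over GF(256) tops out at 255 symbols):

def mds_overhead(data_symbols, correctable_errors):
    # MDS property: 2*t parity symbols correct t symbol errors per codeword
    parity = 2 * correctable_errors
    return parity / (data_symbols + parity)

# a 512-byte sector plus 20 parity bytes corrects 10 single-byte errors:
print(round(mds_overhead(512, 10) * 100, 1))   # 3.8 (percent overhead)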
It's hard to compare a Win98 installation to a Linux installation. Win98 is really a given size (+ or - about 100MB of options), but Linux comes in packages between a meg and a few gigabytes. When you say Linux is bigger than Win98, what distribution, what packages, were appz included?
For a system that would hold 500 (SSSL) DVD's worth of data this is bad how?
Not necessarily bad, just cumbersome. The capacity/read speed quotient of these things makes mag tapes look like speed demons, and the time to scan a significant portion of the contents is going to be way up there. If you are trying to replace a DVD jukebox with one of these this is not a factor, but if you are trying to do anything like perusing a database using a field which isn't indexed, it's going to be glacial. It would make much more sense to have 100 23 GB drives than 1 2300 GB drive for a great many purposes, and some of those purposes are likely to be yours at one time or another.
In Star Wars, there are Holocrons that are tiny, wafer thin (from what I can tell) crystal-like squares that store immense amounts of data, mostly in the form of diaries, histories, etc.. legends, training, wisdom, and skills of the Jedi that are passed down from generation to generation. I realize that some might think that, well, if it's only history.. i.e., paperwork type stuff, it might not take up that much space, but you also have to remember that, as far as I know, the holocrons could also display video (holograph-projection, maybe?), and play sound. Of course, not too sure, I'll have to check my "Weapons & Technology" book again, but you get the idea. Pretty cool, IMO.
That is not a good comparison. Looking at my WinNT box, I see that the \Winnt directory includes such things as:
My browser cache.
All of my e-mail, including attachments.
Help files for much of the system.
A couple of third party applications.
Anything stored on "the desktop".
Everything in the "personal" folders for all users.
Dynamic link libraries for many third party apps.
\Windows holds much, much more than "the core operating system".
Now granted, this sort of organization sucks rocks, but to say that this is equivalent to /etc and /boot is, at best, extremely misleading.
Well, if this is true (and I haven't made up my mind yet; the lack of technical details was disturbing), what will we do with the space?
Sure, large web servers and other massive database-driven information repositories will be able to use it, but what about the home user? 15,000 hours of MP3s? Not likely.
I'm not going to make the mistake of saying it will be more than enough for anything; I'm sure in 10 years 2.3TB will be pitifully small, but I would like to know. In retrospect, it's easy to see how we can use more than 640KB RAM, but what retrospectively obvious things are we going to do to fill these drives?
Speculations, anyone?
You are forgetting that after y2k nothing will really matter. And 5 cents will be too expensive because our economy will collapse.... more prozac please.
2300 * 1024 * 1024 * 1024 / (44100 * 4 * 60 * 60) = ~3888
Stick a little microprocessor on it (it wouldn't need much of one!), add a DAC and ADC, and suddenly you have a portable audio recorder/player with some amazing muscle!
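Spelling that figure out in Python, assuming uncompressed 16-bit stereo at 44.1 kHz (4 bytes per sample frame) and reading the 2300 "gigs" as GiB:

capacity_bytes = 2300 * 1024**3          # 2300 GiB, per the figure above
bytes_per_second = 44100 * 2 * 2         # 44.1 kHz * 2 channels * 2 bytes/sample
hours = capacity_bytes / (bytes_per_second * 3600)
print(round(hours))                      # 3889 -- roughly 160 days of continuous audio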
I was thinking about this, and I wonder if any of the following might be true:
a) It's volatile
b) It has to be kept at 4 kelvin
c) It's volatile and has to be kept at 4 kelvin
I always thought it might be funny to have a computer that ran on cryogens. Imagine coming in in the morning and doing a liquid helium transfer before getting to work.
Or perhaps a 5000-watt, dishwasher-sized helium compressor sitting next to your credit-card-sized hard drive.
Just a quick question regarding this topic, but how are today's OSes set up to handle 2.3 TB of storage on a single drive?
I seem to recall something in the BeOS Bible regarding the addressing of this much storage, but, truth be told, my eyes start getting glossy when there's lots of '0's.
I'm assuming that Win9X will suck hard at this, but I'm not sure. Would Linux and the BSD's be able to manage this? Are there any other issues for dealing with drives this large?
4-5 years ago, when 1GB drives first started dropping under $1000, I would have laughed out loud at anyone who told me that you'd be able to buy a 4GB drive for under $100 by the end of the century, or that new PCs would be shipping with 23+ gigs as standard. I have very little doubt that in 2 years, we'll see multi-terabyte drives shipping for consumer-friendly prices.
Now, the solid state bit is an interesting spin, but think about it:
1. How much faster than 10K RPM can we spin drives? Not particularly that much before we have overheating and wear-and-tear issues to deal with.
2. Sure, we can have a penny-sized CD that holds umpteen zigabytes of data, but when dealing with magnetic disks, we're going to run into physical issues soon with data density.
3. Power. 10K drives need more current than 7200 or 5400rpm ones, and to go faster we'll need to suck even more. In today's world of green PCs, faster conventional hard drives aren't gonna do it.
I think this article is completely legit. Granted, I'm all with CT on the "believe it when I see it" issue, but I don't think it's completely off-the-wall.
-Chris
I'm still skeptical. The historical numbers you posted are exactly why. Apparently you forgot to do the math on those numbers =)
5 years ago 1 GB drives were just starting to come down in price, and I got an 810 MB hard drive for around $300. Now the best you can get for $300 is around 30 gigs. That's an increase of around 40 times the storage capacity/dollar over 5 years.
On the other hand, if a 2.3 TB drive were to ship for $50 in two years, that'd be an increase of around 500 times the storage capacity/dollar over two years.
I don't think that's going to happen. Perhaps in two years we'll see 300 gig hard drives, or possibly 500 gig hard drives at decent prices, but i doubt we'll see 2300+ gig hard drives for under $5000, let alone $50.
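One way to frame that skepticism is to convert both figures into an implied yearly growth rate; a rough Python sketch using the numbers above:

historical = 40 ** (1 / 5)    # ~40x over 5 years  -> about 2.1x per year
claimed    = 500 ** (1 / 2)   # ~500x over 2 years -> about 22x per year
print(round(historical, 1), round(claimed, 1))   # 2.1 22.4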
Well, I remember when buying a computer in 1994 that most hard drives were around 50 cents per megabyte. I bought my 8.4 gig last year (march 1998) for US$180, which is around 2 cents per megabyte.
This is why that 2.3 TB drive for $50 looks unreleastic. That'd be around 0.00002 cents per megabyte. In five years we've gone from 50 to 2 cents (25 times less), so I doubt we'll go from 2 to 0.00002 cents (100,000 times less) in a mere two years.
I, too, would like to see a chart with something more accurate than my anecdotal evidence =)
After checking over my math, I'm even more skeptical. The 2.3 TB drive for $50 would represent a 600,000 times increase in capacity per dollar over two years, compared to the 40 or so we've seen in the last 5 years.
This could help to solve NASA's storage problem. I've heard, maybe even on /., that they get so much data that there is really no way to archive it. Other groups also generate this kind of data. There will always be projects that will use large amounts of storage -- buildings of file cabinets, vaults of punched cards, paper or EM tape, hard drives, hard drive arrays. If the price is right, there'll be someone who wants to store an arbitrarily large amount of data.
I'm referring to a minimal installation of RedHat 6.0. By default, Gnome, KDE, Enlightenment, and some services and other accessories are installed. In the case of Windows, one has to keep in mind that a zillion accessories get installed along with the OS. Oh, and I meant Windows 2000 Server, not Professional, and IIS is installed. (Apache was not installed on the RedHat box)
Not that I believe in this technology, but one big consumer application would be digital VCRs. You could record a thousand hours of DVD-quality video with one of those. So you could record every episode of your favorite TV shows. Or get HBO for a few months and build up a library of movies.
Of course, this is still a long way from being able to record every channel all the time. With only 100 channels, you would run out of storage within a day. You could, though, pick your favorite channels, set up a profile of stuff you know you don't want to watch (e.g., golf), and have it record everything that doesn't fit the profile. You would then have a week or so after something was recorded to decide to watch or save it before it is recorded over by newer stuff.
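Roughly, in Python, taking "a thousand hours of DVD-quality video" on 2300 GB as the bitrate assumption (about 2.3 GB per hour per channel):

gb_per_channel_hour = 2300 / 1000        # ~2.3 GB/hour of DVD-quality video
fill_rate = 100 * gb_per_channel_hour    # 230 GB of new video per wall-clock hour
print(2300 / fill_rate)                  # 10.0 hours to fill the whole drive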
Well, not really, but my employer, EMC, has been selling multi-terabyte storage systems for years. If you've got the money, we'll set up a 10TB system for you.
Generally, EMC storage systems are partitioned into separate volumes, which show up as separate devices when viewed by a host computer.
Still, the point is that people are dealing with storage systems larger than what we're talking about here.
fsck has always been a pain. There are several solutions, though.
Much of the time used by fsck is for reading all the inodes. If you reduce the number of inodes, you speed up fsck. I did this with my MP3 partition. Unfortunately, ext2 won't let me have one inode per 1024K. Since with such large storage systems most people will be storing very large (by today's standards) files (excluding news/mail archives with one file per message), it makes sense to alter the file system to reserve fewer inodes. Using dynamic inode allocation makes a lot of sense here. You can also save some time by using larger block groups and larger block sizes, but the advantages there will be relatively insignificant.
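For a feel of why inode density matters on a 2.3 TB volume, a quick Python sketch of how many inodes fsck would have to walk at a few bytes-per-inode ratios (illustrative numbers only, not anyone's real defaults):

capacity = 2300 * 10**9
for bytes_per_inode in (4096, 65536, 1024 * 1024):
    print(bytes_per_inode, capacity // bytes_per_inode)
# 4096 bytes/inode -> ~561 million inodes to scan
# 65536            -> ~35 million
# 1048576          -> ~2.2 million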
The trend towards huge storage is one of the reasons why folks want a JFS for Linux. I had to fsck a couple 10GB IDE disks a few weeks ago and it was coffee break time. I can't imagine what TB-scale fsck times would be like. I have my fingers crossed that XFS makes it into Linux 2.6 (next year?).
Another solution is to use a different type of file system--one that protects itself from corruption or uses some sort of journaling to reduce the need for a full-blown fsck.
Re:hmmmmmmmmmm (Score:1)
Re:What do you do with 2.3 TB? (Score:1)
*THE* MP3 archive from hell (Score:1)
Re:Microsoft and disk space (Score:1)
Two Words: Holographic Video (Score:1)
pity a good place like techweb were fooled (Score:1)
You won't pay... (Score:1)
Re:Addressing 2.3TB with current OS'es (Score:1)
Anyway, if those drives become available, they will provide a nice test case for the new fs SGI has contributed.
Re:The access is a nightmare. (Score:1)
Re:What do you do with 2.3 TB? (Score:1)
Your memory'll fit in GIGABYTES?
Arthur C. Clarke says Petabytes, and he's obviously clueless (On this subject, otherwise, he's brilliant).
I'd need at least a couple of Exabyte chips to fit my memory in!
Re:hmmmmmmmmmm (Score:1)
Yeah, its not nearly enough (seriously) (Score:2)
Re:2.3TB is small! (Score:1)
100 Mbps? (Score:3)
Could be a problem... (Score:1)
I'll buy It (Score:1)
Re:100 Mbps? (Score:1)
Re:Got a problem with the Register? (Score:1)
Re:40% ECC Overhead? (Score:1)
The article says $50 is cost of manufacture... (Score:1)
$50 is a nice dream (Score:2)
90 GB vs. 2.3 TB hrm.. (Score:1)
Re:100MB/sec smokes current high-end (Score:1)
Re:Could be a problem... (Score:1)
Big claims .. I'd be worried for my head (Score:3)
Hmmm... $50 is too expensive! (Score:2)
20 years ago... (Score:1)
To me, it sounds more like those memory crystals on Star Trek and in so many other sci-fi stories.
historical perspective sought (Score:2)
Re:Big claims .. I'd be worried for my head (Score:2)
damn that's big (Score:1)
Re:What do you do with 2.3 TB? - Movies (Score:2)
Re:100 Mbps? (Score:1)
Re:Digital VCR (Score:1)
2000+ Gigs? Cool (Score:1)
Ken
Re:Microsoft and disk space (Score:1)
Re:The access is a nightmare. (Score:1)
Re:What do you do with 2.3 TB? - Movies (Score:1)
KnomeOffice (Score:1)
Vaporware due to commercial "interests" (Score:1)
XFS (Score:1)
The access is a nightmare. (Score:1)
Re:What do you do with 2.3 TB? (Score:1)
Re:100 Mbps? (Score:1)
Re:historical perspective sought (Score:1)
40% ECC Overhead? Not. (Score:1)
Re:MST3K!!! (Score:1)
$50 is the _PRODUCTION_ cost (Score:1)
Not that that's bad for 2 terabytes!
Yes, drive letters will be hell. (Score:1)
Vaporware == venture capital fraud (Score:2)
It would be interesting to see who the people behind this are, and what they've done in the past.
Disclaimer: I have no evidence, only suspicions.
I'll wait and see... (Score:2)
If they can do this for $200, I'll still buy one.
Re:What do you do with 2.3 TB? (Score:1)
Re:What do you do with 2.3 TB? - Movies (Score:2)
This is "solid state" the way hard drives are. (Score:2)
Re:2.3TB is small! (for BeOS) (Score:1)
hard drive cost (Score:1)
Re:The access is a nightmare. (Score:1)
Re:100 Mbps? Not slow! (Score:1)
A new FS (Score:1)
Re:Big claims .. I'd be worried for my head (Score:1)
Re:historical perspective sought (Score:1)
Re:20 years ago... (Score:1)
Re:100 Mbps? (Score:1)
Re:historical perspective sought (Score:1)
Re:Microsoft and disk space (Score:1)
Re:Microsoft and disk space (Score:1)
Re:40% ECC Overhead? Not. (Score:1)
Re:Could be a problem... (Score:1)
Microsoft and disk space (Score:1)
Re:MST3K!!! (Score:1)
Error correction in 4% overhead possible (Score:2)
Re:Microsoft and disk space (Score:1)
Re:Hmmm... $50 is too expensive! (Score:1)
Re:The access is a nightmare. (Score:1)
galactica (Score:1)
Re:Microsoft and disk space (Score:1)
What do you do with 2.3 TB? (Score:2)
Porn. You can never have enough porn. (Score:1)
Re:20 years ago... (Score:1)
Oops, I finally revealed myself
Re:Hmmm... $50 is too expensive! (Score:1)
Re:What do you do with 2.3 TB? (Score:2)
Possible Gotchas (Score:3)
Re:What do you do with 2.3 TB? (Score:1)
Addressing 2.3TB with current OS'es (Score:2)
Those who cannot remember the past... (Score:3)
Re:Addressing 2.3TB with current OS'es (Score:2)
Windows9x may or may not be able to handle it. The FAT32 maximum is somewhere above 2 TB, but I'm not sure how far above.
Linux will indeed "suck hard at this," due to ext2fs's maximum of 1 TB.
Re:Those who cannot remember the past... (Score:2)
Re:historical perspective sought (Score:2)
oops (Score:2)
Re:Could be a problem... (Score:1)
One word, vaporware (Score:1)
Re:What do you do with 2.3 TB? (Score:1)
Re:2.3tb....scandisk and defrag would be a week lo (Score:1)
heheh... that's why you'd use a decent file system instead of that FAT16/32 crap.
Re:Microsoft and disk space (Score:1)
Digital VCR (Score:5)
Re:What do you do with 2.3 TB? (Score:2)
... and have almost 100MB of space left over to store a document
(sorry, gratuitous MS-bashing... you could see it coming, couldn't you?)
2.3TB is small! (Score:2)
fsck time (Score:2)
That's why we need JFS (was Re:hmmmmmmmmmm) (Score:2)
Re:oops (Score:2)
*scratches head*
Re:fsck time (Score:2)