Vint Cerf: Data That's Here Today May Be Gone Tomorrow 358
dcblogs writes "Vinton Cerf is warning that digital things created today — spreadsheets, documents, presentations as well as mountains of scientific data — may not be readable in the years and centuries ahead. Cerf illustrates the problem in a simple way. He runs Microsoft Office 2011 on Macintosh, but it cannot read a 1997 PowerPoint file. 'It doesn't know what it is,' he said. 'I'm not blaming Microsoft,' said Cerf, who is Google's vice president and chief Internet evangelist. 'What I'm saying is that backward compatibility is very hard to preserve over very long periods of time.' He calls it a 'hard problem.'"
We're at an interesting spot right now, where we're worried that the internet won't remember everything, and also that it won't forget anything.
XML? (Score:2)
I think that given MS office and LibreOffice are in XML, it shouldn't be difficult at all to reverse engineer in the future.
Re: (Score:2, Insightful)
The same applies to any *open* format.
Re: (Score:2)
Hell, even the non-open formats are pretty easy to get to a readable level of functionality. They won't contain the markup necessarily and certain features won't be available but, frankly, if we're able to decode all the other ancient languages I'm pretty sure someone will be able to decode these as well.
Speaking of ancient... Err.. When did Vint go to Google? That's kind of cool that he has but that is, in itself, news to me. I must have missed the announcement as I'm sure there was one.
Re: (Score:3)
Into a usable document from scratch? Pretty hard. Ever looked at the XML of a moderately complex document?
Re: (Score:3)
Holy shit, yeah, you're right - it's totally impossible to strip out the XML tags and be left with readable plain text content!
I bet nobody could ever decode it!
You seem to be assuming a flat-text file with predictable order. Strip the XML out of anything in a tabular format (eg a spreadsheet -- see TFS) and you lose vital data. Blank cells are lost and the tabulated data no longer lines up.
It gets worse in a filetype with unstructured formatting, eg DTP and slideware. You've got a collection of elements that are only ordered by their metadata. The explanatory labels you want to overlay on top of that image? They're no longer linked to it and you've no way of k
Re: (Score:3)
I think that given MS office and LibreOffice are in XML, it shouldn't be difficult at all to reverse engineer in the future.
Yes, the problem is not "data" but "data in proprietary formats" ... and even that is becoming less of a problem. A converter to/from almost anything is usually just a google search away. With VMs and emulators, even proprietary binary programs are easier than ever to deal with. I can run any CP/M or C64 program on my desktop Linux computer using free emulators. This was indeed a "hard problem", but today it is mostly solved.
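To make that concrete (this is a sketch, not from the original post; package, binary and image names are illustrative, and distributions differ on details like ROM files), getting a free C64 emulator running on a Debian-style system looks roughly like:
$ sudo apt-get install vice          # free C64/VIC-20/PET emulator suite
$ x64 -autostart oldgame.d64         # boot the emulated C64 and autostart a disk image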
Re: (Score:2)
The problem is not just related to the format, but the medium it's stored on. I can still read C64 floppies because I have some drives, but everything I have for my Apple ][ is considered lost until I find both a drive and a working machine.
see Windows 1250 and 1251 (Score:3)
https://en.wikipedia.org/wiki/Windows-1250 [wikipedia.org]
Re: (Score:3)
I think that given MS office and LibreOffice are in XML, it shouldn't be difficult at all to reverse engineer in the future.
Binary formats were standard for everything up through Office 2003. Office 2007 (and 2003 with the optional converter pack, plus some weird bugs) could output something XML-based, though I have a vague memory from the OpenDocument/Office Open XML slugfest that 2007 produced something that deviated from the theoretical ideal of OOXML in some respects, and that full conformity came with 2010 or 2013. I might be remembering that wrong; but anything before 2003, and a lot from 2003, was definitely binary.
Re: (Score:2)
Maybe. (Score:5, Insightful)
XML doesn't magically solve everything in this regard. If there's no good documentation for the format, it's unlikely you'll be able to display everything exactly as intended. Likewise, if the format is hideously complex (see: Microsoft Office Open XML) or there's bugs in the de-facto implementation, it's going to be tricky to reverse engineer.
I'd also point out that MS Office spits out compressed XML. I believe it's based on ZIP, which is very well documented, but that's yet another hurdle to cross. And then you have to deal with the binary format of the XML itself -- ASCII, UTF8, etc.
Re: (Score:2)
The file being in ZIP format is documented. The character encoding of the XML file is specified in the XML declaration itself, as all XML files should do.
From what I've used so far the Open XML formats aren't hideously complex, although I've only been working with XLSX files.
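Worth seeing for yourself: a modern Office file is just a ZIP container full of XML parts, so ordinary tools can open it up. A rough sketch (filename hypothetical):
$ unzip -l report.xlsx                                  # the .xlsx is an ordinary ZIP archive
$ unzip -p report.xlsx xl/worksheets/sheet1.xml | less  # the sheet data is plain XML inside
And, as noted above, the character encoding is declared in the XML prolog of each part.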
Re: (Score:3)
ZIP format is documented.
Right now it is. What about the ragtag bunch of misfit librarians who are all that's left after the zombie apocalypse?
They burned all the books for warmth and to keep the zombies away.
Re: (Score:3)
Re:XML? (Score:5, Insightful)
Have you seen what some people (and MS) do with XML? And what convoluted structures they use? Coded in binary? With compression and other eminently hard to understand stuff? Most of these things will be readable just as long as the applications that created them are around, but not longer.
Forget XML. Forget Unicode as well. Plain ASCII is the only thing that works. Simple PDF or PostScript will work also, because the standards and open-source tools to read them will still be around. But nothing as complicated as a MS office document will survive. LibreOffice formats may have a chance, because LibreOffice may still be compilable and runnable (being FOSS), but only because of that and I would not bet on it.
Incidentally, all my decades-old LaTeX documents still compile and can also be read directly. So can my 20-year-old ASCII-coded measurement data.
Re:XML? (Score:5, Informative)
Not even Microsoft can implement their Office XML "standard"; from examination it's pretty much a direct name-for-name serialization of their internal binary structs, with some of the more obvious gaffes like explicitly saying "do this like this old version of Word" hastily renamed to placate ISO. It needs you to implement a whole bunch of specific behaviours if you want it to work in the MS software (things like "if you update this bit, you also have to update this other bit just so or it won't work"), but these aren't documented.
You've got more of a chance, sure, just because the structs are marked and you don't have to infer where their boundaries are, but it's a far cry from ODF which was designed from the outset to be an open XML format rather than just hastily being bunged together to permit large purchasing bodies (like governments) to tick the "Open format" box on their form.
My data will be readable (Score:4, Informative)
My data will be readable because I use bog-standard formats. If I get really froggy I use HTML, and you can just strip the tags and read that.
If his data won't be readable, that's his problem. Anything you want to save for posterity, export it now.
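Which really is a one-liner. A crude sketch (it will happily mangle scripts, comments and tags that span lines, but it shows the point):
$ sed -e 's/<[^>]*>//g' page.html > page.txt   # strip anything that looks like a tag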
Re:My data will be readable (Score:5, Insightful)
Until HTML includes DRM and half the stuff you create ends up being unreadable.
Well, really we are probably good for anything that can be opened in a text editor for a long long while; but the point is there. Anything can be lost to data format shifts.
As someone who had to re-type an 80-page document because the company stopped using the software the document was created on and didn't have a licence for it, and no converter found online worked, I can say this does happen.
How many people are going to shell out $600 for software to open something they want to make an edit on? How many are going to just give up and find someone to rekey it, or just give it up as a loss?
With more and more systems including format locks, in 50+ years historians will likely have a lot of trouble finding out details from today. Kind of like it is now when we go to look at archival film from WWII and find it's all faded into obscurity. We have the same problems, just with different causes. Then it was lack of preservation of a medium with a limited lifespan. Now it's storing stuff in formats that will go away as they are improved upon, blocked, or just forgotten about.
Sure, if you're in your 20s, or even 30s, you probably haven't realized that the copy of your grandfather's photos is sitting on a floppy disk in a proprietary format. But when you get older you may encounter these issues.
Re:My data will be readable (Score:5, Informative)
Or NASA data from deep space probes that's stored in now-unknown formats on mag tapes from long, long, long gone manufacturers.
Re: (Score:3)
Well, there's the problems with the medium itself, then there's the format, as you say (ought to be right up a cryptanalyst's alley, tho), then there's the real blocker: number of tracks, head design, and the circuitry that goes with it. Unless there are good documents for the machine's design and building, or one can be found in working order in a museum, you're SOL. It's a big problem that doesn't get much exposure.
Re: (Score:3)
From a 2002 slashdot story:
mccalli writes :
"Thought people might find this amusing. In 1986, the UK compiled an electronic [copy of the] domesday book. They used BBC Master computers to do it, and the result was put on laserdisc. I actually used this project whilst at school. This article states that nothing can now read these merely 15-year old discs. The original, written approx. 1086, is still doing fine thank you very much."
Sounds like a good candidate for Bruce Sterling's Dead Media Project. (Speaking
Re: (Score:3)
In fairness they did manage to transfer the stuff off the discs and put the stuff without copyright issues online.
Re: (Score:3)
Re: (Score:3)
"How many people are going to shell out $600 for software to open something they want to make an edit on?"
The upside to this is that when somebody wants to update that nifty company Flash web site and discovers that Flash now costs an arm and a leg, the site gets re-written in html.
emulation / virtualization (Score:3)
Re: (Score:2)
and when Unicode and ASCII are replaced?
Re: (Score:2)
Re: (Score:2)
Unicode is a bit sprawling; but ASCII is only 128 characters (unless dealing with the wonderful world of nonstandardized non-Latin extensions or ad-hoc 8-bit extensions-of-convenience is your problem, in which case I'd advise shirking your duties and drinking heavily), making preserving the whole thing, even by chiselling it into stone monuments or other archaic methods, potentially viable.
Re: (Score:3)
A) We have software that can use something called frequency analysis to decipher something encoded that has a 1-1 correspondence to something we know (i.e. the English alphabet).
B) Ignoring software, frequency analysis is something that could be (and before the days of computers, was) done by hand. Hell, some things could be picked out by eye. For one, all files would h
Re: (Score:2)
Re: (Score:2)
How do you encapsulate the VM so it will still work 20 years in the future?
Re:emulation / virtualization (Score:5, Funny)
You're very clever, young man, very clever - but it's VMs all the way down!
Re: (Score:3)
There are a few industrial setups where that is pretty much what has happened.
Re:emulation / virtualization (Score:4, Interesting)
We should have listened (Score:5, Insightful)
We're in a difficult spot right now because for years we ignored the warnings about 'proprietary file formats'.
I'm not blaming Microsoft either. We let Microsoft do this to us of our own free ignorance.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Actually, languages have been consolidating and standardizing rapidly with the advent of the printing press, effective and affordable transportation, broadcast media like TV and radio, and the Internet. Diversity of language is rapidly disappearing.
The way things are going now, there will be only a few dozen languages left at the end of this century, and possibly only a handful after the hundred years that follow.
Although it's entirely possible that technology will preserve native languages, too. If machine
Re: (Score:2)
Yes, backwards compatibility, blah blah blah... (Score:5, Insightful)
Yes, you're right I have this ASCII text file created in 1997 and I can't find anything to read it...
OH WAIT ACTUALLY FUCKING *EVERYTHING* STILL READS IT.
Stop gargling Microsoft's balls so much and wipe off your chin. Proprietary data formats are THE PROBLEM. Stop trying to redirect public discourse with this thinly veiled bullshit.
Re:Yes, backwards compatibility, blah blah blah... (Score:5, Informative)
Odds are that you don't need to convince Vint Cerf or Google in general about the advantages of open formats.
Re: (Score:2)
Yes, the Talk XMPP shutdown and Google Reader are a little disturbing. We're as far as we are with the ubiquity of the internet because of open formats enabling intercommunication and competition between products and services by different providers. That seems to be going away again in favour of platform lock-in with things like iMessage, FaceTime, etc. Google's Hangouts are at least cross platform, but that's really only a mild improvement. You still need to use Google's implementation. I'm just happy I c
Re: (Score:2)
But your EBCDIC documents are absolute rubbish now and the tools to convert them aren't commonplace any more.
Re: (Score:3)
Just Googled "ebcdic to ascii converter"
About 123,000 results.
Re: (Score:3)
Just Googled "oranges to apples converter"
About 4,780,000 results
Re: (Score:3)
But your EBCDIC documents are absolute rubbish now and the tools to convert them aren't commonplace any more.
How deep are your pockets?
*IBM Consulting*
apt-cache search EBCDIC (Score:2)
Yields 4 results in Ubuntu. You can search reputable open source archives on the web, too.
How deep are your pockets?
*IBM Consulting*
Um, really???
Re:Yes, backwards compatibility, blah blah blah... (Score:5, Insightful)
But your EBCDIC documents are absolute rubbish now and the tools to convert them aren't commonplace any more.
$ printf "\xC5\xC2\xC3\xC4\xC9\xC3\x25" | iconv -f ebcdic-us -t ascii /usr/bin/iconv
EBCDIC
$ dpkg -S `which iconv`
libc-bin:
$ apt-cache show libc-bin | grep -e Essential -e Priority
Essential: yes
Priority: required
So we have a program that can convert from EBCDIC-US to ASCII (or UTF-8, or whatever you want), that program is in an Essential/Required package on any Debian-based system, and for some reason you say such tools "aren't commonplace"?
Are you on crack?
Re: (Score:2)
...everything except that Zip Drive I saved it on.
Comment removed (Score:5, Interesting)
Re:DRM and the digital black hole (Score:5, Interesting)
What about online-only games? Will historians in 100 years be able to play WoW and see what the game was like?
Re: (Score:3)
Re: (Score:3)
Don't forget DRM (Score:5, Insightful)
We're living in what could well be a future dark age for archeologists / historians. Hardly anything is put into a nice hard format (stone is incredibly rare and metal gets stolen) for someone to find. What's left suffers from incompatible file formats, acid-based paper that decomposes, bit rot, cryptography, incompatible technology for data storage and, worst of all, DRM. With DRM you have active measures that try to prevent something from being usable.
In the old days people prevented use with armed guards, obfuscation and primitive crypto. Today we have servers that are required for operational functionality for many products. With the advent of the cloud you have reasons for storing things where you have a dependency on a third party. How many services that are cloud / server based have come about and gone tits up?
Even having a large well-known brand name doesn't protect you from having a server shut down. Just think of Microsoft's PlaysForSure service, which lasted less than a decade. Having a license and a physical disk isn't that helpful when the DRM requires an authentication server that doesn't exist. With the movement to put more and more DRM into the cloud or with SSL certificates (again dependent upon servers and naturally time-bombed), this is going to be a problem that will only grow worse.
Learning to break DRM is far more critical than file formats which require nothing more than a conversion tool.
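And on the file-format side, the conversion tool usually already exists. Assuming a stock LibreOffice install, for example, batch-converting a pile of old Word files is one command (paths illustrative):
$ soffice --headless --convert-to pdf --outdir converted/ archive/*.doc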
Re: (Score:2)
Learning to break DRM is far more critical than file formats which require nothing more than a conversion tool.
That is an utter waste of time. It makes me sick to think of how much good effort is wasted jailbreaking the iPhone, when Apple could merely write a few lines of code and none of that would have been necessary. The entire jailbreak community around Apple is compensating for a few lines of code.
I say that with complete respect for the jailbreakers, but it could be so much better.......
*sigh* (Score:3)
Digital archival is one of the HARD problems. Over the last 40 years we have already lost more cultural artifacts than were created in the entirety of human history. A great deal of that is useless garbage of course, but the original moon landing tape? Thousands of government emails revealing exactly what was going on at pivotal times in history?
The truth is, we need systems for hardcopy; digital is too transient. Emulators are a useful stopgap measure but don't protect against the kinds of catastrophic failures that we will likely see over the longer time frame. And we need indexing, because someone at some point will want to wade through our digital detritus.
real problem is: FEATURE CREEP (Score:3)
I've been part of archival problem planning. We went with DVD. Now that I am not there, I suspect they are thinking DVD sucks and are moving "forward", when DVD was more than good enough and those plastic discs will last a century. MPEG-2 files will have open-source decoders. Now physical readers will still be a problem... the only solution is to wait as long as possible and then switch to the next long-lasting format - but not necessarily the newest one at that time. (which is why moving to Blu-ray is a w
This is news? Nope. Not new... (Score:3)
Within arm's reach, I have floppy disks that contain files created in the AMI Pro word processor.... When I say floppy, I am talking about the 5 1/4 inch floppies.
Technology, both hardware and software, is not stagnant... It will always continue to develop and progress (ignore Windows 8). Data that is worth keeping will get converted. Data that isn't will get left behind. I would not be surprised if, in about 25 years, there is "classic" software just as there is classic literature...
Too much typing.. going back to drinking.....
Re: (Score:3)
In fact, a lot of things that would be of interest to us (i.e. everyday, mundane life) were never recorded at all, back when film and equipment were quite expensive and the effort and cost would have been saved for documenting "important" occasions. Even at a personal level, if I'd known
Tax Records (Score:3)
The IRS wants to audit me, going back several years. I kept the records as required but they are unreadable now.
Thanks Microsoft!
Re: (Score:2)
If it is Word/Excel, try disabling the file blocks using the registry or, in 2010 and later, using the UI in the Trust Center.
See http://support.microsoft.com/kb/922849 [microsoft.com]
One would think (Score:2)
Maybe there are smarter robots by then babysitting...
Another argument... (Score:2)
For open source. Save your files in open and/or openly defined, standardized formats and there will always be software that can deal with it.
But I guess it's difficult for people to hear you explain that to them with their head up their ass.
Re: (Score:2)
Argument for open standards, yes. Open source, no. You don't need open source for open standards. And open source does not necessarily mean open standards.
Google hire me, I solved this problem in 3 seconds (Score:2)
On the PowerPoint 4.0/95 converters... (Score:5, Insightful)
MS removed the PowerPoint 4.0/95 converters completely with Office 2007 for Windows and later, and disabled them by default in Office 2003 SP3 [microsoft.com]. And the PowerPoint 4.0 converter (but not 95) was disabled by default instead of fixed [microsoft.com] with MS09-017.
On the Mac, they removed them even earlier, when they ported Office to Carbon [microsoft.com].
IMO it would be a good idea for MS to package PP4X32 and PP7X32 from PowerPoint 2003 separately, along with a utility to call the converters of course.
Uh, hello? (Score:5, Funny)
He could've just downloaded MS's Powerpoint 97 viewer [microsoft.com]
Wasn't this solved ages ago? (Score:2)
I remember over two decades ago there was talk of making data objects, that is, data that knew how to present an object interface to get at its information. Data would carry its own reader in some ubiquitous language. But wait, we never got a ubiquitous language. Perhaps JavaScript today? But if you want to solve this problem, this is how to solve it. Or perhaps you could just package a converter to convert format XYZ to BSON as good enough, or at least better than today's breakage.
One thi
I have legible pictures over 150 years old (Score:3)
Re: (Score:2)
What digital picture standard (not raw) do you think you'll have any trouble reading, and roughly when do you think you'd have any trouble with it?
Media formats will change. 8" floppies anyone?
So you are worried that your MFM HD from the 1980s won't work, not worried about the .BMP on it not being readable, if you found some way to spin it up? I can't even tell
Re: (Score:3)
yes - this is a real issue - and ARCHIVED data that is important DOES need to be "spun up" and refreshed to new media.
If it's hard drives, yes. If it's optical media... well, that depends. Because some optical media just plain degrades over time. Some is written in special proprietary formats (like Apple's early implementations of CD+R) that you're going to have a hard time reading with CURRENT equipment.
If your data is archived to tape, and more than 10 years old, I'm afraid you're fucked.
No different than cars (Score:5, Interesting)
We're still able to restore cars from the 80s and earlier as the cars were fully mechanical or hydraulic. No computers.
Fast forward to 20 years from now: nobody's going to be carrying the computer boards for a 2004 Toyota Prius or a 2013 Tesla.
However, you'll still be able to restore your grandfather's '57 Chevy...
Re:No different than cars (Score:4, Informative)
Code should accompany data (Score:5, Interesting)
I presented a solution [blogspot.com] to this long-standing problem last year to the Denver HTML5 Meetup.
Code should never be separated from data. This is possible with HTML5, JavaScript, and open source.
In the presentation, I steal and repurpose Hofstadter [wikipedia.org]'s analogy of DNA to an LP vinyl record, which is an information bearer, but useless without its information retriever (the record player). Like the cell of an animal, which contains both DNA and the means to "play" it, I ask why not the same with software?
My maxim is: data should always carry the code with it to play itself. It was inspired by the field I've spent 50% of my career in: non-destructive testing, where, for example, X-rays and ultrasounds are performed on safety-critical industrial parts with 50-year service lives. If one of those parts fails and kills someone, you're going to want to go back into the old data, find the earliest indication of the flaw or fault, and reinspect every other part in the world like it that is still in service. And maybe you need to go back 50 years. Under such a context, not providing the code with the data could be considered an act of gross neglect.
In my presentation, I use the 1990s-era trick of embedding XSL into an XML file, with the addition of the XSL now being able to use HTML5/JavaScript. Sadly, I've only gotten it to work with Firefox -- the other browsers consider it a security violation.
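For reference, the classic form of that trick is nothing more than a processing instruction at the top of the data file pointing at the stylesheet; a minimal sketch with hypothetical filenames (the poster's variant embeds the XSL inline and layers HTML5/JavaScript on top):
$ head -2 scan-results.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="viewer.xsl"?>
$ xsltproc viewer.xsl scan-results.xml > viewable.html   # same transform, done outside the browser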
Re: (Score:2)
From a future data recovery standpoint, how is the "code" any more useful than data? You'd still need to be able to figure out how to execute the code itself --- the code is just an especially complex and capable file format (which likely makes it very difficult to figure out if you've lost the execution instructions). Some file formats are already complete programming languages --- like PostScript. Do you think you could make much sense of a PostScript representation of a document if you started without a
Re: (Score:2)
It's an issue of installed base.
The installed base of any given NDT system is typically less than Qty. 100, often much less. The installed base of HTML5 interpreters is on the order of a billion. The installed base of PowerPoint 97 at its peak was in the tens of millions. To be honest, I think Vint Cerf is complaining a bit much. Anyone (including him) could download the appropriate VMs, archival operating systems, and archival Microsoft Office systems to read PowerPoint 97 and even convert it to a modern f
Re:Code should NEVER accompany data! (Score:5, Insightful)
No! Fail! You don't get it!
1) Code is data
2) Code is data that is especially hard to interpret
3) One of the main reasons for all this mess is that in all those proprietary formats, data is intermixed with code, and the whole mess is very hard to parse.
Data should be kept completely isolated, as far away from code as possible. That way, if you cannot interpret the code any more, you will still be able to analyze and parse the data. You know, it is not that hard to construct a record player.
I do blame Microsoft (Score:5, Informative)
Seriously, why would Vint Cerf not blame Microsoft? They have an extremely poor track record with backwards compatibility, and I don't think they even know what forwards compatibility is. If you design the data formats correctly then you can keep things usable for decades (or centuries). Guess what, twenty-year-old TeX documents still work, and yet Word X won't work with Word X-2. I've pulled runoff documents off of '70s versions of Unix that can still be printed. That says to me that one can deal with compatibility issues.
This is all intentional on Microsoft's part too. They make money when customers buy new copies of software, so it is in their best financial interests to make sure that customers have significant pressure to upgrade. I remember the solution to an acknowledged bug in Word 97 was to make sure that everyone who was going to read your document had the appropriate Word 97 plug-in in their older version of Word. I completely blame Microsoft here.
This is not that hard a problem, IF the company pays attention to it and gives it even a small amount of priority.
Re: (Score:3)
To say that MS has a poor record of backwards compatibility is, well, ridiculous. It's only just about *the* most important thing for them, because the majority of their business is with businesses, and if their FooBar app doesn't run, then they don't upgrade.
No other OS has near the level of compatibility that the MS sequence does.
http://www.youtube.com/watch?v=vPnehDhGa14 [youtube.com]
http://blogs.msdn.com/b/oldnewthing/archive/2006/11/06/999999.aspx [msdn.com]
http://blogs.msdn.com/b/oldnewthing/archive/2003/08/28/54719.aspx [msdn.com]
Re: (Score:3)
No other OS has near the level of compatibility that the MS sequence does.
Somebody's been drinking the kool-aid.
There's a small, little-known company called IBM selling a type of computer called a "mainframe" which might beg to disagree. You can buy a modern mainframe which will still run the unmodified programs you wrote on an original System/360. In 1964.
Microsoft has not even existed as long as that chain of backwards compatibility, and you try getting the original Digger to run on Windows 8 (or
Re: (Score:3)
The OS stays compatible in some ways (Windows is not at all unique here). However the Microsoft applications have serious problems in this regard. Maybe some of the competition is not so great either but it's no excuse when Word can't even be compatible with itself. They have changed the file format in Word in fundamental ways several times.
Re: (Score:3)
No other OS has near the level of compatibility that the MS sequence does.
It's called ANSI C on Unix. Pick up a copy of The UNIX Programming Environment and you can still use the examples verbatim on a Linux machine today. And you can even still use Motif apps, if we're talking about GUI programs. They still work just like they did when they were new, except a hell of a lot faster.
Oh, you want backwards compatibility for closed-source software? Guess what? Plenty of software craps itself when it does anything interesting on the wrong version of Windows. In reality, there's only o
This problem isn't new to anyone (Score:2)
This problem isn't new to anyone. If it's new to you, then you need to get involved in the digital preservation movement.
http://en.wikipedia.org/wiki/Digital_obsolescence [wikipedia.org]
"hard problem" (Score:5, Insightful)
Vint, that's bullshit and you know it. It's nothing more than preserving syntaxes, grammar, file formats. That's not hard, and it only requires someone to create a format conversion ONCE to solve the problem at each stage of the evolution.
The real problem here is proprietary non-public formats and structures. When the structure of data has been a closely guarded secret and requires reverse engineering that may not even yield a perfect result, THAT is hard.
He should be blaming Microsoft (Score:3)
My first LaTeX publications from 20 years back and all my human-readable ASCII scientific data can still be read and used without any problem. Human-readable file formats in the UNIX tradition completely solve this problem.
This problem is only hard if the people making the data formats are either stupid or do not want their formats to be easily accessible to other applications, as Microsoft does. Of course, others are creating just as fundamentally broken formats for either of the same reasons.
Re: (Score:3)
00007b0 5f00 675f 6f6d 5f6e 7473 7261 5f74 005f
00007c0 696c 6362 732e 2e6f 0036 5f5f 7270 6e69
00007d0 6674 635f 6b68 6500 6978 0074 6573 6c74
00007e0 636f 6c61 0065 626d 7472 776f 0063 706f
00007f0 6974 646e 7300 7274 636e 7970 7000 7475
0000800 0073 6177 6e72 0078 5f5f 7473 6361 5f6b
0000810 6863 5f6b 6166 6c69 6900 7773 7270 6e69
the man is out of touch (Score:3)
You can get emulators for just about every machine you can imagine: PDP-10, PDP-11, DOS, Atari, Amiga, C64, microcontroller, etc. You can get hardware emulators with FPGAs if you like. Almost any important format is documented or has been reverse engineered. Yes, you can easily read 1997 PowerPoint files, even if his weird choice of Office on Mac can't. And that's only with current technology. Give it a few decades and computers will just automatically perform even the most complicated data conversions behind the scenes. "Computer, scan the 1997 floppy and put the data on screen."
best safeguard (Score:3)
The best safeguard is the abandonment of all existing proprietary formats to freedom (so anybody can write conversion software) and the proliferation of open formats on an ongoing basis.
Re:So? (Score:5, Insightful)
I think you will find that there's a little known branch of academia called "history" which sometimes takes a curious interest in even the most trivial of past information.....
Re: (Score:3)
I think you will find that there's a little known branch of academia called "history" which sometimes takes a curious interest in even the most trivial of past information.....
Even if you don't care about the historians, I'm sure the lucky people who have the pleasure of handling property deeds at your local governance hive can tell you a story from within the last week or two about needing to pull some rather seriously dusty documents to allow a present-day transaction to go through without incident.
Many data will, indeed, be of no interest at all, or the same historical interest that neolithic refuse dumps are; but data in the nontrivial-number-of-decades range are still live i
Re: (Score:2)
Now back to my zombie virus work. Anybody got a decent time capsule for me to use?
Re: (Score:2)
Re: (Score:2)
That is a gross exaggeration, but it is analogous to the point of the article. Without proper protections, all the information, notes, white papers, studies, etc. will be useless if there doesn't exist technology that can read it.
In a worst-case scenario, how would humankind rebuild and not forget what was previously learned (e.g. the dark ages we have already experienced)?
Re: (Score:2)
Re: (Score:2)
I think the user was either using PowerPoint 4.0 for Mac or did not upgrade to Office 97 immediately.
Re: (Score:2)
Quite likely. I had some old Word for Mac documents of scientific papers I wrote in the 90s, and the only way I was able to recover them a few years ago was to install a Windows 3.1-era copy of Word for Windows.
Re: (Score:3)
Have you tried disabling the file blocks [microsoft.com] first? At least Word for Mac 4.x and 5.x can be read this way.
Re: (Score:2)
WordBasic, actually. What is fun BTW is to unblock Word 6.0/95 formats in 2010 and later and open a file with WordBasic like SCANPROT.DOT.
Re: (Score:2)
Note they also sometimes drop support for old formats too:
https://bugs.freedesktop.org/show_bug.cgi?id=59902 [freedesktop.org]
Re: (Score:2)
By the time YOU care to convert a file and can't... there's no app, and NOBODY but you gives a damn about that file you got.