Technology

Instant Access Memory

tnielson writes: "The April issue of Wired interviews Stuart Parkin, an IBM scientist developing MRAM: non-volatile, fast, durable, and cheap. It should be great in an MP3 player, and according to the article, could make all of our computers instant-on! Problem is, five years is a long time to wait..."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    This is yet another example of a piece of "amazing" new technology which net magazines like Wired love to write about. They are always promising something which is a "quantum leap" ahead of current technology, but it's always 5 years down the line. And then, 5 years later do we see this? No, it's just more of the same.

    What is it about the net that encourages these places to publish this sort of rubbish? Is it the constant pressure for novelty, or is it just sloppy "journalism"? And do we really need a synopsis of the life of Mr. Parkin? Does it add anything to the article? No, it doesn't, it just wastes bandwidth.

  • Not having to route standby power is useful, but not essential. Tickling memory is about as simple an operation as it gets! There's just no logic in it.

    As for milliseconds spent refreshing, ye gads, memory operates in the nanosecond realm. This means memory is effectively realtime already.

    --Dan

  • MRAM makes it so you don't need to tickle your memory to keep the contents alive.

    It's not that hard to tickle memory. It's just that motherboards don't support doing it because operating systems have never known how to deal with it.

    Tickled DRAM is essentially identical to MRAM for purposes of nonvolatility within desktops and servers.

    --Dan
  • by Effugas ( 2378 ) on Tuesday April 11, 2000 @11:49PM (#1138450) Homepage
    This is the guy who helped come up with GMR? I bow down to his technical skills. But it seems that this technology is being sold for something that it just isn't.

    Really, this just doesn't have much to do with instant on technology.

    It's true. As useful as it is to require no power to store a charge, neither desktops nor servers have any serious problem with power--they're both plugged into a wall! There's no reason for mature DRAM memory to not receive the trickle charge it requires to keep its contents from drifting away. Problems come when operating systems (primarily) and motherboard standards fail to build in stasis modes--for all the determinism of computers, I find it rather surprising that the entire system cannot be simultaneously frozen until a given restart interrupt is triggered. But that's the situation we face--it's not that the memory doesn't last, it's that we don't know how to deal with a house of cards we don't need to rebuild every so often.

    Where I see this technology being useful is in laptops, or anything else where "power just to suspend" is a real issue. Heck, even for normal operation, memory can be a real drain on power: witness the effect of increasing from 2 to 8 MB of RAM on a Palm V (it's significant!). So this does matter for pervasive computing, as the article suggests.

    But it has almost nothing to do with "instant on". I do foresee it being implemented in systems which don't want to have to "recover state from hard drive" or "implement a trickle charge system to keep existing state", but that's not so much a breakthrough. The reduced power load scene DOES seem interesting, but let's not forget just how mature a technology DRAM is. They'll have to do some pretty amazing work with the MRAM to surpass DRAM. By then, where will DRAM be? Remember, Intel has its dominance partly because of the sheer amount of resources it can put into making the horrifically complex x86 fast. 21 billion is a lot of money to lose to MRAM!

    Thoughts?

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • BBC Micro was a 6502-based microcomputer manufactured by Acorn in the UK. It had 64K of RAM, of which 32K was taken up by a BASIC interpreter.
    It did colour and cool sounds and loads of ports - parallel, serial, midi, weird proprietary ones for drives and a processor direct bus that allowed you to connect a second computer (it didn't have to be the same architecture) for multiprocessing.

    They were very popular in UK schools and can still be found around the UK. Bulletproof as well.
    Had an excellent bank of ROM slots - you bought an application (the popular one being Wordwise) on a rom, slotted it into the machine and it became a resident programme.

    Basic programmes were entered at the command line, the command AUTO giving you line numbering and stuff (cool in those days).

    Originally sold with a tape drive, they mostly used 40/80 track 5.25 inch floppy drives and were famous for being bundled with the Epson FX80 dot matrix printer that was way more expensive than the computer and the loudest thing on the planet when running.

    Best games include Frak!, Castle Quest and the original Elite... first (only?) game to use two screen modes on the screen at once :)

    Troc
  • It did have 64K but the assembler you mention above used the first 32K automatically and you couldn't change that, so for applications and whatnot you only had 32K available.

    The user port was cool and the excellent analogue joysticks when everyone else had clicky microswitch ones :)

    Troc
  • Text adventures......

    god I remember getting well annoyed with The Hobbit.

    And buying books of computer games and typing them in......

    and one line scrolling games in 255 characters

    and citadel which took bloody hours to load off tape and wouldn't transfer to disc :(

    that's it, where's my duster. I can feel the need to play with the old beastie again (and I mean the BBC B :)

    On a weirder note, the British Science Museum has an Acorn Electron in the technology and communication section as "the shape of personal computing - soon we will have computers like this at home" - slightly behind the times I feel.

    Troc
  • Very true - what tends to happen is that in 5 years time, another technology, or a development of an existing technology has rendered the "amazing new breakthrough" obsolete (or too expensive, complex etc)

    These things tend to resurface a few years after they were invented or whatever and become part of technology anyway but without the fanfare and with a few people going "I told you so" and claiming 20/20 hindsight :)

    Things like holographic and/or 3D memory storage - first mentioned well over a decade ago as the "next great thing", promptly forgotten, and recently resurfaced as working prototypes etc.

    Best thing to do is take it with a pinch of salt and wait a few years.

    I also like the fact that you can read the submission at the top of the page as indicating an awesome instant-on device that takes 5 years to power up :)

    Hohum

    troc
  • Yes, instant-on is *good* for a system that does not crash as often. Mostly the system is unaware that the machine was turned off, so it is actually running for a very long time.

    I was wondering why we cannot do that right now. Can't the whole memory be put into the swap partition on shutdown, and some special version of LILO just restore it? I would think the time to swap in 100M from the disk would be pretty small. Biggest problem is that all the hardware has to be reinitialized, I guess new "poweroff/on" signals have to be sent to every process and a whole lot of programs and drivers need to be rewritten to reinit hardware on these signals. I can also see these being so badly written that in fact the "instant-on" is no longer instant, as each takes many seconds to reinit...

    This would have an almost negligible effect on reboot time, though: the typical application spends most of its time "initializing", not swapping stuff in from disk.
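
    A rough user-level sketch of the save/restore idea the parent describes (C, with the file name and state layout invented for illustration; a real whole-system suspend would of course live in the kernel and bootloader):

        /* Minimal analogy for "dump memory on shutdown, restore on boot":
         * save a state struct to a file at exit, reload it at startup,
         * fall back to a normal "slow" init if no image exists. */
        #include <stdio.h>

        struct app_state {
            long counter;                  /* stand-in for everything worth keeping */
        };

        static const char *image = "state.img";   /* hypothetical image file */

        int main(void)
        {
            struct app_state st = {0};
            FILE *f = fopen(image, "rb");

            if (f && fread(&st, sizeof st, 1, f) == 1) {
                printf("restored: counter=%ld\n", st.counter);   /* "instant on" path */
            } else {
                puts("no image found, doing the slow init instead"); /* normal boot path */
            }
            if (f) fclose(f);

            st.counter++;                          /* do some "work" */

            f = fopen(image, "wb");                /* checkpoint before exiting */
            if (f) { fwrite(&st, sizeof st, 1, f); fclose(f); }
            return 0;
        }

    The hard part the parent points out - hardware that needs reinitializing - has no user-level analogue here.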

  • Q1: What happens when someone takes a powerful magnet and wiggles it around one of these chips?

    Q2: What happens if/when the Earth's magnetic field flips polarity?

    Q3: What happens if someone shoves a whopping good bolt of electricity into these chips?
  • by Tjl ( 4493 )
    No-one here recognizes core memory from the 60s? Little magnetic rings, turned on&off by wires running through them?

    Maybe they're making a comeback but somehow I'm thinking that this is more of an april fools thing...
  • And the best "bitty box" it was too. It had a built-in assembler, the "user" port was one of the most generic expansion ports I've ever seen, you could plug a write-protectable RAM cartridge into one of the ROM slots and write your own ROMs, wow! Those were the days. It only had 32k, though, not 64. The later machines had 128k, but it wasn't easy to access.
  • Note that the stuff described above has all been fairly straightforward evolution of the hardware and software technology. The revolutionary part has been its effect on us all.

    Personally, I won't think anything truly revolutionary has come about until we all can have some sort of implantable wireless-networked computer that directly interfaces with our brains and gives us a sort of limited omniscient telepathy. If we can all plug into the World Mind at will, that would be truly revolutionary.

  • Wired isn't a technical magazine, so it didn't go into a tremendous amount of detail about the tech involved.

    Time was, when they had this thing called a "Geek Page." RIP. Now, they have ads for private corporate-executive jets. And I no longer have a WIRED subscription. Doom on you, WIRED. You forgot to dance with them that brung you.

  • Instant on would not make crashes meaningless. Since the screw-up that caused the crash is also in memory, it would fail again as soon as restarted. You'd need to have some sort of transactional system you could backstep through to a non-crashing point - in effect to CVS a corefile every few seconds. It would be a huge waste of resources. So, in fact a crash would be the case when you *should* go through the slow proper boot process. You'd still lose all your work, and you'd still have to wait awhile, although booting from NV-RAM would be much faster than from hard disk.
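
    A toy sketch of the "roll back to an older snapshot" idea (C; the sizes, the ring of copies and the single state buffer are all invented for illustration):

        /* Keep the last few copies of a state buffer; after a "crash",
         * step back to an older, hopefully non-crashing one. */
        #include <stdio.h>
        #include <string.h>

        #define SNAPSHOTS  4
        #define STATE_SIZE 1024

        static char state[STATE_SIZE];              /* the "memory" being protected */
        static char ring[SNAPSHOTS][STATE_SIZE];    /* last few known-good copies */
        static int  newest = -1;

        static void checkpoint(void)                /* call "every few seconds" */
        {
            newest = (newest + 1) % SNAPSHOTS;
            memcpy(ring[newest], state, STATE_SIZE);
        }

        static int rollback(int generations_back)   /* backstep after a crash */
        {
            if (newest < 0 || generations_back >= SNAPSHOTS)
                return -1;                          /* nothing old enough: cold boot */
            int idx = (newest - generations_back + SNAPSHOTS) % SNAPSHOTS;
            memcpy(state, ring[idx], STATE_SIZE);
            return 0;
        }

        int main(void)
        {
            snprintf(state, STATE_SIZE, "known-good state");
            checkpoint();
            snprintf(state, STATE_SIZE, "state that is about to crash");
            if (rollback(0) == 0)
                printf("rolled back to: %s\n", state);
            return 0;
        }

    Even this toy multiplies the memory footprint by the number of snapshots kept, which is exactly the resource cost the parent is complaining about.
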
  • Freetrek.linuxgames.com

    and lots of other stuff too for work.

    I am definitely a coder, and have been since I was a kid. Coding is one of those things that most definitely does NOT take a fast processor to do. Programmers in this day and age who go out and buy a 700 MHz processor are wasting their money unless they are also going to play Quake on that machine.

    Use make, gcc, and vi. You need nothing else. Compiles and links are extremely fast even on a P133 running Linux, and completely comfortable on my Celery 300.

    I guess you guys don't remember the bad old days when a fast machine was 1 mips. Even on those machines Turbo Pascal v3.0a wasn't too bad at all.

    Your FreeBSD machine should be very fast. If you're compiling 1.5 megs of code every time you do a compile-link-test cycle, then I'd recommend using a more modern technique: separate compilation units and makefiles. :-)
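
    A minimal sketch of what "separate compilation units" means in practice, shown as three small files (names invented; saved separately they build with the commands in the comments, and make only redoes the steps whose inputs changed):

        /* ---- board.h ---- */
        #ifndef BOARD_H
        #define BOARD_H
        void board_reset(void);     /* the only thing main.c needs to know */
        #endif

        /* ---- board.c ----
         * gcc -c board.c              (rerun only when board.c or board.h changes) */
        #include <stdio.h>
        #include "board.h"
        void board_reset(void) { puts("board reset"); }

        /* ---- main.c ----
         * gcc -c main.c               (rerun only when main.c or board.h changes)
         * gcc main.o board.o -o game  (relink only when either .o changes) */
        #include "board.h"
        int main(void) { board_reset(); return 0; }

    With a makefile encoding those rules, touching main.c recompiles one file instead of the whole 1.5 megs of source.
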
  • Apparently in 5 years we will have computers. [1939]

    Apparently in 5 years I will have a computer. [me, in 1977]

    Anyway, we're going to have super mega fast computers in 5 years, with super mega capacity hard drives, and awesome color.

    If you compare computers of 5 years ago you'll find that they were all 486's which from a user's perspective were frustratingly slow. Even Linux, speedy as it was on that hardware, wasn't super fast because the hardware wasn't up to it.

    Nowadays, everything on my Linux box happens instantaneously, and I have absolutely no desire for a faster computer.

    The gains of the next 5 years will therefore mean less to me than the gains of the last 5 years.

    If there is some real change in my computing experience, it will be because we've crossed some magical gap that allows a new technology.

    I've been reading for years about how speech interfaces were just around the corner, and that the new 386 processors would have the horsepower to do it and blah blah blah. Every year it was always just around the corner.

    I think really good speech recognition will still take at least 5-10 years, and will require machines at least 10, probably 100 times faster than my Celery 300A machine. Until then, I just don't need a bigger box!

  • I'll settle for a slashdot web server that serves pages at some sort of industry standard speed.

  • Have you ever used Linux? If you want to boot JUST the kernel and bash you can load up in a few seconds; if you want to load any daemons or apps that let you do something it takes as much time as Windows to boot. Instant on computers are stuff like terminals, although an instant on workstation would be pretty cool. Your stability argument is stupid, the OS means shit if your programs aren't programmed well, a shoddy Linux app is as bad as a shoddy Windows app. Boot up time for an app is aggregate, so after a while the time builds up to a loss of productivity. In the course of a year you'd make back hours if not days worth of productivity if you could have a program open the instant you needed it. Stop being a "me too" wiener.
  • Booting up a kernel is nothing; start in run level 3 on an older system that only has about 32 megs of RAM and you'll see the advantages of having an instant on system. If your system boots up a GUI and daemons, that is considered part of the system. Stability is a moot point; what you do or don't do can definitely affect system stability. You can run Windows 98 for days if you're keen and know how to manage it (Wintop works great for killing hung apps). I wouldn't want to spend days in Win 98 but it is possible. Booting isn't an issue if you store your system in a FlashROM and can hit a power button and get to work. Go down to Circuit City and play with an i-Opener demo, flick the power on and off a few times and you'll see what I'm talking about.
  • by Graymalkin ( 13732 ) on Wednesday April 12, 2000 @10:27AM (#1138467)
    Linux wieners tout never having to reboot a system because their OS is oh so stable. Big efing deal. My file server's been up for a week and will probably stay up until I decide I want to upgrade the kernel. This is totally beside the point of instant-on computers. It would be rad to have a computer with 10 gigs of RAM that all your stuff was on; that way you'd have little seek time and could turn the system on and off like a television. Why is this cool? Because some people don't like leaving computers on all the time, in many places electricity can be pretty pricey at certain times of the year, or they may just not want to have their electrical toys running 24/7. Instant on would be great in a corporate environment because you'd get to save time waiting for stuff to start; that time builds up in the course of a year, costing you cash. Home users also wouldn't have to deal with boot-ups which they may or may not understand. Oh well.
  • Note that the stuff described above has all been fairly straightforward evolution of the hardware and software technology. The revolutionary part has been its effect on us all.

    I think you are right to question the revolutionary nature of individual technologies. Nature doesn't move in quantum leaps, only in a continuum, and by nature I mean to include human artifice. It follows that patents should not be granted on the grounds of non-obviousness because every invention, regardless of its complexity, will owe a large debt to the past.

    A better solution might be to extend the notion of usefulness in judging patentability by placing a large burden on the applicant to show that his or her invention benefits human existence in some fairly significant way. This may still suffer from some vagueness, but at least it promises some payback for granting exclusive rights to work that incorporates the benefits of past discoveries without due compensation.

  • Certainly a superior choice to the current implementation of patents. But if the same clerks are doing the job with the same kind of oversight (seems to be "more patents is better...shows you've been working!"), then the results probably wouldn't be any better.
  • Uhm, when the crash always (or with a probability of 90%) occurs when you just happen to be saving your 50-page diploma thesis, I guess you wouldn't mind reboot times.

    Instability mostly means lost *work*, and that means more lost time than just the reboots.

    The by far worst problem with instability is data loss, not reboot time.
  • This idea of persistent memory is interesting when combined with something like EROS [eros-os.org], which is designed to be a persistent system. I don't see how it works well for current systems - the article makes references to not having to wait for the computer to reboot if it crashes... Except with current systems, you'd need to reload a lot of stuff in RAM anyway, because it would've been corrupted by the crash...

    I remember a little hack on the Amiga (Fastboot?) which was nearly instant-on. It dumped a copy of your memory to disk, and would just pull that memory image back into memory upon boot... So you could boot the machine in the time it took to pull however many megs of RAM you had off of hard disk. It certainly had its share of problems, but it was interesting to play with... Windows 98's suspend to disk mode is pretty similar, although I haven't actually played with that. Still, it certainly sounds like a nice technology for things like MP3 players and palmtop type computers, if nothing else.

  • >> Not having to route standby power is useful, but not essential. <<

    It is funny that you should say this. The entire computer industry is based on the concept of "useful, but not essential." Soundcards, 3D graphics and 13 gig HDDs sure are useful, sometimes not, and most times overkill for people who only touch a computer for internet access.

    Oh, and I meant to say nano, but being not awake can naught but harm my thinking skills.

    Besides, if the difference in access time from 70ns to 10ns is noticeable in hefty applications such as Photoshop, a 10ms to 0ms jump would be just as noticeable.

  • I'd say dismissing this as a laptop thing is a bit out of bounds. The ability to suspend your system without power is good for desktops as well. It would be nice since board manufacturers wouldn't have to route standby power, decreasing complexity.

    Also, any milliseconds not spent refreshing power is a millisecond using data. Consider what this would do for realtime access.

  • If you pull the plug, the state of memory is retained, but the CPU state is lost.

    Some older systems (PDP-11) support a power-fail interrupt that allows the CPU to save any volatile state information in core (non-volatile) memory. I don't know if any PC hardware supports this.
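
    A user-space analogy of such a power-fail hook on Linux (a sketch only: whether anything actually delivers SIGPWR depends on your init/UPS setup, and the "state" and file here are invented):

        /* On SIGPWR, dump the volatile bits somewhere persistent, then exit. */
        #define _GNU_SOURCE
        #include <signal.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        static volatile sig_atomic_t power_failing = 0;

        static void on_power_fail(int sig)
        {
            (void)sig;
            power_failing = 1;            /* just set a flag; do the saving outside the handler */
        }

        int main(void)
        {
            struct sigaction sa;
            memset(&sa, 0, sizeof sa);
            sa.sa_handler = on_power_fail;
            sigaction(SIGPWR, &sa, NULL);

            long cpu_context = 42;        /* stand-in for registers / CPU state */

            for (;;) {
                if (power_failing) {
                    FILE *f = fopen("cpu-state.dump", "wb");  /* "core" in the PDP-11 sense */
                    if (f) { fwrite(&cpu_context, sizeof cpu_context, 1, f); fclose(f); }
                    _exit(0);
                }
                pause();                  /* sleep until some signal arrives */
            }
        }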

  • Anyone else remember bubble memory [webopedia.com] from the 80's ?

    That was magnetic based and non-volatile, and I seem to recall it being used in a series of portables from some manufacturer like Sharp.

    I've got a nagging feeling that one of the reasons it didn't take off was access speed. I wonder if this new approach is at all similar, and if so what they have done about any performance issues.

    I mean, if it required dirty great RAM caching to make the performance acceptable, surely this would be a reinvention of the hard disk? (joke)

  • So what you're saying is that you will accept less stability if the reboots take less time? I don't think so. In my world, downtime is downtime, and no matter how short the downtime is, it takes away from my productivity. Or worse, takes away from the productivity of my clients.

    I'll gladly accept a 5-minute reboot cycle if I have to do it once every couple months, during off hours. A 1-minute reboot cycle (let's be realistic about this, it will never be "instant," especially with certain OSes) in the middle of quarter-end or year-end processing is A Big Deal.

  • It wasn't the April 1st edition was it?
  • > I don't care if it doesn't recover from a crash because my linux box never crashes, seriously.

    Your power 100% too? Ever accidentally turn off the machine or hit reset?

    As someone up there pointed out, though, total persistence is not needed. You can treat part of the memory as a filesystem if you want and the rest as scratch storage (like current RAM). It's just that it becomes a rather strange and arbitrary distinction.

    It's also hard to believe that one day these things will be cheaper than HDD - $0.01/MB sounds pretty cheap for any sort of silicon...

    Stephen

  • by srn_test ( 27835 ) on Wednesday April 12, 2000 @12:17AM (#1138479) Homepage
    This sort of thing is what the persistent operating system groups have been working on doing for years.

    It turns out that it's _hard_ to do - keeping the data around is the easy part; what do you do when the OS crashes? How do you recover?

    You end up with a huge database like wrapper around the entire OS, and really heavy-weight recovery code to try to rebuild a consistent state of the system.

    You've also got the problem that if something is wrong in the OS, when you reboot you'll quite possibly just trigger the same bug again! Makes Microsoft style "reboot to fix the problem" solutions not so good.

    See some persistent OS sites, like:

    • http://www.psrg.cs.usyd.edu.au/
    • http://www.cs.stir.ac.uk/~aol/publicationlist.html

    These are just a few I happen to know.

    Stephen

  • Modify Linux so that I hit a key combination that stored the current memory contents and CPU state to disk. Then modify lilo so that it would load the image to memory and reset the CPU to the current state.

    This way, I could load the programs I normally use and wouldn't have to wait for each to load every time I rebooted.

    Just one problem I can think of offhand: what to do with the state of devices, say a sound card, that are normally reset when the driver is loaded?
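
    One common shape for that device problem is a per-driver suspend/resume hook that the snapshot code walks before saving and after restoring. This is a generic sketch, not the actual Linux driver interface; all names are invented:

        #include <stdio.h>

        struct device_ops {
            const char *name;
            void (*suspend)(void);   /* quiesce hardware, remember its settings */
            void (*resume)(void);    /* reprogram hardware from what was remembered */
        };

        static void snd_suspend(void) { puts("soundcard: stop DMA, save mixer registers"); }
        static void snd_resume(void)  { puts("soundcard: restore mixer registers, restart DMA"); }

        static struct device_ops devices[] = {
            { "soundcard", snd_suspend, snd_resume },
        };
        static const int ndevices = sizeof devices / sizeof devices[0];

        int main(void)
        {
            for (int i = 0; i < ndevices; i++) devices[i].suspend();
            puts("... write memory image / power off / reload image ...");
            for (int i = ndevices - 1; i >= 0; i--) devices[i].resume();  /* reverse order */
            return 0;
        }
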
  • 1) Don't do that, it's not nice 8^) floppy discs/backup tapes never responded well, either

    2) We have bigger problems than losing 'instant-on' function... (plus the time on this is...)

    3) Same thing that happens when lightning hits anything else. Hope it's under warranty.
  • by CAIMLAS ( 41445 ) on Tuesday April 11, 2000 @11:32PM (#1138482)
    Would an instant on computer not make computer program stability a near-useless feature? For me, loss of work isn't my fear in a crash - it's sitting there on my white skinny butt for X minutes while my machine reboots.

    I switched to Linux to prevent that. (And to geek around more, but that's another story.) Would such a thing make Linux's main strong point null, or would Linux be able to develop its other fields - digital image/video editing, audio, games, workstation software - in time to surpass Wintel products, on a quality based assessment alone?

    -------
    CAIMLAS

  • Wired isn't a technical magazine, so it didn't go into a tremendous amount of detail about the tech involved. But I'm curious as to how it works.

    The article mentioned that the guy featured has a few patents on the basic technology, including the magnetic cell itself and the architecture of the MRAM memory system as a whole.

    Anyone got the patent numbers? I'd love to read them.
  • Just because you have an "instant-on" PC doesn't mean your OS has to be stupid. Just leave in the normal boot-up ability.

    Why not just make MRAM a boot option in your BIOS? It knows to look for anything in MRAM that isn't 0. If there are any 1's, it boots from memory. Then, just implement a button on the front of the case, similar to reset. When you push it, after a crash, it resets all bits in MRAM to 0. When you power on your system, the thing can't boot straight from memory, so it defaults to your first IDE device.

    Makes sense to me.
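
    A toy version of that boot decision (C; it uses a validity marker rather than literally scanning for non-zero bits, and the magic value, struct and button are all invented for illustration):

        #include <stdio.h>
        #include <string.h>

        #define IMAGE_MAGIC 0x4D52414DUL      /* hypothetical "valid image" marker */

        struct nv_image {
            unsigned long magic;
            /* ... saved memory / CPU state would follow ... */
        };

        static struct nv_image mram;          /* stand-in for the non-volatile part */

        static void clear_button(void)        /* the front-panel "reset to zero" */
        {
            memset(&mram, 0, sizeof mram);
        }

        static void boot(void)
        {
            if (mram.magic == IMAGE_MAGIC)
                puts("valid image found: resuming from MRAM (instant on)");
            else
                puts("no valid image: booting from the first IDE device");
        }

        int main(void)
        {
            mram.magic = IMAGE_MAGIC;   /* pretend a previous session saved state */
            boot();                     /* -> instant on */
            clear_button();             /* user hit the button after a crash */
            boot();                     /* -> normal boot */
            return 0;
        }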

  • The way I see it, a type of RAM that allows instant-on computers is rather pointless. In an unstable system (such as Windows), you need to clear the RAM in order to free resources and bring back a semblance of stability. In a stable system, the system does not need to be shut down, so instant-on computers aren't that special because you don't have to reboot anyway. Even if you wanted to reboot, in a system such as Linux, the bootup is so quick that instant-on doesn't make much difference anyhow.

    Chris Hagar
  • I use Linux all the time. It doesn't seem to me like a long boot-up, or possibly it's just so much less than a Windows boot-up that it seems short. Regardless, a Linux system with a bunch of daemons takes the same amount of time as Windows without all such stuff. My main argument is that you can keep Linux running and so not waste time booting.

    A shoddy Windows app can crash the entire system handily. My main argument is that the operating system is insecure and leaks memory, situations which are remedied by rebooting and so flushing the cache and reloading the operating system.

    The boot-up time being talked about here is the boot-up time of the system. Starting apps isn't often called "booting up" anyhow. Besides, a program in Linux can be started up almost instantly and even in Windows is quite quick.

    You're "stop being a 'me too' weiner" is obviously just a simple flame, but...I had not seen this idea expressed and was not even bashing Windows.

    Chris Hagar
  • re-entrancy problems

    Checkpoint the memory periodically?


    MIB problem

    Have another key sequence that wipes all of your still-semi-volatile memory?


    --

  • 2)It still takes idiot windows2000 30+ seconds to boot up on my machine with 256 megs of RAM.
    That's right, it's pretty bad. (And don't forget, you still have to POST, which takes a good 15 or 20 seconds on my machine.)

    But Windows also supports suspend-to-RAM, which is much more useful. Waking up from suspend-to-RAM only takes about three seconds. As long as you're confident that the machine will have power while it's suspended, there's no reason to hibernate.

    This is really more of a motherboard/chipset feature than an OS feature, by the way. Windows 98/2000 is just the only x86 OS that implements it.
  • >Would an instant on computer not make computer
    >program stability a near-useless feature?

    No, instant reboot would most likely only work if the computer is shutdown in a stable state. If it crashes then you will have to zap the content of the memory and reload everything from the HD.

    In some ways it's not very different of putting your computer in sleep mode rather than shutting it down (for computers that can turn off everything but RAM while in sleep mode.)

    Janus
  • If the magnetic polarity flips, then I think the least of your problems will be having to reboot your PC...
  • I'm fairly certain they mean instant in a slightly less technical fashion here, as in: since the data is always held in memory, it's 'instantly' accessible, since you don't have to load it into RAM. Take it to mean how they use the word when discussing the 'instant on' feature of a Palm or other PDA device.
    -------------------------------------------
  • Uh, it HAS a new name: MRAM... Magnetic RAM.
    -------------------------------------------
  • We still really need something like this. A cheap, persistent memory would be perfect for devices like digital cameras and mp3 players, where access speed is much less important than capacity and cost. Perhaps if bubble memory hadn't been killed it would now be fulfilling its potential.

    HH

    Yellow tigers crouched in jungles in her dark eyes.
  • by wowbagger ( 69688 ) on Wednesday April 12, 2000 @01:10AM (#1138494) Homepage Journal
    I see two different but related problems with a system composed entirely of non-volatile memory:
    1. If you pull the plug, the state of memory is retained, but the CPU state is lost. This is very much akin to an interrupt occurring, except that an interrupt at least records the CPU state on the stack. The problem is that you now have to protect everything from re-entrancy problems: otherwise when the system is abruptly powered off and restarted the CPU has to do a restart and system or application data structures may be in a non-reenterable state.
    2. If the memory is corrupted, how do you force the system to clear? You'd need a [button|keysequence|etc.] that would tell the system to do a complete coldstart & purge of memory.

    Also, this could be bad if the Men In Black (and I don't mean division 6) kick in your door. Anything in system memory will be in system memory "and may be used against you in court", whether you like it or not. You won't be able to just yank the plug and clear system memory.

    That said, I still think this will be wonderful in the main. It's just going to have some implications we'll need to think about.
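
    One simple defence against problem (1) is to never update a persistent structure in place: write the new version into a spare slot and flip a single "which slot is current" marker last, so an interrupted update leaves the old copy intact. A toy sketch (the layout is invented, and a real design would also need that final store to be atomic and ordered after the data write):

        #include <stdio.h>

        struct record { long balance; };

        static struct record slot[2];     /* both slots live in the non-volatile memory */
        static int current;               /* which slot is the real one */

        static void update(long new_balance)
        {
            int next = 1 - current;
            slot[next].balance = new_balance;   /* may be interrupted: old slot untouched */
            current = next;                     /* one small store "commits" the update */
        }

        int main(void)
        {
            update(100);
            update(250);                  /* imagine power dying anywhere in here */
            printf("after restart, balance = %ld\n", slot[current].balance);
            return 0;
        }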

  • Simply using nonvolatile memory won't make your computer boot faster. After a crash, you'll still have to wait for it to load & initialize everything.
    Wouldn't the solution *be* to use an OS that doesn't crash and boots quickly?

    And there's absolutely nothing in this article about the speed of this memory. WTF, Rob?

    If we're nerds, we should understand something a *little* more technical than this. Why don't they post a link to IBM's website or something?

    Where is my mind?
    mfspr r3, pc / lvxl v0, 0, r3 / li r0, 16 / stvxl v0, r3, r0
  • It's not just that. The problem is, people want more and more complicated software--and as it becomes more and more complicated, it also becomes more unstable, inefficient, etc.

    Yes, Windows 2000 is even slower and clumsier than Windows 95. But compare GNOME, KDE, and MacOS 9.0 to AnotherLevel, AfterStep, and MacOS 7.5. In every case, they're much bigger, much slower, and less stable.

    So why do we continue to upgrade? TWM came preinstalled with my distribution, and yet I never use it.

    Part of it is just the spiffiness issue. As Steve Jobs said in explanation of the glowing buttons in Aqua, "Well, when you've got a gigaflop to play with...." My GNOME desktop looks much better than my first X desktop--and most of the apps I run fit in with it pretty well. Plus, Falco dances along to the music when I play MP3s!

    For that matter, I can now play MP3s, even mix two at a time. I can edit huge graphics files with GIMP. I have 4 times the pixels at 4 times the color depth and 20 times the net bandwidth, and everything still works smoothly.

    And then there's usability. I could ramble on about how nice it is to be able to browse SMB shares, and drag an MP3 onto an XMMS playlist, but there's a much better indicator. My roommate, who's never used anything other than Win9x and MacOS 8/9 was able to sit down at the computer and figure out Netscape, GnomeICU, GEdit, the file manager, etc. within a few hours, with no help. When I tried to get a former roommate to use linux a couple of years ago, she ran screaming from the room....

    So yeah, ESD and GNOME and imwheel and so on are slowing down my computer, but I still use them, and I'm sure a lot of other people do. And they make my extra CPU power useful for more than just compiles and games....
  • Another reason suspending is useful for desktops: There are times when you need to disconnect the plug.

    For example, yesterday, I moved my computer to a new UPS. To do this, I had to power down, unplug the computer, plug into the UPS, power up, and go through the whole boot sequence. If I could just suspend, unplug, plug in, and wake, like a laptop, it would have been more convenient. And it would have meant 2 seconds downtime for my server instead of 2 minutes.

    It's not that big of a deal, but it's something.
  • When's the last time you used a 10-year-old laptop?

    First you'd have to remember how to use GNOME 1.0 or MacOS 9 or Windows 98.

    Then you'd spend the rest of the day cursing the 366MHz CPU, the tiny 8GB hard drive and 128MB memory.

    The chunky 1024x768 resolution that only works from a narrow angle would really bother you. Plus the old-fashioned keyboard--layouts change in 10 years (if we're even still using flat QWERTY-based key arrays).

    And what's the chance the trackball or touchpad or thumb thingy would even work after 10 years of dust collecting? And will people still use such things anyway? I used to be perfectly happy navigating windows with a joystick on an Apple //gs, but I can't imagine doing it now.

    Then you'd remember that it can't connect to the net without some kind of Ethernet thingy connected to DSL or Cable, whatever those are. After a few hours of futzing around, you'd get it connected, and it would run off to your homepage.

    So after it tried to hit an IPv4 nameserver to look up the old-style DNS name slashdot.org, you'd spend the next few hours actually getting it connected to the right place. Where it would try to load the page customized for your old account, which hasn't been used in 8 years.

    Of course your browser won't support any HTML 9.0 features, but good old /. will still be useful with an HTML 4.0 browser, and all your old slashboxes will come up. So you can finally get to the latest episode of Sluggy Freelance.

    But come on, will Sluggy still be funny after all those years? Pete's good, but he's no Walt Kelly....

  • If you really want to be accurate, Windows 2000 is missing half the features that NT 5.0 was supposed to have (and that were half-way done in the beta).

    And by the way, MacOS X, GNOME, and KDE don't bear much code resemblance to MacOS 7, AnotherLevel, or AfterStep either....
  • ...(aside from the fact that this is still vaporware) that every 49 days you'd have to buy new MRAM for your Windows machine...

    All right, I couldn't resist a cheap shot at Bill. I'm not real proud of it, but I'd probably do it again.

    On a more serious note: I've heard of "magnetic" memory research before and I'm wondering how Mr. Parkin has gotten around the speed issues that have plagued these efforts in the past... Is it just a matter of size that keeps the growth/collapse of the magnetic fields brief?

  • by brunes69 ( 86786 )
    More cool research from IBM. Man, these guys are revolutionizing everything.
  • Why have RAM in the name...

    It's not even like RAM, it uses magnetics. It's also non-volatile, so it's more like a Flash ROM. Speaking of which, Flash Read Only Memory should have a different name too, because you're writing to something read-only. It's just plain weird.

    Ahh, I give up
  • But it's a different technology. That's like saying... Ya we should have called DVDs CDs cause they are compact and they are discs so... It's a new technology, it deserves a new name.
  • I've seen some code floating around somewhere to do just that. When you use it, it causes Linux to effectively go into suspended hibernation. When you reboot, you get the entire state back. Wish I could remember where I saw it. If you poke around on the Linux high availability sites, you might turn it up.
  • Q1: That would just be stupid, wouldn't it?

    Q2: I'm hoping that doesn't happen in my lifetime.

    Q3: See Q1.

  • My computer's already instant on. I just scoot on over and hit the monitor button. I run Linux so I can get away with that. The only time I ever reboot is to swap the kernel out.
  • by Greyfox ( 87712 ) on Wednesday April 12, 2000 @03:30AM (#1138507) Homepage Journal
    If your OS crashes or becomes unstable you'll probably have to jump through some hoops to zero out all your RAM. So it'll only be instant-on if your system remains stable. This should increase, not decrease, the demand for a stable OS. One which doesn't allow programs to take the OS down with them.
  • I would be happy just to have the same ability to freeze my linux box that a laptop has, i.e. to deterministically power down everything except the DRAM refresh, then quickly bring it back up later. I don't care if it doesn't recover from a crash because my linux box never crashes, seriously.

    The box has a buncha huge drives serving up mp3s to my house, but is used less than 10% of the day. I don't want to leave it on sucking up power and making a god-awful racket all day and night. I just want to turn it off and on as quickly as my amp, and as Effugas eloquently pointed out [slashdot.org], this is not that hard, even with DRAM. We don't need MRAM for that.

    And I doubt that it's just me that would love something like this...

  • I also very highly recommend the "Dreams of Rio" series. It's five CDs of high quality stuff. For you hardcore audio story types only.

    The AH-HA! Phenomenon was my first exposure to Jack Flanders. It's only one CD long and it is most certainly WAY WAY out there. It's pretty funny.. but probably not before three or four beers. Sober it's just.. weird.

    Eg: The lotus jukebox.

    Rami James
    Pixel Pusher
    ALST R&D Center, IL
    --
  • HEY! Moderate him back up! That was pretty funny!

    But obviously only if you've listened to the AH-HA! Phenomenon though..

    Rami
    --
  • Windows 2000 seems to have a feature called hibernate. This is what it does: it dumps all your memory to hard disk, sets up Windows so when you restart it loads that file back into memory (I guess it's some sort of image) and then it shuts down the machine completely. When the box gets turned back on, Windows loads the image and you are right where you were when you left.

    There are two main problems with this:

    1) If you have a lot of memory (I think NT supports what? A terabyte of memory?), then you have to have the equivalent in disk space -- more or less, I don't know if the file is compressed or not.

    2) It still takes idiot Windows 2000 30+ seconds to boot up on my machine with 256 megs of RAM. (I think this is related to point 1. However, I don't dare run the beast from Redmond on a machine with less than 256 megs.)

    Nice idea, poorly implemented. I don't see why they can't use some of that leverage they have over the industry to add a feature to the MoBo which allows the computer to turn off for the most part, but allows enough power to keep the memory the way it was. Feasible? I have no idea. Cool? Perhaps.

    I never shut down the machine anyways. :)

    Rami James
    Pixel Pusher
    ALST R&D Center, IL
  • by Lonesmurf ( 88531 ) on Tuesday April 11, 2000 @11:26PM (#1138512) Homepage
    Apparently, in five years, we will have 1 Terabyte solid-state harddrives, instant memory (what? 10ns is too long??), 1 bazillion GHz processor rings, and video cards that will spit out realtime images straight into our fucking brains, all running on 1 Petabyte networks.. within our homes. Oh ya, I may have forgotten to mention that all this will be free.. all supported by a little blinking banner on your desktop that you will mentally block out after a week of using the machine.

    Then again, the way software is moving, I may need this to play Quake |||(|)||| on my BloatedLinux(tm) ver.100.3.2 system.

    I'll believe this stuff when I see it.

    Rami James
    Pixel Pusher
    ALST R&D Center, IL
    --
  • by EntropyMechanic ( 88779 ) on Wednesday April 12, 2000 @03:27AM (#1138513)
    The problem with bubble memory was that it was serial. All the bits were stored as little magnetic domains, but the only way to read those domains was to actually shift them to a particular part of the IC where special circuitry existed to read them. The domain storage was logically laid out linearly, and as I recall, the domains could only be shifted in one direction due to the physical layout of the IC used to hold them. Thus, if you read a byte and then tried to read it again, all the domains that composed the bits in that byte had to be shifted around the entire domain storage array again.

    Now while a lot could have been done with caching and using multiple domain storage arrays, bubble memories were serial devices and their latencies just would not scale up well as you added more bits to them. Bubbles would make a good NV storage device, but could never replace RAM.

    Bubble memories were introduced in the late 70's, I believe. I think their big failure was lack of storage space and speed. Their commercial death knell was the ramp-up of HDD storage capacities in the mid-80's. They did have the benefit of having no moving parts and I think a military hardened version was available. If they exist at all any more, I'm sure it's just in a few niches.

    JTS

    Baldric, you wouldn't know a cunning plan if it painted itself purple and danced around on a harpsichord singing 'cunning plans are here again' - Lord Edmund Blackadder
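
    A toy model of why a serial, shift-only store scales badly (C; the sizes are invented, and real bubble devices used major/minor loop arrangements this ignores):

        /* To read a given bit you rotate the whole loop until it reaches
         * the single read port, one shift per "cycle". */
        #include <stdio.h>

        #define LOOP_BITS 1024

        static unsigned char loop[LOOP_BITS];   /* the circulating domains */
        static int port = 0;                    /* index currently at the read port */

        static int read_bit(int wanted, long *shifts)
        {
            while (port != wanted) {
                port = (port + 1) % LOOP_BITS;
                (*shifts)++;
            }
            return loop[port];
        }

        int main(void)
        {
            long shifts = 0;
            loop[10] = 1;
            read_bit(10, &shifts);   /* cheap: the bit is nearly at the port */
            read_bit(9,  &shifts);   /* re-reading costs almost a full revolution */
            printf("total shift cycles: %ld\n", shifts);
            return 0;
        }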

  • RAM == Random Access Memory. This stuff is random access, and looks like memory to me. It's also solid-state, fast, and magnetic. So MRAM seems a perfectly reasonable acronym. What's the problem?
  • by _W ( 103302 )
    It's not only IBM that's working on this kind of technology: IMEC, the largest independent micro-electronics R&D center in Europe is also working on this kind of NVM: Ferroelectric memories [www.imec.be]
  • The Sinclair Spectrum was much better

    :)

    Rich (not *really* wanting to go over all those old arguments again)

  • Apparently it was used by the spectrum from time to time as well.

    Apparently not I'm afraid. The spectrum only had one screen mode (though I think the American Timex versions had more). Funky pixel addressing too which made sense when you started getting into assembly.

    Rich

  • The most sucky thing about the BBC was that it allocated screen memory from RAM so if you wanted to do any decent graphics (e.g. mode 2), you had no space left for your program.

    My personal preference was for the Spectrum compromise, essentially monochrome graphics with a colour overlay grid. Never bothered trying to understand the C64 model though.

    Rich

    print at 10,10;ink 5;paper 1;bright 1;flash 1;"Spectrum rools";

  • I almost shouldn't dignify your comments with a response ;)

    the spectrum didn't even have a proper keyboard!

    And you could sit it on your lap while sitting on the sofa and play games in comfort. Anyway, from what I've seen, most cheap PC keyboards use Spectrum keyboard technology just putting solid plastic moving bits over the top.

    the bbc seriously rooled for _serious_ computer hobbyists

    nah, all *serious* computer hobbyists needed was bus lines out the back and IIRC, both the BBC and the spectrum had those.

    the Basic it used had procedures and functions not just gosubs - i never used a gosub in all the years I spent programming on it.

    Basic, who ever used basic? Assembler was where it was at. At which point, z80/6502 becomes horses for courses (my preference was z80 but 6502 was fine too).

    [Later addendum: the Spectrum actually did have functions but they weren't the same as the BBC ones and hardly anyone ever used them]

    Of course, as I've said before, it's all moot since if you put it in any reasonable graphics mode, the beeb had no space for basic programs anyway (the in-line assembler was nice though)

    it was 32k on the Model B, the Model A had 16k!

    And the Spectrum had 41k of available memory

    it had a memory mapped i/o port and four a/d converters (i built a steering wheel out of a 10k pot to play Revs). I miss that stuff on my inferior but faster PC.

    Preferred the separate io bus of the z80 myself (I mean, why tie up valuable memory space for IO). a/d is OK (BTW, you can do that with your PC joystick port if you're careful) but I didn't particularly have much use for it. And of course, the spectrum had a steering wheel too (some hideous ashtray type thing you mashed down on the keys apparently)

    I've still got my two BBC's and occasionally play chuckie egg or frak!

    Ah, yes. The power of the BBC. You didn't have to go to the trouble of using more than the fingers of two hands to count the number of good games

    I computerised my dad's business on it when I was 13 before the company he had the franchise from computerised theirs so our stock levels and money were always what they expected when they audited us (the difference in or out of our pockets - good days and ten years before they caught on to computers - much $-). Try writing a database on a crappy Spectrum.

    People did. The lack of standard floppy drives was always a hold-back for that kind of thing though. And I don't know anyone who would say otherwise than that the Microdrive was a piece of crap. Sure, the Spectrum didn't have analogue ports or floppy drives or "The Tube" but it was a quarter of the price of the BBC, the manual was excellent and you could buy any extra stuff you needed and of course, it didn't suck :P

    Rich

  • I'm glad we Brits had our own computer scene because I think it has given us something unique. If I'd had a C64 I think I wouldn't be such a coder as I am now. The BBC almost forced me to learn about computers in a way the games machines would never have done. Thank you Acorn and thank you Clive Sinclair.

    Exactly. England got a great start in IT because of it. We'd definitely have dropped down a league as a country if it weren't for it. Unfortunately, the lack of decent internet access is starting to pull us back.

    BTW, remember the time that Clive Sinclair was reported as bashing the BBC guy over the head with a rolled up magazine containing an ad which slapped down Sinclair for its lack of quality control?

    Rich

  • This is yet another example of a piece of "amazing" new technology which net magazines like Wired love to write about. They are always promising something which is a "quantum leap" ahead of current technology, but it's always 5 years down the line. And then, 5 years later do we see this? No, it's just more of the same.

    What is it about the net that encourages these places to publish this sort of rubbish?


    So what about Popular Science? The point of tech magazines and tech news, at least for me, is that I like to see new ideas and innovations, whether or not they will be successful in a few years.

    One of the things I think contributes to the notion that this stuff isn't successful is simply that by the time it does come around, so much more has improved that it doesn't seem like a big leap... -- ever wonder why someone who hasn't seen you for 10 years is struck by 'how much you've grown', and yet the person you've known since then notices nothing? It's perception...

    Right now, anything 5 years down the road seems almost too good to be true. When we get there, it's either been done to death or we've slowly got to the point where it's no big deal, and we forget how good it was when we first heard about it.

    I say good work Slashdot, I/we love to see new technologies posted as news, simply because it encourages inspiration, creativity, ideas and enlightenment.
  • AFAIK all magnetic technologies lack fast access, and though MRAM may replace flash someday, it cannot replace DRAM, because CCD is the only technology nowadays able to deliver fast/cheap/low-power memory (compared to TTL cache memory).
  • 1. Memory that is zeroed out is no different from random memory, if there are no pointers to it. If the machine is rebooting or powering up, the chipset would most likely just reset the CPU's pointers and go through the normal bootup sequence. It doesn't need to zero out the memory.

    2. The reset switch just triggers the same chipset function as I mentioned in #1.

  • My question is, will we still bother?

    rmstar
  • My BBC Micro used to boot up in about 1.5 seconds, with a tinny double bleep kind of sound.

    And I could play Elite on that... after I spent 20 minutes loading the game from a tape...

  • My Laptop has this feature as well... It seems to work when I choose to activate it... But when '98 chooses to "sleep" and power-down, it will come back on, but it forgets to turn the screen on... So you have to reset it manually, and "in the dark"...

  • the spectrum didn't even have a proper keyboard!

    the bbc seriously rooled for _serious_ computer hobbyists

    the Basic it used had procedures and functions not just gosubs - i never used a gosub in all the years I spent programming on it.

    it was 32k on the Model B, the Model A had 16k!

    it had a memory mapped i/o port and four a/d converters (i built a steering wheel out of a 10k pot to play Revs). I miss that stuff on my inferior but faster PC.

    I've still got my two BBC's and occasionally play chuckie egg or frak!

    I computerised my dad's business on it when I was 13 before the company he had the franchise from computerised theirs so our stock levels and money were always what they expected when they audited us (the difference in or out of our pockets - good days and ten years before they caught on to computers - much $-). Try writing a database on a crappy Spectrum.


    .oO0Oo.
  • I almost shouldn't dignify your coments with a response ;)
    Well I'm glad you did because your answer was very dignified :-P

    the spectrum didn't even have a proper keyboard!
    And you could sit it on your lap while sitting on the sofa and play games in comfort.

    ah, well it always did look more like a remote control

    Anyway, from what I've seen, most cheap PC keyboards use Spectrum keyboard technology just putting solid plastic moving bits over the top.
    hehe those babies are well annoying if you take them apart. I always buy the ones with good old switches.

    nah, all *serious* computer hobbyists needed was bus lines out the back and IIRC, both the BBC and the spectrum had those.
    yeah but... oh i can't think of anything

    Basic, who ever used basic? Assembler was where it was at.
    Well I was just telling ppl. Basic is always regarded as a lower form of life but BBC Basic was at a higher level. To be honest I never really used a Speccy much, I'm just prejudiced. But if you had to use Sinclair basic then I'm not surprised you found assembler easier

    the beeb had no space for basic programs anyway
    Yeah I learned the hard way. I started a board game idea for my O'Level computer studies but by the time I'd finished the board there was no room for the logic. I submitted something else I'd already written and got a U for the practical. Luckily I was an expert on Kimball tags etc. and still got an A over all!

    it was 32k on the Model B, the Model A had 16k!
    And the Spectrum had 41k of available memory

    Oh, so you could write greedy programs then ;-)

    I mean, why tie up valuable memory space for IO
    hehe I suppose two bytes can make all the difference

    a/d is OK (BTW, you can do that with your PC joystick port if you're careful)
    Yup I know but I don't think you can get a DirectX driver for 10k pots

    And of course, the spectrum had a steering wheel too
    Was it made out of dead flesh rubber?

    Ah, yes. The power of the BBC. You didn't have to go to the trouble of using more than the fingers of two hands to count the number of good games
    Well like most ppl I'll say Elite, Elite, Elite
    The Repton series, text adventures ....
    In fact I've got about 30 games all of which I still play occasionally. It was a shortfall of the thing but that's what C64's were for

    Try writing a database on a crappy Spectrum.
    I'm sorry for saying crappy, it was the teenager in me. Serialising your data to floppy disk or tape was well handy. 18 years later and the only thing that has changed is the amount of data and the fact I use HTML as the presentation layer!

    The Microdrive was a piece of crap.
    They looked flashy though - I was tempted for a wee while

    It was a quarter of the price of the BBC, the manual was excellent and you could buy any extra stuff you needed and of course, it didn't suck :P
    I still think it sucked but you were obviously happy with it. I've got a ZX81 (with 16k) and that was a serious breakthrough. Hats off to Clive. It's a shame the Spectrum wasn't such a leap as the ZX81 (well there was the ZX80 but you know what I mean - I hope)

    I'm glad we Brits had our own computer scene because I think it has given us something unique. If I'd had a C64 I think I wouldn't be such a coder as I am now. The BBC almost forced me to learn about computers in a way the games machines would never have done. Thank you Acorn and thank you Clive Sinclair.
    .oO0Oo.
  • When I was using gcc *.c -lalleg [demon.co.uk] -o drm [rose-hulman.edu] (Dr. Mario clone) and gcc *.c -lalleg -o whack [rose-hulman.edu] (Hampsterdeath), compiling source code took a long time for me too. But then, I learned how to use GNU Make [gnu.org].
  • Didn't `revs` do this? and possibly aviator, but i doubt it actually.
    frak was *way* overrated (though i loved his (orlandos) Zalaga...

    Eagles nest was a classic, planetoid, pacman (back when Atari was suing everyone), android escape, cool text adventures (sphinx, level 9 stuff too, like `time machine`).

    I'll shut up now...

    A.

  • Couldn't you have a really low off time, if you took a snapshot of RAM when you shut down? And from time to time too. What's the problem with this? Be a good excuse to keep the amount of RAM on a system low, surely? Machines are way overspecced, given what work they actually do, these days... (i.e. it's the OS that needs it all, not the apps).

    a.

  • never heard of it, but you could do the same with an action replay cart (from datel).

    a.
  • well, you probably already knew this but you can get beeb emulators (and chucky egg!) etc from http://www.vintagegaming.com/ actually, no mention of the bbc there, but it's on the net somewhere, i had a look a year or so ago...

    a.
  • Most of the time spent booting up, I would guess, is not spent initializing memory (which would only be slightly faster given non-volatile RAM), but is spent on probing hardware and making sure nothing has changed since the last shutdown. It's also much easier for programmers to just initialize structures in nice for loops rather than swap the whole thing on and off of disk. The easy way of course is to just start over and reinitialize everything; most kernel programmers don't get much lower level than that (most of this is regular old C code). Personally I'd rather have programmers concentrate on making it so I never _need_ to reboot. Bottom line is how long does it really take to load 128 megs from disk to RAM? I bet that's a very small % of your boot procedure.
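
    Back-of-the-envelope for that last question, assuming a sustained rate of about 20 MB/s (a generous figure for a disk of this era, not a measurement):

        #include <stdio.h>

        int main(void)
        {
            double megabytes = 128.0;
            double mb_per_sec = 20.0;   /* assumed sustained disk throughput */
            printf("%.0f MB / %.0f MB/s = %.1f s\n",
                   megabytes, mb_per_sec, megabytes / mb_per_sec);
            return 0;
        }
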
  • Instant on = lower downtime when NT crashes.
  • Looking at the slashdot news for the past few weeks, almost everything is about to be revolutionised.

    We've increased memory bandwidth, and size. We've increased network speed, and decreased the cost. We have fast optical switches. We've increased mass storage space, we've increased mass storage reliability. Flat screens are now 4 times the size and resolution that they were before.

    How long until we get a CPU that will be as revolutionary?
  • popular in UK schools and .....Bulletproof as well.

    I would have thought this would have made them popular in US schools. I could do with a bulletproof PC though, so I could shoot it when I got angry.
  • Elite...... first (only?) game to use two screen modes on the screen at once :)

    First - probably. Only - certainly not. The Amiga did this all the time. Apparently it was used by the spectrum from time to time as well.

    On the subject of Best games, I quite liked Chucky Egg
  • 5 years down the line is used because computer time schedules saturate at 5. Nobody would start a major project if there were no returns for the next 5 years. Therefore 5 = infinity. 5 - 4.9999 = 5.

    Medicine seems to set infinity at 10 years.
  • This is a redundant story. In this [slashdot.org] slashdot article the same magnetic memory technology is discussed.

    Jeff
  • While I can certainly see a lot of uses for this memory, particularly when it comes to portable music players and the like, instant-on just doesn't thrill me.

    I see uptimes measured in months on my system. The time I lose due to reboots is minuscule. And if I was running Windows, would I really get instant-on? Windows spends most of its bootup time determining which disk clusters are messed up and initialising the hardware devices.

  • As far as I can tell, the memory does NOT support "Instant Access". The main feature of MRAM is that it always retains its content, plus it has very low power consumption. There is nothing to indicate that its access times will be any less than DRAM's (particularly DRAM in five years).
  • Actually, I read about this technology several weeks ago (characteristic late news on /.), so it's not an April fool's joke. IBM (I think) is working to get this tech out in the next couple of years. One of its advantages over DRAM is that it takes less current and won't fry like RDRAM.
  • what's with the press people? instant on already exists, it's called hibernation. it's fast and reliable but has all the problems people have posted here. if some program crashes and renders the system unstable, it stays that way until you turn the computer off, clearing the memory. of course on a non-volatile memory system you couldn't do that, so i don't see why you would use windows2005 or BeOS1.1 or Dumbed_down_Linux v8 with it for just its instant-on potential. which takes me to my point:

    the real use for this technology is yet to come. instant on will be irrelevant 5 years from now, IF the desktop computer prevails. it'll probably be used as some kind of very fast storage or transport media (a la zipdrives or something). caching comes to mind too. if the most accessed records of a huge database are stored here, instead of on my 100 XByte HDD, i could see a pretty decent performance boost. also, this is the kind of technology that I'd like to see in game consoles, webpads, cellphones and that sort of thing, where software tends to be much more static and stable (remember microsoft still hasn't taken over that field yet). my point is, the whole thing, as presented, is similar to saying "10 years from now we'll have holographic storage systems 2 billion times as fast and large as today's storage media, that will revolutionize the way you store mp3's!"
  • by muzzy ( 164903 ) on Wednesday April 12, 2000 @01:50AM (#1138551) Homepage Journal

    ... could make all of our computers instant-on! Problem is, 5 years is a long time to wait...

    I don't think 5 years is really "instant-on", this story is contradicting itself.

  • by streetlawyer ( 169828 ) on Wednesday April 12, 2000 @12:10AM (#1138555) Homepage
    Apparently, in five years, we will have multi-gigabyte hard disk drives, a global network of computers, we'll be able to transmit 58.8Kb over voice telephone networks, wireless data networks and x86 chips running at 300MHz will be cheap. Yeah, I'll believe it when I see it [1995]

    Apparently, in five years, we'll all have Xerox PARC style desktop environments, hard disk size will be so big we'll be able to forget about our archive of floppies and we'll have moving pictures on our PCs. Yeah, I'll believe it when I see it. [1990]

    Apparently, in five years, we'll all have affordable IBM computers with hard disk drives in our homes. And we'll all be walking round with mobile telephones. Yeah, I'll believe it when I see it. [1985]

    Apparently in five years, we'll all have over 512K of RAM and we'll be able to do graphics on desktop computers. {Note: I remember hearing someone around this time talk about a "gigabyte" as if it were an obviously made-up word or at best, a whimsical extension of "kilobyte"}. Yeah, I'll believe it when I see it. [1980]

    [....]

    "I can see a global market for maybe five computers"
