Robotics Technology Hardware

Memristor Minds, the Future of Artificial Intelligence

godlessgambler writes "Within the past couple of years, memristors have morphed from obscure jargon into one of the hottest properties in physics. They've not only been made, but their unique capabilities might revolutionize consumer electronics. More than that, though, along with completing the jigsaw of electronics, they might solve the puzzle of how nature makes that most delicate and powerful of computers — the brain."

Comments Filter:
  • Oblig. wiki-link (Score:4, Informative)

    by Eudial ( 590661 ) on Saturday July 11, 2009 @04:15AM (#28658591)
  • by msgmonkey ( 599753 ) on Saturday July 11, 2009 @04:35AM (#28658665)

It's amazing that we've developed a whole industry based on an incomplete model. I wonder how things would have developed if the memristor had existed 30 years ago. Exciting times, as a lot of things will be re-examined.

    • by Anonymous Coward on Saturday July 11, 2009 @04:47AM (#28658711)

Probably nothing significant, seeing as you can emulate exactly what a digital memristor does with 6 transistors and power continuously applied. Memristors in CPU/logic would not be viable because of their low wear cycles and very high latencies. It would make for some nice multi-terabyte USB sticks, though.

      As for its analog uses, Skynet comes to mind...

      • by Marble1972 ( 1518251 ) on Saturday July 11, 2009 @05:11AM (#28658785)

        Probably nothing significant, seeing as you can emulate exactly what a digital memristor does with 6 transistors

        Exactly right.

It's not a hardware breakthrough that'll create a true AI - it's an algorithm breakthrough that's required. Faster computers might be nice - but it'll always come down to the algorithm.

And actually, the sooner we create Skynet - the better the chance we have to beat it. Because if we wait too long, the super-fast hardware it will be running on could make it too hard to beat. ;)

        • Re: (Score:2, Interesting)

          by madkow ( 518822 )

And actually, the sooner we create Skynet - the better the chance we have to beat it. Because if we wait too long, the super-fast hardware it will be running on could make it too hard to beat. ;)

Or the better the chance we have to learn to live with it. James Hogan's 1979 book "The Two Faces of Tomorrow" details a plan to deliberately goad a small version of a self-aware computer (named Spartacus) into self-defense before they built the big version. When Spartacus learned that humans were even more frail than he, and equally motivated by self-preservation, he chose to unilaterally lay down arms.

Implementing a proper neural network on a von Neumann-type architecture is like trying to fit a square into a circle. So the developments have been in making special processors that work closer to real neurons but are still digital. Memristors allow them to get closer to the real thing. As the article states, they didn't even have the tools to test these because of their analogue nature, so we're at the beginning here.

The purpose here isn't to get faster hardware; a computer can add two numbers together

Implementing a proper neural network on a von Neumann-type architecture is like trying to fit a square into a circle.

A feedforward neural network can be executed just fine on a von Neumann architecture. At some point we will be able to exceed human brain capacity (ignoring recurrence for now) on von Neumann hardware; it's just a matter of time.

            However, can a recurrent network be properly simulated on a von neumann architecture? Not sure. The problem in that case is that multiple things are happening sim
        • Re: (Score:3, Interesting)

          by Requiem18th ( 742389 )

I don't know; with a 10,000-write limit, if my brain were made of memristors I'd be terribly mortified.

Except if what I and many other people think is true: that the only difference between our spiking neural nets and Skynet is processing power.

          • Kludge a lot of state machines together and you can simulate stack machines to a certain limit.

            Kludge a lot of context free grammars together and you can simulate a context-sensitive grammar within certain limits. But it takes infinite stack, or, rather, infinite memory to actually build a context-sensitive grammar out of a bunch of context-free grammar implementations.

            Intelligence is at least at the level one step beyond -- unrestricted grammar.

            (Yeah, I'm saying we seem to have infinite tape and infinite s

            • Intelligence is at least at the level one step beyond -- unrestricted grammar.

I don't see how you can claim unrestricted grammar when every language I know of uses the concepts of nouns, verbs, etc. Surely an unrestricted grammar would mean completely alien languages which are in no way directly translatable into other human languages.

        • "It's not a hardware breakthrough that'll create a true AI - it's an algorithm breakthrough that's required."

On the contrary, I think you need an algorithmic breakthrough to understand the brain, but you don't need a new algorithm to create a brain [bluebrain.epfl.ch]. Humans have built and used many things well before they had a theoretical basis for how they worked; for example, people were using levers to build pyramids long before Archimedes came along and gave us the "lever algorithm".
        • it's an algorithm breakthrough that's required

That is, of course, making the assumption that intelligence doesn't require consciousness, and that consciousness can be captured in an algorithm. Two somewhat dubious prerequisites.

Right. AI is a software problem, not a hardware problem. That's not to say that current hardware could run the software should it ever be devised, but once we know what the software is, we can build the hardware that will run it. So, how do we come up with the software if we don't have the hardware to run it? It's called philosophy.

        • by ceoyoyo ( 59147 )

          You're still thinking about simulating intelligence using a standard computer, in which case you're right, you need the right algorithm.

          What they're proposing is not to simulate a brain but to build one. There is no algorithm. It might be sensitive to how you wire things up, but probably not excessively so, otherwise it would be very difficult to evolve working brains. The key is getting the right components to build the thing out of.

        • Re: (Score:2, Interesting)

          by Lord Kano ( 13027 )

It's not a hardware breakthrough that'll create a true AI - it's an algorithm breakthrough that's required. Faster computers might be nice - but it'll always come down to the algorithm.

          Fast enough computers will allow us to develop algorithms genetically. Come up with a set of parameters and let evolution do the job for you.

          LK

You're talking as if it's some kind of magic incantation.

Do you know what genetic algorithms are? It's just a bunch of data, plus a "fitness" function, in a loop of say 100,000 runs or whatever, which (more or less) randomly changes the data to see if the fitness function gives a better result. First you'll have to write the fitness function, then you'll have to think about how you randomly change the data, and then you'll have to manually tweak it.

            It's a simple algorithm, which is only usable in special cases. Yo

It's not a hardware breakthrough that'll create a true AI - it's an algorithm breakthrough that's required. Faster computers might be nice - but it'll always come down to the algorithm.

          I fear this may be an oversimplification. Current software can simulate virtually any hardware, be it existing or hypothetical. We also have no problem simulating physics and doing math. These memristors may be novel pieces of hardware, but simulating them seems quite trivial, as computers already have a "perfect" memory.

However, the ability to physically implement such a device seems to be why this is so important, and I cannot say for certain that hardware inventions will not contribute to the evolution of AI.

          A

      • Re: (Score:3, Interesting)

        by peragrin ( 659227 )

Of course, if you multiply the 100 million or more transistors in a current CPU by 6, you don't have any kind of problem, do you? Of course, a memristor is closer in design to a permanent RAM disk. You can turn off the system as much as you want, but it instantly restores you right from where you left off.

        Now that it is proven all that matters is figuring out how best to use it and what limitations it has.

      • by GigaplexNZ ( 1233886 ) on Saturday July 11, 2009 @08:11AM (#28659309)

        Memristors in CPU/logic would not be viable because of their low wear cycles and very high latencies.

        That's a current manufacturing limitation, not something inherent to what a memristor is. Had these been discovered much sooner, we would be much better at manufacturing them and they probably would have made a significant impact.

    • by jerep ( 794296 )

      we've developed a whole industry based on an incomplete model

Wait, you mean this is the first time this has happened? I thought schools were the first to do that.

      • Adam and Eve.

        Or, if you don't get the reference, us.

        Humans have been doing this as far back as there have been humans. It is one of the things which sets us apart from the other animals. Or, it might be argued that this is just another way of looking at the only thing that separates us from the other animals.

    • by Yvanhoe ( 564877 ) on Saturday July 11, 2009 @05:36AM (#28658845) Journal
No. This is gross exaggeration.
Our computers are Turing-complete. Point me to something that is missing before I get excited. This new component may have great applications, but it will "only" replace some existing components and functions. It is great to have, but it is not something essentially missing.
      • Woops. Posted this below in the wrong sub-thread. Oh, well, post it here, too, with this mea culpa.

        Not until we have infinite tape and infinite time to process the tape are our computers truly Turing complete.

        Moore boasted that technology would always be giving us just enough more tape. I'm not so sure we should worship technology, but so far the tech has stayed a little ahead of the average need.

        Anyway, this new tech may provide a way to extend the curve just a little bit further, keep our machines effecti

we've developed a whole industry based on an incomplete model

      In hindsight, every industry ever is an incomplete model.
      We will always have much to look forward to.

  • by indigest ( 974861 ) on Saturday July 11, 2009 @04:52AM (#28658731)
    From the article:

    What was happening was this: in its pure state of repeating units of one titanium and two oxygen atoms, titanium dioxide is a semiconductor. Heat the material, though, and some of the oxygen is driven out of the structure, leaving electrically charged bubbles that make the material behave like a metal.

The memristor they've created depends on the movement of oxygen atoms to produce its memristive electrical behavior. Purely electrical components such as resistors, capacitors, inductors, and transistors rely only on the movement of electrons and holes to produce their electrical behavior. Why is this important? The chemical memristor is an order of magnitude slower than the theoretical electrical equivalent, which no one has been able to invent yet.

I think the memristor they've created is a great piece of technology and will certainly prove useful. However, it is like calling a rechargeable chemical battery a capacitor. While both are useful things, only one is fast enough for high-speed electronics design for applications like the RAM they mentioned. On the other hand, a chemical memristor could be a flash memory killer if they can get the cost down (which I doubt will happen any time soon).

But on the other hand, a neuron works with electrochemical signaling, and the design seems to be quite good :)

How about an optical memristor?

Why focus on technology that will hopefully soon be outdated? :)

    • Not until we have infinite tape and infinite time to process the tape are our computers truly Turing complete.

      Moore boasted that technology would always be giving us just enough more tape. I'm not so sure we should worship technology, but so far the tech has stayed a little ahead of the average need.

      Anyway, this new tech may provide a way to extend the curve just a little bit further, keep our machines effectively Turing complete for the average user for another decade or so.

      Or not. If Microsoft goes down,

  • in the computer world.

The question is: will we see the results in our lifetimes?

I really hope so, but success has stalled computer innovation. Thirty years ago we expected to be able to talk to our machines; now those advances can finally make it possible. Will industry and economics be able to adapt to make it happen within our lifetimes?

    • by Yvanhoe ( 564877 ) on Saturday July 11, 2009 @05:39AM (#28658861) Journal
AI needs new algorithms to progress. Electronics will not change the way we program computers. They are already Turing complete; a new component adds nothing to the realm of what a device can compute. Expect a revolution in electronics, but IT people will not see a single difference (except maybe a slight performance improvement).
      • Re: (Score:3, Insightful)

        by 12357bd ( 686909 )

Old designs were not fully explored, i.e. Turing's 'intelligent or trainable' [alanturing.net] machines. This kind of electronics can make those old concepts viable; that's IMO the NEXT BIG THING, not just algorithms (looped circuitry is not hard to simulate, but it is hard to predict).

The von Neumann architecture of our 'computers' was just one possibility: not the only one or the best, just the convenient one. New hardware processing abilities could lead to new kinds of machines, maybe not 'programmable' in the current sense of the word, but 'traina

        • by Yvanhoe ( 564877 )
Please explain to me how this component extends the range of processes we can simulate/predict. I fail to see the link with the Turing publication you mention, which proposes concepts that are completely implementable on a current CPU.
This is only a component with a new electrical behavior, for heaven's sake! Completely simulatable; its behavior is linear. I fail to see how it could have profound implications. It will maybe simplify some electric circuits that needed 3 capacitors and a coil, but it won't
          • by 12357bd ( 686909 )

You did not read my post. It's not only about programming computers; it's all about building new machines. Can we simulate those machines? Yes, sure, but the computational cost is prohibitive; that's why neuronal simulations are so scarce.

Read the link to the Turing papers; you'll find a very interesting bunch of ideas about 'thinking machines', not 'computers'.

            • by Yvanhoe ( 564877 )
Neuronal simulations are so scarce because we have failed to find a good algorithm for them that implements complex and interesting behaviors beyond approximating a function. We have plenty of CPU power for neuronal simulations. Simulating a neuron with a capacitor and a transistor would be easy, but we never had to make such an acceleration board, because a lack of CPU power is not the problem. The problem is that our current algorithms do not scale up into something interesting.
      • Re: (Score:3, Informative)

        by monoqlith ( 610041 )

        No. I don't think the solution will be algorithms running on existing digital electronics.

Our brain is an analog machine. Its plasticity is not limited to two discrete states. Therefore, the 'software running on hardware' model for how intelligence works is not the most efficient explanation. Our brains operate the way they do because of the way they are organized, not because they are programmed in the sense we usually understand it. To put it another way, the software 'instructions' (algorithm) and the

      • AI needs new algorithms to progress.

That's quite an assertion. It could also be the case that we have all the algorithms we need to produce a sentient machine that would then independently develop its own intelligence, but that we simply have not been putting the different algorithms together in the correct fashion.

        Maybe the box of Legos we've got already has all the pieces we need to build this cathedral, and we just need to learn to put them together to make archways and flying buttresses rather than simple walls. Maybe we don't need

    • by 4D6963 ( 933028 )

but success has stalled computer innovation

      No, reality got in the way. As much as you can want to have a HAL 9000 in your computer, it's not going to happen, because as far as we know it might just be theoretically impossible to create something like that.

Thirty years ago we expected to be able to talk to our machines; now those advances can finally make it possible.

No, it's not. What makes you think it's gonna help with anything you talk about? That's typical of throwing the word "neuron" into a te

      • I can point to a common example of a machine that produces intelligence... the human brain.

        So, given that nature did it once I'm confident it's not theoretically impossible.

        • by 4D6963 ( 933028 )

          A machine is a device. A device is a human invention. Men didn't invent brains.

Besides, by 'theoretically impossible' I was talking about doing it algorithmically.

          • You're talking religion, not science.

            The brain is an electro-chemical machine 'designed' by random chance controlled by natural selection.

The algorithms are in there, and there is no reason to believe they can't be copied by man, or even that we can't figure them out ourselves.

            If the algorithms were theoretically impossible then your brain wouldn't exist.

            • by 4D6963 ( 933028 )

That supposes that the entire universe is algorithmically reproducible, which is a view that holds some merit; however, I still doubt we will manage to come up with anything resembling that fabled 'strong AI'. At least I can live in the comfort of doubtless not living to be proven wrong.

  • by pieterh ( 196118 ) on Saturday July 11, 2009 @06:00AM (#28658903) Homepage

    The amazing thing is that we consider individual brains to be "intelligent" when it seems pretty clear we're only intelligent as part of a social network. None of us are able to live alone, work alone, think alone. The concept of "self" is largely a deceit designed to make us more competitive, but it does not reflect reality.

    So how on earth can a computer be "intelligent" until it can take part in human society, with the same motivations and incentives: collect power, knowledge, information, friends, armies, territories, children...

    Artificial intelligence already exists and it's called the Internet: it's a technology that amplifies our existing collective intelligence, by letting us connect to more people, faster, cheaper, than ever before.

    The idea that computers can become intelligent independently and in parallel with this real global AI is insane, and it has always been. Computers are already part of our AI.

    Actually, the telegraph was already a global AI tool.

    But, whatever, boys with toys...

    • by dcherryholmes ( 1322535 ) on Saturday July 11, 2009 @06:37AM (#28658979)
      But I could stick you on a deserted island all by yourself and you would still be intelligent, right? I'm not denying that we are deeply social creatures, nor that a full definition of an organism must necessarily include a description of its environment. But I think you are confusing the process by which we become intelligent with intelligence itself.
      • by DMoylan ( 65079 )

But we are social animals. A simple illness or injury that leaves a lone human temporarily unable to feed themselves will kill them, whereas a human in most societies will be cared for until they are literally back on their feet.

But that is a fully developed adult isolated on an island. A human is intelligent not solely because of their genetics but because of the society they grow up in. Look at the numerous cases where children have been reared by animals. http://en.wikipedia.org/wiki/Feral_child#Documented_cases [wikipedia.org]

        most of those

      • by cenc ( 1310167 )

Actually, yes and no. This dog has been beaten again and again fairly well in the philosophy of AI.

I appreciate what you are trying to get at, but your example is flawed. A true island example for examining intelligence in relation to AI is more like Helen Keller before she learned language, or perhaps feral children. Dropping onto an island a fully functional adult who has already learned language and culture, and essentially has learned to internalize or make self-reporting use of what is normally overt verbal expr

    • Re: (Score:3, Insightful)

      by hitmark ( 640295 )

And here I keep observing that the overall intelligence in a room drops by the square of the number of people in said room...

    • by Hurricane78 ( 562437 ) <deleted&slashdot,org> on Saturday July 11, 2009 @06:49AM (#28659011)

      None of us are able to live alone, work alone, think alone.

Did you come up with this because of your own inability to do so?
Because except for reproduction, we can easily survive our whole lives alone.
Sure, it will be boring. But it works.

      The idea that computers can become intelligent independently and in parallel with this real global AI is insane, and it has always been.

Says who? You, because you need it as a basis for your arguments? ^^
      You will see it happening in your lifetime. Wait for it.

    • by 4D6963 ( 933028 )

      The amazing thing is that we consider individual brains to be "intelligent" when it seems pretty clear we're only intelligent as part of a social network. None of us are able to live alone, work alone, think alone. The concept of "self" is largely a deceit designed to make us more competitive, but it does not reflect reality.

      No, you're completely wrong. It's sufficiently obvious why that I don't feel the need to elaborate.

      Actually, the telegraph was already a global AI tool.

      No, it's called a network.

    • by Chryana ( 708485 )

      If, by your definition, the Internet is an AI, then your definition of AI is meaningless (and useless for anyone working in that field). Your post reeks of ill-deserved elitism and the message it conveys is incredibly depressing: individually the human is nothing/we already have AI, so we have nothing to reach for. I'm not going to argue about the first part, since I do not think it deserves any answer, but I'll say about the second part that we would never get true AI if most people thought like you do.

I don't agree. Real progress can be made by one person obsessing over an idea. A committee would only serve to retard progress. The memristor story is a perfect example. The concept of "self" is probably just a consequence of holding a sufficiently detailed model of reality - one that must include the self.
  • Free transistors (Score:4, Informative)

    by w0mprat ( 1317953 ) on Saturday July 11, 2009 @06:36AM (#28658975)
Transistors are naturally analog; it's only that we force them to be digital. If we are prepared to accept more probabilistic outputs, then there are massive gains to be had http://www.electronista.com/articles/09/02/08/rice.university.pcmos/ [electronista.com]. Work is being done with analog computing too.

I think memristors will be complementary to existing technology rather than a revolution on their own, yet analog transistors would have George Boole flip-flopping between orientations in his grave.
    • by Twinbee ( 767046 )

      I think even with 1000x performance, it will be hard to return to analog. There's something about the 100% copyability of data, determinism and exactness of digital which analog can't hope to achieve.

Maybe 1,000,000x would win me over, however...

  • whatever (Score:3, Interesting)

    by jipn4 ( 1367823 ) on Saturday July 11, 2009 @06:49AM (#28659009)

In the 1970s, the big breakthrough was supposedly tunnel diodes, a simpler and smaller circuit element than the transistor. Do our gadgets now run on tunnel diodes? Doesn't look like it to me.

The Esaki (tunnel) diode is a two-terminal device which basically exists in two states (I am simplifying, I know) at two different currents. Its weaknesses are that (a) it requires a current source to keep it in one or the other state, and (b) both input (changing state) and output need amplifying devices. As soon as CMOS became fast enough, things like tunnel diodes were dead in the water, because a CMOS transistor does its own amplifying and requires almost no power to keep it in one state rather than the other.
If the brain were indeed made of memristors, and these had finite write cycles, could it be that once we reach those write limits, the memristors stop being of any use? Of course, the brain would try to minimise damage to the memristors by spreading the data around, but you would eventually reach a limit, and the same memristors would be overwritten again and again until some of them start hitting the write limit, which might explain why we start losing memory after reaching our 30s.
    • The old idea that brain cells are lost and new ones don't form has turned out to be just plain wrong. The evidence is that the more you think about a subject, the bigger relevant parts of the brain get. For instance, London cab drivers have to memorise large chunks of the road system to pass a test, and it has been shown that the relevant part of the brain does actually grow during the process.
  • ... don't we have enough people producing this already?

  • by FiloEleven ( 602040 ) on Saturday July 11, 2009 @10:57AM (#28660639)

    Citation [scienceblogs.com].

    See especially points
    6 - No hardware/software distinction can be made with respect to the brain or mind,
    7 - Synapses are far more complex than electrical logic gates,
    10 - Brains have bodies,
    and the bonus - The brain is much, much bigger than any [current] computer.

    It's past time for this idea to die.

    • by ignavus ( 213578 )

Difference #1: brains are analogue; digital computers are digital; analogue computers are analogue... um, what was his point again? Computers are no more digital than human beings are male. That is, some are and some aren't.

Difference #10, "brains have bodies" - um, my body has a brain. My brain is a body part.

      Could not be bothered reading the rest.
