Memristor Minds, the Future of Artificial Intelligence
godlessgambler writes "Within the past couple of years, memristors have morphed from obscure jargon into one of the hottest properties in physics. They've not only been made, but their unique capabilities might revolutionize consumer electronics. More than that, though, along with completing the jigsaw of electronics, they might solve the puzzle of how nature makes that most delicate and powerful of computers — the brain."
Oblig. wiki-link (Score:4, Informative)
What the hell is a memristor, you ask? [wikipedia.org]
Re:Oblig. wiki-link (Score:4, Funny)
The first place being xkcd
Re: (Score:2)
Re: (Score:2)
This is Slashdot. I think we should assume people are capable of locating Wikipedia on their own
This is slashdot. I think we should assume people are too lazy to search wikipedia (even with a shortcut), let alone RTFA.
Re: (Score:2)
I'm new here. What's this "Wikipedia" and where do I find it? Here. That'll sort this whole thread out...
He asked for an explanation. He may not yet know how to click on a link. Wikipedia is a free,[5] web-based multilingual encyclopedia project supported by the non-profit Wikimedia Foundation. Its name is a portmanteau of the words wiki (a technology for creating collaborative websites, from the Hawaiian word wiki, meaning "quick") and encyclopedia. Wikipedia's 13 million articles (2.9 million in the English Wikipedia) have been written collaboratively by volunteers around the world, and almost all of its a
I'm always taken aback by this (Score:3, Interesting)
That we've developed a whole industry based on an incomplete model amazes me. I wonder how things would have developed if the memristor had existed 30 years ago. Exciting times, as a lot of things will be re-examined.
Re:I'm always taken aback by this (Score:5, Informative)
Probably nothing significant, seeing as you can emulate exactly what a digital memristor does with 6 transistors and power constantly applied. Memristors in CPU/logic would not be viable because of their low wear cycles and very high latencies. They would make for some nice multi-terabyte USB sticks, though.
As for its analog uses, Skynet comes to mind...
Re:I'm always taken aback by this (Score:5, Insightful)
Probably nothing significant, seeing as you can emulate exactly what a digital memristor does with 6 transistors
Exactly right.
;)
It's not a hardware breakthrough that'll create a true AI - it's an algorithmic breakthrough that's required. Faster computers might be nice - but it'll always come down to the algorithm.
And actually, the sooner we create Skynet - the better the chance we have to beat it. Because if we wait too long - the super-fast hardware it will be running on could make it too hard to beat.
Re: (Score:2, Interesting)
And actually, the sooner we create Skynet - the better the chance we have to beat it. Because if we wait too long - the super-fast hardware it will be running on could make it too hard to beat. ;)
Or the better chance we have to learn to live with it. James Hogan's 1979 book "The Two Faces of Tomorrow" details a plan to deliberately goad a small version of a self-aware computer (named Spartacus) into self-defense before building the big version. When Spartacus learned that humans were even more frail than he was, and equally motivated by self-preservation, he chose to unilaterally lay down arms.
And how exactly do you plan (Score:2, Insightful)
to implement a proper neural network on a von Neumann-type architecture? It's like trying to fit a square peg into a round hole. So the developments have been in making special processors that work closer to real neurons, but still digitally. Memristors allow them to get closer to the real thing. As the article states, they didn't even have the tools to test these because of their analogue nature, so we're at the beginning here.
The purpose here isn't to get faster hardware; a computer can add two numbers together
Re: (Score:2)
A feedforward neural network can be executed just fine on a von Neumann architecture. At some point, we will be able to exceed human brain capacity (ignoring recurrence for now) on von Neumann hardware; it's just a matter of time.
However, can a recurrent network be properly simulated on a von Neumann architecture? Not sure. The problem in that case is that multiple things are happening sim
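The parent's point about feedforward nets is easy to make concrete: a forward pass is nothing but sequential multiply-adds, which a von Neumann machine executes natively. A toy sketch, with weights hand-picked (not learned) and a step function standing in for the neuron's threshold:

```python
# A minimal feedforward network run sequentially on ordinary
# (von Neumann) hardware. The hand-picked weights compute XOR.

def step(x):
    # Threshold activation: fire (1) at or above zero, else stay off (0).
    return 1 if x >= 0 else 0

def forward(x1, x2):
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)    # fires if at least one input is on (OR)
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)    # fires only if both inputs are on (AND)
    return step(1.0 * h1 - 1.0 * h2 - 0.5)  # OR and not AND: XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", forward(a, b))
```

Nothing here needs special hardware; scaling it up is purely a question of how many multiply-adds per second the machine can do.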
Re: (Score:3, Interesting)
I don't know; with a 10,000-write limit, if my brain were made of memristors, I'd be terribly mortified.
Re:I'm always taken aback by this (Score:4, Informative)
This [inist.fr] talks about neuronal replacement. It looks like your brain may have a write limit; it just automatically replaces worn-out bits.
Re: (Score:2)
I don't know; with a 10,000-write limit, if my brain were made of memristors, I'd be terribly mortified.
Try not to be so indecisive.
Re: (Score:2)
Except if what I and many other people think is true: That the only difference between our spiking neural nets and Skynet is the processing power.
wrong level of complexity (Score:3, Insightful)
Kludge a lot of state machines together and you can simulate stack machines to a certain limit.
Kludge a lot of context free grammars together and you can simulate a context-sensitive grammar within certain limits. But it takes infinite stack, or, rather, infinite memory to actually build a context-sensitive grammar out of a bunch of context-free grammar implementations.
Intelligence is at least at the level one step beyond -- unrestricted grammar.
(Yeah, I'm saying we seem to have infinite tape and infinite s
Re: (Score:2)
I don't see how you can claim unrestricted grammar when every language I know of uses the concepts of nouns, verbs, etc. Surely an unrestricted grammar would mean completely alien languages which are in no way directly translatable into other human languages.
Re: (Score:2)
I'll admit I'm a bit rusty on the subject, but I'm just not seeing how grammars (context free, context sensitive, unrestricted, whatever) have anything to do with intelligence. What's the claim actually being made there?
A quick Wikipeding says that a Turing Machine is equivalent to an unrestricted grammar, the way a finite state automaton is equivalent to a regular expression. We use Turing-equivalent machines on a daily basis (to the limits of your computer's memory), so I'm not sure what it means to say
Re:I'm always taken aback by this (Score:5, Insightful)
"Repeat after me" is really annoying. If you're going to be that irritating you'd better have some pretty strong evidence to back yourself up. Where is it?
Re: (Score:2)
You want evidence that something infeasible (assuming you won't consider that we currently can't put enough neural nets in a supercomputer powerful enough to satisfy your demand) is impossible? I want some evidence that filling a 1-billion-cubic-meter bag with spiders won't turn into an evil blob-like insect overlord. Where is it?
However, if by 'evidence' you mean 'reason', then I'll tell you that researchers spent decades putting ever-increasing shitloads of neural networks on ever more powerful
Re:I'm always taken aback by this (Score:4, Insightful)
Ah yes, the "our computers are incredibly powerful and we've tried it and it didn't work so the whole class of solutions is obviously ruled out" argument.
Before you make (extremely condescending) statements that something is impossible, you should at least make sure you qualify your terms properly.
"I think it's very unlikely that using current neural network algorithms on computers with current or near future capacities will produce a strong AI" would be a good start.
We certainly do not know what the limits of "neural networks" (as a general class of algorithms) are. We also don't have anything like the computing power to properly simulate a neural network with a capacity where we'd expect to see "intelligence."
You might be correct. Then again, you may well not be. Even if you are, the only people who will listen to posts like yours are people who already agree with you.
Re: (Score:2)
How do you know we don't have the computing power? You're assuming that that's the barrier. "We'd expect to see intelligence" sounds an awful like you have a pre-conception of what you want to see. I'd love to subscribe to your newsletter.
Re: (Score:2)
If you're trying to simulate a brain, you have a pretty good idea of how many neurons and synapses are in your simulation, and how closely their behaviour matches that of the real thing. Generally there's a tradeoff: you can make your simulation more realistic at the expense of simulating fewer elements. There was a story a while ago about an effort to very accurately simulate a neuron on a supercomputer. One neuron.
Now, for whatever definition of "intelligence" you have, you can probably find a brain i
Re: (Score:2)
It's possible to simulate a mouse's brain today. That's about 1 gram of brain matter. The human brain weighs about 1350g. Eleven more doublings, maybe throw in a couple more (under the assumption that the simulations need to be more thorough), and you're pretty well there. So before 2030 would be a good guess.
I suspect that, rather than deriving Artificial Intelligence from first principles, we'll just start by trying to map and run the human brain. He's right: it's not enough to pour neurons into a s
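The parent's doubling arithmetic checks out, using only the figures given in the comment (about 1 g of mouse brain simulable today, about 1350 g for a human brain):

```python
# Quick check of the parent's estimate: how many doublings take a
# ~1 g (mouse-scale) simulation to the ~1350 g of a human brain?
mouse_g, human_g = 1.0, 1350.0

doublings, capacity = 0, mouse_g
while capacity < human_g:
    capacity *= 2
    doublings += 1

print(doublings, capacity)  # eleven doublings overshoot to 2048 g
```

At one doubling every 18 to 24 months, eleven doublings (plus the couple of extra the parent allows for simulation fidelity) is indeed on the order of two decades, consistent with the "before 2030" guess.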
Re: (Score:2)
Your argument doesn't sound designed to convince anyone except yourself. If that's your goal, then congratulations, you appear successful. Most people, however, choose a slightly different goal.
To arrive at a simulation of the truth, however, it's generally necessary to define the terms of importance to the discussion and not to overgeneralize. E.g., "neural network" is not a generic term, it's a category, with many differing entities. Many of those differing entities have very different learning patte
Re: (Score:2)
To date, there have been very few attempts to create a genuinely intelligent entity.
Yeah, why would that be? Perhaps because no one has any fucking clue how to go about doing this?
Re: (Score:2)
How about infinite CPU speed (+ memory). Would that help?
Re: (Score:2)
On the contrary, I think you need an algorithmic breakthrough to understand the brain, but you don't need a new algorithm to create a brain [bluebrain.epfl.ch]. Humans have built and used many things well before they had a theoretical basis for how they worked; for example, people were using levers to build pyramids long before Archimedes came along and gave us the "lever algorithm".
Re: (Score:2)
That is, of course, making the assumption that intelligence doesn't require Consciousness and that Consciousness can be captured in an algorithm. Two somewhat dubious pre-requisites.
Re: (Score:2)
As there's no evidence either way, metaphysical doubt, it seems to me, is quite a sensible position.
Re: (Score:2)
You don't make progress by randomly flailing around or by applying erroneous principles to the problem at hand.
Re: (Score:2)
Right. AI is a software problem, not a hardware problem. That's not to say that current hardware could run the software should it ever be devised, but once we know what the software is we can build the hardware that will run it. So, how do we come up with the software if we don't have the hardware to run it? It's called philosophy.
Re: (Score:2)
You're still thinking about simulating intelligence using a standard computer, in which case you're right, you need the right algorithm.
What they're proposing is not to simulate a brain but to build one. There is no algorithm. It might be sensitive to how you wire things up, but probably not excessively so, otherwise it would be very difficult to evolve working brains. The key is getting the right components to build the thing out of.
Re: (Score:2, Interesting)
It's not a hardware breakthrough that'll create a true AI - it's an algorithmic breakthrough that's required. Faster computers might be nice - but it'll always come down to the algorithm.
Fast enough computers will allow us to develop algorithms genetically. Come up with a set of parameters and let evolution do the job for you.
LK
Re: (Score:2)
You're saying it as if it's some kind of magic incantation.
Do you know what genetic algorithms are? It's just a bunch of data, plus a "fitness" function, in a loop of say 100,000 runs or whatever, which (more or less) randomly changes the data to see if the fitness function gives a better result. First you'll have to write the fitness function, then you'll have to think about how you randomly change the data, and then you'll have to manually tweak it.
It's a simple algorithm, which is only usable in special cases. Yo
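To make the loop described above concrete, here is a minimal sketch; the data, the one-bit mutation rule, and the fitness function (just counting 1 bits) are all illustrative, which is exactly the "only usable in special cases" limitation:

```python
import random

# Bare-bones evolutionary loop in the spirit described above: some data,
# a fitness function, and many rounds of random mutation, keeping any
# change that scores at least as well. The all-ones target is a toy.
random.seed(0)
N = 20

def fitness(bits):
    return sum(bits)  # toy fitness: count the 1 bits

best = [random.randint(0, 1) for _ in range(N)]
for _ in range(1000):
    child = best[:]
    child[random.randrange(N)] ^= 1      # randomly flip one bit
    if fitness(child) >= fitness(best):  # keep the change if not worse
        best = child

print(fitness(best))
```

Note that all the intelligence lives in the hand-written fitness function; the loop itself is trivial, which is the poster's point.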
Re: (Score:2)
It's not a hardware breakthrough that'll create a true AI - it's an algorithmic breakthrough that's required. Faster computers might be nice - but it'll always come down to the algorithm.
I fear this may be an oversimplification. Current software can simulate virtually any hardware, be it existing or hypothetical. We also have no problem simulating physics and doing math. These memristors may be novel pieces of hardware, but simulating them seems quite trivial, as computers already have a "perfect" memory.
However, the ability to physically implement such a device seems why this is so important, and I cannot say for certain that hardware inventions will not contribute to the evolution of AI.
A
Re: (Score:2)
Skynet spreads onto almost every computer in the world (ie. there is no central core).
That was just the rubbish from the third movie, which I'm personally trying to forget. It was also ridiculous: once Skynet nuked everyone and in the process shut down all power distribution and communications networks, all those millions of computers running some little bit of Skynet would be turned off and isolated anyway. Killing us would have been instant suicide for Skynet.
The original 80's vision of Skynet as a vast artificial intelligence living in a cavern somewhere running on its own power s
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Interesting)
Of course, if you multiply the 100 million or more transistors in a current CPU by 6, you don't have any kind of problem, do you? A memristor is closer in design to a permanent RAM disk: you can turn off the system as much as you want, but it instantly restores you right where you left off.
Now that it is proven, all that matters is figuring out how best to use it and what limitations it has.
Re:I'm always taken aback by this (Score:4, Interesting)
Memristors in CPU/logic would not be viable because of their low wear cycles and very high latencies.
That's a current manufacturing limitation, not something inherent to what a memristor is. Had these been discovered much sooner, we would be much better at manufacturing them and they probably would have made a significant impact.
Re: (Score:2)
Not really Turing complete. (Score:2)
Effectively Turing complete within a certain range of speeds and requirements for state memory.
But the tape is finite.
So, yes, glorified calculating machines. (The boundary between the two is not as clearly defined as you assert.)
Re: (Score:2)
we've developed a whole industry based on an incomplete model
Wait, you mean this is the first time this has happened? I thought schools were the first to do that.
The first time -- (Score:2)
Adam and Eve.
Or, if you don't get the reference, us.
Humans have been doing this as far back as there have been humans. It is one of the things which sets us apart from the other animals. Or, it might be argued that this is just another way of looking at the only thing that separates us from the other animals.
Re: (Score:3, Insightful)
(Without contempt or disrespect) religion is a great example of how far you can get with an incomplete model. Enlightenment, which some would argue is the highest human state, is taught with nothing more than vague contradictions that hint at a different way of thinking. Most religions use similar techniques to some extent, and I suppose most education must to some degree as well.
That said, I think religion could not have come first, as it's basically a specialised educational system. Besides, you can't
enlightenment (Score:2)
Some people believe that, in true religion, enlightenment is the realization of a rational basis to existence.
That is, half of enlightenment is the realization of the rational basis, and the other half is the realization that mortality pushes that rational basis ultimately beyond (mortal) human reach.
There seems to be some division as to whether giving up on understanding is preferred, since mortality is an absolute limit.
And there seems to be some further division as to whether mortality is really an absol
Re: (Score:2)
Depends on your definition of teaching. Most of the education world would include being a role model, providing examples, as a type of teaching too.
Re:I'm always taken aback by this (Score:4, Informative)
Our computers are Turing-complete. Point me to something that is missing here before I get excited. This new component may have great applications, but it will "only" replace some existing components and functions. It is great to have, but nothing essential is missing.
Practically Turing complete. (Score:3, Insightful)
Woops. Posted this below in the wrong sub-thread. Oh, well, post it here, too, with this mea culpa.
Not until we have infinite tape and infinite time to process the tape are our computers truly Turing complete.
Moore boasted that technology would always be giving us just enough more tape. I'm not so sure we should worship technology, but so far the tech has stayed a little ahead of the average need.
Anyway, this new tech may provide a way to extend the curve just a little bit further, keep our machines effecti
Re: (Score:2)
That we've developed a whole industry based on an incomplete model
In hindsight, every industry ever is an incomplete model.
We will always have much to look forward to.
Electrical Memristors Don't Exist Yet (Score:5, Informative)
What was happening was this: in its pure state of repeating units of one titanium and two oxygen atoms, titanium dioxide is a semiconductor. Heat the material, though, and some of the oxygen is driven out of the structure, leaving electrically charged bubbles that make the material behave like a metal.
The memristor they've created depends on the movement of oxygen atoms to produce the memristor-like electrical behavior. Purely electrical components such as resistors, capacitors, inductors, and transistors rely only on the movement of electrons and holes to produce their electrical behavior. Why is this important? The chemical memristor is an order of magnitude slower than the theoretical electrical equivalent, which no one has yet been able to invent.
I think the memristor they've created is a great piece of technology and will certainly prove useful. However, it is like calling a rechargeable chemical battery a capacitor. While both are useful things, only one is fast enough for high speed electronics design for applications like the RAM they mentioned. On the other hand, a chemical memristor could be a flash memory killer if they can get the cost down (which I doubt to happen any time soon).
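Whatever its speed, the device's defining behavior is simple to state: its resistance depends on the charge that has flowed through it, and that state persists with the power off. A rough numerical sketch of the commonly cited linear ion-drift model (all parameter values here are illustrative, not HP's measured figures):

```python
import math

# Rough sketch of the linear ion-drift memristor model often used for
# the TiO2 device. Parameter values are illustrative only.
R_ON, R_OFF = 100.0, 16000.0  # resistance when fully doped / undoped (ohms)
K = 1e6                       # drift factor mu * R_ON / D^2 (assumed value)
DT = 1e-5                     # integration time step (s)

def resistance(w):
    # Device resistance interpolates between R_ON and R_OFF with the
    # doped fraction w of the titanium-dioxide film.
    return R_ON * w + R_OFF * (1.0 - w)

w = 0.1
for step in range(10000):     # 0.1 s of a 100 Hz sine-wave drive
    v = math.sin(2 * math.pi * 100.0 * step * DT)
    i = v / resistance(w)
    # Charge drags the oxygen vacancies along, changing the state;
    # the state is what persists when the voltage is removed.
    w = min(1.0, max(0.0, w + K * i * DT))

r_after_drive = resistance(w)  # retained until current flows again
```

Once the drive stops, no current flows, so `w` (and hence the resistance) simply stays put, which is the non-volatile behavior that makes the device interesting as flash-style memory.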
You're right of course (Score:2)
but on the other hand a neuron works with electrochemical signaling and the design seems to be quite good :)
Re: (Score:2)
How about an optical memristor?
Why focus on technology that will hopefully soon be outdated? :)
Practically Turing complete. (Score:2)
Not until we have infinite tape and infinite time to process the tape are our computers truly Turing complete.
Moore boasted that technology would always be giving us just enough more tape. I'm not so sure we should worship technology, but so far the tech has stayed a little ahead of the average need.
Anyway, this new tech may provide a way to extend the curve just a little bit further, keep our machines effectively Turing complete for the average user for another decade or so.
Or not. If Microsoft goes down,
woops (Score:2)
Meant that in response to this [slashdot.org].
That's going to be the NEXT BIG THING (Score:2)
in the computer world.
The question is: will we see the result in our lifetimes?
I really hope so, but success has stalled computer innovation. Thirty years ago we expected to be able to talk to our machines; now these advances can finally make it possible. Will the industry and economics be able to adapt to make it possible within our lifetimes?
Re:That's going to be the NEXT BIG THING (Score:4, Informative)
Re: (Score:3, Insightful)
Old designs were not fully explored, e.g. Turing's 'intelligent or trainable' [alanturing.net] machines. This kind of electronics can make those old concepts viable; that's IMO the NEXT BIG THING, not just algorithms (looped circuitry is not hard to simulate, but it is hard to predict).
The von Neumann architecture of our 'computers' was just one possibility, not the only one or the best, just the convenient one. New hardware processing abilities could lead to new kinds of machines, maybe not 'programmable' in the current sense of the word, but 'traina
Re: (Score:2)
This is only a component with a new electrical behavior, for heaven's sake! Completely simulatable; its behavior is linear. I fail to see how it could have profound implications. It will maybe simplify some electric circuits that needed 3 capacitors and a coil, but it won't
Re: (Score:2)
You did not read my post. It's not only about programming computers; it's all about building new machines. Can we simulate those machines? Yes, sure, but the computational cost is prohibitive; that's why neuronal simulations are so scarce.
Read the link to Turing's papers; you'll find a very interesting bunch of ideas about 'thinking machines', not 'computers'.
Re: (Score:2)
Re: (Score:2)
Algorithms are great, really, but I'm sorry, I don't believe in magic flowcharts. :)
Re: (Score:3, Informative)
No. I don't think the solution will be algorithms running on existing digital electronics.
Our brain is an analog machine. Its plasticity is not limited to two discrete states. Therefore, the 'software running on hardware' model for how intelligence works is not the most efficient explanation. Our brains operate the way they do because of the way they are organized, not because they are programmed in the sense we usually understand it. To put it another way, the software 'instructions' (algorithm) and the
Re: (Score:2)
AI needs new algorithms to progress.
That's quite an assertion. But it could also be the case that we have all the algorithms we need to produce a sentient machine that would then independently develop its own intelligence. But that we simply have not been putting the different algorithms together in the correct fashion.
Maybe the box of Legos we've got already has all the pieces we need to build this cathedral, and we just need to learn to put them together to make archways and flying buttresses rather than simple walls. Maybe we don't need
Re: (Score:2)
Huh??! Please explain how this was used incorrectly in the GP post, and then explain what it means to you.
Re: (Score:2)
Our computers are not Turing complete. To be so, they would require infinite memory and infinite time.
"Turing complete" is one of those things know-it-alls say like "correlation does not imply causation." The vast majority of statements on Slashdot that contain those phrases are made by people who do not have more than a very superficial knowledge of either one.
Which is a pity in this information age. Even Wikipedia has both right: Turing Complete [wikipedia.org], Correlation Does Not Imply Causation [wikipedia.org].
Re: (Score:3)
Re: (Score:2)
"While truly Turing-complete machines are very likely physically impossible, as they require unlimited storage, Turing completeness is often loosely attributed to physical machines or programming languages that would be universal if they had unlimited storage. All modern computers are Turing-complete in this loose sense, or more precisely linear bounded automaton-complete."
Please point me to a problem unsolvable by a Turing-complete machine (you can choose the definition you
Re: (Score:2)
but success has stalled computer innovation
No, reality got in the way. As much as you can want to have a HAL 9000 in your computer, it's not going to happen, because as far as we know it might just be theoretically impossible to create something like that.
Thirty years ago we expected to be able to talk to our machines; now these advances can finally make it possible.
No it's not. What makes you think it's gonna help with anything you talk about? That's typical of throwing the word "neuron" into a te
Re: (Score:2)
I can point to a common example of a machine that produces intelligence... the human brain.
So, given that nature did it once I'm confident it's not theoretically impossible.
Re: (Score:2)
A machine is a device. A device is a human invention. Men didn't invent brains.
Besides, by 'theoretically impossible' I was talking about doing it algorithmically.
Re: (Score:2)
You're talking religion, not science.
The brain is an electro-chemical machine 'designed' by random chance controlled by natural selection.
The algorithms are in there, and there is no reason to believe they can't be copied by man or even that we might figure them out ourselves.
If the algorithms were theoretically impossible then your brain wouldn't exist.
Re: (Score:2)
That supposes that the entire universe is algorithmically reproducible, which is a view that holds some merit; however, I still doubt we will manage to come up with anything resembling that fabled 'strong AI'. At least I can live in the comfort of doubtless never being proven wrong in my lifetime.
Artificial intelligence? (Score:5, Insightful)
The amazing thing is that we consider individual brains to be "intelligent" when it seems pretty clear we're only intelligent as part of a social network. None of us are able to live alone, work alone, think alone. The concept of "self" is largely a deceit designed to make us more competitive, but it does not reflect reality.
So how on earth can a computer be "intelligent" until it can take part in human society, with the same motivations and incentives: collect power, knowledge, information, friends, armies, territories, children...
Artificial intelligence already exists and it's called the Internet: it's a technology that amplifies our existing collective intelligence, by letting us connect to more people, faster, cheaper, than ever before.
The idea that computers can become intelligent independently and in parallel with this real global AI is insane, and it has always been. Computers are already part of our AI.
Actually, the telegraph was already a global AI tool.
But, whatever, boys with toys...
Re:Artificial intelligence? (Score:5, Insightful)
Re: (Score:2)
but we are social animals. a simple illness or injury will kill a lone human unable to feed themselves temporarily whereas a human in most societies will be cared for until they are literally back on their feet.
but that is a fully developed adult isolated on an island. a human is intelligent not solely because of their genetics but of the society they grow up in. look at the numerous cases where children have been reared by animals. http://en.wikipedia.org/wiki/Feral_child#Documented_cases [wikipedia.org]
most of those
Re: (Score:2)
Actually yes and no. This dog has been beat and beat again fairly well in Philosophy of AI.
I appreciate what you are trying to get at, but your example is flawed. True island examples for examining intelligence in relation to AI are more like Helen Keller before she learned language, or perhaps feral children. Dropping a fully functional adult on an island, one who has already learned language, culture, and essentially has learned to internalize or make self-reporting use of what is normally overt verbal expr
Re: (Score:2)
Sorry, that would be guys with their Matrix/HAL fantasies pissing millions of dollars in research money away in labs around the world, with no real coherent plan or idea of what it is they are after.
Thus, we have the nearly weekly sensational 'we stuck our peckers in a light socket in a new way and discovered AI and God all at the same time' type articles on Slashdot over and over and over again.
On a practical note, however, I do suspect it will be the porn industry that ultimately makes
Re: (Score:3, Insightful)
and here i keep observing that the overall intelligence in a room drops by the square of the number of people in said room...
Re:Artificial intelligence? (Score:5, Insightful)
None of us are able to live alone, work alone, think alone.
Did you come up with this because of your own ability to do so?
Because except for reproduction, we can easily survive our whole life alone.
Sure it will be boring. But it works.
The idea that computers can become intelligent independently and in parallel with this real global AI is insane, and it has always been.
Says who? You, because you need it to base your arguments on it? ^^
You will see it happening in your lifetime. Wait for it.
Re: (Score:2)
The amazing thing is that we consider individual brains to be "intelligent" when it seems pretty clear we're only intelligent as part of a social network. None of us are able to live alone, work alone, think alone. The concept of "self" is largely a deceit designed to make us more competitive, but it does not reflect reality.
No, you're completely wrong. It's sufficiently obvious why that I don't feel the need to elaborate.
Actually, the telegraph was already a global AI tool.
No, it's called a network.
Re: (Score:2)
If, by your definition, the Internet is an AI, then your definition of AI is meaningless (and useless for anyone working in that field). Your post reeks of ill-deserved elitism and the message it conveys is incredibly depressing: individually the human is nothing/we already have AI, so we have nothing to reach for. I'm not going to argue about the first part, since I do not think it deserves any answer, but I'll say about the second part that we would never get true AI if most people thought like you do.
Re: (Score:2)
Free transistors (Score:4, Informative)
I think memristors will be complementary to existing technology rather than a revolution on their own, yet analog transistors would have George Boole flip-flopping between orientations in his grave.
Re: (Score:2)
I think even with 1000x performance, it will be hard to return to analog. There's something about the 100% copyability of data, the determinism, and the exactness of digital which analog can't hope to achieve.
Maybe 1,000,000x would win me over, however...
whatever (Score:3, Interesting)
In the 1970's, the big breakthrough was supposedly tunnel diodes, a simpler and smaller circuit element than the transistor. Do our gadgets now run on tunnel diodes? Doesn't look like it to me.
It was 1960s, and they were quickly obsoleted (Score:3, Informative)
Could this explain memory loss in old age? (Score:2)
New brain cells are created through life (Score:2)
Like we need more Artificial Intelligence... (Score:2)
... don't we have enough people producing this already?
The brain is not a computer. (Score:5, Insightful)
Citation [scienceblogs.com].
See especially points
6 - No hardware/software distinction can be made with respect to the brain or mind,
7 - Synapses are far more complex than electrical logic gates,
10 - Brains have bodies,
and the bonus - The brain is much, much bigger than any [current] computer.
It's past time for this idea to die.
Re: (Score:2)
Difference #1: brains are analogue; digital computers are digital; analogue computers are analogue... um, what was his point again? Computers are no more digital than human beings are male. That is, some are and some aren't.
Difference #10, "Brains have bodies": um, my body has a brain. My brain is a body part.
Couldn't be bothered reading the rest.
Re: (Score:2)
You clearly did no further reading other than the topic headings I provided. I should expect no more from an AC, yet somehow the optimist in me keeps wanting to believe that people are seeking truth instead of trying to reinforce what they already know.
If you (or anyone else) can raise reasonable objections to what's in that article, I'm willing to explore the topic. Otherwise, don't waste my time.
Re: (Score:2)
No, he made some excellent points, which you decided to ignore.
The article makes for fascinating reading, but it isn't at all clear what you're trying to accomplish by citing it. The brain/computer analogy is a useful one for dealing with certain issues. It has its limits. Ultimately, computers today are based on different principles than the human brain.
The article is just a warning (seemingly directed towards cognitive scientists) to take care in drawing conclusions about how the brain works from thei
Re: (Score:2)
For whom? Think of it as mem'ristor. There, how hard is that? It is true that the pronunciation of the letter "r" is quite different in Sierra Leone and Japan, but it's hardly a major problem, and the presence of the "m" in front of it isn't a problem for anyone I know.
The idea that the devices are a "major breakthrough" is a problem, though: how do these differ from any number of other devices producing "negative resistance" through phase change
Re: (Score:2)
Putting "mr" in a word can lead to pronunciation difficulties; just google for words containing "mr", then exclude all abbreviations of "mister", to find how rarely the sequence is used. Renaming it to "memistor" would help greatly. Also, the Wikipedia page for memristor already contains a reference to memistor.
The 'm' and 'r' are in different syllables, so it's really not an issue. I assume you can handle 'Tim Robbins', so you can handle 'memristor'.
Re: (Score:2)
Perhaps he thought it was pronounced "me mristor", as in: "Oill get me mristor out torday!".
Re: (Score:2)