Nanotechnology And The Law of Accelerating Returns
digitect writes: "The article More More More at Reason is a good overview of the accelerating pace of technological change. It includes references to nanotube technology, nanobots, and estimates of gross computing power in the near and far future.
Frankly, I doubt we will ever develop computers with the sophisticated power of even a mouse brain, although many may protest that we already have exceeded their gross power. I believe that things like perception and reasoning are beyond the scope of raw power. But it's a fun read anyway."
Re: Strictly speaking, Moore's Law is about... (Score:2)
Storage!
So far, it has pretty much also held true for "computing power" (doubling every 18 months). New kinds of killer apps become possible when really humongous amounts of storage -- without significant energy drain -- are available... For example, kids could routinely "scan" every school-related document. (Think how much more you might now understand if you were able, whenever you wished, to "intelligently search" everything you ever read -- or wrote!)
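For concreteness, here's a minimal sketch of that "intelligently search everything you ever read" idea -- just a toy inverted index in Python, with all names invented for illustration:

    from collections import defaultdict

    index = defaultdict(set)            # word -> set of document ids

    def add_document(doc_id, text):
        # Split on whitespace; a real system would stem, rank, and so on.
        for word in text.lower().split():
            index[word].add(doc_id)

    def search(*words):
        # Return ids of documents containing every query word.
        sets = [index[w.lower()] for w in words]
        return set.intersection(*sets) if sets else set()

    add_document("notes-1999", "storage doubles faster than logic")
    add_document("essay-2000", "moore's law doubles transistor counts")
    print(search("doubles"))            # both documents match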
This author is claiming something not even remotely connected to Moore's Law, though. We cannot even teach robots to climb stairs yet (unless it's a set of stairs they already "know"), yet he thinks they will become able to "think" (not only move autonomously but also perceive, relate apparently uncorrelated storage items, solve non-algorithmic problems, etc.). I don't believe that is going to happen in any significant way within the next century, let alone the next decade. Didn't I just read something about how nanotechnology so far seems (in the lab) to be associated with ominously low energy input/output returns?
Or a dog (Score:1)
Or a half-witted dog [aibo.com] for $1500
It's staggering... (Score:1)
Seeing such a complete lack of faith in humanity is more of a worry than the possibility of technology running rampant, the way it always does in Hollywood sci-fi.
I say bring it on... I can't wait to upgrade my computer from mosquito power to pigeon power!
Re:Ever hear of quantization? (Score:1)
There is a large and growing problem in the modern world ... a belief that everything can be 'quantified'. Sure, if you had a model that perfectly replicated every molecule, down to the sub-atomic level, for a volume of space defined by light-speed for the duration of the experiment (e.g. to run the experiment for one second, you need to encompass a sphere with a radius of 300,000 kilometers), you could accurately predict the future. Realistically, when you start looking at just about anything at the atomic level, it defies prediction. Atomic bombs are about the easiest reaction to simulate, and they're still building computers to adequately model _that_ reaction. At the moment, it uses - surprisingly - a rather limited set of parameters - although the results match real-world test results OK. Now, that's for a reaction in which the various transition states and interactions are known (e.g. proton hits another nucleus, resulting in ...).
Compare that to, oh, DNA. A double helix, with a rather large number of combinations. And it tends to change its shape in interesting ways, allowing new chemical bonding. Attempting to model any 'reaction', such as thought processes, without knowing all the reactions that make up one component, is pretty much a shot in the dark, and has virtually zero chance of being correct. The idea that all we need is a bigger and better computer is, ultimately, fallacious. Remember the old acronym GIGO? The same thing applies to attempting to solve equations with incomplete data. The desire to quantify the unquantifiable is just plain stupid - it shows a lack of understanding of what you're attempting to model, and a lack of understanding of chaos theory.
Re:Evolutionary Software (Score:2)
However, the link you provide made for an interesting read. Genetic Programming (as they describe it) seems to be the most likely candidate for the kind of outcome you describe. It appears to permit program evolution to change algorithmic structure, rather than just algorithmic constants (which Genetic Algorithms change). I'm not totally convinced that even the insane improvements in computing power will enable Genetic Programming to produce anything beyond laboratory curiosities.
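To make the distinction concrete, here's a toy sketch in Python (the representation and names are my own invention, not how any real GP system is built): a GA perturbs constants inside a fixed structure, while GP mutates the expression tree itself.

    import random

    def ga_mutate(constants):
        # GA-style: nudge one constant; the algorithm's shape never changes.
        mutant = list(constants)
        i = random.randrange(len(mutant))
        mutant[i] += random.gauss(0, 0.1)
        return mutant

    def random_subtree(depth):
        if depth == 0:
            return random.choice(['x', round(random.uniform(-1, 1), 2)])
        return (random.choice(['+', '-', '*']),
                random_subtree(depth - 1),
                random_subtree(depth - 1))

    def gp_mutate(tree):
        # GP-style: splice in a new random subtree, changing the structure.
        if not isinstance(tree, tuple) or random.random() < 0.3:
            return random_subtree(depth=2)
        op, left, right = tree
        if random.random() < 0.5:
            return (op, gp_mutate(left), right)
        return (op, left, gp_mutate(right))

    print(ga_mutate([1.0, 2.0, 3.0]))
    print(gp_mutate(('+', 'x', 1.0)))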
Nonetheless, it would be interesting to see one of these programs developed to drive a car, or some such "AI"-type problem.
Re:Computing power of a brain (Score:1)
I'm really not being argumentative here. I really do want to understand what kind of computation (or whatever) is going on in my mind, to make me, me.
By the way, I do not say computers can't be intelligent/conscious/sentient.
Re:Tron quote (tiny correction) (Score:1)
What drives this (and Moore's law too) (Score:1)
The goal of "electronic brains" is outdated (Score:3)
The tremendous speed increases in computing hardware are often mistaken for something deeper. We're writing larger applications, yes, but they're not necessarily more stable or more advanced in a way that's different than simply adding more features. If anything, we're starting to come to the realization that simpler is better, or at least that having straightforward goals is much better than shooting for extremes.
Take compilers, for example. In the 1970s, two top goals of compiler writers were "incredibly high levels of optimization" and "automatic correction of user errors." Today the goal is more conservative: "go for a straightforward implementation that will have the fewest problems." It isn't worth doing over-the-top optimization if you're trading a 0.5% speed increase for greatly increased code complexity. As a result, more compiler writers have taken a conservative approach. In terms of correcting user errors, it is simpler and more predictable to simply report errors as they are found. Trying to be smart causes more trouble than it is worth ("How can my program be wrong if it compiles and runs?").
Complexity is a limiting factor in grandiose plans for AI.
Re:Perception and Reason (Score:1)
For the sake of us all, please follow this link. [learningco...school.com]
Re:Perception and Reason (Score:2)
I know many formally trained typists who have been typing at keyboards less time than I have who have carpal tunnel.
I show no sign of coming down with it. Classic typing destroys hands and I have no urge to do that to myself, though I appreciate that Slashdot's lack of a spell checker might make my idiosyncrasies somewhat difficult for the reader.
If you can find me a typing system or device that is relatively easy to learn and non-repetitive, I might be very interested.
I DO have a Twiddler on order for some lightweight computing experiments and plan to learn to use it. Maybe that will help.
Evolutionary Software (Score:1)
While I do concede that there is no way that C++ or Java (or anything else even remotely related) are going to be the languages that we'll be using to program "brains" with in the future, I think you are failing to recognize "evolutionary programming" as a possible solution.
Producing software by hand that is complex enough to achieve awareness may be an NP-complete problem, but evolution has solved many NP-complete problems in the past =) Additionally, I don't think that software evolution has to be on the scale of eons, either; maybe it doesn't exactly follow Moore's law, but I would think it falls somewhere close. The biggest problem with evolutionary software design, as I understand it, is defining goals and success tests. The research already done in this field is fascinating, and yields software that defies all the rules of traditional software design, often utilizing the hardware in incredibly unorthodox ways that suggest an eerie knowledge of the underlying atomic structures of the silicon.
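As a concrete (and deliberately trivial) sketch of where the "goals and success tests" live, here's a toy genetic algorithm in Python; the whole notion of success sits in the one fitness() function, and everything else is boilerplate:

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]       # an arbitrary toy goal

    def fitness(genome):
        # The "success test": count matching bits.
        return sum(g == t for g, t in zip(genome, TARGET))

    def evolve(pop_size=20, generations=100):
        pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:pop_size // 2]              # selection
            children = []
            for parent in survivors:
                child = parent[:]
                child[random.randrange(len(child))] ^= 1 # point mutation
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    print(evolve())    # converges on TARGET for this easy success test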
Here's a fun site with more information on Evolutionary Computing (EC):
The Hitch-Hiker's Guide to Evolutionary Computation
http://alife.santafe.edu/~joke/encore/www/
In my opinion, the coupling of nanocomputing with evolutionary software will yield computing brains which are indistinguishable from (and eventually superior to, I suppose) their biological counterparts, save for the underlying material composition.
paulb
Re:Computing power of a brain (Score:1)
Adrian Thompson [susx.ac.uk] has been researching hardware evolution: using a genetic algorithm as a feedback loop for programming FPGAs. In some cases the optimal solutions achieved obviously rely on complex electromagnetic resonance from seemingly unconnected parts of the circuit, behaviour not anticipated by today's testing suites.
Also, if by the regularity of digital circuits you mean their deterministic logic -- the ability not to be affected by random fluctuations of the environment -- I'd like to point out that that might not be a positive property in this context.
Re:Computing power of a brain (Score:1)
I don't really see the problem. Theoretically it sounds entirely possible to build a machine that would react like that.
Personally (the magic word), I'd say that this whole consciousness thing is overrated. You think of it as some sort of entity, a soul. As far as I'm concerned, a chair has consciousness just like you do -- or don't -- and you just react (read-reply-read-reply) to your environment in a more complicated way than the chair. Or a slug. Or a slime mold.
Re:Apples and Oranges (Score:2)
What I'm really trying to get at is: what is the minimum number of neurons needed to think? Put another way, if we subtract out everything the brain must do that isn't thinking, does it reduce the complexity needed to think?
In a similar fashion, birds were an existence proof for heavier-than-air flight. Flapping wings are a very complex solution to flying. Once it was determined that wings could be used only to generate lift and that thrust could come from elsewhere, the problem was simplified.
We know that you do not need a complete set of functioning sensory systems to think. Helen Keller was a fine example of that. And Stephen Hawking does fine without the ability to control his muscles.
On the other hand, if one is blind or deaf those neurons are still available to think with, and don't have to deal with any sensory input.
Any AI would have the need for input. We need to determine how complex that input need be. Current computers can 'reason' about the physical world without human-like senses. As a trivial example consider MapQuest. It can answer spatial questions about the world.
One of the reasons artificial vision is so difficult to implement is that there is too much info. The system can't determine what is relevant. Having an alternate mechanism to introduce info into the system simplifies the problem.
I realize that we use the same neural machinery for multiple tasks. Thus I would ask is dreaming, visualizing, etc. needed for thinking? Or is it a side effect of how one particular system, the human brain, was built.
In a roundabout way, the point I'm trying to make is that using the complexity of the brain as a measure of what is needed for an AI may actually be an upper limit, because the brain has tasks other than thinking.
The brain may also have redundancies not needed in a minimal AI solution. People can get by with just one hemisphere. Does this mean we can cut our complexity estimate in half?
And, given that the brain was evolved and not engineered, is there a more elegant solution to designing a thinking machine?
Steve M
Re:Computing power of a brain (Score:1)
Everyone agrees (I think!) that humans have conscious thought. What about monkeys that can use sign language and are sad when their babies are taken away? What about a dog? A bird? A slime mold?
Just recently, on NPR, I heard a story about possible intelligence in slime molds. It was debunked by an expert who convincingly argued it was not. It seems we sometimes read too much internal thinking into external behavior.
Re:Vernor Vinge has something to say... (Score:1)
Re:They are all rooted in atoms (Score:2)
Also, just because you can describe configurations of atoms in space (if we could) doesn't mean you can therefore model identical macro behavior of a cell - that is, unless your model of atomic forces and behavior was extremely good.
My point, however, was that let's say we model a one cell amoeba. We would also have to model its environment to duplicate said behavior. If it isn't getting any feedback through interactions with the world, it isn't doing anything.
This reasoning is actually short-sighted (Score:4)
Re:changing perceptions of what the future holds (Score:1)
If you're a strong materialist, how can you avoid it? I'm not asking this as a rhetorical question - I really want some ideas.
> It is a matter on which I'm sure no one here is qualified to make any valuable judgments
Translation: "Quiet, ya dumb slashdotters!"
> so let us not discuss it.
And so therefore we must.
Re:So how do you all think... (Score:1)
Put succinctly: Moore's Law is a special case of the more general "Law of Accelerating Returns."
I know of computers that **HAVE** Intution (Score:3)
I refer, of course, to the human brain. All it is, is an organic, massively-parallel processing environment with an equally massive and complex boot ROM, and a uniquely flexible OS.
So, why is it OK to have a jellyware CPU system with intuition, and not a hardware one? The difference is likely to be a matter of "hardware" complexity, and the "software" running on it...
Re:Apples and Oranges (Score:1)
Re:Computing power of a brain (Score:1)
"Personally" is the magic word. I can't look inside my head as 3rd person...I'm in here. I don't claim it is soul necessarily, and am certainly not trying to argue for or against the existence of God here. There could be some purely mechanistic, computational explanation for my awareness/sentience/whatever. I just haven't seen one yet I think is correct.
Almost certainly it is possible to build a machine that reacts sufficiently like me that an observer could not tell the difference. But then the interesting question to me would be one that the behaviorist would never ask. Would that machine experience the same sensation of I-ness that I do? Perhaps it is implementation dependent. Perhaps some implementations of my behavior are "conscious" and some are not.
Re:Computing power of a brain (Score:1)
If there were a machine that reacted like you to the outside world, it wouldn't be you, necessarily. But if there were a machine whose internal neural networks ran in similar patterns to yours, then it would be you.
What I'm saying is that... well, sorry, your last paragraph just made me remove some words. This "I-ness" could indeed be just another feature of the network, but your question "just WHAT is consciousness" as some sort of fundamental question feels strange. You don't have the exact mechanism for anything in the human brain, from motorics to quantum mechanics; why should this be anything special?
Sorry, I just got the impression that you were saying consciousness is somehow special, something that a mechanistic world couldn't create.
P.S. I always wondered about those people who offer souls, dualism, quantum mechanics as a sentience tool and all that as solutions to the consciousness problem... wouldn't that just move the problem a bit farther instead of solving it? Doesn't every universe need clearly defined rules, making it just another MechaVerse? And just why would this "randomness" in QM create consciousness? I thought these seemingly random events are just as bound to the laws of nature as anything, only with several equal outcomes instead of one.
Re:Computing power of a brain (Score:1)
Re:This reasoning is actually short-sighted (Score:1)
My barber was an encrypter stationed in Germany during the Korean War. (Lots of great stories, and I've discussed Cryptonomicon with him.) Anyway, his unit got a prototype fax machine back in the early fifties. It was slow as molasses in January, but did work. The brass decided not to use it, though, as it was determined to be too slow and imprecise. So the guys at different bases that had these machines would fax Playboy magazine photographs back and forth.
Glad to see that things haven't changed a bit. :)
Re:Perception and reasoning are already understood (Score:1)
Why not just assume that we have an infinite power source and all components are infinitely reliable? Then you wouldn't need all that disk. :0)
Penrose - computable problems (Score:1)
He attacks this issue differently from Turing (there's an old saying that most people can't pass the Turing test anyway) and takes an approach similar to Kurt Gödel's work on the incompleteness of formal systems, which showed that any system you can come up with will always have propositions that cannot be proved or disproved in that system.
He argues that certain things that humans do all the time, like comprehending paradoxes, self-reference, abstract association, world modeling, etc., cannot be done by any deterministic system (including digital computers) in a finite amount of time.
He also has a controversial theory that brains use quantum mechanical effects to employ "multiple universes" to get from point A to point B. His argument is way out there, but it's pretty airtight.
The Sir Roger Penrose Society [welcome.to] discusses this a lot and has lots of links to other similar discussions.
Re:Penrose - computable problems (Score:1)
First: there is no evidence that Gödel's theorems don't apply to human intelligence just as much as they do to the automated reasoning processes they were originally targeted at debunking.
Second: Gödel's theorems only apply to deterministic systems. The moment you start including random numbers in the equations, the limits no longer hold. Your automated reasoning system can break out of the 'rut' it was in, and up to the next class of problems. Doesn't this sound exactly like the way humans approach mathematics? Ask any mathematician who has struggled with a hard problem for years - the solution comes in a moment of inspiration - just another word for a random thought unlocking the secrets of the problem.
So, thinking machines must be nondeterministic. Fine; we already know how to make circuits behave in a nondeterministic manner.
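A toy sketch of that "break out of the rut" point (objective function and numbers invented): a deterministic hill climber halts at the first local maximum it finds, while a dash of randomness -- here, random restarts -- keeps escaping to better ones.

    import random
    from math import sin

    def objective(x):
        # A bumpy curve with many local maxima.
        return sin(5 * x) + 0.5 * x

    def hill_climb(x, step=0.01):
        # Deterministic: walks uphill and stops at the first peak.
        while True:
            if objective(x + step) > objective(x):
                x += step
            elif objective(x - step) > objective(x):
                x -= step
            else:
                return x

    stuck = hill_climb(0.0)                        # one local maximum
    best = max((hill_climb(random.uniform(0, 3))   # restarts escape the rut
                for _ in range(20)), key=objective)
    print(objective(stuck), objective(best))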
Simple Answer: (Score:1)
Simple answer: One has a soul, the other doesn't.
Current scientific "knowledge" refuses to accept the existence of the soul, because it can't prove that it exists. Until scientists can overcome that hurdle, computers will never be more than sophisticated adding machines.
At one time, I believed that it would never happen, but recent advances in the open-mindedness of some scientists have made me believe that there could be hope some day.
At one time, it was considered scientific heresy to suggest that animals had emotions, creativity, or self-awareness - but some animal researchers are beginning to understand now... and this is the beginning.
Re:Computing power of a brain (Score:1)
I think I had you for a professor once. I shot your dog.
Re:Computing power of a brain (Score:2)
This sounds smart, but as someone who has designed analog neurosystems in silicon, I'd say that in this day and age you can get far more bang for your buck using digital simulations of analog circuitry. The reasons for this include the power of digital design tools (VHDL, etc.), digital testing suites, regularity of digital circuits, and the generality of digital machines (no application-specific silicon required).
See airplane flight vs. bird flight on this one...
So how do you all think... (Score:1)
----
Re:Computing power of a brain (Score:1)
-_Quinn
Re:Or a dog (Score:1)
He presents a cogent argument that we are five orders of magnitude away from the computing power needed to simulate the human brain. He lists the steps as the intelligence of:
an insect,
a lizard,
a mouse,
a monkey,
and a human.
Moore's law says five orders of magnitude is 17 doublings. 34 years for the pessimists who use 4x every 4 years. Make that 33, 'cause the book came out last year.
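The doubling arithmetic, spelled out as a quick sketch (only the numbers stated above, nothing more):

    from math import ceil, log2

    doublings = ceil(log2(10 ** 5))   # 5 orders of magnitude -> 17 doublings
    print(doublings * 1.5)            # 18-month doublings: ~25.5 years
    print(doublings * 2)              # "4x every 4 years": 34 years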
Re:Perception and reasoning are already understood (Score:1)
-_Quinn
"The Machine Stops" Author (Score:1)
Is E.M. Forster
Steve M
Computing power of a brain (Score:5)
Frankly, I doubt we will ever develop computers with the sophisticated power of even a mouse brain, although many may protest that we already have exceeded their gross power. I believe that things like perception and reasoning are beyond the scope of raw power.
Just to offer my viewpoint... The brain is slow, but massively parallel and interconnected in a vast array of various neural networks. Inherently, the brain is analog -- down to the quanta of electrons involved in the chemical reactions.
To simulate that in a computer that executes things very quickly, but serially, would require a HUGE AMOUNT of computing power. You'd have to be able to simulate time-slices as small as those significant in the brain.
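A back-of-envelope sketch of that cost, with every parameter an assumed order of magnitude rather than a measurement:

    neurons = 1e11          # rough human-brain neuron count
    synapses_per = 1e3      # conservative connections per neuron
    dt = 1e-4               # a 0.1 ms time-slice
    ops_per_update = 10     # arithmetic per synapse per slice (a guess)

    ops_per_second = neurons * synapses_per * ops_per_update * (1 / dt)
    print(f"{ops_per_second:.0e} ops/sec")   # ~1e19: hopeless serially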
However, if we were to take several slow processors and network them together in parallel, we'd probably get a lot closer for a lot less.
I don't believe consciousness is anything special. It's just the superposition of hundreds or thousands of neural networks all working together. Heck, at one time, mankind thought the motions of the planets and stars were just too complicated to ever figure out, so they were labeled as something mysterious and never to be known. We shouldn't make that same mistake with the brain and mind simply because they appear at present to be too complicated to figure out.
Raw power vs. sophistication (Score:1)
But even supposing we have the equivalent raw power of a mouse's brain, it doesn't mean a thing if we don't have a clue how a mouse's brain works.
-y
First Hype! (Score:1)
I'm continually amazed at Moore's law, and how long we've managed to keep it going. My little web page to guesstimate hard drive prices [tsrcom.com] has been revised twice because it wasn't optimistic enough.
That being said, I think it would take about 10^9 current-generation systems networked together to approximate the learning skills of a single two-year-old. (OK, perhaps I'm being optimistic.) If the current trends hold, that means we can take off a factor of 10 every five years, so it's at least 45 years until our computers are as smart as a 2-year-old.
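The arithmetic behind that 45-year guess, as a sketch of its stated assumptions:

    from math import log10

    gap = 10 ** 9            # assumed number of systems needed today
    years = log10(gap) * 5   # one factor of 10 gained per 5 years
    print(years)             # 45.0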
On another point, I fail to see how a new technology is going to eliminate the need for capitalism to keep us all motivated.
Mike Warot, Hoosier
Computing power of a brain: biological Beowulf ??? (Score:2)
However, if we were to take several slow processors and network them together in parallel, we'd probably get a lot closer for a lot less.
Interesting concept. Has anyone tried AI using clustering technologies, rather than brute-force computing ????
Re:perception (Score:2)
I didn't look for the /. story, but here [nyu.edu] is the web site for the silicon mouse you are referring to.
Steve M
Re:This reasoning is actually short-sighted (Score:1)
mental puzzle (Score:2)
Think of all the unemployment.... (Score:1)
With this ability, lawyers would all be out of work - speedy instant justice with no protracted trials, wrongly accused, etc etc. It's like some kinda dream world.
Oh wait, even 30 years ago we were promised Mars colonies and flying cars in everybody's garages.
I want my flying car.
Re:Perception and reasoning are already understood (Score:1)
You ever used OCR? It can pick up multiple-font (and/or damaged) characters at a very high accuracy without any training or 'thinking' at all.
Aside from that, you have a point -- an outstanding question in AI is how to connect the visual system to a symbolic one that will help interpret the visual data.
Incidentally, you don't need emotions to create a course of action, just a lot of horsepower. (Remember Deep Blue?) I'd also doubt you need emotion to create goals; certainly Asimov's Three Laws would suffice for goal-creation, and don't necessitate emotion.
But in general, the theory isn't there; we're still waiting for the psychologists to develop a hard science.
-_Quinn
Doesn't nanotech... (Score:1)
but seriously, I love the promise nanotech has, and frankly would love to see what it can and will offer.
Looks like the Sony ad people liked it too, what with the PS9 ad they have for the PS2
------
http://vinnland.2y.net/
Re:Computing power of a brain (Score:1)
Re:Intution (Score:1)
Which kinda highlights the ignorance most computer types have regarding biology. PNP junctions ain't tough. Let's take a typical neuron ... interconnected with a few dozen other neurons - capable of dealing with multiple inputs. Each inter-neuron connection comprises several close 'ends' (to use a non-technical term ;-). So what affects the transmission of an impulse? Well, scientists are still working on that. The various chemical levels (serotonin, to name a well-known one). Virtually every chemical in the body has an effect. Had a fright? Epinephrine (a.k.a. adrenaline under the old nomenclature) enhances the transmission of info for certain synapses (those associated with movement, strangely enough). Had another nearby neuron going off recently? Transmission degrades slightly. Same neuron going off twice in a row? Slightly harder to get the signal through. All those chemicals also have a big effect within the neuron itself.
A bit is either on or off. There have been _some_ multistate-bit experiments (most recently Intel with its memory), but generally it hasn't gone anywhere. All computer comparisons are BINARY. The neuron is affected by tens of thousands of chemicals. It's affected by other neurons. Every damn one of those interactions is graduated, i.e. NON-BINARY. Did the nail in your foot cause just one molecule of epinephrine or one hundred to reach that neuron? How does that interact with the other ten thousand chemicals, each of which has the same variable effect?
Are you familiar with the exponent and factorial functions? Ten to the eleventh neurons. Varying numbers of connections, from a minimum of two to over fifty. Varying numbers of synaptic gaps. Several thousand chemicals floating around your bloodstream on a regular basis. You start getting numbers that (grossly) exceed the number of atoms in our planet (let alone our system), using the most conservative estimates. Now, given that a binary 'bit' requires quite a few molecules to build, it's kind of, well, ignorant, to view a neuron as a 'glorified adding machine', when an adding machine equivalent to the human brain would require more space than our planet.
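A sketch of that combinatorial blow-up using only the most conservative numbers above (the per-connection graduation count is my own crude assumption):

    from math import log10

    neurons = 10 ** 11
    connections = 2          # the stated minimum per neuron
    levels = 10              # assume a mere 10 distinguishable strengths

    # log10 of the number of distinct configurations of connection strengths:
    log_states = neurons * connections * log10(levels)
    print(f"~10^{log_states:.0f} states")   # vs ~10^50 atoms in the Earth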
Re:Perception and reasoning are already understood (Score:2)
Yeah - obviously you can always hard-code something. The "specific keys with a different font" example was only used in the context of thinking. For example, my keyboard has some keys where the text is below the actual key - but we intuitively know that the text below the key relates to the key above it. If I encountered another keyboard with the same key, but with the text on the key, I would automatically conclude that they have the "same" key - as in function, just represented differently. Obviously a computer program can't figure that out. It would need some sort of reasoning - a straight program that recognizes objects wouldn't have a chance in heck.
IMO, it's only that our current systems aren't sufficiently complex and generalized to deal with said problems.
Re:So you believe in the soul? (Score:1)
Re:Simple Answer: (Score:1)
I beg to differ. Speaking as a non-Christian, I could argue that people have no more soul than a TI-82 (now, the TI-85s are a different story) due to the fact that given a simple stimulus you will evoke a simple and directly predictable response - should you know enough about the person's psychology.
Everything is relative and based upon quantum probabilities (rather than Aristotelian binary logic). The problem with our current computers is that they are based upon Aristotelian logic, that is - ignoring the logical third possibility: maybe, and extensions thereupon (10% maybe, 20% maybe, 30% maybe, etc.).
As the function of the human brain is based upon assigning a probabilistic value to an observed relation (ask Pavlov) through positive and negative reinforcement, the logic of our wetware is such that we have a near-infinite degree of %maybe available to define our observations.
Our neurons create new connections based upon our observations (pos/neg), which in turn influence future actions.
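A sketch of that graded "%maybe" learning, in the spirit of a Pavlovian update rule (the rule and its rate are illustrative assumptions, not neuroscience):

    def reinforce(belief, reward, rate=0.2):
        # Nudge a graded degree of belief toward 1.0 on positive
        # reinforcement and toward 0.0 on negative reinforcement.
        target = 1.0 if reward else 0.0
        return belief + rate * (target - belief)

    belief = 0.5                                # start at pure "maybe"
    for outcome in [True, True, False, True, True]:
        belief = reinforce(belief, outcome)
    print(round(belief, 2))                     # a %maybe, never a hard 0/1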
Personally, I don't know whether humans have a "soul" or not, although the more I see us seeking to define it, the more I see it running away - after all, definition into such an inflexible logic would surely thwart something rooted in such a flexible system.
I would like to think we do, but that will just be something I keep as a big maybe for now.
Of course, if the soul and body co-exist, a Quantum Logic computer would indeed have both.
Re:Perception and reasoning are about organization (Score:1)
Of course, he has no idea what exactly happens cognitively from wherever the auditory pathway projects -- but it is a good start.
Mr. Watts suggests that neuroscientists really do understand specific brain functions - it's just that our information is so fragmented that it is very hard to understand how these systems work.
Supposedly by 2020 we should have the computing power to do higher-order cognitive functions; the question is - will we have the algorithms?
Re:Computing power of a brain (Score:4)
Proper design is left as an exercise for the reader.
Re:Apples and Oranges (Score:2)
Are we forgetting just how much biological regulation a mouse brain has to maintain? It's not just all "calculations"....
I've often seen arguments against AI based on the complexity of the brain - the human brain, not the mouse brain. But the above quote shows why this may not be as large an issue as generally believed.
The question is, how much of the brain is devoted to non-thinking tasks and does this significantly reduce the number of neurons needed for thinking?
Here's one example. There is a lot of neural machinery devoted to visual processing. Yet one doesn't need to be able to see to be able to think. So can we subtract the neurons and the corresponding connections devoted to the visual system from the number of neurons needed to think?
What about the brain resources devoted to the other senses? Or those used for muscle movement?
Steve M
Perception and reasoning are already understood (Score:3)
"I believe that things like perception and reasoning are beyond the scope of raw power."
Actually, these are the two areas of artificial intelligence that are probably understood better than any others. (Language and memory--now those are problems people are still clueless about, IYAM.) Neuroscientists have mapped out the perceptual system in great detail (at least the visual perceptual system), and there are some fairly advanced neural network models that embody these findings. On the other hand, Newell and Simon were able to understand and explain many kinds of "reasoning" very well in the 1970s--today the main descendant of this work, "SOAR", can work with 10,000 or more rules. It can fly planes in simulated combat and make strategic and tactical decisions. Maybe it is unable to do everything a pilot does, but I would argue that it is still reasoning.
So, it is technically correct to say that these things are beyond the scope of raw power, but the theoretical advancements have already been made. The only thing holding these systems back from real-time performance is raw power.
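Not SOAR itself -- just a minimal sketch of the production-rule style of reasoning it descends from, with invented toy rules: match conditions against working memory, fire, repeat until nothing new fires.

    RULES = [
        ({"bogey_close", "bogey_hostile"}, "evade"),
        ({"bogey_hostile", "missiles_left"}, "engage"),
        ({"fuel_low"}, "return_to_base"),
    ]

    def forward_chain(memory):
        fired, changed = [], True
        while changed:
            changed = False
            for conditions, action in RULES:
                if conditions <= memory and action not in memory:
                    memory.add(action)     # the rule "fires"
                    fired.append(action)
                    changed = True
        return fired

    print(forward_chain({"bogey_hostile", "missiles_left", "fuel_low"}))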
computer power vs. qualitative computation (Score:1)
My Octane pulls busses all day long. But the qualitative difference between that and a human brain is enormous. It's apples and dump trucks.
Today: Frank takes ill [ridiculopathy.com]
Comparing brains and computers (Score:1)
Correctly rewired, you can play Quake III on the mouse brain?
Correctly programmed (and fitted with legs), the computer can run around sniffing for cheese?
Computers and brains have very different ways of working. I cannot see how these comparisons mean very much without it being specified how the comparison is being done.
Quality, not Quantity!! (Score:3)
Processors provide the "parts" of computation by physically performing the actual instructions used. These computers basically allow numerical operations, memory access, and branching. That doesn't seem like much, but it's "Turing complete," which means that (if you buy the Turing hypothesis) everything which is computable can be computed with such instructions. We have all the parts of computation we need, and they're getting faster all the time.
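As a sketch of that claim, here is a toy machine offering only those three things -- numerical operations, memory access, and branching (instruction names invented) -- computing 5 factorial:

    def run(program, mem):
        pc = 0
        while pc < len(program):
            op, *args = program[pc]
            if op == "set":    mem[args[0]] = args[1]          # memory access
            elif op == "add":  mem[args[0]] += mem[args[1]]    # numeric op
            elif op == "mul":  mem[args[0]] *= mem[args[1]]    # numeric op
            elif op == "jnz":                                  # branching
                pc = args[1] - 1 if mem[args[0]] != 0 else pc
            pc += 1
        return mem

    prog = [("set", "acc", 1),
            ("set", "n", 5),
            ("set", "neg1", -1),
            ("mul", "acc", "n"),     # index 3: loop body
            ("add", "n", "neg1"),
            ("jnz", "n", 3)]         # branch back while n != 0
    print(run(prog, {})["acc"])      # 120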
But the software still lags. We have "computationally intense" software, but that's not the same as complex software. 3D games always push the envelope of computer capability because just when you think you've got enough computing power, id throws more triangles and more textures at the problem. That's a quantitative change, but not a qualitative change.
When we look at all of the other software produced, it seems that if the software is marginally complex (think of your favorite program here), it's buggy as hell. Reducing the bugs in the software requires more effort; an exponential amount of effort as the complexity increases.
That's why we've seen the speed of computer hardware shoot through the roof, and the complexity of computer software plod along, unable to keep up. Producing complex software is an NP-complete problem. (/me ducks the flames of the math people in the audience.)
If you'll permit me to play pundit for a second: I think we'll reach these so-called "milestones" that the AI people and the nanotech people keep giving us and realize that while we can manufacture a computer with the MIPS/FLOPS/whatever of a mouse/dog/human brain, we don't have the slightest idea how to string all of that power together to actually perform the operations of the mouse/dog/human brain.
Your computer will get 10,000 fps with 6e10 textured polygons in Quake XXXVI, but it still won't be able to learn a new language.
Re:Perception and reasoning are already understood (Score:2)
Well, Deep Blue is just something programmed to play chess by a bunch of people, which iterates through a huge game tree with pruning help from the programmers, is it not?
Anyway, I'm vague because I don't sufficiently understand this stuff. That said, you can obviously hard-code goals - but how would a program create new goals without emotion? For example, let's say thinking of making a lot of money makes me happy, as it is an instrumental goal toward doing things that will ultimately make me happy (i.e., travelling, time with loved ones - which are instrumental goals toward blah blah etc.). Without the emotion I would not be able to add new goals, as nothing would motivate me to do so. If I were waiting for the subway and I saw someone getting beat up, I might feel anger and perhaps fear. Whichever one won out as a result of a cognitive process (i.e., thinking I might be next, or believing I can "take that guy") would affect goal-oriented behavior.
You might say that we could do the same thing by calculating value in some sort of action matrix - but when sufficiently complex, wouldn't that be an equivalent of emotion?
My ideas may just be outdated, as they were primarily formed by a 1995 book, Descartes' Error: Emotion, Reason, and the Human Brain, by Antonio Damasio.
Re:Perception and reasoning are already understood (Score:1)
I think we'll have computer intelligence before we even come close to understanding intelligence. That's because our first intelligent computers will be products of brute force: simulate all possible brains and ask all of them to flash the screen if they understand us. The ones that flash the screen are possibly intelligent.
hit and miss (Score:2)
Some technologies drive progress while others are commercial lemons. Successes are silicon chips, optical cable, and screen displays. Lemons are high-Tc superconductors and buckyballs.
duplicating sophistication of mouse brains (Score:2)
For a contrary opinion, you can read The Emperor's New Mind [amazon.com] by Roger Penrose. While it's well worth reading, Penrose's argument against artificial intelligence seems to be that intelligence requires quantum uncertainty, and that computers are deliberately designed to avoid the effects of quantum uncertainty.
This argument fails to persuade me for two reasons:
Motivation and capitalism. (Score:1)
The elite will be the ones actually DOING things - creating the content to amuse the masses, doing the research, directing the engineering. They won't be elite because they have been elevated, they will be the elite because they are the few with the actual motivation to DO anything.
I had a sig, but it was stolen by communists.
Re:Computing power of a brain (Score:2)
Re:Perception and reasoning are already understood (Score:2)
Re:perception (Score:3)
Do we have to understand it, first? It seems like if we could cheat a bit, and just model a mouse brain inside a physics simulation, we'd have a computer engaging in perception-like tasks. The obvious drawback to this is that it wouldn't be capable of doing anything a mouse couldn't do (and would lead to a flurry of /. posts along the lines of "Imagine a Beowulf cluster of these things! They could run through mazes and find cheese! The possibilities are limitless!"). However, by modelling a mouse brain, scientists would be able to better "fiddle" with it and understand it, possibly leading to a more practical understanding of perception.
And to get further off on a tangent (but hopefully remaining within the realm of a worthwhile discussion), we suddenly open up a whole can of worms with regard to creating a machine-based consciousness. In my own opinion (and this is just opinion here), a hypothetically powerful/complete enough simulation of a human brain approaches consciousness. I'm of the opinion that an actual living, breathing human is just such a simulation via chemical means. I'm a little afraid of the ethical consequences when we gain the ability to create neural networks with complexity rivaling that of our own brains; one could argue that it's of even greater ethical concern than human cloning. A human clone, at least, has the benefit of being inarguably human (barring something really weird like a gorilla/human hybrid), and would thus be protected by normal laws.
Re:Apples and Oranges (Score:1)
It would probably reduce the potential problem sets that the AI could handle - that is, if we didn't have any other mechanism to introduce concepts to it. If you don't understand the objects around you spatially, it would be a lot harder to think about abstract interactions between them.
We also use some of the same "neural machinery" when imagining, dreaming, visualizing, whatever. I'm pretty sure the same goes for sounds as well.
Bad Karma (Score:3)
Goodbye Karma.
A large, networked system of analog neurons might just do the trick for creating a system with the intelligence of a mouse. But absent any good way to deliver, register, and respond to stimulus, this would be one crazy machine. It simply wouldn't have enough information to act, any way to deal with information sent to it, or any way to figure out whether its actions were appropriate or inappropriate (it would need a complex system of rewards and punishments and some sort of inherent internal mapping of neurons to stimuli and responses). To wit, experiments have shown that if you cluster a bunch of analog neurons together, it will think random thoughts until you bother to shut it off.
Plus it would need to "eat", self-repair, purge unneeded inputs (both by discarding unsupported hypotheses [Is this a cat? It does not look like a cat. It is not a cat.] and, if it eats, it will have to poop), and eventually defend itself against hazards. In other words, mice will be "better" for a long long time.
-Ben
Yeah right.. (Score:2)
I remember back in the early 80's when people were like, "By the year 2000 robots with computers in their heads will be doing all of humanity's menial tasks; we will just do the creative stuff." Okay, since I have a robot making my bed and cleaning my apartment for me right now, I can assume that's true. Yeah, nanotech robots will do all the work too, especially since they are so small - they will have all that space for computing power.
>Around 2030, we should be able to flood our brains with nanobots that can be turned off and on and which would function as "experience beamers" allowing us to experience the full range of other people's sensory experiences and if we find ordinary experience too boring, we will have access to archives where more interesting experiences are stored.
Sound like a PS2 commercial???
Okay, again I'm going to be the skeptic: in 30 years we are going to let nanotech robots into our brains to manipulate what we are thinking about, and yeah, nobody will have a problem with that.
I mean, foresight is good, but have a little common sense.
Every time there is a new technology, people think that it will drastically change the world. The internet is great, but it hasn't changed the world that much. I still shop at the mall and talk to my parents on the phone. I still have to study for my tests and I still have to pay the bills.
Re:good Tron quote for this (Score:2)
Many people I know have already gotten a head start on this one, so I'm not sure how much of a real impact this will have on the world. But personally, I like this bit myself...
Around 2030, we should be able to flood our brains with nanobots that can be turned off and on and which would function as "experience beamers" allowing us to experience the full range of other people's sensory experiences and if we find ordinary experience too boring, we will have access to archives where more interesting experiences are stored.
Beamers, you say? Wow! I sure do love electronics, don't you? Dude, I've seen what happens when my toaster breaks down. Except when that happens, all you lose is breakfast. If one of these "beamers" decides to get ambitious, you end up stuck in John Malkovich's head or something. That also brings up some damned interesting and abusive uses of this technology. To what extent would we be capable of "experiencing the full range of sensory experience"? How much information gets broadcast to our own brain? If our senses are telling us we're experiencing artificial events from someone else's brain, then do we forget who we are? This is getting into the realms of philosophy, and I don't think I even want to begin delving into the implications here. Just know that there are many, and they're not all Utopian.
These robots, when they were developed, would do all the world's work: People could sit back and enjoy themselves, drinking their mint juleps in peace and quiet.
Say, I think I read a book about this somewhere... The Time Machine, perhaps? Is anybody else dismayed by the notion that this technology would allow us to become lazier than ever? I'm sure I'm not in the minority in thinking this. Now, mind you, I really like nanotechnology. I think that it's capable of revolutionizing every corner of life, and could perhaps make many jobs automated, making services much, much cheaper, lowering the cost of living, and giving people much more free time. Or it could just leave everyone out on the street desperate for a job, when companies don't bother lowering their prices on items that now cost practically pennies to make, standard of living stays the same, and you're left with scores of people out looking for a way to keep from starving to death for one more night.
Within 10 years, revolutions in genomics, proteomics, therapeutic cloning, and tissue engineering will be adding more than one year every year to human life expectancy
Wonderful! And when people stop dying, we'll have to colonize the seas. When the seas get full of folks, we'll burrow underground. When we've reached the limit of how much our natural resources can sustain us, we'll all turn into cannibals or something. Personally, I'm not a huge fan of living forever. I think the real issue is not expanding our life span, so miserable people can live their miserable lives for another miserable fifty or so years, but rather trying to improve quality of life, so that the few years we do have aren't so miserable. We've got six billion people on this spinning ball of dirt and water, and well over half of them (I don't have the stats in front of me) are dying of malnutrition, while we in the more developed nations waste enough food to sustain a few dozen small countries. Isn't it ironic that in a world where most humans are starving to death, the US is dealing with a growing rate of obesity? Doesn't this seem a tad unbalanced? I suppose the moral of this particular story is that we need to improve quality, not length, of life, and this can only be done by properly distributing the resources we have, if nanotechnology is going to have any kind of positive effect on the world.
As a side note, who will be getting these treatments? The wealthy? That's going to cause some serious social problems, if we suddenly end up with the rich crowding the world. And they won't be rich forever... so what happens then? So fine, we make it available to everyone who wants it. So that's great, who's going to fund this project? It's not going to be evenly and fairly distributed, it's just not. So only the rich/powerful/important will get the treatment, and the gap between the rich and the poor will widen beyond repair.
Okay. I'll stop now. The moral of this whole story is that I just don't think we're ready for this kind of revolution. We have to figure out what's really important about the quality of the life we're lengthening before we make ourselves immortal. We have to learn more about the true nature of the self before we start bombarding our brains with other people's experiences. And we need to seriously get our collective heads out of our communal asses before we start restructuring society the way it will need to be when human workers become obsolete. But these are just things to think about. In the end, there's not a whole lot that I can do or say to stop this phenomenon. But we are indeed, as the Chinese curse proclaims, living in "interesting times."
/* Steve */
Check out Geniebusters (Score:2)
This article ought to be required reading for anyone writing about nanotechnology.
Western bias (Score:2)
It's an interesting Western philosophical bias that minds are somehow separate from bodies. In my opinion, simply replicating what the brain does won't be enough.
The brain receives input and stimuli from all of the body's systems. The brain is also messy and imprecise internally. I think all of those factors will actually turn out to be important.
It may also turn out to be the case that the brain depends on quantum phenomena to function. Yet another thing to worry about.
Re:Perception and reasoning are already understood (Score:2)
I don't subscribe to the conjecture that machines will never be sentient - just that at current complexity and understanding, it's somewhat illusory.
Re:Computing power of a brain (Score:2)
All I know is in the tiny bit I've dabbled in neural networks, I've seen a lot. I've seen memory, self-organizing maps, boundary detection, etc. -- all from very simple and small neural networks. Now when you consider that there are billions of cells in the human brain, you multiply those simple capabilities I listed before enormously!
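For instance, "memory" out of a tiny network -- a minimal Hopfield-style sketch (toy sizes; one stored pattern, assumed parameters):

    import numpy as np

    pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
    W = np.outer(pattern, pattern)      # one-shot Hebbian learning
    np.fill_diagonal(W, 0)

    cue = pattern.copy()
    cue[:3] *= -1                       # corrupt three of eight bits
    for _ in range(5):
        cue = np.sign(W @ cue)          # let the network settle
    print(np.array_equal(cue, pattern)) # True: the memory is recalled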
I don't see humans as being anything more than one step away from the rest of the animals. We happen to have excessively large brains for our size (only humans and dolphins have significantly heavier-than-average brains for their body size), and because of that surplus of neural power, we *seem* smarter -- just as a Pentium III computer seems able to do more than an 8086 (speech recognition, for example).
Re:Penrose - computable problems (Score:2)
I think a great deal of insight can be found by realizing how the brain works. Imagine your brain isn't really in your body, but connected to a vast computer that is simulating stimuli and reacting to output from your brain (i.e. The Matrix). Your brain acts as a black box to the world. It develops over a number of years through trial & error, testing dozens of different inputs (light, sound, taste, smell, touch, temperature, internal signals, time-domain information, etc.). If anyone were to take the time to build a system they believe to be comparable and let it develop over 20 years -- then see how it compares -- we might be in for a surprise.
As an inkling of what we might find -- people have developed a robot that can control its robotic arm and "sees" through a video camera. It starts off with a clean slate, but over a duration of hours, it begins to learn that when a certain signal (started at random) is sent, this "thing" it sees "moves". Eventually, it "realizes" that the "thing" is its "arm". Does it really realize it, or has it just associated certain outputs with certain inputs? Do we realize our arm is ours?
Keep in mind, my daughter is 5 months old and just now realizing her feet are hers -- at least, I suppose that she is realizing it.
So you believe in the soul? (Score:2)
How does he think people do it? Does he believe in a soul that handles all of this?
It's very simple. We're made of matter. Therefore anything we do can be done 'artificially' once we can manipulate matter on a small enough scale.
_____
Re:Perception and reasoning are already understood (Score:2)
How do we measure technical progress? (Score:2)
But do we really progress so much faster today than yesterday? And how do we measure this progress and conclude it's exponential?
Isn't it a bit "crude" to determine our technical progress on the basis of how many transistors we can cram into a small piece of silicon, or how much faster we can take a vacuum cleaner from concept to product because of CAD/CAM? Does that really mean that we are advancing so much faster in technology today?
The human genome project is used in the article as evidence of the accelerating progress.
Let me use an analogy: suppose I lived in the 19th century, were a human calculator named Babbage, was bored with writing calculation tables, and therefore made a machine that could do the necessary calculations 10,000 times faster than a human. Does this mean that technical progress just rose 10,000 points?
Point being, the genome project finished early because of raw computing power and refined methods. And that is not proof of exponential technical progress!
/Patrix
"And if any one should ask me, "Whence dost thou know?" I can answer, "I know, because we measure; nor can we measure things that are not; and things past and future are not." But how do we measure present time, since it hath not space? It is measured while it passeth; but when it shall have passed, it is not measured; for there will not be aught that can be measured. But whence, in what way, and whither doth it pass while it is being measured?"
http://www.ccel.org/fathers2/NPNF1-01/TOC.htm#T
Chapter XXI.-How Time May Be Measured.
Re:Quality, not Quantity!! (Score:2)
Yes, this is true. However, I think it's a little unfair to suggest that those who introduced Moore's Law as a yardstick for animal-intelligence comparisons (i.e., Kurzweil in his book) are using it as the only basis of their prognostications.
Although I believe you are right that Kurzweil and others are a little out there in terms of being realistic. Humans are notorious optimists. Just because it's in vogue to predict what's going to happen in 2050 doesn't mean we haven't learned from what Turing predicted 50 years before.
Re:perception (Score:2)
I suppose not.
Now, if you could simulate a consciousness, would it be able to understand itself? If you could figure out how to keep it from making logical errors without eliminating creativity, it would be smarter than people. Trust and ethics are another matter.
Re:Perception and reasoning are already understood (Score:2)
WRONG. Let's say we have infinite computer power, infinite memory, and infinite disk space.
Do we have the algorithms to create emotion, and therefore general systems to create goals and courses of action? No. A program that iterates through a bunch of weighted goals doesn't have emotion (or at least enough "emotion" to understand the most crude positive and negative feedback).
How would the computer think about long- and short-term goals?
Would a visual subsystem be able to recognize objects in space - for example, 50 different types of chairs, desks, pens, books, cars, whatever? Nope, because the current systems are completely symbolic. Can such a system understand what an object is for, where it belongs, etc.? Nope. At least not without some sort of language other than machine language, plus very simple inference, statistical, and goal-based reasoning.
Let's say you teach a computer to recognize keyboards, and then keys, and then specific keys, whatever. How does it recognize different-size keys (for example, I have an MS Internet keyboard with little non-standard keys on top) or specific keys with a different font?
etc etc.
good Tron quote for this (Score:3)
changing perceptions of what the future holds (Score:3)
Heisenberg doesn't apply (Score:2)
On the other end, building these bad boys, we can achieve the same effect if we work with matter on a LARGE enough scale. Right now it's an even bet whether the first device to pass the Turing test reliably will be made of lots of very tiny things, or will be gigantic and fill up a warehouse.
But one way or another, it will be done.
--
The Origin of the Transhumanist "Singularity" (Score:2)
The idea that progress is going through a sharp turn upward is supported not by Kurzweil's reference to the "exponential", a curve that looks basically the same at any scale -- but by a more radical mathematical formulation that goes to infinity in finite time -- specifically by Friday, 13 November, A.D. 2026 (give or take). No, this isn't just some New Age eschatology -- it was actually arrived at by looking at historic data and extrapolating into the future.
Here is an excerpt from "Spasim (1974) The First First-Person-Shooter 3D Multiplayer Networked Game [geocities.com]" that discusses the origin of the Transhumanist conception of "The Singularity":
They were trying to realize a man-machine cybernetic vision of this magical little gnome named Heinz von Foerster [univie.ac.at] and needed an email system to go along with it.
...
When the semester was over, I threw a few things into my '64 Chevy Impala, and headed east on Interstate 80 across the Illinois border for Urbana and CERL. It was my first paying job as a programmer.
Arriving at the Mecca of networking and meeting the magical little gnome who founded second order cybernetics [vub.ac.be] (symbolized by the Ouroboros [best.com]) in his Biological Computer Laboratory [uiuc.edu] was an amazing experience.
...
A vital side note: Heinz von Foerster had published a paper in 1960 on global population: von Foerster, H., Mora, P. M., and Amiot, L. W., "Doomsday: Friday, 13 November, A.D. 2026", Science 132, 1291-1295 (1960). In this paper, Heinz shows that the best formula that describes population growth over known human history is one that predicts the population will go to infinity on a Friday the 13th in November of 2026. As Roger Gregory [thing.de] likes to say, "That's just whacko!" The problem is, after he published the paper, it kept predicting population growth better than the other models. (see section 4.1 "Systems Ecology Notes" [umass.edu]) One of Heinz's early University of Illinois colleagues was Richard Hamming of "Hamming code" fame [fi.edu]. Once while visiting the Naval Postgraduate School, I asked Dr. Hamming what he thought of Heinz von Foerster. Professor Hamming's response was "Heinz von Foerster: Now there's a first class kook!" I suspect Heinz's publication of what Transhumanists [go.com] call "the singularity [go.com]" had really gotten to Hamming -- not that Heinz wasn't eccentric enough to get Hamming's goat in any case. Well, to continue this digression so as to give the damn Transhumanists a much-deserved keyboard lashing: It's one thing to be a guy like Hamming and denounce Heinz as a "kook" for following his formulae where they lead -- it's another to turn Heinz's formulae into a virtual religion, call it "the singularity" and totally forget where the idea came from in the first place. I suggest the Transhumanists cite Heinz in the future whenever they refer to "the singularity" and think about his assumptions -- the primary one being that a society's success varies directly with population size. It might be good to see if his model fits the data subsequent to the last check of which I am aware -- 1973 -- which just happens to be right at the point when high-population-density societies decided to abandon their forward progress toward the space frontier.
Vernor Vinge has something to say... (Score:2)
Perception and Reason (Score:2)
IMO perception and reasoning are emergent properties of our neural networks. If we look at the research done in neural networks, we see what I believe are simpler, but no less real, properties of a similar kind emerging.
Let's examine one example: the CMU autonomous driving van. This was a project started under the DARPA Unmanned Combat Vehicles program. (They were trying to build Bolos, for any Keith Laumer fans out there.) This is a van that a neural network can pilot down a lane on a road, under a wide variety of driving and visibility conditions, at about 60 mph.
Some interesting things about this project:
(1) They did not "program" the van to do this. The researchers had no strategy in mind. They merely taught the neural network with video input and driving-control output. It strategized by itself; this is what neural networks do.
(2) They attempted training under a number of different conditions. When they examined the rules the system had come up with afterward, they found the rules varied widely depending on the training conditions, BUT the performance of any set of rules seemed to be a constant (that 60 mph).
There is an interesting article they wrote, called "Exposing the hidden layer," that ran in Byte about 15 years ago.
So what can we say about the autonomous van? Well, it problem-solved in a creative way, finding its own solutions. It also learned to pick out the key visual elements to drive that solution.
To me, this IS perception and reasoning ability, albeit of limited scope. I think we tend to mysticize our own abilities way too much. In the end I don't think they are functionally any different, just more complex because of the size and sophistication of our neural nets.
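To make the training scheme above concrete, here is a minimal sketch, in Python, of that style of system. (The van's network was, if memory serves, CMU's ALVINN.) The layer sizes, data encodings, and training loop below are my own illustrative assumptions, not the actual CMU code:

import numpy as np

rng = np.random.default_rng(0)

IN = 30 * 32   # downsampled grayscale camera "retina," flattened
HIDDEN = 5     # a deliberately small hidden layer
OUT = 30       # each output unit stands for one candidate steering angle

W1 = rng.normal(0.0, 0.1, (IN, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, OUT))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(frame):
    h = sigmoid(frame @ W1)        # hidden activations
    return h, sigmoid(h @ W2)      # steering-unit activations

def train(frames, targets, lr=0.5, epochs=50):
    """frames: (N, IN) rows of pixels; targets: (N, OUT) rows marking
    the steering angle the human driver chose for that frame."""
    global W1, W2
    for _ in range(epochs):
        for x, t in zip(frames, targets):
            h, y = forward(x)
            # Backpropagate squared error through both sigmoid layers.
            dy = (y - t) * y * (1 - y)
            dh = (dy @ W2.T) * h * (1 - h)
            W2 -= lr * np.outer(h, dy)
            W1 -= lr * np.outer(x, dh)

def steer(frame):
    # Drive with whichever steering unit is most active.
    _, y = forward(frame)
    return y.argmax()

# Toy demo with random data, just to show the shapes involved.
frames = rng.random((10, IN))
targets = np.eye(OUT)[rng.integers(0, OUT, 10)]
train(frames, targets, epochs=20)
print(steer(frames[0]))

Note that no lane-following rule appears anywhere in this code; whatever strategy the net acquires during training lives entirely in the weights W1 and W2, which is exactly why an article like "Exposing the hidden layer" had to go digging to find it.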
It's inevitable (Score:2)
Once you reach this threshold of creation, where the creators don't have to personally design every connection and node being put together in the new brain, the potential definitely exists for a brain to be assembled which will outstrip our own in pure cognitive ability.
Furthermore, to borrow the chaos theme from Jurassic Park, once you've unlocked that kind of flexibility, sooner or later somebody is going to screw up or purposefully design a brain which isn't subject to their expected constraints. (Not taking into account the ethical dilemma of deciding whether a brain which is more "sentient" than you are, should be your slave.)
My personal hope is that before our own creations start their own evolutionary path and leave us in the dust (if they decide we're in the way, kiss your carbon-based ass goodbye...) we come up with the technology necessary to transition our OWN evolution into the new one (so that WE are the seeds for the next evolutionary stage).
Yeah, it sounds WAY too science fiction -- but what other options are there, besides firmly clamping down on the advancement of technology to prevent that kind of cognitive ability from being created? (Somehow, I have a mental image of Frank Herbert's Dune ban on "computing machines.")
Re:Perception and reasoning are already understood (Score:2)
Re:How about this solution (Score:2)
Ever read Minsky? (Score:2)
Consciousness emerges and control becomes semi-autonomous when the complexity of a system becomes otherwise unwieldy and unmanageable.
Re:Computing power of a brain (Score:2)
You know, I used to think that same thing, but lately I'm not so sure. The more I think about it, the more it seems there's a limit to how much can just happen without intent. When you get down to the pre-big-bang singularity, or the combination of conditions required for intelligent life, you start wondering if the laws of physics are really everything. And after that you start wondering where the laws of physics came from...
Is being conscious really that simple, or is it too complicated for us to understand with the available information? If that's the case, then that explains why we try to oversimplify it...
Coincidence (Score:5)
That's funny, that's exactly how my boss thinks work gets done too.
Solves one problem (Score:2)
"astonishment" as a yardstick (Score:2)
So say we consider how comfortable someone from 100 years ago would feel after 5 years in the modern world. Now we ask: if we took someone from 900 CE and dropped them into 1000 CE, would it take more or less than 5 years to come to that same level of acceptance? Admittedly it's really crude, but it does give a qualitative measure of the rate of progress.
But in general, the argument for the singularity (which is the inevitable outcome if progress is in fact exponential) is that, in the computer field, progress is limited by the tools you are using, and those tools are limited because they were the limit of what could be built with last year's tools. An easy analogy: it's much easier to write a really advanced IDE inside Visual Studio than it is to write a really advanced IDE in Notepad in assembly language. So as the tools advance, the next set of tools can become even more advanced, and those tools determine the progress of all other aspects of technology as well.
It's not just compiler writing, of course. Better industrial robots let you build even better industrial robots, and lots of computing power lets you use really computationally intensive techniques to design the next batch of processors.
I personally doubt the singularity hypothesis because I think there are constraints on progress that are invariant. For example, finite energy constraints, finite limits on the speed of communication, resource limits, and most importantly, the limitation of how fast people can adapt.
"Exponential progress" is not just because that's what's observed. It's because any time the value of the next time step is an increasing function of the value of the previous time step, you have an exponential process. Do you think technological progress is dependent on our current level of technology (i.e. is it faster than the middle ages)? If so, technological progress is exponential.
Re:good Tron quote for this (Score:2)
It's always been supposed that we were all going to stop working when robots could take over all our tasks -- but isn't the truth that we just shift to other tasks that computers aren't good at?
Even if we produce robots that are superior at job tasks, guess how much it would cost to manufacture and maintain a robot to act as a janitor or fast-food clerk, compared to a human's minimum wage?
Re:Partition magic warez (Score:2)
A mother raising children is not a "worker" as pertains to the economy. She impacts the economy in a variety of ways, and may be quite important to society as a whole and/or to the quality of the children she raises. But she does not produce goods and, in terms of the economy, is therefore not "productive."
perception (Score:2)
Perception can be achieved when we understand it.
I remember an article about a simulated mouse brain that could recognize words spoken by many different voices. The article pointed to a site where some nutty professor had made a little puzzle to annoy his peers rather than publish a paper explaining his results. It seemed promising.
I can't find that article now, so I may have just dreamed it.