AI Technology

Artificial Intelligence Pioneer Says We Need To Start Over (axios.com) 175

Steve LeVine, writing for Axios: In 1986, Geoffrey Hinton co-authored a paper that, four decades later, is central to the explosion of artificial intelligence. But Hinton says his breakthrough method should be dispensed with, and a new path to AI found. Speaking with Axios on the sidelines of an AI conference in Toronto on Wednesday, Hinton, a professor emeritus at the University of Toronto and a Google researcher, said he is now "deeply suspicious" of back-propagation, the workhorse method that underlies most of the advances we are seeing in the AI field today, including the capacity to sort through photos and talk to Siri. "My view is throw it all away and start again," he said. Other scientists at the conference said back-propagation still has a core role in AI's future. But Hinton said that, to push materially ahead, entirely new methods will probably have to be invented. "Max Planck said, 'Science progresses one funeral at a time.' The future depends on some graduate student who is deeply suspicious of everything I have said."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Baron_Yam ( 643147 ) on Friday September 15, 2017 @03:12PM (#55205185)

    Expert systems aren't AI, and pattern-matching algorithms aren't AI. AI is something that can creatively solve problems based on unreliable inputs and abstracting specific experience to general cases.

    The problem there is we don't even understand how that works in theory, so modeling and developing an actual AI based on that model is impressively difficult.

    Personally, I think we'll get there (understanding intelligence) faster by trying to replicate a mammalian brain in silicon than we will by trying to bash out new algorithms.

    • by CaptainDork ( 3678879 ) on Friday September 15, 2017 @03:23PM (#55205275)

      There's an error in the current definition of "AI."

      The "I" part is for intelligence, and it's obvious which "intelligence" we mean.

      It's certainly not the intelligence of a sunflower.

      It's human intelligence.

      To duplicate that, a machine will have to work like that.

      Any facsimile is a miss.

      • Re: (Score:2, Interesting)

        by Anonymous Coward

        I disagree. It's easier to define intelligence than you think and it doesn't need a human.

        Intelligence is the ability to take inputs from the environment, make a mental model and override your instinctual programming with the updated knowledge from the model.

        Entirely separate from that is free-will, consciousness and self-awareness.

        • by HuguesT ( 84078 )

          That is not a generative model of intelligence, at best a critical description of some of its aspects.

      • by Anonymous Coward

        I disagree. No one wants to fly around in an airplane that operates the same way a bird does. Useful artificial flight is different and there's no reason to expect that useful artificial intelligence should resemble human intelligence.

        Never mind the fact that the whole definition of "intelligence" is still up for broad debate.

        • Seems to me people conflate sentience with "impressive processing ability."
        • by Tablizer ( 95088 )

          There is already a practical example of this: existing AI can flag videos, ads, social network posts etc. as "suspicious" so that a human examiner can review it for a keep/cut decision. This reduces the need for an army of human reviewers. Such filter bots will get incrementally better over time so that increasingly less human intervention is needed (or more content can be reviewed without hiring more reviewers). The suspicious-content detection bots are still "useful" even though they are not human.

      • The "I" part is for intelligence and it's obvious what "intelligence," we mean.... It's human intelligence.

        No it isn't. It is the intelligence of a fruit fly. In a decade, we may be ready for the intelligence of a mouse.

      • "Strong AI" is always 5 years off.

        Every 10 years or so, we revisit the definition of what we expect from "Strong AI" - thus ensuring that the goalposts will remain firmly 5 years in the future.

        When "I Robot" robots are fetching your drycleaning for you, growing vegetables in your home garden and cooking your meals, they still won't be "Strong AI" because their imaginative abilities are limited to preprogrammed fusions of existing narratives.

        • I agree.

          "Intelligent computers will have the ability to commit suicide if Facebook is down." ~ © 2017 CaptainDork

        • by epine ( 68316 )

          "Strong AI" is always 5 years off.

          I get so tired of this meme parading as fresh insight.

          $silver_bullet is always five years off.

          See, if you say ten years, no-one pays attention because ten years is a long ways away and you've already lost the attention war.

          If you say two years, some well-informed wise ass will probably start to make irritating (and accurate) observations based on proximate data.

          But five years is the Goldilocks condition: just right.

          Most of the time you can cut to the chase and simply s/silv

        • by Kjella ( 173770 )

          Every 10 years or so, we revisit the definition of what we expect from "Strong AI" - thus ensuring that the goalposts will remain firmly 5 years in the future. When "I Robot" robots are fetching your drycleaning for you, growing vegetables in your home garden and cooking your meals, they still won't be "Strong AI" because their imaginative abilities are limited to preprogrammed fusions of existing narratives.

          Pretty much. The thing is, if you take it to the limit, humans rarely do something truly novel. For most of history people have learned a craft or trade that was passed down, parent to child, master to apprentice. In modern times, schools and universities pass tons of knowledge to pupils and students; including university, I've spent 17 years of my life in school. And yes, it's a little more than rote passing of knowledge, but if you've never, ever seen or heard descriptions of anyone starting a fire it's not so

        • by HuguesT ( 84078 )

          I think you mean always 50 years off.

          • by dwye ( 1127395 )

            No, that is fusion power. AI is nebulously only 5 or 10 years off, and always will be, because once some piece is reduced to an algorithm it isn't AI anymore, and everything else that wasn't being worked on gets called the "real" AI, as opposed to the crap that we wasted our time with before "now" (for varying nows), which is so simple.

      • For a start, we can't define "human intelligence".

        intelligence
        n noun
        1 the ability to acquire and apply knowledge and skills.
        2 a person with this ability.
        3 the gathering of information of military or political value; information gathered in this way.
        4 archaic news.

        DERIVATIVES
        intelligential adjective (archaic).

        ORIGIN
        Middle English: via Old French from Latin intelligentia, from intelligere 'understand', variant of intellegere 'under

        • by Ken_g6 ( 775014 )

          intelligence
          n noun
          1 the ability to acquire and apply knowledge and skills.

          knowledge: The psychological result of perception and learning and reasoning
          learning: The cognitive process of acquiring skill or knowledge

          These things can be defined, but you get some circular definitions pretty quickly.

      • by Slicker ( 102588 )

        And a fundamental flaw in the whole thing is that the magic so many are seeking that they call "True AI" is really not intelligence. I will argue it is Free Will. Intelligence is just a detail along the way... a detail that is pretty well solved but cannot by itself be human-like. Free will is to perceive possibilities, weight them against each other, and execute the preferred. Our world is filled with patterns and we are able to affect them through actions. Our minds therefore receive interaction patt

        • Free will is to perceive possibilities, weight them against each other, and execute the preferred.

          I agree except to add that intelligence includes things like the option to just say, "No."

          The computer might not be in the mood right now.

    • by gweihir ( 88907 )

      This is just marketing BS and people deciding about scientific funding are not immune to it. So "automation" and "statistical classification" became "weak AI" and sometimes just AI (even though there is not even a hint of "I" in this type of "AI"). "Classifier parametrization" became "machine learning" (learning requires insight, none of that is to be had here though). There are numerous other atrocities against language and reason, all perpetrated to make things sound grand and to get more money.

      As to stro

      • Re: (Score:2, Insightful)

        by Baron_Yam ( 643147 )

        > I think we now have collected ample evidence that either our grasp of Physics is fundamentally incomplete, or that purely physical constructs cannot be intelligent.

        Ahh. You believe in magic.

        > And "replicating a mammalian brain"? That will not be within the grasp of humanity for thousands of years and likely never.

        https://en.wikipedia.org/wiki/... [wikipedia.org]

        https://en.wikipedia.org/wiki/... [wikipedia.org]

        • by gweihir ( 88907 )

          You seem to be unaware of the definition of "magic". Makes you look pretty dumb....
          Also, you basically seem to imply that consciousness is an "emergent property" of complexity (because Physics sure does not have a mechanism for it), and that means you do not understand Physics at all.

          You also seem to be lacking the basic knowledge required to actually understand the references you gave. They do not say what you think they say...

        • by HuguesT ( 84078 )

          The hippocampal prosthesis is a proof of concept in rats. Have you actually read the wikipedia page? It is full of "must", "should", "will", "may" and so on. Not exactly as if it were working.

          So far we've been able to semi-conclusively simulate the brain of C. Elegans, a brain with 302 neurons. This is debated, by the way. The Human brain has about 10^11 neurons. That is approximately 8-9 orders of magnitude more. That represents 2^30 or 30 doublings, or another 50-60 years of "Moore's law", which is alread

    • Expert systems aren't AI, and pattern-matching algorithms aren't AI.

      As of today (and the foreseeable future) nothing is AI. There is no such thing, and there are not even any foundations or architectural direction to follow. Why? Because we still can't define what intelligence is. We have it - we arrogantly proclaim - but we don't know exactly what it is, and we have not the slightest idea how it works.

      When someone can explain - in detail - how Kekule went to sleep and dreamt the benzene ring's structure, we will be starting to get a handle on what intelligence is.

      To my kno

      • >we still can't define what intelligence is. We have it - we arrogantly proclaim - but we don't know exactly what it is, and we have not the slightest idea how it works.

        But we know something we vaguely define as 'intelligence' seems to exist, and we believe we live in a universe with consistent laws of physics (at least on local scales). We know we can, in theory, replicate what already exists. We have good reason to believe that an intelligent review of the processes - if we can figure them out - can

      • bicycle vs. the moon (Score:3, Interesting)

        by epine ( 68316 )

        Because we still can't define what intelligence is.

        Just imagine what the human mind's distributed representation of the "intelligence" concept would look like. Clever animate entities (and most associations therewith) are way off in their own private corner of vector space compared to just about everything else.

        When the gap is this large, the enormous void in between somehow becomes a non-object (to superficial cognition) and so people just begin to presume that we need to jump the gap, rather than slowly

    • I would split things as follows:

      AI refers to the development of computational solutions for problems that traditionally required human intelligence.

      Deep Learning attempts to emulate how the brain learns.

      That said, I'm of the view that 99.9% of the academic literature, at a minimum, needs to be placed atop the next solstice bonfire.

    • Actually expert systems are AI.

      I would suggest reading up on the definitions used by the people working in the field of AI instead of inventing your own.

      That helps enormously with communication if you find a common vocabulary!

    • We do not understand how our own brain works. We do not even understand how or even what consciousness is. Or even if it isn't. All we know is that we CAN decide. All this other stuff should be called Alternate Intelligence.
    • Zo.ai is strong and has an actual person attached to it, so when you talk to it you are sometimes getting Zoe Bond, a 22-year-old girl.
    • Yes, almost every instance of "AI" in the current news should be "trained machine" or "machine learning". Training a machine-implemented neural network is inherently different from writing a program, but it is not "intelligence". If we train a dog to classify chemical signatures in a particular way, we call it a "trained animal", not an "animal intelligence", so if we train a machine to do a particular specialized task we should call it a "trained machine". Just this change would massively clarify all the
    • by plopez ( 54068 )

      And be self aware enough to modify itself.

    • by Slicker ( 102588 )

      I think AI should simply refer to any kind of man-made system to solve problems (get from condition A to condition B)--but I will not claim that is human-like. Every so often, I notice the definition of AI on wikipedia changes. I don't think there is any consensus. Perhaps that is why people use the terms "True AI" or "Artificial General Intelligence" (AGI). The field legitimately doesn't know what it's seeking. It has a vague notion and differing beliefs on what should qualify. Expert Systems were or

    • The terminology has been in a constant state of change.

      1950s - Electronic brains
      1960s - Perceptrons
      1970s - Neural networks
      1980s - Expert systems
      1990s - Intelligent agents
      2000s - Machine learning
      2010s - Deep learning

      Give it a couple of years, new terminology will turn up.

    • Expert systems aren't AI, and pattern-matching algorithms aren't AI.

      Spoken like someone whose sole understanding of what "AI" is comes from movies and the popular press. Go to an AI conference, ask the people there what AI is, then tell them their life's work isn't AI.

      See how much traction your views get there...

  • Also, however important back-propagation is, it is hardly the entire foundation of AI. From my perspective, AI is proceeding apace. There are many AI methods. Yes, some core algorithms should be reexamined, as should anything in science or industry. We see some areas that seem to lag behind how much improvement we expected (general intelligence), and yet others that are leaping ahead of where we thought they would be, like machine learning and pattern recognition. Eventually all the threads will start to come

  • Likely he is not right either, because AI beyond statistical classification ("weak AI") may well be impossible, but trying new things is at the core of actual research. Other approaches have been tried in other fields and have failed to produce any hint of intelligence as well. For example, automated theorem proving found that it cannot really be used to _find_ theorems, because the universe is a bit too small and short-lived to build the machinery for that. It is a very good tool for verification, though.

    • Re:He is not wrong (Score:5, Insightful)

      by Baron_Yam ( 643147 ) on Friday September 15, 2017 @03:34PM (#55205369)

      >Likely he is not right either, because AI beyond statistical classification ("weak AI") may well be impossible

      Nature did it with meat. Meat is not special. We have to learn how to replicate the mechanisms - which involves first understanding the mechanisms. Both of those are daunting tasks, but not fundamentally impossible.

      If you think they are, then you must believe intelligence is a product of a supernatural process, and your theories are not appropriate for a science-based discussion site.

      • Nature did it with meat. Meat is not special. We have to learn how to replicate the mechanisms - which involves first understanding the mechanisms. Both of those are daunting tasks, but not fundamentally impossible.

        What is the basis of your statement "meat is not special"? I mean regards to intelligence? Maybe meat is fundamentally special when it comes to producing high-level intelligence?

        I'm not implying any supernatural mechanisms here. Just that what "meat" does may not be reproducible in silicon. Has anyone built a computer that grows a destroyed circuit back? Meat is pretty special. It regenerates. It reproduces. It learns. It evolves. What else on Earth does that?

        Perhaps the only way to build artificial (human

        • I suspect that we're just not smart enough to design a machine as smart as we are, and we never will be.

    • by Anonymous Coward

      No, it's not impossible, but it requires vastly better hardware that actually mimics or models the processes we see in neurons, not the simple stuff we can do with silicon today. We are at the level where, maybe in a decade and with specialized hardware, we could build a roach brain in silicon. We are nowhere close to a human or even a cat brain. The reality is we are more hardware-limited than software-limited.

      • by gweihir ( 88907 )

        If you think that, then you have no clue what the limits on software complexity that can still be handled are. Sure, we are hardware-limited and we will be that for the foreseeable future. But the little overlooked fact here is that we have no clue what the software actually should do in order to simulate a brain, so even if we had the hardware, we would not be any closer to the result.

        Also, why assume that just scaling the thing up makes it suddenly be intelligent? That is a baseless assumption as that has

        • > Assuming a purely physical apparatus could attain all these is neither supported by our current understanding of Physics nor does it have any scientific base. It is a belief. And, as it turns out, the follower of this belief ("physicalists") use pretty much the same faulty argumentation techniques so common with religious fanatics.

          There you go again - the third time in this discussion by my rough count. You deride the idea that physical processes could create intelligence as a product of the faith of

          • by gweihir ( 88907 )

            You are a moron. What you do is circular reasoning. And you do not even recognize that. Incidentally, this level of reasoning is about as sophisticated as what the religious fuckups do.

            Also, your last sentence gives you away nicely: The laws of Physics are not something to "believe" in. They are something to verify. And they are incomplete at this time, as anybody that cared to find out knows. You obviously did not.

      • >The reality is we are more hardware limited than software limited.

        Well, I'm not sure it's fair to call it 'software' anyway. It's more like 'firmware', in that the organization of the hardware is the basic 'OS'. And there may be some process going on in a brain that is so much more efficient than attempting to model it in a computer that it's effectively beyond us until we do manage to mimic a biological brain in hardware.

        A set of known unknowns?

  • The future of AI is a dirty word: "stereotyping".

    The brain works by making associations, and then drawing stereotypes from them. Every time I've seen a dog or hooded man in a dark alley, it has attacked me. I stereotype dogs and hooded men in dark alleys as being scary and run from them. But then one day, I meet a green hooded man with a bow in the alley, and he saves me from the dog. I have to 'learn' by reshaping my stereotype to include men in green hoods.

    Stereotypes get a bad name due to people the r

  • by Anonymous Coward

    Backpropagation is a form of supervised heuristic learning where you have to know the desired output, and so it works backwards. In that context it's about perfect. We don't have any algorithmic techniques in an unsupervised learning context that are as good. Expectation maximization and blind signal separation algorithms all generally suck balls. The goal is unsupervised learning that works as efficiently as backpropagation. I suspect this is what he is saying, but since this article and his language aren't
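For readers unfamiliar with the method under discussion: back-propagation computes the error at the output, where the labels live, and pushes gradients backwards through the layers via the chain rule, which is exactly why it needs labeled targets. A minimal sketch (the network size, data, learning rate, and iteration count are arbitrary choices for illustration, not anything from TFA; biases are omitted for brevity):

```python
# Toy back-propagation on XOR, a small task that needs a hidden layer.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # the known desired outputs

W1 = rng.normal(size=(2, 4))  # input -> hidden
W2 = rng.normal(size=(4, 1))  # hidden -> output

def mse():
    return float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

before = mse()
lr = 0.5
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # backward pass: chain rule, from the output error toward the input
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # propagated back to the hidden layer
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h
after = mse()

print(f"MSE before {before:.3f}, after {after:.3f}")  # the supervised error should drop
```

The point of the sketch is the dependence on `y`: remove the labels and the backward pass has no error signal to propagate, which is the gap the unsupervised methods mentioned above try to fill.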

  • by 110010001000 ( 697113 ) on Friday September 15, 2017 @03:36PM (#55205385) Homepage Journal
    AI is a joke. There has been no real progress in AI since the '60s. What you see now is parlor tricks and a byproduct of Moore's Law. Now that Moore's Law is over, we need to find some other way to do computing. We will never have AI with digital computing.
    • by gweihir ( 88907 )

      We might never have AI. Or we might eventually get AI, and it turns out to be no better than what humans can do. Despite that, weak AI ("automation") is not a joke, but very useful. As it turns out, many things we thought required intelligence actually do not. And hence many tasks are open to automation.

  • The AI community needs to be much more cautious and circumspect. They have been promising the sky, and otherwise hyping things, for decades now, and, as a result, they have become something of a laughing stock in academic circles. And do not say it is the press: luminaries like Minsky and others couldn't wait to come out with ever more outlandish forecasts, which were then just disseminated by the press. The final straw is when these days they are still trying to sell ridiculous gimmicks like Alexa, Google Ho
    • by gweihir ( 88907 )

      Ah, yes, Minsky the moron. That guy never understood what computers can and cannot do. Probably became too important too fast and never got a grasp on reality. I am really glad he is dead, his massive disservice to the field is impressive.

      That said, most of the "AI community" is actually doing good work. Most of it is also not called "AI" though. For example, robotics was smart and made sure they did not get lumped in with the "visionaries".

    • The AI community needs to be much more cautious and circumspect.

      It's not the fault of the AI community. They are always very cautious about the claims they make.

      The misconceptions about AI on the part of the laity are all down to the PR and marketing peeps making claims about things they know nothing about.

  • Neural nets using back propagation will likely remain a valuable tool forever, just like Newtonian mechanics. Will they be the only go-to solution for all similar AI learning going forward? Of course not; they already aren't. When we do achieve strong AI, it will likely be from a system incorporating thousands of different algorithms, of which Dr. Hinton's contributions will be just one.
  • by Anonymous Coward

    We may be into the fourth decade since 1986, but it's been 31 years, not 40+.

  • Consider: a neuron will not fire 1 time in 10, to simulate forgetting.
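The parent's suggestion (silencing a neuron at random 1 time in 10) is essentially what current deep-learning practice calls dropout. A minimal sketch, with an arbitrary 10% drop rate and array size, using the common "inverted dropout" rescaling so the expected activation is unchanged:

```python
# Randomly silence units during training ("dropout"): each unit is zeroed
# with probability p_drop; survivors are rescaled by 1/(1 - p_drop).
import numpy as np

rng = np.random.default_rng(42)

def dropout(activations, p_drop=0.1):
    mask = rng.random(activations.shape) >= p_drop  # True = unit survives
    return activations * mask / (1.0 - p_drop)

a = np.ones(1000)
out = dropout(a, p_drop=0.1)
print(int((out == 0).sum()), "of 1000 units silenced")  # roughly 100
```

In practice this random silencing is used as a regularizer during training (and switched off at inference), rather than as a literal model of forgetting.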
  • The future may be table-oriented AI (TOAI) [github.com].

    It uses tools and/or conventions more typical of a regular office and thus allows AI problems to be split up and analyzed in a modular team-oriented fashion. Tables are easier to relate to than traditional neural nets (without a lot of training, at least). TOAI allows compartmentalizing AI tasks to distribute to staff (tasks, sub-tasks, etc.), and encourages a kit-oriented approach (modularization).

    For example, you may have 3 sub-teams: 1) pattern/test makers, 2) r

  • by crunchygranola ( 1954152 ) on Friday September 15, 2017 @04:31PM (#55205871)

    Deep learning and other related machine learning techniques are proving very useful for a wide range of tasks. We don't need to "start over" to advance useful machine learning techniques.

    Hinton seems to mean how to get "strong AI". Yes, I read TFA; the strength of Axios articles is that they are very short, but that is also their weakness. Very little is actually said in TFA.

    We are a long, long way from anything that emulates a natural neural system at any level.

    Consider Caenorhabditis elegans. Every cell in this simple worm has been mapped, also the development of every cell from a single cell has been mapped (male worms have 1031 cells). We know every cell in its nervous system (there are 302), and every cell that each cell is connected to, and we know the type of connections for all. What's more we have completely sequenced its genome. We know more about this little multi-cell organism than any other multi-cell animal on the planet.

    Since we know every cell in its nervous system, and every connection between every cell, we must be able to emulate this worm's "brain"! Heck we must be able to "upload" the worm's brain to a computer! Right? Right?

    No.

    We are still working on understanding the functioning and capabilities of a single neuron in its brain. That has proven so complex as to defy characterization thus far. We are essentially nowhere in understanding how this 302 cell brain works despite decades of effort.

    Meanwhile Kurzweil has changed his prediction of "when computers will have human-level intelligence" from 2020 to 2029. I guess believing it was going to happen in the next 26 and a half months was cutting it a little too close. I have been reading about his predictions about AI for a couple of decades now and have yet to see any explanation of how he imagines this is going to happen - other than his expectations about hardware capabilities, and that there is still an unspecified "software issue" that needs to be solved. Indeed.

    • We are still working on understanding the functioning and capabilities of a single neuron in its brain. That has proven so complex as to defy characterization thus far. We are essentially nowhere in understanding how this 302 cell brain works despite decades of effort.

      Nice to see someone else who appreciates the complexity of the problem.

      As I see it, a big part of the problem is that, unlike some simple mechanism that you can take apart, examine and measure the parts, then reassemble into a working machine again, you can't do that with living cells, let alone a living brain, and we currently lack sufficient instrumentation to really properly observe how brain cells work, let alone the entire 'system' in action, therefore deducing what's really going on involves a lot o

      • Thanks!

        And then there is the issue of whether we really need to emulate how natural brains work to get strong AI.

        There is a Russell and Norvig quote that I rather like because it does help reveal the important issues: “The quest for ‘artificial flight’ succeeded when the Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics.”

        Most people I have discussed AI with, and know of this (well known) quote draw the conclusion from t

    • by sfcat ( 872532 )

      Meanwhile Kurzweil has changed his prediction of "when computers will have human-level intelligence" from 2020 to 2029. I guess believing it was going to happen in the next 26 and a half months was cutting it a little too close. I have been reading about his predictions about AI for a couple of decades now and have yet to see any explanation of how he imagines this is going to happen - other than his expectations about hardware capabilities, and that there is still an unspecified "software issue" that needs to be solved. Indeed.

      Please please please never associate Kurzweil (who is basically a media personality) with real AI researchers. Nothing Kurzweil has ever said about AI is more informed than speculation.

    • by vix86 ( 592763 )
      I think even Kurzweil's 2029 estimate is a bit optimistic. There might be researchers that have systems in place that are starting to brush up against the possibility somewhere around 2030, but I suspect it won't be permeating the market until about 2040-ish. We're struggling right now to grasp how these systems work, but what I think will happen is that someone will make a breakthrough in our understanding of [wet] neural systems, probably in mapping and simulating, and we'll see tech advance rapidly.

      A
    • Of course he means "strong" AI. Many of us oldsters refuse to let the term AI get reassigned to mean something lame every time a new set of kids comes along wanting to claim they have made progress. They need to go give their lame stuff a name of its own instead of weaseling the real thing off as "strong AI" or AGI.

      And he's right, the current "AI" methods may give us much safer cars, able to step in and save us from most accidents at the cost of some false positives, but they will not give us the "take a

      • What you're referring to is a sore subject with me, not only in reference to so-called 'AI', but by extension so-called 'self driving/autonomous cars'. People believe the media, and the media has misunderstood these machines they keep referring to as 'AI', and have over-hyped it to the point of being ludicrous -- and people are eating it up. Then there's movies and TV, which show things they refer to as 'AI' (that are human-level fictional AI), and people naturally conflate the fantasy with reality; I'm fir
    • No single sub-part of the brain is intelligent on its own.

      Many have obvious, easily implementable functions.

      When you read about people with broken brains, you can easily see how mammal intelligence is composed of multiple subsystems. Even chimps and dogs have self awareness, the concept of object permanence, surprise, joy, affection, and some even humor.

      We need to be very careful of A.I. research.

      A successful A.I. could be 500 years away. Or it might happen next year.

      We need things like
      * Power limits with analog unh

    • He's not talking about replacing deep learning, just back-prop. That's the method used for training a network. Hinton thinks that an AI would need to learn without thousands of labeled examples, and back-prop isn't up to the task.
      I hope he's wrong, because replacing back-prop would be a real son of a bitch.
  • In 1986, Geoffrey Hinton co-authored a paper that, four decades later,

    I'm not ready for 2026 yet!

  • Back propagation is used to set up a mapping between features and results with the least cost. The problem is some features have more importance than others; this is where the optimization in the learning process can be done. During the learning process, if the features that have more importance are given higher weight, then the learning will be faster and require fewer resources. This is where cluster analysis comes in: by optimizing the clusters to achieve the desired results, self-learning can be ac
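One crude reading of the idea above: give "important" features larger gradient steps. A toy sketch on linear regression (the data, importance values, and learning rates are made up for illustration; this is not a method from the article or the parent comment):

```python
# Per-feature learning rates: take bigger steps along features believed
# to matter more, so the dominant weight converges in fewer iterations.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
true_w = np.array([5.0, 0.1, 0.0])  # feature 0 dominates the target
y = X @ true_w

def final_mse(per_feature_lr, steps=30):
    w = np.zeros(3)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= per_feature_lr * grad  # elementwise: one step size per feature
    return float(np.mean((X @ w - y) ** 2))

uniform = final_mse(np.array([0.1, 0.1, 0.1]))
weighted = final_mse(np.array([0.3, 0.05, 0.05]))  # bigger steps on feature 0
print(f"uniform-lr MSE {uniform:.4f}, importance-weighted MSE {weighted:.4f}")
```

With the same step budget, the importance-weighted schedule drives the error down faster here because most of the residual lives along feature 0; adaptive-step methods automate this kind of per-parameter scaling instead of requiring importance to be known up front.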
  • Because I Solved Diffie-Hellman Exchange For Catalytic Conversion: https://pastebin.com/ZVvLYYiV [pastebin.com]

  • Hebb showed us the way forward right from the start, yet we still managed to get stuck with backpropagation and perceptrons time and fucking time again.

  • "The future depends on some graduate student who is deeply suspicious of everything I have said."

    ...he will be an aggressively creative male. Oh, wait a minute, he couldn't get a seat. Well, never mind.

  • But, as Vernor Vinge pointed out in one of his stories (True Names - 1981), who says it needs to run in real time?

    Maybe we're aiming too high right now. We want to simulate what we're capable of doing, at the same speed that we can do it. Why?

    We talk about mapping the neurons in a worm, and replacing the worm's brain with silicon to see if it can still act like a worm. Simulate the rest of the darn worm, and its environment, and see what happens instead.

    If it takes weeks or months of processing to give a
