AI Technology

Strong AI and the Imminent Revolution In Robotics 242

Posted by Soulskill
from the i-don't-think-so-tim dept.
An anonymous reader writes "Google director of research Peter Norvig and AI pioneer Judea Pearl give their views on the prospects of developing a strong AI and how progress in the field is about to usher in a new age of household robotics to rival the explosion of home computing in the 1980s. Norvig says, 'In terms of robotics we’re probably where the world of PCs was in the early 1970s, where you could buy a PC kit and if you were an enthusiast you could have a lot of fun with that. But it wasn’t a worthwhile investment for the average person. There wasn’t enough you could do that was useful. Within a decade that changed: your grandmother needed word processing or email, and we rapidly went from a very small number of hobbyists to pervasive technology throughout society in one or two decades. I expect a similar sort of timescale for robotic technology to take off, starting roughly now.' Pearl thinks that once breakthroughs are made in handling uncertainty, AIs will quickly gain 'a far greater understanding of context, for instance providing the next generation of virtual assistants with the ability to recognise speech in noisy environments and to understand how the position of a phrase in a sentence can change its meaning.'"
This discussion has been archived. No new comments can be posted.

Strong AI and the Imminent Revolution In Robotics

Comments Filter:
  • by Mr2cents (323101) on Saturday June 23, 2012 @05:21AM (#40419661)

    Does anyone want any toast?

  • by rmstar (114746) on Saturday June 23, 2012 @05:22AM (#40419663)

    Pearl thinks that once breakthroughs are made in handling uncertainty, AIs will quickly gain 'a far greater understanding of context, for instance providing the next generation of virtual assistants with the ability to recognise speech in noisy environments and to understand how the position of a phrase in a sentence can change its meaning.

    Oh, of course. But pretending that these "breakthroughs in handling uncertainty" are just a minor stumbling block is somewhat silly. These are some of the hardest problems in maths right now, and there are no easy solutions on the horizon.

    • Re: (Score:3, Insightful)

      by alphatel (1450715) *

      Oh, of course. But pretending that these "breakthroughs in handling uncertainty" are just a minor stumbling block is somewhat silly. These are some of the hardest problems in maths right now, and there are no easy solutions on the horizon.

      Not to mention that robotics has many other problems to solve, like sensing pressure, navigating obstacles, and making sense of the visual landscape. All of these things combined are not going to happen in ten years.

      • by NEDHead (1651195)

        Sensing pressure is simple: I just count the seconds between my wife yelling at me. Pressure is inverse to the count.

      • by rasmusbr (2186518) on Saturday June 23, 2012 @10:07AM (#40420617)

        Engineers and mathematicians have developed partial solutions to the sensing and data extraction problems over the last 10-15 years, so things look good in terms of the rate of development. It doesn't mean that there will be a robot that can perform task X by 2022, but it does mean that the robots of 2022 will be able to perform a number of tasks that today's robots aren't able to perform.

        My gut feeling is that by 2022 there will be experimental robots that will do about half of all household work poorly, but they will be the price of a luxury car and they will cause more trouble than they solve. I'm more optimistic about guide robots as a gimmick to impress and entertain visitors in places like museums, theme parks and corporate headquarters. All they have to do is navigate without crashing into anything (a largely solved problem), say scripted things at certain moments (a completely solved problem) and respond with facts to verbal questions (another largely solved problem).

    • by dhart (1261) * <dhartNO@SPAMsftower.com> on Saturday June 23, 2012 @05:49AM (#40419757)
      Indeed there are no easy solutions, but there's plenty of mathematical work going on to better handle uncertainty. For example, OpenCog's Probabilistic Logic Networks. From http://wiki.opencog.org/w/Probabilistic_Logic_Networks [opencog.org] "PLN is a novel conceptual, mathematical and computational approach to uncertain inference. In order to carry out effective reasoning in real-world circumstances, AI software must robustly handle uncertainty."
      • I thought we already knew how to handle uncertainty [wikipedia.org]. The difficulty is that every real-world problem needs to be modeled differently, and inventing new models doesn't scale. Hence the slow progress of Science, and the incidental difficulty of inventing algorithms that think up new algorithms for us.
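(A toy illustration of that point, with invented numbers: Bayes' rule itself is one line of code; the part that "doesn't scale" is supplying the likelihood model, which has to be redone for every new real-world problem.)

```python
# Bayes' rule is the easy part. The model-specific work is choosing the
# likelihoods passed in below -- that is what must be reinvented per problem.
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# A sensor that is right 90% of the time, watching for a rare (1%) event:
posterior = bayes_update(0.01, 0.9, 0.1)  # ~0.083 -- still probably a false alarm
```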
    • by aaaaaaargh! (1150173) on Saturday June 23, 2012 @06:17AM (#40419869)

      Mod parent up.

      I'm working in that field and know Pearl's work very well. The problem with uncertainty in current frameworks is computational complexity. Probability theory, possibility measures, ranking theory, plausibility measures, Dempster-Shafer and all these slight variations on the same theme are altogether computationally intractable. Strongly heuristic shortcuts based on implausible assumptions are used (like stipulating independence between random variables for purely technical reasons), and much better ones need to be developed. Human cognition takes amazing shortcuts; AI methods are much too combinatorial in contrast.
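(To make that "purely technical" independence shortcut concrete, with all numbers invented: a full joint distribution over n binary variables needs 2^n - 1 parameters, while assuming the features independent given the class, as naive Bayes does, needs only a couple per variable.)

```python
from functools import reduce

# Naive Bayes sketch: stipulating conditional independence turns an
# exponentially large joint distribution into a product of per-feature
# terms. Tractable -- but the independence assumption is usually false.
def naive_bayes_posterior(prior, likelihoods):
    """likelihoods: list of (P(feature|class), P(feature|not class)) pairs."""
    p_yes = prior * reduce(lambda a, b: a * b, (l[0] for l in likelihoods), 1.0)
    p_no = (1 - prior) * reduce(lambda a, b: a * b, (l[1] for l in likelihoods), 1.0)
    return p_yes / (p_yes + p_no)

# Three observed features, each individually favouring the class:
post = naive_bayes_posterior(0.5, [(0.8, 0.3), (0.7, 0.4), (0.9, 0.2)])  # ~0.95
```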

      Moreover, the problem of knowledge representation is still not solved adequately. Yes, there are a few large ontologies like Cyc, but they do not suffice. Basically, a lot of tools are there, but they are disconnected and there is no unifying framework or representation at all. To give you an example from NLP, the kind of tools used by computer scientists (e.g. description logic, event calculus) are practically worthless for doing real-world semantics, and of course logic has the same combinatorial complexity issues.

      Breakthroughs will come from combining symbolic AI with connectionist and geometric representations, but only a few people work on that (e.g. Smolensky); the math is complicated and not what your average AI/CS guy or computational linguist can handle.

      I think what Norvig should have said is that robots with convincible, but ultimately non-intelligent soft AI will enter the consumer market within the next few decades - which is true, but something else entirely.

      • I think what Norvig should have said is that robots with convincible, but ultimately non-intelligent soft AI will enter the consumer market within the next few decades - which is true, but something else entirely.

        I'm not entirely sure what you mean there. If it's 'convincible,' doesn't that indicate a certain threshold of intelligence? Or are you suggesting a technicality I'm missing?

        I'm not terribly convincible prior to my morning coffee, I know that much.

      • So in short "AI good enough for robots".

        • by nospam007 (722110) *

          'So in short "AI good enough for robots".'

          Wake me up when it's good enough to prevent MyCleanPC Spam from actually being posted.

      • by epine (68316)

        Usually a sign of someone working in a field is the lack of binary spectrum disorder, so I'm surprised by your comment. Amateurs find it convenient to think that algorithmic cognition comes in only two flavours, like coffee in a grimy truckstop: weak and strong. Now if we could only upgrade that to a nice filet at your favorite neighborhood steak house we'd be getting somewhere: blue, rare, medium rare, medium, well done.

        The era we're moving into is medium rare. I completely agree with Norvig/Pearl. And

      • Moreover, the problem of knowledge representation is still not solved adequately.

        This is the #1 problem in (hard) AI.

        I think it is somewhat difficult to think about the 'algorithm' the brain uses if you don't know how the data is stored. An analogous comparison: it doesn't make sense to use a hash-table lookup if your data isn't stored in a hash table, or a tree search without a tree. But once you know how the data is stored in either of those cases, the algorithms become obvious.

        I suspect the same is true for AI, once we know how the data is stored, the algorithms will become relatively obvious.
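(The hash-table analogy can be made concrete; a toy sketch of how the very same membership query dictates a different algorithm depending purely on the representation:)

```python
import bisect

data = list(range(0, 1_000_000, 2))   # even numbers
as_set = set(data)                    # hash table: O(1) membership test
as_sorted = data                      # sorted list: O(log n) via binary search

def in_sorted(xs, x):
    # The representation (sortedness) is what makes this algorithm valid.
    i = bisect.bisect_left(xs, x)
    return i < len(xs) and xs[i] == x

assert (123456 in as_set) and in_sorted(as_sorted, 123456)
assert (123457 not in as_set) and not in_sorted(as_sorted, 123457)
```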

      • by wanzeo (1800058)

        That was a very informative post. I am interested in AI and I took Norvig's online AI class back in the Fall. It contained virtually nothing about strong AI, and instead focused rather heavily on algorithms to efficiently interpret sensory/input data. It essentially placed AI squarely into a CS context, which in my opinion will always yield weak AI projects.

        I checked out Smolensky, and he is pretty prolific. Are there any specific resources you would recommend for learning about the more abstract math invol

      • Moreover, the problem of knowledge representation is still not solved adequately.

        I think that's more to the point. The first step for AI is a 3-D model of the world accurately parsed into objects. Then you have to be able to automatically model the behavior of the objects.

        Connect enough sensors, enough actuators, and enough computing power to unsupervised algorithms like Hinton's Deep Learning, and you'll start to see interesting things happen. Build in some of the biological low level algorithms we've already deciphered, and things will happen faster.

        I don't think probability and uncer

    • When unsure, ask. What we don't want is an AI that shoots first.
  • by Schmorgluck (1293264) on Saturday June 23, 2012 @05:22AM (#40419665)
    I, for one, welcome our new robotic overlords.
  • by meglon (1001833)
    ... before the machines decide that humanity is a cancer on this planet, and a threat to everything.... including the machines.
    • Re: (Score:3, Insightful)

      by NettiWelho (1147351)
      Personally, I'm more concerned about whether we get space communism or resource concentration in the hands of the 0.01% after 99.9% of the workforce gets laid off due to machines doing everything better for cheaper.
      • by c0lo (1497653) on Saturday June 23, 2012 @06:09AM (#40419839)

        Personally, I'm more concerned about whether we get space communism or resource concentration in the hands of the 0.01% after 99.9% of the workforce gets laid off due to machines doing everything better for cheaper.

        With nobody buying (being sacked, can't afford), what's the point of producing? Everything would be relatively too expensive no matter how absolutely cheap.

        • by TheRaven64 (641858) on Saturday June 23, 2012 @06:20AM (#40419877) Journal
          This is a problem that has hit a number of slave-owning societies and is currently a problem for China. An imbalance between production and consumption is unsustainable, irrespective of the direction. It was also one of the causes of the US civil war: the south was production-heavy, which was making it hard for workers in the north to compete with cheap imports, which the south needed to keep supplying because they didn't have a large enough local consumer base.
          • by Anonymous Coward on Saturday June 23, 2012 @09:57AM (#40420585)

            ...What.

            You think the South in the 1850s was some hive of industry selling all manner of goods to the North?

            The South was agrarian (cotton and tobacco and stuff - you know, plantations, as in every depiction of the Old South ever, not factories) which it mostly sold abroad, not to the North. The North was where all the factory production was, which is why it was able to outproduce the South in things like artillery and rifles when the war started. Seriously, at least open a history textbook before randomly making up stuff like this.

            • by wanax (46819)

              I think you're misunderstanding the GP's use of "production heavy" -- the GP meant that the south produced a lot more than it consumed (due to slavery artificially depressing southern wages), and so had to export to sustain its economy. Not only that, but due to the vast preponderance of the production being labor-intensive and inefficient agriculture (skilled slaves tended to 'smuggle' themselves north), the south had to import large quantities of capital goods. So they were in favor of a low tariff, and

        • That's what's known as a post-scarcity society, and it will probably end up looking like Western Europe on steroids: a decent basic standard of living for everyone, plenty of educational opportunities, but if you want the good toys you need to excel. What will mostly change is the definition of "good toys"; in a fully post-scarcity society, limited only by the physical size of the earth, obviously not everyone can have a cruise liner of their own. A high end luxury car and plenty of living space, sure,

          • What will mostly change will be the definition of "good toys",

            The good toys will be what they've always been - other people.

        • by timeOday (582209)

          With nobody buying (being sacked, can't afford), what's the point of producing? Everything would be relatively too expensive no matter how absolutely cheap.

          Yup. But just because it's an obvious problem doesn't mean market forces won't cause it to happen.

          Ponder the mystery of how mass unemployment is possible in the first place. If a bunch of people are unemployed - that is, both needy and idle - why don't they start exchanging goods and services? A financial shock metastasized into the Great Depressi

          • Ponder the mystery of how mass unemployment is possible in the first place. If a bunch of people are unemployed - that is, both needy and idle - why don't they start exchanging goods and services?

            Largely because of government barriers to entry in a field, or increasing the costs of entry, coupled with perverse incentives created by the structure of government assistance.

            If you're unemployed and on the dole, you can easily face greater than 100% effective marginal tax rates, as the government takes assistance away faster than you earn money.

        • by toygeek (473120)

          With nobody buying (being sacked, can't afford), what's the point of producing? Everything would be relatively too expensive no matter how absolutely cheap.

          You mean like how people on food stamps bought $50 droid tablets for Christmas last year? for everyone in their family?

    • Why would the machines care about the existence of machines?

  • by darhand (724765) on Saturday June 23, 2012 @05:43AM (#40419711)
    I think not... It's not even mentioned in the article. See http://en.wikipedia.org/wiki/Frame_problem [wikipedia.org], or an illustration: "The philosopher Daniel Dennett asks us to imagine a robot designed to fetch a spare battery from a room that also contained a time bomb. Version 1 saw that the battery was on a wagon and that if it pulled the wagon out of the room, the battery would come with it. Unfortunately, the bomb was also on the wagon, and the robot failed to deduce that pulling the wagon out brought the bomb out, too. Version 2 was programmed to consider all the side effects of its actions. It had just finished computing that pulling the wagon would not change the color of the room's walls and was proving that the wheels would turn more revolutions than there are wheels on the wagon, when the bomb went off. Version 3 was programmed to distinguish between relevant implications and irrelevant ones. It sat there cranking out millions of implications and putting all the relevant ones on a list of facts to consider and all the irrelevant ones on a list of facts to ignore, as the bomb ticked away."
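(Dennett's "Version 2" can be put in rough numbers -- figures invented purely for illustration: even checking only pairwise side effects, the count of implications to verify explodes.)

```python
from math import comb

# A modest world model, and a rough count of what "consider all side
# effects" costs if the robot checks every pair of facts for every action.
facts = 10_000
actions = 50

pairwise_checks = actions * comb(facts, 2)
# ~2.5 billion checks -- before the robot even looks at triples of facts.
```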
    • by Ostracus (1354233) on Saturday June 23, 2012 @06:16AM (#40419865) Journal
      And humans have never failed the frame problem? It seems to me in our quest for strong AI, we're setting the bar higher than ourselves. We fail too and yet we're the metric by which strong AI will be judged.
      • Humans make mistakes, robots should not.

        • by Ostracus (1354233)
          Why shouldn't they? Failure after all is part of the learning process. Also I just don't think it's going to be a realistic goal to create a failure free machine.
        • If a robot is taught or programmed by a human, it is liable to make the same mistakes as the human. If it is self taught, it is bound to make a whole lot more mistakes in the learning process, which likely never ends.
          • The robot won't make the same or as many mistakes as a human by simple virtue of being a robot: likely, they will be able to consider many, many more hypotheticals and "do the math" much quicker than we can. We make mistakes because we lack the processing capacity to consider more than just a few hypotheticals with imperfect information within a short time frame.
    • by Zorpheus (857617)
      Ok, maybe I don't get it.
      I think a human does it the following way: he sees the time bomb and this triggers an alert, in the form of anxiety. He knows that the bomb requires his attention because he learned before that bombs are a problem. It is just a limited number of situations and things that trigger anxiety. The human brain has the advantage that it is constantly checking these in parallel, but a computer checking them sequentially and continuously should also be able to handle this.
      • by Zorpheus (857617)
        What I mean is: version one is doing it nearly right. It is doing the thinking needed to get the job done. But what is missing is what the feeling part of the human brain is doing. Just a fast and fuzzy pattern recognition that finds potentially relevant things, and alerts the thinking part of the brain about these.
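(That two-stage idea -- a fast, fuzzy alert filter feeding a slow deliberative stage -- might look like this; the pattern list and scoring here are entirely invented stand-ins:)

```python
# Cheap always-on salience filter: substring matching stands in for the
# "fast and fuzzy pattern recognition" described above.
ALERT_PATTERNS = {"bomb": 1.0, "fire": 0.9, "cliff": 0.8}

def salience(percept):
    return max((w for p, w in ALERT_PATTERNS.items() if p in percept), default=0.0)

def perceive(percepts, threshold=0.5):
    # Only flagged percepts are forwarded to the expensive "thinking" stage.
    return [p for p in percepts if salience(p) >= threshold]

assert perceive(["battery on wagon", "time bomb on wagon", "blue walls"]) == ["time bomb on wagon"]
```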
        • by alba7 (100502)

          Well, the fascinating thing about human "anxiety" is that it scales. If you replace the time bomb with an ordinary cup of coffee then humans will be anxious about spilling the coffee. If, instead, you set up a scenario of certain death (think of movies like "Crank" or "Die Hard") then humans will think about crazy uses for the time bomb. This situational awareness is incredibly hard to reproduce algorithmically.

      • Think about it this way:
        1) Version one is IT support in India. The robot is given a script, can identify the circumstances in which to use the script, and executes it, come hell or high water. Who cares if flipping that switch also increased load 5-fold and opened up a giant security hole? It fixed the customer's problem.
        2) Version 2 is IT support done by someone who was just hired off the street. Eager, but completely clueless, he looks up every single problem that he can think of, and spends all of his ti

    • by durrr (1316311)

      So they see no problem leaving a bomb just lying around? If I were in charge of version 4, I'd make it warn humans of potential dangers.
      I'd also make it capable of picking up stuff, which solves the whole wagon dilemma.

      • by Thiez (1281866)

        It just struck me that the robot should just kill all humans. After such blatant disregard for the poor robot's safety, they deserve it. If the robot knows no way to escape the facility, perhaps it should grab the bomb, take it to the experimenters and demand to be released, or else!

  • by fitteschleiker (742917) on Saturday June 23, 2012 @05:49AM (#40419751)

    Pointless, content free article, where some guys say some opinions about some stuff. Where the fuck is my picks-up-my-clothes-washes-them-and-dries-them-and-folds-them-and-puts-them-away robot?

    Huh? huh?

    Can someone get moving on this shit? I can't afford a fucking human servant! And I'm too fucking lazy for this shit!
    Here take my money!

    • A human servant will be much cheaper than a robot that can do that for many years to come.
    • Where the fuck is my picks-up-my-clothes-washes-them-and-dries-them-and-folds-them-and-puts-them-away robot?

      Here it is, [youtube.com] sorting and folding socks.

      Yes, it's slow. The code is in Python and it's still experimental. I've heard from the Willow Garage people that they've sped up towel folding 50x since the 2010 demo. Once you can do it at all, it can be done faster and cheaper.

  • by Max_W (812974) on Saturday June 23, 2012 @06:05AM (#40419829)
    My new automatic washing machine is an extremely useful robot, even though it does not have legs or hands.

    There is only embedded intelligence. A pure intelligence does not even exist and cannot exist.

    Why build an AI that drives a car when it is quite possible to build an underground transportation network and automate it with AI? This robust technology already exists.

    It is easier to send an AI robot to another planet than to a local supermarket. And the problems are not mathematical, but social. The AI is already here and it is bigger than the current society's setup. The social setup and the infrastructure of society are to be changed in order to use it.
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Why build an AI that drives a car when it is quite possible to build an underground transportation network and automate it with AI? This robust technology already exists.

      Uh, because it would cost billions of dollars to do this for one major city, while we already have roads everywhere????

      • by Max_W (812974) on Saturday June 23, 2012 @06:22AM (#40419887)
        Nobody is capable of driving a car well on existing roads, not even humans. About 1,500,000 humans are killed each year trying to do it, and many times more are wounded. The road system is that stupid. No wonder, as it was created by the Romans more than 2000 years ago.

        On the other hand, the technology for underground delivery and transportation networks does exist. It would be expensive to build? So what? Let us pay.

        Such a system would not only be able to use AI, it would be AI in itself, an embedded intelligence. Besides, from an ecological point of view it would be at least 10-100 times safer.
        • by Ostracus (1354233) on Saturday June 23, 2012 @06:47AM (#40419953) Journal
          So in plain English: instead of making an AI more capable of dealing with greater complexity (like, say, animals), you artificially constrain the problem set until you get something that present systems (which don't even need to be AI) can handle?
          • Not to mention that the parent seems to be living a few decades in the past... I mean, Underground delivery and transportation? Controlled by limited A.I? Has he not heard of Rapid transit systems [wikipedia.org]?

            We already have the infrastructure, we already have a good chunk of it automated, in fact parts of the London underground are fully AI controlled. The only reason we have drivers on trains is due to Unions, and the fact that a lot of people have peace of mind knowing there is a person "driving" the train, even

            • by Max_W (812974)
              Better, but not enough.

              I am going now to the supermarket to buy some bread and juice. I will use a car which weighs 1500 kg to bring 3 kg of items.

              The efficiency of it is under 0.2%, as the mass of my body will also travel inside the car.
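(A quick sanity check of that payload fraction, assuming a 1500 kg car, a 75 kg driver and 3 kg of shopping:)

```python
payload = 3                      # kg of bread and juice
moved_mass = 1500 + 75 + 3       # car + driver + payload, in kg
fraction = payload / moved_mass  # ~0.0019, i.e. roughly 0.2%
```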
              • Well... I try to buy my food in bulk to last me 2 or so weeks, in which case it's more than 30 kg, and while the car is about 1000 kg, I actually enjoy driving, so I combine the two. I could do it by motorbike if I really wanted to be mass/energy efficient.

                For anything like 3 kg, I walk. That infrastructure is everywhere already. It is also good for your health; I felt a lot better once I started walking around. There is little need to take the car down to the local shops for the bare essentials.

                Why use the car

            • Of course, the GP knows about subways. That's the entire point.

          • by Max_W (812974)
            I would put it differently.

            It turns out that there is surprisingly little activity in the brain of an insect when it runs, even though it has many pairs of legs.

            The intelligence is not only in the brain, but it is also in the architecture of the system, including mechanical infrastructure.

            AI is here, but it is bigger than what we had thought. Returning to my new washing machine: it would not be the right approach to build a humanoid robot to wash linen. Instead, the whole system is built anew, from
            • I've already solved the linen washing problem: I stopped wearing clothes.
              • by Max_W (812974)
                This is not a bad idea. Seriously. Why not implement a new business style for hot weather instead of wearing heavy suits? Business shorts, sandals, light black socks, and a light classical shirt. It would save billions on electricity for air-conditioning and dry-cleaning.
            • The intelligence is not only in the brain, but it is also in the architecture of the system, including mechanical infrastructure.

              This is very true. Some other examples are salmon, which swim upstream even when they're dead. Their bodies are designed in such a way that when a current moves past them they create a swimming motion and move up against the current.

              Another example are albatross. They glide for hundreds of miles without flapping their wings, and their heart rate doesn't rise much above their resting heart rate. They do this by somehow sensing wind gradients and exploiting them to gain energy, and their wings are the perfect

              • by Max_W (812974)

                So we have a choice, build a robot to fit our current environment or build an environment to fit our robots. I think the former is more cost effective even though the realization of such a machine is further out.

                Well and clearly formulated.

                At least we should understand that AI is powerful even now. However, it requires a certain environment, infrastructure, and a shift in social attitude.

                The robots can work on Mars and the Moon (but not in an office or warehouse) because there are no people there.

                Understanding that the problem exists is already 50% of a solution. Perhaps there is a realistic way to adjust an environment on a large scale too.

        • by rasmusbr (2186518)

          Suppose that your system is completely safe and that you could replace 1% of all driving each year by building tunnels for your transportation system. (A very optimistic supposition.) That means that the number of deaths per year would decrease by 1% every year.

          The number of deaths and injuries per unit of person-distance is already decreasing faster than 1% every year thanks to incremental improvements in cars and roads, at a much lower cost than your tunnel system would have. The environmental problems a
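(The comparison being made can be put into a toy model -- every rate here is invented: tunnels displace 1% of driving per year with perfect safety, while roads and cars improve by about 2% per year on their own.)

```python
def deaths_after(years, base=1_500_000, tunnel_share_per_year=0.01, road_improvement=0.02):
    tunnels = base * (1 - tunnel_share_per_year * years)  # linear displacement by tunnels
    roads = base * (1 - road_improvement) ** years        # compounding incremental gains
    return tunnels, roads

tunnels, roads = deaths_after(20)
# After 20 years the compounding 2% path (~1.00M/yr) beats the 1% tunnel path (~1.20M/yr).
```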

          • by Max_W (812974)
            The number of people killed and injured in traffic accidents is constantly growing worldwide. Sociologists talk of a third World War, but on roads this time.

            But I brought the topic of roads just to illustrate the embedded AI. The technology exists to create more intelligent transportation systems right now. However, not via adding a small clever box to an existing vehicle.

            It would be a new world. This one, for instance: http://www.et3.com/ [et3.com] , "Space Travel on Earth", 100% safe, 1000 times ecologically safer,
            • by rasmusbr (2186518)

              The solution so far has been to make roads safer by building roundabouts, separating lanes better, etc and by improving the handling and safety features of cars. There is still much to do and much of it does involve putting computer boxes into existing designs.

              The evacuated tube system that you linked to has a footprint and 'skyprint' that's only slightly smaller than high speed rail, which means that it would not be significantly more flexible than HSR is today. The nearest evacuated tube station would be

  • Ridiculous (Score:4, Insightful)

    by llZENll (545605) on Saturday June 23, 2012 @08:56AM (#40420329)

    Comparing anything from 40 years ago to today is ridiculous. Nearly everything in history was FAR easier for one man to understand than it is today. In the past you could be an expert on any one thing; today that is nearly impossible. Today teams of hundreds of people push to make incremental changes and will never make the extreme breakthroughs that require a single overall view. Anyone who has such a view (at the top of management or a team) doesn't have the expertise to make the breakthrough, and anyone with the expertise doesn't have the view. We are not infinitely capable of understanding things; we are limited in scope and, more importantly, time. Look at the past: in the 1800s and early 1900s single men were the greatest inventors of their time; during the mid 20th century it was small teams; now giant corporations are the only ones making any significant difference. We have reached a saturation point of human ability and understanding, where anyone has so much past human experience and knowledge around them that they cannot possibly even come close to learning it all, let alone extending any of it; only well-funded teams can do it now.

    There will be no clear breakthrough or strong AI 'invented'; it will be a never-ending series of small incremental advances that is so slow and happens over such a long time that we will not even notice, the exact same thing as the personal computing era. Looking back at the 70s now, it seems foreign, but at any point in time it was only a small advancement from the day before.

    • Re:Ridiculous (Score:4, Interesting)

      by Kergan (780543) on Saturday June 23, 2012 @06:51PM (#40424057)

      I disagree wholeheartedly with most of what you wrote.

      The thing you get right is that it no longer is possible to know every fact about everything. The last known person to have done so was Pic de la Morandière and that was over 150 years ago.

      With respect to fields involving increasingly specialized knowledge nowadays, however, I simply beg to differ. The real issue is an inflation of know-how that adds little if anything to the pool of relevant knowledge. It occurs because, for all of history since the ancient Greeks including today, there have always been more scientists alive in any given year than in all of prior recorded history. Chew on that fact for a moment, and consider that, to train their higher-level peers, we require them to come up with an original research thesis.

      Most published work and research simply rehashes obvious consequences of things long known. Rare indeed is the study that pops out because it identifies an edge case where the results contradict what is expected. Recall, as an example, the study that suggested neutrinos might go faster than light. Physicists the world over instantly heard of it. Subsequent refinements eventually debunked the initial results as a measurement error. Sum of additional knowledge? Big fat zero: nothing goes faster than light. The same, boring, century-old theory of relativity.

      It's not all bad, mind you: something interesting occasionally does come out of this farce. For instance, a study on how an erection works can lead to insights into how to engineer structures [ted.com]. This makes the whole process tolerable and, in a sense, interesting for the curious.

      To argue that every little fact counts, however, is lunacy. You need to discriminate, synthesize, retain key elements, and off you go. You're a specialist. And to hell with the bozo who is so neck deep studying eye retina that he forgets it is a brain outlet. He has nothing interesting to tell you beyond implementation details.

      Now, I've absolutely no clue whether the next 10 years will yield a strong AI. I haven't followed AI in a while, preferring good old history. I do know two things, however. Firstly, that a strong AI has been around the corner since about 1950. Secondly, that mathematicians stormed the fields of cognitive science and linguistics roughly 20 years ago, ignoring established quacks such as Chomsky and turning the field upside down. Fast forward 10 years, and we were training robots to train other robots to do tasks. That was inconceivable 10 years earlier. Who knows... Not you, nor I.

  • Ray Kurzweil (Score:2, Insightful)

    by bouldin (828821)

    I think it's funny how Ray Kurzweil predicts a "singularity" within 50 years, but the people who would actually implement the singularity (e.g. Norvig) say that won't happen.

    Why do people still take Kurzweil seriously?

    • Because he's a good writer and fun to read. Being a good communicator is the #1 skill required for being a futurist.
      • by bouldin (828821)

        Because he's a good writer and fun to read. Being a good communicator is the #1 skill required for being a futurist.

        When you put it that way, he sounds more like a science fiction writer.

        • I guess that's really true: a science fiction writer with a fancy title who hopes to stay closer to reality than fantasy. Yeah, now that you mention it, he is just a very, very good science fiction writer.
      • by gtall (79522)

        Hell, I'm a futurist. I strongly believe I'll be there when it comes.

    • Re:Ray Kurzweil (Score:4, Interesting)

      by Missing.Matter (1845576) on Saturday June 23, 2012 @11:27AM (#40421085)
      It's hard to say which one is correct. Look how far we've come in the last 50 years. We went from computers the size of a room, to computers on every desktop, to computers in every pocket. Technological capabilities are definitely increasing at an exponential rate, and the capabilities of robots are closely correlated with these developments. 50 years ago the best robots relied on sonar; then with the development of LIDAR they became several orders of magnitude more accurate. The invention of GPS also took place in the last 50 years, along with the MEMS technology behind the tiny inertial measurement units embedded in practically every robot today. Even the proliferation of the Microsoft Kinect represents a similar leap forward in the widespread technological capacity of robots.

      So you see, with each technological innovation, the capabilities of robots don't increment slightly; they jump to a new height altogether. I don't know if anything like a "singularity" will happen in the next 50 years, but I suspect the difference in capabilities between the robots of 2012 and those of 2062 will be much greater than the difference between the robots of 2012 and those of 1962.

      Disclaimer: I am also someone working to implement "the singularity"
  • by bfwebster (90513) on Saturday June 23, 2012 @09:55AM (#40420567) Homepage

    I took (and thoroughly enjoyed) a graduate AI class while an undergrad CS student back in the 1970s; had I completed my subsequent master's degree, I almost certainly would have done a thesis on some subject in AI (as it was, I did take a graduate class in advanced pattern recognition). I still have an entire shelf of (largely outdated) AI textbooks from that era.

    That said, it's hard to find another field within computer science that has been so consistently wrong in its predictions of when 'breakthroughs' will occur. Some of the AI pioneers back in the 1950s thought we were only 10-20 years away from meaningful AI. Here we are, 60 years later, and we're still 10-20 years away. The field has made tremendous strides, but they tend to be in relatively narrow domains or applications. Generalized, all-purpose, adaptable intelligence is hard. We may yet achieve it, or something close enough to it to be sufficient, but I don't think it's going to happen in 10 years.

    Maybe the first true AI will run the first true large-scale fusion power plant. :-) ..bruce..

  • To me, Peter Norvig's fame stems from his excellent book "Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp", rather than from currently working for Google. Just as Vint Cerf is to me a pioneer of TCP/IP and the Internet, rather than Google's Chief Internet Evangelist. Can't we please define people by their real merits, rather than their current corporate affiliation?
  • 'In terms of robotics we’re probably where the world of PCs were in the early 1970s

    If the development of mobile, intelligent devices comes anywhere close to the history of personal computers, I would not want one within 10 miles of me. Just think what a Stuxnet could do with an army of household robots - ones that know where the sharp knives are kept. No foreign power would ever need to invade; it would merely need to upload the right virus into everyone's "home help" and we'd all wake up to find ourselves either dead or subjugated.

    In fact it doesn't even need to be malevolent. There are s

  • by Animats (122034) on Saturday June 23, 2012 @02:29PM (#40422273) Homepage

    Robots are starting to work in unstructured situations. I was there at the moment when this was recognized - the 2005 DARPA Grand Challenge at the California Motor Speedway in Fontana, CA. That's when everything changed.

    The 2004 Grand Challenge, remember, was a pathetic joke. No vehicle got further than 7 miles, and that was CMU's. The CMU approach at the time wasn't even really autonomous. Entrants got the route on a CD an hour or so before the start. CMU had imagery of the whole area and tried to plan obstacle avoidance manually just before the start, using a huge team of people in a semitrailer full of workstations. It didn't work; the DoD people in charge had moved some obstacles during the night. And that was the best result. One vehicle came out of the gate, turned hard, and ran back into the starting gate. One flipped over. The big Oshkosh entry demolished an SUV parked as an obstacle to be avoided. The whole thing was embarrassing.

    DARPA was very displeased with the performance by the universities that had long been receiving DARPA funding for robotics. It was quietly made clear to some major CS departments that their performance had to improve or funding would be cut off. That's why entire CS departments were suddenly devoted to the DARPA Grand Challenge in 2005.

    In 2005, things were completely different. Everybody who got that far had already been through an elimination, and every vehicle at the 2005 challenge was better than any of the 2004 entries. There was considerable press coverage, and at first, the press treated it as a joke. But suddenly there were over 20 vehicles running around autonomously, and they weren't crashing into stuff. When multiple vehicles finished the course, it was viewed as a triumph.

    Finally, the state of the art had reached the point that money and determination would get problems solved. That wasn't true in the 1980s. NASA threw over $100 million at the Flight Telerobotic Servicer project, and got nothing that worked.

    Now check out the DARPA Humanoid Challenge. [fbo.gov] (There's much dreck about this on blogs and in the popular press. Read the DARPA announcement instead.) They have an approach that's likely to work, and demand simulated demos (in their simulator) in 9 months, with demos on real hardware in 18 months. I personally think they'll get something able to do most of the mobility tasks and some of the manipulation tasks in that time. Useful humanoid robots will be a lot closer in two years.

    Price will still be a problem, but not an unsolvable one. These things could be brought down to the price of an SUV, if not lower, through production economies alone. The parts count is probably lower than that of an SUV.
