Technology

Photorealistic, Reliable 3D Mapping For Robots

An Anonymous Coward writes: "Hans Moravec at Carnegie Mellon University has updated his DARPA-funded MARS program page, including new info about the possibility of having photo-realistic 3D mapping for robots in the near future. 'Our machines will navigate employing a dense 3D awareness of their surroundings, be tolerant of route surprises, and be easily placed by ordinary workers in entirely new routes or work areas. The long-elusive combination of easy installation and reliability should greatly expand cost-effective niches for mobile robots, and make possible a growing market that can itself sustain further development... We expect our new data to bring us further towards photorealism, and more importantly extremely reliable 3D maps.' Check out all the slides and movies at the bottom of the page."
  • I can't wait until this technology is used to create a detailed 3D environment of my home city so I can play Counter-Strike in it.
    It would also be great for all sorts of other simulations where the actual place can't be visited.
  • The Dalek couldn't climb stairs until the 6th Doctor (or was it the 7th?). Thank God we're waaay ahead of them now!
  • Sounds like they implemented an industrial version of the Lego robot Slashdot reported on earlier in this article. [slashdot.org]

    Neat stuff...
  • I am a student studying computer vision. It has long been known that CMU, and especially the famous Dr. Takeo Kanade, have been interested in autonomous vehicles; that and stereo vision have been the major focus of that school. They're very good, but this is really not just an event at their school. It's more like a community (the vision/robotics community) effort. Even Slashdot has posted [slashdot.org] a few of the other contributions.

    HOWEVER, it's still a long time coming. In addition, space projects and industry require much more precision and accuracy than academia can offer alone. Keep in mind that CMU already did the "No Hands Across America" project, where their cars "drove" (they controlled brakes and gas) 99% of the way across the United States autonomously. That was a while ago; so was their total virtual environment mapping dome. Have you seen any autonomous cars for sale? What about 3D videos that you can see from every point of view?

    We still have a long way to go.

  • by hooded1 ( 89250 ) on Thursday March 22, 2001 @07:14PM (#345872) Homepage
    While on a tour of the MIT AI Lab I was shown a project they were working on which has similar components to this. Essentially, they were creating a system in which they could point to a part of the room (with their hand) and vocally tell the computer to project some image there. The project is called HAL: The Next Generation Intelligent Room [mit.edu].
  • Well, it's not exactly a program... more like a system. The algorithms are published every year in academic conferences such as Computer Vision & Pattern Recognition (CVPR), the International Conference on Computer Vision (ICCV), and the IEEE Conference on Robotics and Automation (not often). There are a lot of 'ifs' here to getting it to work: IF you have access to a university library, IF you have an extensive background in discrete calculus and signal processing, IF you can program well, and IF you have the expensive equipment, then you can go ahead and make it work. Actually, CMU will probably give you some code if you ask, if you're at a university. Academia is where sharing originated, after all...
  • by mtDNA ( 123855 ) on Thursday March 22, 2001 @07:21PM (#345875) Homepage
    Moravec's approach is a classic example of the SMPA (sense-model-plan-act) approach to mobile robotics. A lot of people think this is a dead end - not least among them Rodney Brooks [mit.edu], who advocates what is called the behavior-based approach. Behavior-based robotics basically relies on integrating several independently operating reflexes into a robot, which is much more lifelike; there's a toy sketch of the idea below. A nifty intermediate approach is taken by Ronald Arkin [gatech.edu], who seems a little more pragmatic (and less dogmatic).
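
    Something like this toy version, to make "independently operating reflexes" concrete (my own illustration in Python; the behaviors, sensor names, and priorities are all made up, and real subsumption networks are wired quite differently):

    # Toy sketch of behavior-based arbitration: each behavior is an
    # independent reflex, and a higher-priority behavior suppresses
    # the ones below it. (Illustrative only, not Brooks' actual code.)

    def avoid_obstacle(sensors):
        # Reflex: turn away if anything is closer than 0.5 m.
        if min(sensors["sonar"]) < 0.5:
            return {"turn": 1.0, "speed": 0.1}
        return None  # not triggered

    def wander(sensors):
        # Default behavior: just drive forward.
        return {"turn": 0.0, "speed": 0.5}

    BEHAVIORS = [avoid_obstacle, wander]  # ordered by priority

    def arbitrate(sensors):
        # First triggered behavior wins and suppresses the rest.
        for behavior in BEHAVIORS:
            command = behavior(sensors)
            if command is not None:
                return command

    print(arbitrate({"sonar": [2.0, 0.3, 1.5]}))  # -> avoidance command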

    You can read some superficial information about all of these guys (and others) in the book Robo sapiens.

    A review of Robo sapiens can be found here [popbeads.org].

  • This Looks Familiar

    L. Ron has some strong ganja!

> 3D videos that you can see from every point of view

    Maybe not, but I heard there are some hot and heavy DVDs with this little feature.

    Not that I'd know or anything...

  • Actually, I think the newer robots use both approaches.

    Brooks never managed to get his behaviour-based approach to work at higher levels than simply evading objects. If you read his (older) papers, you can understand why :)

    So... I think they use behaviour-based functions for the low-level stuff (evading objects, etc.), and SMPA for the higher levels, like map building, navigation, reaching a goal, etc. A rough sketch of that layering is below.
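
    Roughly this kind of layering, I mean (a toy Python sketch of my own; the function names and the straight-line "planner" are invented for illustration, not taken from any real system):

    # Hybrid control sketch: a deliberative (SMPA-style) planner picks
    # waypoints on the map, while a reactive layer handles obstacle
    # avoidance and can override the planner's command at any time.

    def plan_route(start, goal, steps=5):
        # Stand-in for real planning on an occupancy grid: here, just
        # a straight line of waypoints toward the goal.
        return [(start[0] + (goal[0] - start[0]) * i / steps,
                 start[1] + (goal[1] - start[1]) * i / steps)
                for i in range(1, steps + 1)]

    def reactive_layer(sensors, planned_cmd):
        # Low-level reflex: evade if something is too close; otherwise
        # pass the planner's command through unchanged.
        if min(sensors["sonar"]) < 0.5:
            return {"action": "evade"}
        return planned_cmd

    route = plan_route((0.0, 0.0), (10.0, 0.0))
    cmd = reactive_layer({"sonar": [0.4, 2.0]},
                         {"action": "goto", "waypoint": route[0]})
    print(cmd)  # -> {'action': 'evade'}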

  • by mduell ( 72367 ) on Thursday March 22, 2001 @07:29PM (#345879)
    bring us further towards photorealism, and more importantly extremely reliable 3D maps

    Ok, my only question... have they used it for porn yet? (The porn industry always seems to get new technology first ;p)

    Other posts you are likely to see:
    1. I want a 3D map of Natalie Portman!
    2. Can they make a beowulf of these?
    3. Im gonna pour hot grits on the robot!

    Ok, I just needed to get that out of my system.

    Mark Duell
  • Hmm... you seem to have the wrong impression. It ONLY steered, and didn't do brakes or gas...
  • As someone who stares out the window at these buildings every day, I can attest to the fact that the models generated by this system are excellent (I should also point out that, although I met Seth Teller once, I don't recognize any of the other names on the page, so I'm not tooting my own horn). They work great for blocky buildings, which are prevalent around Tech Square, but I'll reserve judgment until I see how well they do with the Stata Center [mit.edu] designed by Frank Gehry (which right now is a big hole in the ground, but in a year or two should be an ugly wart on the corner of Vassar St.). 4 or 5 years ago there was a student (Ig) in a nearby lab who designed a CCD camera whose output, rather than being the brightness of the object being imaged, was the motion of that object. Unfortunately I don't think anyone in the AI lab picked up on the design; it would make these sorts of models much easier to generate.
  • Somehow this brings up all of those memories of bad robot movies where the robot goes berserk because of a glitch in the program, or a hacker, or some other thing.

    The positive side is that this is a necessary prerequisite for things like the robots from the Jetsons.

    An image of the future: the Microsoft OS for Robots. Now why does this produce the reaction I am sure it produces? And why does it make me nervous?

  • Watch them post a link to this [theregister.co.uk] tomorrow afternoon, though. See, news (like the RIAA monitoring IRC, Gnutella, Freenet, FTP, the Web, and just about every other goddamned thing that sends out packets) is only NEWS when the most EYEBALLS will see it. You'll read this story again tomorrow afternoon, but I'm giving it to you now. Oh, there are screenshots of it [7amnews.com], too.

    - A.P. (-1, offtopic, I know.)

    --
    * CmdrTaco is an idiot.

  • There's no reason that Brooks' subsumption architecture has to work at the level of raw sensory input; it can just as well work on internal models. In fact, this is along the lines of what Minsky proposes in his "Society of Mind": that we're based on a set of low-level interacting intelligences.
  • by isaac_akira ( 88220 ) on Thursday March 22, 2001 @08:43PM (#345885)
    Moravec's approach is a classic example of the SMPA (sense-model-plan-act) approach to mobile robotics.

    No it isn't. It's just a vision processing system that creates an internal model of the 3D environment around it. Once you build that model, you can do whatever type of reactive behavior or goal-oriented planning you wish.
  • I can't be the only one thinking this technique could be used for some really trippy effects in a sci-fi film...
  • Semi-automated model acquisition in urban areas was achieved around five years ago at U.C. Berkeley. The commercial version was Canoma [metacreations.com], which MetaCreations is trying hard to sell off.
  • ...navigate employing a dense 3D awareness of their surroundings...

    ...be easily placed by ordinary workers in entirely new routes or work areas...

    we call 'em "MCSEs"

  • by Animats ( 122034 ) on Thursday March 22, 2001 @09:39PM (#345889) Homepage
    Sigh.

    Early thinking (60s-70s) really was to build a detailed model of the world, grind it down to simple primitives, and run a logic-based planner on it. That had a terrible time dealing with uncertainty and required a very regular world.

    Moravec introduced the idea of "certainty grids", which are probabilistic occupancy maps. Originally, he used this as a means of getting useful data from ultrasonic rangefinders, which are very low resolution devices with slow data rates. (I've built a robot that works that way myself, and you really can get maps with more resolution than the sonar beam by taking enough samples as the robot moves.) As enough compute power became available, he moved to laser rangefinders (better resolution, but clunky rotating mirrors) and finally to passive stereo imaging. There's a toy sketch of the grid update below.
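
    The accumulation works roughly like this (my own toy 1-D Python sketch, not Moravec's code; the cell size, update weights, and the single-beam simplification are all made up for illustration):

    import math

    # Each cell stores the log-odds of being occupied (0 = unknown).
    # Every range reading nudges cells in front of the echo toward
    # "free" and the cell at the echo toward "occupied"; repeated
    # samples sharpen the map beyond single-reading resolution.
    CELL = 0.1               # grid resolution in meters
    grid = [0.0] * 50        # log-odds for 50 cells along one beam

    def update_beam(grid, measured_range, l_free=-0.4, l_occ=0.9):
        hit = int(measured_range / CELL)
        for i in range(min(hit, len(grid))):
            grid[i] += l_free    # space before the echo: probably empty
        if hit < len(grid):
            grid[hit] += l_occ   # the echo itself: probably an obstacle

    def prob(log_odds):
        # Convert log-odds back to an occupancy probability.
        return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

    for _ in range(5):           # five samples of the same wall at 2.0 m
        update_beam(grid, 2.0)
    print(round(prob(grid[20]), 3))  # cell at 2.0 m -> ~0.989 occupied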

    What you get out of systems like this is a map of the neighborhood showing what's open space and what isn't. This is a good input to a repulsive-field type path planner (sketched below). There's no need to extract a "primal sketch" or do any object recognition just to accomplish navigation using this approach. It works quite well; the CMU Navlab vehicles have been cruising around offroad on this technology for years now. The Denning guard robots used this technology with sonars.
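
    By "repulsive-field planner" I mean something in this spirit (a toy Python sketch of my own; the gains and the obstacle layout are invented, and a real planner would work off the occupancy grid rather than a point list):

    import math

    # Potential-field navigation: the goal attracts, occupied space
    # repels, and the robot simply follows the combined force vector.
    def force(robot, goal, obstacles, k_att=1.0, k_rep=0.5, influence=2.0):
        fx = k_att * (goal[0] - robot[0])      # attraction toward goal
        fy = k_att * (goal[1] - robot[1])
        for ox, oy in obstacles:               # repulsion from obstacles
            dx, dy = robot[0] - ox, robot[1] - oy
            d = math.hypot(dx, dy)
            if 0 < d < influence:
                push = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
                fx += push * dx / d
                fy += push * dy / d
        return fx, fy

    pos, goal = (0.0, 0.0), (5.0, 0.0)
    obstacles = [(2.5, 0.3)]
    for _ in range(200):                       # crude gradient descent
        fx, fy = force(pos, goal, obstacles)
        pos = (pos[0] + 0.05 * fx, pos[1] + 0.05 * fy)
    print([round(c, 2) for c in pos])          # ends up near (5.0, 0.0)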

    Extracting range data from stereo imagery was Moravec's thesis topic back in the 1970s. It took a mainframe computer 20 minutes per frame back then. Now it can be done in real time. There's commercial software [ptgrey.com] for this. Two cameras are good; three cameras are better. It's actually not that hard; it's basically done by convolution. It's not done by edge recognition any more. Convolution is computationally expensive, but simple. We finally have enough compute power to do this stuff. A bare-bones sketch of the matching step is below.
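
    The correlation step looks roughly like this (my own minimal Python/NumPy sketch of generic window matching, not Moravec's code or the commercial product; the window size and disparity range are arbitrary):

    import numpy as np

    # For each pixel in the left image, slide a small window along the
    # same row of the right image and keep the shift (disparity) with
    # the lowest sum of absolute differences. Depth is then
    # baseline * focal_length / disparity.
    def disparity_map(left, right, window=3, max_disp=16):
        h, w = left.shape
        half = window // 2
        disp = np.zeros((h, w), dtype=np.int32)
        for y in range(half, h - half):
            for x in range(half + max_disp, w - half):
                patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.float32)
                best, best_d = np.inf, 0
                for d in range(max_disp):      # candidate shifts
                    cand = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(np.float32)
                    sad = np.abs(patch - cand).sum()
                    if sad < best:
                        best, best_d = sad, d
                disp[y, x] = best_d
        return disp

    # Synthetic check: the "right" image is the left shifted 4 pixels.
    left = np.random.randint(0, 255, (32, 64))
    right = np.roll(left, -4, axis=1)
    print(np.bincount(disparity_map(left, right).ravel()).argmax())  # -> 4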

    I've commented on Brooks' work previously, so I won't say any more about that now.

  • Ah, I see it now. The way you said it was a bit ambiguous. I must be thinking of another project that steered and did brake/gas (there are a couple of prototypes I've heard about). I think.

  • The guy who did this research, Hans Moravec, is a bit of a utopian optimist... he's a good man, but in my view his hopes for the future are a bit unrealistic...

    If I recall correctly, he claims that in 100 years (or was it 50?) robots will operate fully autonomously and act more or less as intelligently as a standard-issue human being. A little after that, he expects robots to inherit the earth: the stock sci-fi drama of robots being superior to us in every aspect (rationally, physiologically, even creatively/emotionally), where we either make ourselves cyborgs/robots, or the robots proper become the mightiest animal in the urban jungle.

    So... maybe the proverbial grain of salt is in order, but of course it's wonderful to see this kind of vast technological progress.

  • by Anonymous Coward on Thursday March 22, 2001 @10:44PM (#345892)
    The problem I've always had with Brooks' work (besides the whole "Subsumption Arch is the ONE TRUE WAY" ego trip) is that he always tried to claim some kind of biological relevance where THERE WAS NONE.

    I am currently a graduating senior in both biology and computer science, and am very interested in the integration of the two, in the areas of neuroethology and biomimetics. Most of the people doing biomimetic robotics (i.e. robots quite strictly based on biological systems, theories, data, and constraints) don't like Brooks for that reason. He created vaguely insect-like robots that used no real biomechanical data, neural control data, etc., yet seemed to suggest that there were some real insect structural and behavioral aspects to them. I also think he's somewhat of a playboy, going from so-called "insect-like" robots (Genghis, Attila, etc.) to COG, his media/attention/funding-grabbing monstrosity (not that I think $$$ going to AI research is bad in any way), about whose learning abilities he makes claims with hubristic abandon.

    I bet in a few years, when interest and funding dies, and he sees what a complex, deep hole he's dug himself into, he'll think of something new to grab headlines about...

    BTW for those really interested in more "hard" work on biologically inspired control and networks, look at Eve Marder's page at Brandeis (which I don't have with me at the moment) and have a nice gander at

    http://neuromechanics.cwru.edu

    Case Western Reserve University's new graduate program in neuromechanical systems. There, Dr. Roger Quinn and many other researchers are working on some great biomimetic projects, including a robo-roach and a cricket. They use hard biological data to design these guys. They also do significant work on the neural basis of behavior, biomechanics, and neurally controlled prosthetics. I'm also plugging it because there is a 50-50 chance I will be attending the PhD program there next year in sunny, gorgeous Cleveland, OH!

    Sincerely,
    Kevin Christie
    kwchri@wm.edu
  • Why play Counter-Strike in a simulation of your city when the real thing is all around you? Get some friends together, buy some guns, and play Counter-Strike for real.

    Er... well, y'know. You can't make an omelette without, um... destroying a forest. Or something.

  • Get some friends together, buy some guns, and play Counter-Strike for real.
    Personally, I like being able to taunt my friends after I kill them (and vice versa!).
    --
  • LOL... I wish I'd known this would be coming up... Hans is intending to hire me in a couple of months to do the next step: putting this in an actual robot. We're not sure exactly what I'll do. He's been too busy on the presentation for the last few weeks for us to discuss it.

    Most of the stuff you see is data collected several years ago. The office scenes are from 1996.

    Gimme a moment to set up a URL... I'll spit out a binary and a datafile so you can navigate the room yourself, and see the result of the program in all its brokenness and accuracy. :)

    Current status (which was interrupted recently due to making the report):

    He's about to collect a new dataset with a trinocular vision system, and redo the code that builds the occupancy grids. The new dataset should have fewer errors and 'streak' artifacts. (There are subtle reasons why it's screwing up on the current dataset.) His code should be able to build a new occupancy grid every few seconds.

    The next stage should include putting it in a real robot (which I've seen... it's cute; its wheels look like saw blades) and building maps automatically. The eventual target is an external 'head' that can be bolted onto any robot.

    (One thing I should finish before he gets back is a new viewer for the occupancy grids, so I can get a bird's-eye idea of what they look like... eyeball them for myself before building code that will be playing with them.)

    Too bad I'm a CS theory weenie instead of robotics... but it will be one hell of a cool next year.

    PS: A tip for everyone... dumb luck strikes, but you have to make your own luck and grab it when you get it. I met Hans only a couple of months ago, when he was walking out one day (by the newly key-carded doors). I asked him if he thought the CMU administration were malicious; he said he thought they were inept. I asked what his research was, and it went from there.
  • http://www.ifpi.org/

    Worldcom [worldcom.com] - Generation Duh!
  • Are the algorithms/software ever going to be released as open source? It would be kewl to hack on this.
  • Read more often. The algorithms are already publicly available. CMU publishes details of all the stuff they do in scientific journals. It's like atomic physics used to be: the knowledge was available, but very few people understood it. Hack all you want... if you understand it. Check these conference proceedings: CVPR, ICPR, the IEEE Conference on Robotics & Automation, and of course, CMU's website.
  • Someday, they'll make a loophole for Battlebots, allowing any bot that is not remotely controlled to use a radio jammer.

    THEN things will get interesting...

    (Think: the spinning bots will have a 360-degree field of vision, each frame updated once per revolution... better drivers, and theoretically faster reaction time...)
  • Yeah, but the code is much better, and the papers don't tell you EVERYTHING you need to know... usually important stuff is omitted or missing, which the code can clear up really fast.
