A Mobile Robot For Modeling The World In 3D

Roland Piquepaille writes "A German team from Fraunhofer AIS has coupled a fast autonomous robot with a 3D laser scanner to digitize the environment. The team reports on their work in this article, one of fifteen on the subject of machine perception published by ERCIM News. 'Kurt3D is an autonomous mobile robot equipped with a reliable and precise 3D laser scanner that digitalizes environments. High-quality geometric 3D maps with semantic information are automatically generated after the exploration by the robot.' This overview tells you more about the four-step method used to generate 3D models with this robot and contains several pictures of Kurt3D and its 3D laser."
  • I have a client who's willing to pay one meelion dollars to the man who makes this robot look like a shark...
  • Out of curiosity... (Score:4, Interesting)

    by Pakaran2 ( 138209 ) <windrunner@@@gmail...com> on Monday November 03, 2003 @01:36PM (#7378610)
    How does this thing figure out distances? Does it time the return of the laser reflections?

    I also can't help wondering how it models the tops of things - it looks like it's fairly squat.

    What's the advantage of a robot like this versus describing every object by hand, as 3D animators do (typically in some kind of interpreted language)?

    It seems like writing "there's a sphere of radius 3 centered here" would take less time than waiting for the robot to scan it.
    • by apraetor ( 248989 ) on Monday November 03, 2003 @01:42PM (#7378658)
      Parallax would make sense. That's how most (all?) optical rangefinders work.

      --matt
      • Actually, this is how the old range scanners used to work. The ones these days either measure the time of flight of the laser beam or its phase shift (more accurate).
        • by Dashing Leech ( 688077 ) on Monday November 03, 2003 @04:28PM (#7380436)
          Not really true. Lidars and Ladars use time of flight (TOF) methods and phase shifting. These are used for long distance measurements (tens of meters to kilometers). Current accuracy of TOF is about 1 cm, with improvements using phase shifting. But measuring close objects can be hard and less accurate because the flight time gets so short.

          Most laser scanners for close scanning (cm to several meters) use triangulation. Wide-FOV versions can have ~1 mm precision and cover medium volumes. Narrow-FOV versions can be precise to ~0.025-0.1 mm but often can only see at very close range (~10 cm to 1 m) over small volumes. One exception is the autosynchronous scanner from the NRC of Canada [iit.nrc.ca], which can measure on the order of 25 microns (~0.025 mm) over large volumes and a wide FOV by using a narrow-FOV camera that automatically follows the laser spot across the wide FOV. This also makes it "random access": it doesn't have to do raster scans (though it can) and can trace out any shape you want.

          Neptec Design Group [neptec.com] has developed one of these for use in space [nrc-cnrc.gc.ca]. Right now, Neptec's laser scanner is being included as a required 3D scanner for analyzing the shuttle thermal protective system on orbit (tiles, RCC panels) for return-to-flight, as a result of the Columbia Accident Investigation Board report [www.caib.us].

          A good review of TOF and triangulation scanners (and structured light / fringe), including commercially available ones, is given in this paper [iit.nrc.ca], and here [geomagic.com] is a good list of some scanners and their type.
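
          A rough sketch of the two ranging principles in Python (the geometry and numbers here are illustrative assumptions, not the specs of any particular scanner):

            import math

            C = 299_792_458.0  # speed of light, m/s

            def tof_range(round_trip_time_s):
                """Time of flight: the beam travels out and back, so halve the path."""
                return C * round_trip_time_s / 2.0

            def triangulation_range(baseline_m, laser_angle_rad, camera_angle_rad):
                """Triangulation: laser and camera sit a known baseline apart and
                each sees the spot at a known angle; solve the triangle for the
                perpendicular distance from the baseline to the spot."""
                return (baseline_m * math.sin(laser_angle_rad) * math.sin(camera_angle_rad)
                        / math.sin(laser_angle_rad + camera_angle_rad))

            print(tof_range(67e-9))  # a ~67 ns round trip is roughly 10 m of range
            print(triangulation_range(0.1, math.radians(80), math.radians(85)))  # ~0.38 m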

    • by merlin_jim ( 302773 ) <{James.McCracken} {at} {stratapult.com}> on Monday November 03, 2003 @01:44PM (#7378676)
      What's the advantage of a robot like this versus describing every object by hand, as 3D animators do (typically in some kind of interpreted language)?

      It seems like writing "there's a sphere of radius 3 centered here" would take less time than waiting for the robot to scan it.


      Well, it's like the difference between what the public perceives a dictionary as, and what a dictionary actually is.

      For instance, when I was a senior in high school, Webster's started including the word "ain't". Some teachers were very upset by it while others were ecstatic.

      Then my English teacher put it in perspective.

      Many people believe that dictionaries define a language. They do not. They describe a language.

      Same thing here. Sure, you could model a building by hand, but what you get is a definition of an ideal building. Whereas 3-D laser scanning describes the building as it is, very precisely.

      Real world examples where this is a good thing?

      Well, recently they did some 3-D scans of Stonehenge. The scan data was precise enough to show markings on many stones that had never been seen before (too shallow/worn).

      Or imagine a world of the future based on some form of 3D on-demand printing that's cheaper and stronger than traditional fabrication. We already have that in certain fields, BTW... it's quickly growing to be universal. You have a 3D laser system that precisely measures an existing building, and then a printer that prints new structures to be joined to the building instantly, automatically sized and positioned precisely.
      • another use:
        Take a digitized model of your house and import it into Quake.
        • The first thing I thought of was to put it in a vacuum cleaner. Along with a surface scan and some kind of radar to detect immediate dangers (like pets and kids), this should make it pretty easy for it to navigate.
        • another use:
          Take a digitized model of your house and import it into Quake.


          I think the office/workplace would be much better!

          (I only have a 1br, so it would get old, fast)
      • Makes sense. I was thinking of it only in terms of a problem in AI of getting a scanning robot to work in order to get 3D objects into a computer.

        If a grad student is modeling a bridge, he will realistically describe it in 3D (allowing him to incorporate things like material strength) rather than building a model and having a robot scan it. But of course the real world is different.
    • by Squeebee ( 719115 ) <squeebee@[ ]il.com ['gma' in gap]> on Monday November 03, 2003 @01:44PM (#7378677)
      Well, first of all laser rangefinders are nothing new, and yes, timing of the return trip is where it's at.

      Modeling the tops of things is probably going to be a disadvantage for this one, but typically shape and height are enough for most scenarios; what the top looks like is not usually as much of an issue (though we can likely determine whether the top is round, triangular, or flat if we can get far enough away).

      The advantage of this over an animator's definition is accuracy. If you want an exact 3D model of a building for, say, architectural purposes, you want to know exactly where that sphere is in the room, not some abstract rendition by an artist (not to mention that my office has no spheres in it, but much more complex objects instead).

    • You can find "laser" rangefinders in magazines such as US Cavalry. Typically these actually use an infrared beam rather than a true laser. And yes, it works by determing the time it takes for the reflection to be returned.

      The advantages of having a robot do this type of work rather than a typical 3D animator are several. First, they can work anytime, at odd hours. Second, robots don't ask for a raise. Third, they don't take shortcuts unless they're programmed to. Can't say the same for any 3d modell
    • The laser range finder used is produced by SICK. I'm not sure which model they are using but check out this one [www.sick.de] for example.

      The distance to objects is determined using a technique called "time of flight measurement" so yes, it's basically the time it takes for the laser to reflect.

      We considered using one of these when building a mobile robot a while ago but they are quite expensive and we ended up with... Well... A robot without laser range finders.

      • The distance to objects is determined using a technique called "time of flight measurement" so yes, it's basically the time it takes for the laser to reflect.


        Is there any mention of that on the site? I couldn't find any reference to the technique used, but simply measuring the time it takes to reflect doesn't seem compatible with the resolution they mention, 10 mm. That translates to a round-trip timing resolution of about 67 picoseconds (dt = 2*dd/c), i.e., a counter clocked at roughly 15 GHz.
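
        The back-of-the-envelope arithmetic, for reference (plain Python, no scanner specifics assumed):

          C = 299_792_458.0  # speed of light, m/s

          def timing_resolution_for(range_resolution_m):
              """Round-trip time resolution a TOF counter needs: dt = 2*dd/c."""
              return 2.0 * range_resolution_m / C

          dt = timing_resolution_for(0.010)  # 10 mm range resolution
          print(dt)        # ~6.7e-11 s, i.e. about 67 picoseconds
          print(1.0 / dt)  # ~1.5e10, i.e. a counter clocked around 15 GHz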

    • I heard their presentation on it at the recent 3DIM conference. It uses a time-of-flight laser scanner, but their addition is that it's mounted on a tilt head, which allows them to take a wider range of images. The time-of-flight laser they use is accurate to about 2 cm.
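
      The tilt-head arrangement described above amounts to a spherical-to-Cartesian conversion per sample; a minimal sketch (the axis and angle conventions here are assumptions for illustration, not necessarily Kurt3D's):

        import math

        def scan_sample_to_xyz(range_m, scan_angle_rad, tilt_angle_rad):
            """One 2D-scanner sample plus the tilt-head angle gives a 3D point:
            the sample lies in the scanner's own plane, and that whole plane
            is pitched by the tilt angle."""
            x = range_m * math.cos(scan_angle_rad)  # forward, in the scan plane
            y = range_m * math.sin(scan_angle_rad)  # sideways, in the scan plane
            # Pitch the scan plane about the sideways (y) axis.
            return (x * math.cos(tilt_angle_rad), y, x * math.sin(tilt_angle_rad))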
    • Comment removed based on user account deletion
    • > It seems like writing "there's a sphere of radius 3 centered here" would take less time than waiting for the robot to scan it.

      Think about that again. Considering the resolution of a "laser camera," the only things that exist that are spheres are the sun & moon (the robot can't get far enough away from Earth to recognize it as a sphere). Everything else is complex and would NOT be as simple as saying "there's a sphere of radius 3 centered here."

      Think about how long it took computer games to look
  • by cjpez ( 148000 ) on Monday November 03, 2003 @01:36PM (#7378613) Homepage Journal
    I think this robot could have many practical applications in the field of mapping out office buildings for inclusion in FPS games. Frag your coworkers!
    • Just "upgrade" the lasers and have the robot frag your coworkers.
    • There are converters from .DWG to ??? for Quake maps. I remember thinking I should do that to entertain my coworkers... but alas, another interesting project I can't possibly have the time to do.
    • I was thinking the exact same thing - except live action. Think of this - set up ultrasonic position emitters around the building and give each player a few receivers - two on the head for its location and heading, two on the gun. Now we have the exact position and heading of both head and gun.

      Give each player VR glasses - transparent ones, so we can alpha with the natural environment. Now, scan the building with the robot to do collision detection and occlusion detection for simulated objects such as projectiles and monsters.
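
      The ultrasonic-emitter idea above is trilateration; a minimal 2D sketch (emitter positions and distances are made up for illustration):

        def trilaterate_2d(p1, r1, p2, r2, p3, r3):
            """Position from distances to three known emitters: subtracting the
            circle equations pairwise leaves two linear equations in (x, y)."""
            x1, y1 = p1; x2, y2 = p2; x3, y3 = p3
            a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
            b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
            a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
            b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
            det = a11 * a22 - a12 * a21
            return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

        # Emitters at three known points, distances from ultrasound time of flight:
        print(trilaterate_2d((0, 0), 5.0, (8, 0), 5.0, (0, 6), 5.0))  # (4.0, 3.0)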
    • by Myself ( 57572 )
      Funny enough, I came up with this exact idea a few years back, before Columbine and everything made it politically incorrect.

      In addition to automatically building Quake maps for the building of your choice, it would help make up for my terrible sense of direction indoors. I get turned around in houses the first time I visit. Larger structures like hospitals and schools are downright labyrinthine. Having a map that builds itself during my travels (a la the self-revealing map in an RPG) would be a boon.

      My v
      • this is one of the many uses I would like to have for a head-mounted display with a camera... there was a story months back about a camera/display app that would create something like a virtual Doom game in real life. One of the cool things was that it could print information about objects on top of them in the display... like looking at a building would have its address printed on the screen in the foreground of the building, etc. VERY COOL stuff.
    • why not get it to start compiling a map of a city? How many geeks are in NYC who would stick one of these laser setups on the dash of their car? Granted, it wouldn't be autonomous, but combine it with a GPS unit to let it know what area of the city it is mapping, then upload to an open-source mapquest.com sort of setup and have the entire city get mapped!
  • Robots (Score:4, Funny)

    by dolo666 ( 195584 ) * on Monday November 03, 2003 @01:37PM (#7378623) Journal
    From the article: "Precise digital 3D models of indoor environments are needed in several applications, eg, facility management, architecture, rescue and inspection robotics."

    This made me chuckle, to think we'd be getting replacements for management, in the form of cute robots that can't talk.

    I'm waiting for a robot I can fight martial arts with. Any chance of us getting one of those?

    It's nice to hear things about stuff like Kurt3D. I remember when I used to think R2-D2 would be hella cool to have around as a buddy.

    He could tweet and chirp away while I explained that moisture vaporators are not the same as carbon units.
    • Re:Robots (Score:3, Informative)

      by CGP314 ( 672613 )
      I'm waiting for a robot I can fight martial arts with. Any chance of us getting one of those?

      Here you go. [slashdot.org]
    • Re:Robots (Score:1, Interesting)

      by Anonymous Coward
      "I'm waiting for a robot I can fight martial arts with. Any chance of us getting one of those?"

      The pre-Qing Dynasty Shaolin Temple (i.e., prior to its destruction) is rumoured to have had a hall of wooden men, basically articulated attack puppets actuated by a mechanism triggered by pressure plates on the floor. Monks had to go through this hallway to "graduate". IIRC there were 18 such dummies, each of which had a specific method of attack.

      If this legend is true, the engineering boggles the mind.

      In the mean time,
  • Man, as if it's not bad enough for builders that some architect can come around and harass you for being an inch off with a divider wall -- now the architect will just send the robot down to measure out the entire building in 3D and point out any screwups!
  • by burgburgburg ( 574866 ) <splisken06NO@SPAMemail.com> on Monday November 03, 2003 @01:42PM (#7378659)
    Or 3D digitize our human features, then contact the base station so that they can begin fabricating replicas. Considering how many times even semi-autonomous robots have conspired to overthrow humans, you'd think that researchers would stop giving them the tools to try, try again.

    I, for one, do NOT welcome our human-form replicated robot overlords. Who's with me? John and Sarah Connor? That makes three. Who else?

  • Find the power plug (Score:5, Interesting)

    by RobertB-DC ( 622190 ) * on Monday November 03, 2003 @01:42PM (#7378662) Homepage Journal
    Great article (hope it doesn't get /.'d). While they seem to be working on large-scale room features (wall, door, floor, ceiling), I can see the next step being an autonomous robot that can find and identify such basics as a light switch and a power (mains) outlet.

    I remember years and years ago, a robot had been developed that could optically recognize a power outlet and plug itself in... but I don't think it did much else. This would have been early 80s, probably, so we're talking Z-80 vs. Pentium.

    Future recognition goals:

    * Refrigerator door (fetch beer, please)
    * Small child (danger! sticky fingers! run away!)
    * Other robots for romantic interludes:
    (IF Query(Other_Bot, EXCHANGE_CODE) == TRUE Extend_Programming_Probe(Other_Bot))
    • * Other robots for romantic interludes:
      (IF Query(Other_Bot, EXCHANGE_CODE) == TRUE Extend_Programming_Probe(Other_Bot))


      A truly intelligent robot that queries another machine and receives the Exchange code as a response would cut off its own programming probe rather than interact with such a dangerous piece of code...

      I mean who wants Welchia on their robot?
  • by mr_luc ( 413048 ) on Monday November 03, 2003 @01:50PM (#7378729)
    I tried to get Kurt3D to create a laser scan of the Hall of Mirrors in my glass house, and the resulting mesh was almost complete gibberish.

    Also, I am now blind.
    • Laser rays are not reflected by mirrors because they operate on a different wavelength.
      • Lasers are not reflected by mirrors, eh?

        Wait. How do we make lasers again? My memory is a little fuzzy. ;)
      • Please don't tell my cat this; she will get very depressed that she can no longer chase the lasers I bounce off all the mirrors in our home for her. As of course a laser contains no light, so it operates outside the light spectrum. Never mind that infrared can be both reflected and refracted as well; witness the door-chime sensors using a mirrored reflector to return the beam from emitter to detector. Or the old trick of using your mirror to turn on the roommate's TV.

        But of course, you are in Mensa, so I a
  • Mobile? (Score:1, Insightful)

    by Anonymous Coward
    It looks barely mobile. The greatest problem is that it is wheeled, which instantly reduces its versatility. Even worse, the wheels are very small and the undercarriage nearly scrapes the ground. If the goal is -- as the headline claims -- to model the world, you'd think they would want either a land-based platform capable of navigating extreme terrain or an aerial platform that could ignore navigational problems posed by arduous terrain (e.g., a satellite, airplane, or dirigible).

    As is, it is limited to ex
    • Re:Mobile? (Score:5, Interesting)

      by mr_luc ( 413048 ) on Monday November 03, 2003 @02:02PM (#7378839)
      Err.

      Well, the stated purpose of this thing says nothing about it being used outdoors or to model large-scale terrain features. I mean, that's implicit in its design. This thing is designed to reproduce controlled environments.

      And I don't know why you would think that is limiting! Maybe if you're thinking from the standpoint of a modeller/animator. Or maybe you just read the headline, and said 'omg it si small it cannot model WORLD omgomgomg'.

      I see a couple of truly kickass uses for this thing. The first is adding texturing ability (you'd probably have to get dozens and dozens of scans, and have some good algorithms, to come up with good and relatively complete texturing, but I gotta think that would be trivial compared to the sorts of problems they've already solved in making this thing -- and you wouldn't have to recreate the mesh each time, just sync up the coordinates with the one already created).

      Ok, the use I see:

      Crime scenes.

      Bring in, hell, let's say 20 of these. Maybe some of them would be able to raise themselves up (heh, a little accordioning platform for the recording mechanism, right out of the cartoons). They would roll around, sense out the room, figure out optimal placements, and then they would all scan the room, creating a near-perfect model of it, perhaps mere hours or minutes after a crime has taken place. The cops would seal off the room, and the recorders would laboriously record and texture everything about it, down to the finest details.

      Sure, it wouldn't catch a fingerprint or a piece of hair, and the plane/shape detection that is done actually removes some of the captured information (it also removes some 'noise', but for forensic work they'd probably prefer a little noise to averaging out potentially important information) -- but the bottom line is, there wouldn't be a need for crime-scene 'reconstruction' from photographs and little sketches and things that come after the fact. This would be absolutely accurate, more accurate than subjective information relayed secondhand from paid expert testimony. "How close would you say they were probably standing, from this photograph of bloodstains?"

      So just in forensics alone, I see massive potential.
  • I would think extracting 3D from video footage would be better. This thing can only map places where it can drive. Digital video cameras are pretty decent nowadays. I have seen university projects that claim pretty decent detection rates from video, but I have never seen any code or binaries :-/
    • Very good idea. Especially with older video footage... it'd be cool to reconstruct buildings/streets as they were, even if only on a computer. There's an awful lot of video footage of the world... being able to translate it into a 3D model would be revolutionary.

      Do you happen to have any links to good information on extracting 3d from 2d video?

    • I saw a segment on a TV show about this. There are commercial programs out there that will do it. It is very commonly used in movies to integrate CGI animation with film. If I remember correctly, the example they were showing was from Jeepers Creepers 2. They inserted a bat thing into a video of a truck driving.

      I didn't catch all of the segment, but apparently it is done by comparing the distances that objects move on screen, i.e., closer objects move faster than those that are farther away.

      It makes a
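
      The "closer objects move faster" observation is motion parallax, and with a pinhole camera translating sideways it gives depth directly; a toy sketch (the focal length and shifts are made-up numbers):

        def depth_from_parallax(focal_px, camera_shift_m, pixel_shift_px):
            """Pinhole motion parallax: a camera that translates sideways by
            camera_shift_m sees a point at depth Z move pixel_shift_px pixels,
            with Z = f * shift / disparity (same form as stereo disparity)."""
            return focal_px * camera_shift_m / pixel_shift_px

        # A point that shifts 20 px while an f = 800 px camera moves 0.5 m:
        print(depth_from_parallax(800.0, 0.5, 20.0))  # 20.0 m away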
    • Carlo Tomasi, formerly of CMU and Stanford and now at Duke, has done some really nice work on this kind of thing. See his publications [stanford.edu].

      Laser range finders are not that exciting. Kind of stuff you buy pre built. Unless you do some interesting processing with the data, it's nothing new, but always fun to play with.

    • I don't believe this. Someone makes a comment about extracting 3D models from video footage, and after 7 hours, not a single comment has been made regarding porn. Come on! "Henk Poley" practically handed this setup to you on a platter!
  • I always thought it would be an interesting way to assemble 3D gaming worlds. A lot of people have done this by hand, like old DOOM or Quake maps modeled on their school or whatnot. I guess it should be gathering "polygon texture" data too, in that case.

    Of course, most buildings are pretty boring relative to the game-specific layouts, but hey. It would be a good quick start.
    • Well, they did this for The Matrix's high-end special-effects work -- they captured the real-world data instead of modeling it from scratch.

      The issues with gaming data are things like, you know, complexity of the meshes, the insane size of the texture data (since every poly's texture data would be unique if it was captured) -- or, conversely, if you just created the mesh and had people texture it after the fact, your modellers might just rise up and kill you, because that makes their job a lot harder (in games,
      • Huh... but maybe you wouldn't have to capture *every* texture... most walls are pretty dull, after all. With a powerful AI program (and vision is a notoriously challenging area), you could probably make do with "average" or "typical" textures for blank walls, and then just capture the interesting stuff: billboards and doorframes and whatnot.

        As long as we're dreaming, you might as well make sure that the robot very carefully records the exact lighting levels, directions, and tints, and then "subtract" those
      • So this means we can buy a few thousand, dispatch them all over the planet, and build our own Matrix?
    • Sounds like a good idea. Scan the wanted building in, ask it to take snapshots of textures, add a few objects or move a few around. Bang: a digitized, real-world scenario for those Counter-Strike gamers to play in.
  • Now give it wings (Score:2, Interesting)

    by Anonymous Coward
    Seems pretty cool. I think what really sets this apart is the possibility of accuracy that couldn't easily be derived from an on-site visit or from video. You can get some very fine-grained measurements with this sort of idea. It seems to me that once they get this thing refined down to a small enough size, its method of plotting its current location is rather conducive to being able to fly around. That would solve some people's concerns about mapping the tops of surfaces.

    Just power this thing up, let i
  • by Animats ( 122034 ) on Monday November 03, 2003 @02:04PM (#7378850) Homepage
    I was hoping that the Fraunhofer Institute had a new laser rangefinder, but they're just using the clunky-but-reliable SICK LMS unit on a tilt head. That's not a 3D scanner; it's a line scanner you can tilt, slowly. You can do quite a bit with something like that, but it's slow.

    The Fraunhofer Institute has been doing some nice work with MEMS mirrors, and I was expecting something new from them.

    There's a very nice true 3D solid state rangefinder out of Switzerland, but it's a continuous beam device and thus very limited in range. Works fine indoors, though.

    Imaging laser rangefinder technology is lousy, because product volume is so low. Five companies have exited the field in the last decade. There are several mechanical scanners available, all using scanning technologies abandoned by television in the 1940s. All-electronic solutions have been developed as prototypes, but they're not shipping yet.

    Once this problem is cracked, mobile robotics is going to get much better.

  • I'm guessing it's pretty loud, but I have to imagine these things could be deployed from a Predator in volume and quickly get a good picture of the interior of a compound/cave/stronghold before they were destroyed by an enemy.
  • This kind of sounds dumb... but why don't airplane-simulation companies do this with the Earth and combine it with highly detailed pictures of the Earth to construct the ultimate map?
  • Just one more step towards Google indexing the real world so I can finally find where my TV remote got to...
    • I've been thinking that locating the TV remote control, or other shit you lose in your house, seems like a nice (non-Big Brother) use for RFID.

      If you had an array of RFID transceivers in different rooms, perhaps you're not that far from being able to let your home find something you lost. Or send your little robot pal with an RFID transceiver to find it for you...

  • similar idea (Score:2, Insightful)

    I had a similar idea a while back of using two cameras aligned side by side, each with a servo motor to give it 20 degrees of travel either way. By taking a snapshot of both images, you could use motion-detection routines (i.e., the same ones used to encode MPEG) to see how far the images differ from each other and adjust the cameras' angles until the two images virtually parallel each other. Then, taking the angles of the cameras, a simple triangulation calculation (see the sketch after this thread) would tell approximately how far an object is to the
    • Actually, that sounds similar to what the human eye does... minus the MPEG, of course. From what I understand, humans constantly move their eyes in 'micro-movements', and the optic nerve does some mamboleo on the images to transmit a single pair of images (combined from the angles) with distance information, rather than sending each individual image. Makes for some really neat optical-illusion capabilities hardwired in.
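
    A minimal sketch of the vergence triangulation described above (the baseline and angles are illustrative numbers):

      import math

      def vergence_depth(baseline_m, left_pan_rad, right_pan_rad):
          """Two cameras a known baseline apart each pan inward from straight
          ahead until the target is centered; the pan angles give depth:
          Z = b / (tan(left) + tan(right))."""
          return baseline_m / (math.tan(left_pan_rad) + math.tan(right_pan_rad))

      # Cameras 30 cm apart, each panned ~4.3 degrees inward:
      print(vergence_depth(0.30, math.radians(4.3), math.radians(4.3)))  # ~2.0 m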
  • Similar projects (Score:2, Informative)

    by anakog ( 448790 )
    The idea of using mobile robots for automated 3-D modeling is not new in the robotics research community, although it has been gaining speed lately.

    The AVENUE Project [columbia.edu] at Columbia University had an earlier implementation for modeling urban sites.

    Also check out The MIT City Scanning Project [mit.edu].

  • Similarly (Score:2, Informative)

    by Sinical ( 14215 )
    There's the Centibots stuff at SRI:

    http://www.ai.sri.com/centibots/

    which also uses LADAR-bots.

    In defense, there's a lot of interest in LADAR as well, because with an actual 3D image of the target area you can do autonomous target recognition and acquisition off something like a UAV. I think most LADARs right now are raster scan (i.e., one beam that sweeps left to right and then down, like a TV), but I've seen that people are working on flash LADAR (one big "pop" like a flashbulb and then all the info co
  • ...Now give the dang thing a vacuum cleaner so it can clean under the coffee table.
  • Useful in dangerous environments.

    For example, after 9/11, engineers had a hell of a time figuring out the situation below ground level at the World Trade Center site -- people got hurt exploring down there. Far better to send in a robot.

    Granted, this version of the robot isn't sufficiently capable, but future versions might well be.
  • Is this new?? (Score:2, Informative)

    by dFaust ( 546790 )
    RedZone Robotics [redzone.com] and Carnegie Mellon [cmu.edu] had this years ago on their Pioneer robot [cmu.edu] which did structural analysis at Chernobyl. It was deployed in the summer of 1999, though I think the build was complete by the start of 1999.

    I was told the 3D Mapper was from SGI, but I have a feeling they provided the computers, not the mapping technology. Also, the resulting 3D environment could be explored via a VR helmet and gloves. Pretty slick stuff, I have video of it somewhere.
  • At my old uni they had a very similar project doing 3D laser scans of buildings and meshing them with visible-light pictures. Have a look at the Resolve Project [leeds.ac.uk].
  • It seems that most of us have missed the importance of this robot. I was fortunate enough to see the research group present this and another of their robots (which does the same thing but is large enough to carry a substantial cargo as well). This robot autonomously digitizes large environments, including their texture maps. While admittedly there is room for improvement, it is more than just an important step. It could be snuck up into all sorts of places people can't fit and be used to search through
  • MP3Ds anyone?
