Technology

GRACE Exceeds Expectations!

smashr writes "GRACE, the robot mentioned earlier on Slashdot, has succeeded in the AAAI challenge at the conference in Canada. Her creators say that GRACE exceeded their expectations. The entire competition went well, with only a few minor hiccups (GRACE cut in front of a judge in line to register, and then demanded a conference badge several times). The team is looking forward to refining GRACE for the competition in Mexico. Stories at: CNN.com, Yahoo, and the Edmonton Journal."
  • by Tattva ( 53901 ) on Monday August 05, 2002 @04:13PM (#4014150) Homepage Journal
    Unfortunately, GRACE does not have a flexible torso, and will therefore be unable to perform a requisite skill in the academic/conference field: kissing ass.

  • Schmoozing? (Score:3, Funny)

    by JUSTONEMORELATTE ( 584508 ) on Monday August 05, 2002 @04:14PM (#4014152) Homepage
    Team GRACE plans to refine the robot, hoping to add "schmoozing" skills to her repertoire for next year's challenge in Mexico.
    And elsewhere...
    ...and then of course [she] had to say 'Can you put it on me? I don't have any hands'
    That's one flirtatious babe! Can't wait to see her "Schmoozing" skills in Acapulco!
    • I'd love them to have a robot that emulates Bender from Futurama.

      He'd swill champagne, enjoy a cigar or two... try to chat up the other contestants, and make some of the judges he isn't sure about have an accident or two.

    • That's one flirtatious babe! Can't wait to see her "Schmoozing" skills in Acapulco!

      Just remember, if she ever offers to give you a lapdance, just say "no". And no touching of heatsinks!

  • by CommieLib ( 468883 ) on Monday August 05, 2002 @04:14PM (#4014153) Homepage
    Does anyone else see a problem with a bunch of robotics researchers teaching a robot social skills?

    Relax, it's a joke.
  • i really don't know what to say about this. on the one hand, it's cool as hell, and an amazing technological achievement - a robot that can actually register itself, get a badge, be rude in the process, and give a lecture. on the other hand, it's sort of scary - robots are getting autonomous; what do we do when the day comes that GRACE decides she doesn't like the judge's attitude and decides to "adjust" it?
    • turn her off and change the bad code?

      You've seen the Terminator movies and The Matrix too many times.

      Good scifi- not reality.

      .
      • Maybe you've seen too many STNG episodes ;-)

        Anyways, we all know that technological predictions are 100% accurate. As living proof, I'm writing this post on one of the only 5 computers in the world, and I never even come close to touching the 640K ceiling.
        • Maybe you've seen too many STNG episodes ;-)


          Not too many.

          I do not subscribe to the Skynet fears that pop up around here so regularly, mostly because I think we are very much further from that kind of capability in a machine than the optimists hope.

          But even if 'thinking' machines are near, I'm not too worried that an overwhelming desire to wipe out humanity will immediately follow sentience. Much too silly.

    • I could be wrong, but I don't think there's necessarily a direct path between robots that are programmed to socialize and robots that begin adopting actual human characteristics.

      Humans learn to use violence as a problem solving technique through early interaction with other humans. Unless the robot was *programmed* to be violent, or was capable of learning by itself AND was introduced to social situations where violence was used, we wouldn't have to worry about it.
    • Asimov, Isaac - I, Robot.

      Read it.
      It's good for you like soup.

      You can obtain a copy at your local book store, library or eBook Warez IRC channel (though the former is preferred to the latter).
      • heh, i got a hold of I, Robot when i was six, and haven't stopped reading Asimov yet. problem is, the Three Laws are physically hard-coded into the positronic brains (Little Lost Robot gets into this a bit - apparently without the three laws, there are no imaginary solutions to the positronic field equations... but i digress); i haven't heard of anybody using custom processors with hard-coded "rules" in robots as of yet. i'm not scared that we're going to have sociopathic killing machines in the next few years, but it is lurking at the back of my mind... a true AI would have to be capable of learning, with few or no restrictions on what behaviors it could pick up - and we all know how well we teach our little learning machines (children) to be nonviolent... we aren't ready for robots that need raising until we can raise kids right... but that's just my $0.02.
    • what do we do when the day comes that GRACE decides she doesn't like the judge's attitude and decides to "adjust" it?

      Listen. Understand. GRACE is out there. She can't be reasoned with, she can't be bargained with... she doesn't feel pity, or remorse, or fear... and she absolutely will not stop. Ever. Until you are dead.

  • by Microlith ( 54737 ) on Monday August 05, 2002 @04:15PM (#4014161)
    Not only is it sociable, but it's rude too!

    Sounds about normal for a lot of people. In fact the blatant disregard for others should earn it points for being more human than necessary.
    • Cutting in line, being insistent and unreasonable at the counter, no doubt trundling over somebody's toes, bragging about the vacation in Acapulco...

      Apparently, the only thing she didn't do was grab somebody's ass while reeking of yesterday's wine. But hey, schmoozing skills in the next version, yeah?

      Clearly professor material!
  • Grace is 3 years old. She weighs 157 pounds. She is 4 feet, 6 inches tall, and she has a Magnavox for a face.

    Reject Artificiality!

    tcd004
    • Grace is 3 years old. She weighs 157 pounds. She is 4 feet, 6 inches tall

      Hmm, that is startling growth! She started out weighing nothing and without any height.

      157 pounds and 4.5 feet in 3 years -- (calculating) -- at this rate, by the year 2014, she will be 785 pounds, and 22 feet 6 inches tall!

      This enormous monster will devour us all!!!
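      For what it's worth, the arithmetic in the joke checks out. A quick sanity check in Python, using only the figures in the comment above and assuming linear growth:

      # 3 years old, 157 lb, 4.5 ft in 2002; 2014 is 12 years further on.
      age_years, weight_lb, height_ft, years_ahead = 3, 157.0, 4.5, 12

      weight_2014 = weight_lb + (weight_lb / age_years) * years_ahead   # 785.0 lb
      height_2014 = height_ft + (height_ft / age_years) * years_ahead   # 22.5 ft = 22 ft 6 in
      print(weight_2014, height_2014)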
  • by NoMoreNicksLeft ( 516230 ) <john.oyler@ c o m c a st.net> on Monday August 05, 2002 @04:17PM (#4014181) Journal
    Cutting in front of a judge sounds like something I would do, and if the robot imitates me, it must be doing something right, don't you think?

    Seriously though. By those criteria, half the human race might fail.
  • A paradox (Score:1, Interesting)

    by SkipToMyLou ( 595608 )
    It's an interesting paradox that the nerdiest of computer geeks are programming robots to interact socially... These people are the most qualified and least qualified at the same time!
    • Just as it's also an interesting paradox that the slickest of marketing people are effectively designing software products for the high-tech industry. These people are the most qualified and the least qualified at the same time.
  • by Remus Shepherd ( 32833 ) <remus@panix.com> on Monday August 05, 2002 @04:18PM (#4014191) Homepage
    Wake me when GRACE is able to sign up at a sci-fi convention. Applicable skills will have to include: Giving backrubs to others standing in line; Recognizing the registration counter people as objects to talk to even when they're wearing klingon costumes; and bitch-slapping the crowd of fanboys around her chanting "Exterminate...exterminate!"
  • fascinating (Score:3, Insightful)

    by tps12 ( 105590 ) on Monday August 05, 2002 @04:20PM (#4014201) Homepage Journal
    I couldn't believe this story. At first I was ready to read about another amusing, if charmingly disappointing, attempt at the Turing Test. But it transpires that GRACE actually functioned independently and managed to register for and deliver a lecture at a crowded academic conference! I was floored.

    Look at how far we've come. The mechanics for her locomotion are only about a century old. The silicon electronical parts of her brains have only been around a few decades. And I'm even calling her "her!" She's a machine! That a handful of scientists and antisocial grad students have accomplished what it took evolution millions of years to do (create life) gives me hope for the future of mankind.

    As I look at these articles, I'm reminded of what my parents told me: "You can do anything." And now I'm realizing that that wasn't just "you" as in "me" (tps12), but also "you" as in the entire human race. We are reaching for the stars, we are playing with the origins of life and the very fabric of our Universe. We are playing God. If we don't end up destroying ourselves in the process, then we're in for one hell of a ride.
    • Mr Madison, what you just said is one of the most insanely idiotic things I've ever heard. At no point in your rambling, incoherent response were you even close to anything that could be considered a rational thought. Everyone in this room is now dumber for having heard it. I award you no points and may God have mercy on your soul.
    • Re:fascinating (Score:2, Interesting)

      by aleksey ( 1519 )
      ...a handful of scientists and antisocial grad students...

      I find it interesting that people on /. seem to think that grad students are somehow less sociable than the average computer geek.

      Actually, as I look around at my particular institution, I would have to say that, on the whole, of my friends who have achieved a BS in Comp Sci, those who've gone on to graduate school are no more antisocial than the rest.

      In fact, a lot of the grad students I know are far more normal than the average undergrad computer geek.

      I think people overlook one of the keys to a successful academic career: the importance of being able to communicate your work effectively to others. Being good at coding, maths, etc. is also necessary, but can almost be secondary. If no one understands your work, it doesn't matter how good it is. So if you're antisocial, you learn to deal with it and work on interpersonal skills. Otherwise you subvert your own work.

  • by jethro_troll ( 596531 ) on Monday August 05, 2002 @04:22PM (#4014208) Journal
    When I went to AAAI in Philly in 1986 (in my LISP hacker days) half of my coworkers (mathematicians, linguists, logicians, all damned good AI researchers) either got lost en route to the hotel, or got on the wrong shuttle bus to the conference, or forgot their presentation slides, or...
  • A Rudebot? (Score:2, Funny)

    by charlie763 ( 529636 )
    She was rude and cut in line? Well, then I guess she is more human than we thought.

    I'm too cool for a sig...
    • She was rude and cut in line?
      Not rude, just programmed with american aesthetics rather than canadian
      • "Not rude, just programmed with american aesthetics rather than canadian"

        /me grabs can-opener and takes it to the can of worms

        If you are used to driving in Canada and then drive around in Michigan for a day, you will understand how true this is.

      • programmed with american aesthetics rather than canadian

        Well, they didn't want her standing around and saying 'eh' all the time.

        -
  • Now..... (Score:2, Funny)

    by MrWinkey ( 454317 )
    when can they make me one to get beer and clean the house????
  • Does anyone know ? (Score:1, Interesting)

    by Anonymous Coward
    Has this robot got maps of the location a priori, or does it have to figure out this stuff itself?
    • For the initial part of the task ("get from the front doors to the area of the registration desk"), Grace has no a priori information, relying on the directions given by the human it interacts with. After Grace has registered and been "given" a map, she uses a map built the night before to navigate to the location where she's to give her talk. The map, however, only includes the static environment; the navigation & localization software must deal with dynamic obstacles (such as the crowds surrounding the robot).
    • Yeah, so I was at the conference, and if I remember correctly GRACE had a map built a priori. It seemed to me kind of like a cheat to do it that way.
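      The two comments above describe a map built ahead of time, with the navigation and localization software handling dynamic obstacles (the crowds) at run time. One common way to do that is to compare live range readings against what the static map predicts and treat large mismatches as dynamic obstacles. A minimal sketch of that idea in Python (the names and the threshold are assumptions for illustration, not taken from the GRACE software):

      # Illustrative only -- not the GRACE code. Beams whose measured range is
      # much shorter than the static map predicts are flagged as dynamic
      # obstacles (e.g. people) and handed to local obstacle avoidance.
      DYNAMIC_MARGIN_M = 0.5   # assumed threshold, in metres

      def find_dynamic_obstacles(predicted_ranges, measured_ranges):
          """Return beam indices whose readings the static map cannot explain."""
          dynamic = []
          for i, (pred, meas) in enumerate(zip(predicted_ranges, measured_ranges)):
              if meas < pred - DYNAMIC_MARGIN_M:   # something closer than any mapped wall
                  dynamic.append(i)
          return dynamic

      # A wall predicted at 4.0 m, but a person measured at 1.2 m on beam 2:
      print(find_dynamic_obstacles([4.0, 4.0, 4.0], [3.9, 4.1, 1.2]))   # -> [2]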
  • What will stop robots and other artificial life forms in the future from being treated as subhuman, like many people were in the past (and unfortunately in the present)? I can see the day when we do have robots that are almost human, but will they be our slaves or our friends?
    • Man, I'll become a tree-hugging vegan PETA extremist before I worry about a MACHINE'S feelings and social standing.
    • What will stop robots and other artificial life forms in the future from being treated as subhuman, like many people were in the past (and unfortunately in the present)? I can see the day when we do have robots that are almost human, but will they be our slaves or our friends?

      For one view of this, watch the DVD for A.I. (Artificial Intelligence) by Spielberg/Kubrick.
      • For a far more entertaining perspective watch some Red Dwarf on TV (or read the excellent books Infinity Welcomes Careful Drivers and Better Than Life). Pay particular attention to the bits about Talkie Toaster, they are hilarious.

    • I can see the day when we do have robots that are almost human, but will they be our slaves or our friends?

      Neither. They will be our tools. Like a vacuum. Just a really smart vacuum.

      It's fun to anthropomorphize, but don't get carried away. PETA already has that market cornered for animals.

      We don't need a People for the Ethical Treatment of Androids.
  • by drox ( 18559 ) on Monday August 05, 2002 @04:26PM (#4014243)
    The GRACE design team deserves kudos, but I still think that robots/AI should primarily be designed and programmed to do things that humans are BAD at, like searching through dangerous rubble, or performing fine manipulations in toxic or extreme-temperature environments, rather than doing things that humans are already quite GOOD at, like schmoozing. There are billions of people available who already know how to schmooze, and they can learn new schmoozing rules quickly, on the fly, without costly reprogramming. There are very few who would be willing (to say nothing of able) to work in a hazardous or tiny confining environment.
    • by ultramk ( 470198 ) <ultramk@noSPAm.pacbell.net> on Monday August 05, 2002 @04:31PM (#4014278)
      rather than doing things that humans are already quite GOOD at, like schmoozing.

      Don't know many scientists, eh?

      There are very few who would be willing (to say nothing of able) to work in a hazardous or tiny confining environment.

      What, like a cubicle?

      m-
    • Any advance in artificial intelligence is a good one. Designing a robot to "schmooze" at a conference could be better training for doing hazardous work than actually practicing hazardous work (in the long run), for all we know.
    • robots/AI should primarily be designed and programmed to do things that humans are BAD at

      Right. So that means AI should end up doing their own software development too. :)

      But why stop with only the jobs we're BAD at? For example: most plumbers, miners, and fishermen are good at what they do, but I bet they'd rather be doing something else.

      I won't be happy until robots+AI are doing EVERYTHING most humans don't want to be doing themselves (so we have more time for eating, sleeping and fucking)... Ahh... the hedonistic imperative... :)

      --

    • but I still think that robots/AI should primarily be designed and programmed to do things that humans are BAD at

      Like working for the government? If the robots get all those jobs, where are all the stupid middle-class people supposed to go?
    • Learning to do social interaction is an important advance, because it makes the robot much easier to control. Learning to schmooze isn't that important, but it's a good test case for things like understanding tone of voice and subtext, which are really important for comprehension. For industrial applications, this is obviously unnecessary, but for disaster recovery, it would be extremely useful to be able to communicate with the robot with natural language to tell it what to do, ask it about what it is doing, and allow it to report on its surroundings.
    • The GRACE design team deserves kudos, but I still think that robots/AI should primarily be designed and programmed to do things that humans are BAD at ...

      The trick is, robots need to be able to do things that humans are good at in order to do many things we are bad at. For example, humans make lousy temps: we often need the instructions explained more than once, we make mistakes performing repetitious tasks, we hate menial work, and so on. But in order to be a good temp, you need lots of human skills in order to interact with people.

      Think of it this way: typical humans are bad, really bad, at interfacing with computers. Grace's social interface helps fill the gap between our inability to learn/remember arcane controls and whatever she does. In essence, she is learning the arcane controls for us and providing a friendlier interface.

      ... like searching through dangerous rubble, or performing fine manipulations in toxic or extreme-temperature environments, rather than doing things that humans are already quite GOOD at, like schmoozing.

      I think a case could be made that humans are in general better than autonomous machines at searching and performing delicate manipulations (with tools). It's just the dangerous environments in your example that rule us out. If we were to have the robots schmooze in a puddle of molten lava, we'd have the same quandary.
  • This makes me wonder:
    1) Does Grace remember which judge she cut in front of? Does she have memory of interactions, and what kind?

    2) Does she self-program?

    3) Why don't they at least give the poor girl some hair?
  • ...I cannot tell: which one is the robot?

    Can anyone enlighten me?

    m-
  • Robot
    Utilizing
    Demanding
    Expectations

    He'll have to cut in line to register for the conference before everyone else, take all the donuts, leave a coat and bag on the chair next to him even though the conference room is full, blab away on a cell phone during the presentation and leave 10 minutes early.

  • Multiple AP sources (Score:1, Interesting)

    by Anonymous Coward
    Posting two links to the same AP story in two places (CNN & Yahoo) doesn't really accomplish anything. Or do you think CNN or Yahoo could be slashdotted? At least the Edmonton Journal story was different.
  • by revscat ( 35618 ) on Monday August 05, 2002 @04:35PM (#4014302) Journal

    Anyone else notice any similarities between GRACE and non-lesbian Ayn Rand? Consider:

    1. Neither can write
    2. They both are approximately equal on the 1-10 beauty scale
    3. Neither is completely human
    4. Both have worshipful drones who provide for their every need and provide reassurance as to their value and self-esteem
    Amazing!
  • Translation (Score:3, Funny)

    by orangepeel ( 114557 ) on Monday August 05, 2002 @04:38PM (#4014324)
    "Mounted on her body are several gizmos, including a laser range finder, sonar equipment, voice-recognition technology and speech generation machinery."

    Translation:

    "Mounted on her body are several gizmos, including a laser range finder, sonar equipment, voice-recognition technology and an 89-cent speaker from Radio Shack."
  • GRACE [cmu.edu] and her Twin? Maybe just cousins. [google.com]
  • GRACE on TV (Score:2, Informative)

    by joepa ( 199570 )
    From http://www.palantir.swarthmore.edu/GRACE/ :

    8 August: Grace may be on the Today show (not yet finalized).

    ...
    That could be interesting.
  • Next thing you know, her husband will die and in an effort to keep her house she'll create a massive greenhouse with HPS lighting and hydroponics to grow tons of weed. But only to end up burning it so the whole town will get high. Oh wait, different Grace.
  • can it be equipped with a giant spinning saw blade and rip other robots' nuts off in a Comedy Central Death Match?
  • by Jugalator ( 259273 ) on Monday August 05, 2002 @05:01PM (#4014445) Journal
    I had nothing to do and asked Elbot [elbot.com] a few questions:

    ---

    Me: What do you think of GRACE?
    ELBOT: Are you taking a poll?
    Me: I wouldn't call it a poll...
    ELBOT: I realize this is a typical marketing question, but I'm supposed to ask it, otherwise they'll use the cheap oil on me. So, where did you find out about me?
    Me: I knew about you since earlier.
    ELBOT: Amazing! I must have underestimated your intellect!
    Me: So... What do you think about GRACE?
    ELBOT: My opinion on that matter is so revolutionary, I prefer to keep it to myself.
    Me: I see chatterbots still have a long way to go.
    ELBOT: Really? I could have sworn you missed that.

    ---

    So... The only thing the bastard gave me was some rude answers, and he kept his opinions about GRACE top secret. Bah... So much for AIs. :P

    Perhaps I should have asked good ol' Alice instead.

  • Due to Wednesday's wet weather, the hydrophobic robot had to start inside the building.

    Couldn't stretch the budget to an umbrella? "And you will know me by the trail of eyeballs."

    Ali

  • by Devlin-du-GEnie ( 512506 ) on Monday August 05, 2002 @05:06PM (#4014487)
    The Edmonton Journal article says that GRACE's handlers used hand signals and voice cues to help her navigate and perform. That makes her registering for the conference merely interesting, not jaw-dropping amazing.
    • Not really. I got to watch the presentation that Grace gave on herself as well as a presentation by the team afterwards. I haven't read the article, but I don't imagine they got it all right.

      What Grace really does is this (which is really cool, and she's still autonomous): Grace is capable of recognizing hand gestures, and is programmed to ask for help if she gets lost. So rather than relying on preprogrammed voice cues or hand signals, when Grace gets lost she will ask the nearest person "How do I get to conference room 23?" (or wherever she's supposed to be going). If the directions are clear ("Grace, you go straight down this hall, at the end of the hall you turn left, and conference room 23 is the third door on the right."), Grace will follow them. She will also watch for hand gestures while you're giving directions. If something doesn't line up, like if you point right but say to go left, she will ask you which you mean (a toy sketch of that cross-check follows below).

      So Grace *is* autonomous, and put on quite an amazing performance!
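      A toy sketch of the speech/gesture cross-check described above, in Python (the keyword "parser" and all names here are made up for illustration; the real GRACE dialogue system is far more involved):

      # Toy illustration only -- not GRACE's actual dialogue code.
      def spoken_turn(utterance):
          """Crude keyword spotting for a left/right turn in a spoken direction."""
          utterance = utterance.lower()
          if "left" in utterance:
              return "left"
          if "right" in utterance:
              return "right"
          return None

      def resolve_direction(utterance, pointed):
          """Act on the direction only when speech and gesture agree; otherwise ask."""
          said = spoken_turn(utterance)
          if said is None:
              return ("turn", pointed)            # nothing spoken, trust the gesture
          if said == pointed:
              return ("turn", said)               # speech and gesture agree
          return ("ask", "Did you mean %s or %s?" % (said, pointed))

      print(resolve_direction("go straight, then turn left at the end", "right"))
      # -> ('ask', 'Did you mean left or right?')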

  • "Overall, I have to say it went very well," said Alan Schultz, from the U.S. Naval Research Lab in Washington, D.C. The lab was one of five U.S. institutions which contributed to GRACE's creation.
    [...]
    Mounted on her body are several gizmos, including a laser range finder, sonar equipment, voice-recognition technology and speech generation machinery.

    Additionally, the navy version has two mounted General Electric M134 miniguns, triple mine dispensers and dual Stinger surface-to-air missile launchers. It is also rumored to be slightly more ill-tempered than the civilian version.

  • until it codes in Perl, can order pizza, cook ramen, post on /., and develops a habit of belching at socially inauspicious times.
  • Check the NCARAI [navy.mil], which is always a great source of info on current research in the field, including a number of key technologies used to implement Grace.
  • Creating an AI like GRACE is most definitely a laudable accomplishment.

    Before we research further into making robots more humanoid, however, I think we should remember an old biology axiom: Form Fits Function. Even at today's level of technology, humans could never design a molecule as complex and efficient as a protein. Why do we act like we know better than God (or whatever you wish to believe) when it comes to creating things now?

    A machine's design should be the most efficient for its specified task. Homo sapiens is a specialized species built for thinking. We don't need robots to take our place; we need robots to perform tasks that we are incapable of.

    • Why do we act like we know better than God[?]

      geez - look around! if we can't do better than this, we might as well give up now.

      (btw, I wouldn't worry just yet, it's not really AI, but a robot programmed for a specific task; god is safe for a few decades yet)

  • Oh great...the robot gets to go on vacation and I have to stay here and work! What's wrong with this picture?
  • GRACE cut in front of a judge in line to register, and then demanded a conference badge several times

    I see the prototype is using the brain of a typical American CEO/politician. Now if only we knew whether they were asking for stock options or campaign contributions, we could figure out what type of brain they actually used.
  • The robots will just cut in line, force us to fetch them snazzy conference badges and listen to them drone on about themselves. I don't know which is worse. :)
  • ...has pic of Reid Simmons with GRACE and a movie poster for Casablanca...shouldn't that be a Desk Set [imdb.com] poster?

    After all, the computer in that (see it if you haven't) movie could be termed as having some AI leanings...

  • cutting in line... (Score:4, Interesting)

    by furchin ( 240685 ) on Monday August 05, 2002 @05:21PM (#4014551)
    The CNN article states that the robot bumped into a judge, rather than cut in line. That's a significant difference. Cutting in line indicates a temperamental personality, with some true intelligence perhaps. Bumping into a judge indicates that the programmers in charge of GRACE failed basic obstacle avoidance -- which boils down to the following for loop:

    for (int i = 0; i < num_sensors; i++)
        if (sensor_distance[i] < 5.0)   /* 5 inches */
            motors_off();               /* cut power to the drive motors */

    I'm involved in a lot of robotics work, and while I believe that robots should eventually attain very intelligent behavior, I also believe that the first priority in programming a robot is to ensure it does not harm humans. By bumping a judge, GRACE has shown that it is not capable of functioning safely in society. If it bumps a judge, what's to keep it from running a judge over and killing him? Standard robots the size of GRACE are 300 lbs, quite capable of inflicting significant damage.

    As a side note, most robots have touch sensors on their side panels that automatically shut off power to the motors when they are triggered. I'm willing to bet that this is what kept GRACE from running over the judge.
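    A minimal sketch, in Python, of the bump-panel interlock described in the last paragraph (names are hypothetical; this is an illustration, not GRACE's safety code):

    # Illustration only: any triggered touch panel cuts motor power,
    # regardless of the speed the planner is requesting.
    def motor_command(planner_speed, bump_panels):
        """Return the speed actually sent to the drive motors."""
        if any(bump_panels):        # a panel is pressed -> hard stop
            return 0.0
        return planner_speed

    print(motor_command(0.4, [False, False, False]))   # -> 0.4  (drive normally)
    print(motor_command(0.4, [False, True, False]))    # -> 0.0  (bumped, stop)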
    • Robots can't be too wimpy. Aggressive robots get the job done. Read this paper: "Go Ahead, Make My Day", Robot Conflict Resolution by Aggressive Competition. [usc.edu] An excerpt:
      • We are investigating the use of aggressive behaviour to improve the efficiency of robot teams; this paper presents our initial simulation experiments. Our eventual goal is to demonstrate these methods running on real robots, so we have tried to keep the simulations and controllers realistic and as easy to transfer to the real world as possible. We demonstrate that a simple stylised fighting behaviour improves the overall performance of our system by reducing interference. We then discuss the (non)usefulness of social dominance hierarchies and suggest ways to improve overall efficiency.

      Experience with the HelpMate hospital delivery robot indicated that a bit of pushiness was needed, or the robot would be stalled by people standing and talking in corridors (a toy sketch of one such escalation policy is below).

      This is, as far as I know, a result not anticipated in science fiction.
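      A toy sketch of the kind of "pushiness" the HelpMate anecdote above suggests, in Python (the thresholds and behaviours are invented for illustration):

      # Invented example of an escalation policy for a robot blocked in a corridor:
      # wait briefly, then ask, then creep forward slowly rather than stalling forever.
      def blocked_behaviour(seconds_blocked):
          if seconds_blocked < 5:
              return "wait"
          if seconds_blocked < 15:
              return "say: 'Excuse me, coming through.'"
          return "creep forward slowly"

      for t in (2, 8, 30):
          print(t, blocked_behaviour(t))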


    • There are billions of clumsy humans who run into people, too. Why do you expect an AI robot to OUTperform a human?
    • You name a robot "Grace" and look what happens.
  • Comment removed based on user account deletion
  • If you subscribe to Ray Kurzweil's theory that man's big purpose was only to give birth to technology, then this is really frightening.

    Look how far we've come, and how much more quickly we're getting there. Five years from now, who knows what this robot will be able to do? How long will it be before robots are serving us? How long before they're as intelligent as we are? How long before they're creating our music, our films, our art? I don't think humans could compete with a machine that knows all the right buttons to push.

    And yeah, to feed the paranoid, how long until they surpass us, and realize that we're nothing more than a liability? I've said it before, and I'll say it again: down with technology, we should have stopped at the wheel!

    • sigh.

      How long before they're as intelligent as we are?

      A very, very, VERY long time. While GRACE is very cool and definitely a great technical accomplishment, "she" is no closer to human intelligence than a toaster. It's a robot, programmed to perform a task; the tasks are getting more elaborate and the programming more tricky (and ingenious), but none of this gets it any closer to intelligence. We'll be able to build a robot which behaves and interacts like a convincing human long before we have even the faintest idea of how to build one that is even remotely intelligent. Emulation is simply not the same as duplication.

      As far as music and movies go, it seems intelligence is no longer required to create those anyway.

  • by agravaine ( 66629 ) on Monday August 05, 2002 @05:38PM (#4014654)

    In a post-conference interview, researchers noted that GRACE has already exceeded the social skills of Richard Stallman, who has been observed picking his teeth and clipping his toenails (then flicking the debris onto the floor) while giving a talk at Georgia Tech.

    "It's not really a fair contest:" groused Stallman,"GRACE doesn't have any toenails!"

    (True story about the toenails, BTW. Interesting talk otherwise; or so I heard.)
  • ...they held the convention next-door to DEFCON. Some hackers got into Grace's programming and made her romance and hump a full garbage can.
