Technology

Robotic Dogs 130

Brian Daniels sent us a link to a crazy-looking Sony Robotic Dog. It doesn't have a price listed, but I'm willing to bet you won't have to clean up after it. It's supposed to be configurable, expandable, and fairly programmable. And it's cute, too.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    heh.

  • by Anonymous Coward
    These dogs are real, our AI people gave some live demos of the dogs they used at the soccer tournament. They're quite amazing. They have a real OS inside, plenty of memory, sound and video input.

    But wouldn't it be great to have wireless networking? Then they could communicate, be remotely signalled, programmed, debugged, etc. Some robot toys already have this, of course.

    But just wait until some script kiddie cracks your dog. Now they can see and hear what's going on in your house, move around, kick things, push things, even try to escape. Just watch your pet jump out of the window in a bid for freedom, then some kid waiting with a laptop in the getaway car grabs it and drives away...

    We live in interesting times.

    roc@cs.cmu.edu
  • by Anonymous Coward
    http://www.world.sony.com/robot/robot.html
    >(3)Expendability for Accommodation of New Functions

    People get upset when you kill a real dog.
  • How long would it take someone to take Sony's OPEN-R [sony.com] specification, and replace Aperios with Linux or RT-Linux?
• ... just not in the commercial market. This robot has its own league in the RoboCup Soccer Tournament [robocup.org]. The winners last year were a team from Carnegie Mellon University (go CMU!! - I go there). It's really cool. Here's the site of CMU's team leader:

    Manuela Veloso's Page [cmu.edu]
• You're misapplying the results of Milgram's experiments. Milgram demonstrated that people's obedience to authority often overrides their internal morality, something which quite effectively demonstrates the strength of an authority figure's influence. You're trying to use that to say that people's self-control over moral decisions is minimal in situations where no authority figure is necessarily present, which I find inappropriate.
  • Posted by Usak0:

    It's a cyclops! I don't know about anyone else, but something with one eye kinda freaks me out...
  • Posted by hersh:

    (Also from CMU) The funniest part about them, I thought, was when I asked about marketing plans for them and was told that one of the biggest problems with them is that their heads still break off too easily. Oops.

    The various motions they can be programmed to do seem fairly canned, unfortunately. Walking forward is one particular motion sequence, lying down is another, turning left is another, etc.

    I don't know anything about the API, but I would think the first thing to work on would be more natural looking (and therefore more efficient) motion. Not sure what sort of sensors they have, I doubt they include joint torque sensors.

    They definitely look fun, regardless.
  • Posted by neyec:

    This is the beginning of all the Blade Runner stuff.
  • Posted by sinfonian:

    The Robotic Dogs are actually old technology (by today's standards). The Computer Science department at Northwestern University has been doing programming and development on them since last school year (1997-98). They still have a ton of bugs in them.

    As a personal project, I was planning on implementing a hearing system for the dogs. (3D sound localization) However, the head mounted microphones don't even have drivers yet. Not to mention that the quality of the microphones mounted on the head would have to be upgraded, and a few things would have to be added to the head to make sound localization a reality.

    Long story short. Cool idea. They can't do much. Still a ton of bugs. They still have a long way to go before this would be a good product.
• I SO hope this isn't a 4/1 joke... I've wanted a robot pet since I was like 6 or 7... CT, if this is a joke, your ass is grass for joking with a boyhood dream... j/k (about the ass-is-grass stuff). PLEASE be real... all those QT3 movies looked VERY real, and those look just like the robot pet schematics I've seen before.

    sigh...what a day
  • I picked up one of these things when they were really being hyped. I thought it was an interesting novelty. It was actually fun to watch it grow. And if you took better care of it, it grew into better forms. When I saw it Monday after accidentally leaving it at work over the weekend and seeing that ghost floating over the grave, I actually did feel a little bit sad. It died because of me.


    Which reminds me, I should put some new batteries in my Tamagotchi. It's been dead (literally) for some time now.
  • ROFLMAO too true too true
  • Why would anyone waste time giving affection to a
    house or a country or a '68 'vette? We're
    all made out of the same stuff. It's just our
    turn to pay taxes, post emails and have names.

    monk

  • Hey, if Sony and Microsoft are teaming up to do television (http://www.sony-television.com) next season, I could see them branching out to the electronic companionship market too. It's a pretty huge market as I see it. Lotsa lonely people who don't have the personal skills to keep house plants alive, let alone goldfish, cats, or dogs. They deserve friends that won't easily die too.

    And to think what advertising potential Sony TV gives the thing...

    - Chad
  • Also here at CMU -- Dr. Veloso demonstrated one to the freshman class last semester. The reason they play with the orange ball is they're trying to play robotic soccer (CMU is the world champion in most classes of robotic soccer, that's Dr. Veloso's specialty).

    You'll have to trust me, they're _absolutely adorable_. They look around and follow the ball with their heads, then walk towards it and kick it with their front legs.

    Apparently debugging the things was a real pain (no good data out). Dr. Veloso says they wagged the tail to do debugging output :)
  • Another April Fools joke, have you NO SHAME!!!

    -Eric
• Of course it's not a dog. It's not like they're wrapping it in fur and passing it off as a new, improved canine. It's no more a real, live animal than a Tamagotchi or a Volkswagen, and I doubt anyone at Sony thinks for a minute these things will replace real dogs.

    That doesn't mean I wouldn't love to have one, though (and, for the record, I have a real, live dog, two ferrets, and a couple of parrots at home). I'm not going to expect a robotic dog to get ecstatically, bump-into-furniture happy when I come home from a road trip, but I'm a geek, dammit. I think robots are cool. I want one.

    I don't think there's anything sinister about it.
  • I've heard about this before, and I'm pretty sure that this isn't a joke. Then again, it could be a cleverly orchestrated joke reaching the highest levels of government covering up far more sinister robotic dogs. The truth is out there...
  • I'm a senior Computer Science major at Northwestern University, and I know we've got some of these around and being used in our department, in the robot lab...

    While I didn't take the course track that led to them this year (two C95 special topics classes, one fall, one winter quarter), a good number of people have.

    This of course means they're programming them, and doing some rather complicated things with them. Sadly, I'm not sure what exactly, but I'll see if I can find anything else out. :)

    AR Schleicher (Jerrith)
    ars@nwu.edu

• But can it say, "Affirmative, Master"?

    If so, I bet John Leeson is available to record the voice for it. If not, they could always get David Brierley. :)

    I know some fellow slash-dotters (isn't that the foreign minister of Iceland?) know what I'm getting at. (Cue Grainer Theme 1...)
• It's NOT a Furby clone, but a serious study of how to do the mechanics and electronics for four-legged robots. As I understand it, the fact that they made them look like dogs is just an added feature -- they have to look like something. With the price tag they have, you won't see these in private homes anytime soon. For researchers, they offer a free loan of a unit provided you sign a non-disclosure agreement (we considered it, but it felt like too much of a hassle just to get another toy to play with in the lab).

• He's talking about Apple QuickTime 3.
    --
  • This machine will be utterly uninteresting to me unless you are able to hack it, a la Lego Mindstorms. I don't mean teach it a few little tricks with Sony-supplied software, I want to run a Unix on it and teach it new tricks with Python or something.
  • Yeah, but K-9 was a twit; I always wanted to kill him.

    Exterminate! Exterminate! Exterminate the canine!

    --JT
  • If everyone's using a standard, the standard should be open, dammit!

    (or at least have a linux version)

    I want to see that dog!
  • Anyone else remember the little robotic dog in Woody Allen's movie "Sleeper?" IIRC, it was a little terrier-like thing that just walked around saying "hello, my name is ruff!"

    flashbacks galore...
  • I can think of one thing it would be good for. I'd like to get one to leave home with my real dog [purpledreams.com] while I'm at work.
    Bet it would keep him busy for hours!

    peter
  • From a human being's point of view, you are relatively unpredictable. You think in a manner that is unique. Nobody else will hold the same specific views that you do.


    Oh?


    At most, you probably know about a thousand people. Out of these, a few will have views that are very similar to yours about a given topic. Now, picture a million times as many people. Out of that many, you will find a few that either match your views with fantastic precision on a given topic, or match your views and opinions reasonably well on several topics.


    Now, consider all of history, past and present. That bumps up the sample size by a few more orders of magnitude.


    If you define "the same" as being atom-for-atom identical, then you are indeed unique. If you define "the same" as being similar enough for most practical purposes, then you are no longer unique. It's a big planet. It's a big universe. There's a lot of history behind us and ahead of us.


    And uniqueness is still not terribly relevant. If I could fire a duplicator gun at you and wind up with two identical copies, the copies would be no less human.


    If human is "ambiguous" to you, you have a problem.


    Ask a room full of philosophers what a human is. You will get an amazing variety of answers, because people tend to have very different opinions on what "being human" involves.


    Use something more concrete than this if you are using "humanity" to justify arguments. Tell me *what* about what you consider humanity justifies your point, and tell me why you believe that it is so.


    A machine "thinks" in a deterministic preprogrammed manner. The manner in which you think has been built evolutionarily and is constantly being modified throughout your entire lifetime. You are sentient (or at least about as sentient as I am) and you can express yourself uniquely and hold opinions and change them independently under your own volition - or illusion of volition. If I ask a machine the same question a year later, I will probably get the same answer phrased the same way; from you I will not.
    Or, from me, you might. And from the machine, you might not. A strong AI would at the _very_ least base its answers upon everything that it has experienced. Over the course of the year, it will have been exposed to new things. Further, it is likely that any machine capable of demonstrating strong AI behaviour would have to be able to modify parts of its programming. The way in which it modifies itself may be deterministic, but the result won't be (as it is performing modifications in response to nondeterministic stimuli).


    "I'm not trying to provoke; I'm trying to get you to think more carefully about your answer."


    Please do not accuse me of being a haphazard thinker. I've thought my position over quite carefully. I can use psychology to back me up. I'm not making wild leaps of faith to support my conclusions.


    My apologies. In that case, I am trying to get you to express your reasoning more clearly, as the justifications for some of your statements seem to be missing a few points, which I am pointing out in appropriate paragraphs elsewhere.


    See above for my objections to the statement that I was referring to.


    "Second of all, I am only "unique" if you use a very narrow set of criteria for recognizing me. Out of the six billion people on the planet, there are going to be many who are extremely similar to me. Expand your search to cover all places and all times, and you can probably find as close a match as you like."


    Completely untrue. The criteria are extremely broad. You underestimate the diversity of people. [...] no two people will come to the same conclusions in the same way; this means that changing somebody's opinion will require different solutions to change the same opinion.


    You underestimate the _number_ of people, or at least the full effects of this large a sample size. Will you find people who think similarly? That depends on the degree of similarity, but for just about all degrees, I'd say "yes".


    You still haven't said why this is relevant, either. Why would it make a difference in either of our arguments if you could or could not find two people who reacted identically to things? Some people react similarly, some react differently. What of it?


    Have you ever met two people that acted the same way in most situations? I haven't and I've met a lot of people.


    You've never noticed similarities among your friends? Now, as with the example above, bump up the number of people by a few orders of magnitude, and you'll be able to find pairs with even closer similarity.


    You cannot be studied so well that you become 100% predictable or even 80%. When I use "machine" I mean a machine that is deterministic from the point of view of a human being, i.e. predictable. I wouldn't place any mammal in the category of machine because they aren't anywhere near 100% predictable but a robotic dog is.


    You've never known what a friend will do before they do it? I know that my friends occasionally complain about me being predictable. I know that I can in many cases predict what a friend will do.


    Re. machines necessarily being deterministic, it's easy to build one that isn't. Just make some of its decisions random.
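    To sketch the point (a toy example in Python; the scenario and action names are made up for illustration, not anything from the thread):

    ```python
    import secrets  # draws from the OS entropy pool, not a seeded formula

    # A toy "machine" whose decision procedure includes a genuinely
    # unpredictable step: repeated calls with the same input need not
    # return the same answer, so its behaviour is not deterministic
    # in the reproducible, same-input-same-output sense.
    def decide(options):
        return options[secrets.randbelow(len(options))]

    actions = ["wag tail", "sit", "chase ball"]
    choice = decide(actions)  # may differ on every call
    ```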


    Re. being deterministic "from a human point of view" - why is this important? I've been using "deterministic" and "non-deterministic" in the absolute sense. What happens, in your argument, if you have an entity whose behaviour some people can see the pattern to but others can't? How does my personally being able or not being able to predict a robot's behaviour change the nature of the robot?


    No offense, but please reread my last post. I was answering your question as to how talking to a human being on the Internet is significantly different than interacting with a machine controlled by a set of algorithms. Again, I use machine as in "predictable machine".


    Then that is the source of one of our conflicts. It is easy to build machines that are non-deterministic, and it is easy to build deterministic machines that are complex enough to appear non-deterministic to a human (look at a standard pseudo-random number generator; that uses a deterministic formula).
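    The pseudo-random number generator mentioned above makes the point concrete. Here is a classic linear congruential generator sketched in Python (the constants are the common C-library ones, chosen purely as an illustration):

    ```python
    # A linear congruential generator: a purely deterministic formula
    # whose output nevertheless looks unpredictable to a human observer.
    def lcg(seed, a=1103515245, c=12345, m=2**31):
        state = seed
        while True:
            state = (a * state + c) % m  # same seed -> same sequence, always
            yield state

    g1, g2 = lcg(42), lcg(42)
    run1 = [next(g1) for _ in range(5)]
    run2 = [next(g2) for _ in range(5)]
    assert run1 == run2  # identical seeds give identical "randomness"
    ```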


    Re. the social benefits or lack thereof of melding with the 'net, that depends on what you're reading. I've seen people have thought-provoking discussions on the 'net, but I've also seen people who log into chat rooms and react purely on reflex. Just because there's someone else on the end of the connection doesn't mean that they're making thought-provoking statements, or that you couldn't be having more fulfilling social interactions elsewhere.


    Joe Average Kid in a chat room may be having a useful social interaction. Joe Average Kid surfing for porn or pirated games might as well be talking to the robotic dog.


    There are plenty of anti-social distractions out there already. IMO, one more won't make that much of a difference.


    I'm not going to explain myself down to a masonite block. If you cannot understand the conversation, then I'm no longer willing to converse. What am I getting out of this? What are you telling me?


    I am poking holes in your arguments, and arguing for a different point of view. If you would rather leave than defend your statements, I of course have no way of stopping you.


    What ideas have you given me so far? None. No offense, but I engage in conversation to get different points of view, I'm not here for your benefit, I'm here for mine. Likewise if you get nothing out of this, you may as well leave as you have no obligation to me either.


    When I see someone make a statement that I believe to be incorrect, I have a strong urge to argue the point, so that they and whoever else is listening may be better informed. I am not always right - I don't claim to be infallible - but if I am right, and I argue the point, I have increased the amount of knowledge per capita by an insignificant fraction. If I'm wrong, then at least I've learned a bit more.


    That is why I'm here, to answer your question.


    You claim that I've presented no ideas? Then what the heck are you disagreeing with me about? I've certainly stated _something_, or else you wouldn't have had anything to reply to.


    To recap, my main points are:

    • People have had antisocial distractions for a long time, and new ones won't hurt (they'll just displace old ones).

    • The mechanisms behind human thought can be understood, and are already partly understood.

    • Human-like thought can be duplicated in machines (though we haven't successfully done this _yet_).



    Some of these points are in direct conflict with yours, and so we each argue our cases, presenting evidence for our own arguments and poking holes in the arguments of each other.


    At what point was I unclear about this?

  • "Similarly, you could be out socializing with friends instead of surfing on Slashdot reading this."


    You may not be a friend, but you are human. You are unique. You're not a machine.


    First of all, I'd appreciate a definition of what "human" means in this context/what aspects of "humanity" you consider important in your statement above. As it stands, that point is too ambiguous to be very meaningful.


    Why is my being "human" relevant to the discussion?


    I'm not trying to provoke; I'm trying to get you to think more carefully about your answer.


    Second of all, I am only "unique" if you use a very narrow set of criteria for recognizing me. Out of the six billion people on the planet, there are going to be many who are extremely similar to me. Expand your search to cover all places and all times, and you can probably find as close a match as you like.


    Why is my being "unique" relevant?


    I am not a machine? Actually, I believe that I am (in that I am a physical entity governed entirely by the physical laws of the universe, and could be duplicated perfectly if my physical form was duplicated perfectly).


    Why is my being a machine or not a machine relevant to your point?


    I get some idea of what you're getting at, but I need it phrased a bit more concretely if I am to debate it.


    "Most people have the self-control needed to balance social and non-social behaviours in their life. Some people don't, but IMO they have bigger problems than robotic dogs."


    This is an assertion that you will not be able to back up with evidence. Hardly anybody has the self-control that you think they do.


    Let me give some context - I believe that most people in an environment similar to the one that most people presently are in have enough self control to (etc). The proof of this is simple. Look around you; most people interact adequately socially and spend an adequate amount of time in social interaction (note that I'm saying "adequate" instead of "wonderful" for both of these points).


    You can of course produce extreme cases where a person will not balance these factors. IMO, this is not relevant, as such extreme cases are very unlikely to occur, even with things like robotic dogs around. Humans will always be around too, and ignoring them tends to have very negative consequences.


    "I'm not saying that introversion is a good thing - I'm just saying that we should try addressing it directly, as opposed to addressing (in reverse chronological order) toys that emulate affection, the Internet, television, comic books, and whatever else people have held up as threats to human interaction over the years. IMO, these people are going after the wrong targets."


    I agree 100%


    Then why the strong objection to robotic dogs, as opposed to bad parenting?


    "This is most likely pretty accurate. IMO, a good approach to solving this kind of problem is to encourage people to interact more with their children. Then it won't matter if the kid has a television set or a 'net connection or a robotic dog, because there will be something better available to them - a real person."


    What a child might perceive as "good" is not necessarily good for the child. Things that bring a developing adult despair are as important as things that bring a developing adult joy. This is a harsh reality, but happiness isn't attained by having fun; it's attained by work, and work is often not fun.


    An absence of having fun tends to detract substantially from happiness, from what I can see. Work can sometimes lead to happiness, and is certainly a prerequisite for happiness in a capitalist society, but this is hardly the same as your claim above. In fact, I define "having fun" as being "doing things that make me happy", regardless of whether those things are also considered "work" or not.


    Re. what a child perceives as "good"... I'm having trouble seeing what you are getting at. If you're saying that a child might be attracted to behaviour that is ultimately harmful, then I'll agree. However, that doesn't relate directly to my point. My point is that parents can make themselves an influence in their children's lives, and that (if done right) this can counteract harmful effects from other influences under most conditions.


    So, I don't see the problem with a robotic dog or a TV or a 'net connection if the kid is also properly exposed to other humans.


    The second we can deconstruct psychology to explain all aspects of human nature in minute detail is the second we cease to be non-deterministic.


    We will always be nondeterministic, because the stimuli that affect us are nondeterministic, and influence our "programming". Our "programming" will therefore change in nondeterministic ways even if the principles governing our minds are deterministic. At best we are probabilistic, and chaos severely limits how far into the future predictions can be usefully extended.


    And if it turns out that we are deterministic, why would it make a difference whether we knew about it or not? I'd _rather_ know, personally.


    Ants are too complex for us to understand completely.


    No. The only thing that we know for certain is that we don't presently understand ants. Whether they can or cannot be understood is not directly proven by empirical evidence, and there is strong circumstantial evidence that they can.


    This "dog" isn't even close to a dog, it operated on completely different principles, it's not even close to being a life form.


    So what? I never claimed that it was. I and the original poster were talking about understanding minds in general and the human mind in particular at this point in the thread. The robotic dog is not relevant for this discussion.

• However, this is not comparable to life. It is a parody of life. It's a joke, a toy. It is 0.01% of a life form. I think that people are reclusive enough sitting in front of their TVs 24 hours a day. Social interaction with a robot isn't the same or as useful as social interaction with a living thing.


    Similarly, you could be out socializing with friends instead of surfing on Slashdot reading this.


    Most people have the self-control needed to balance social and non-social behaviours in their life. Some people don't, but IMO they have bigger problems than robotic dogs.


    I'm not saying that introversion is a good thing - I'm just saying that we should try addressing it directly, as opposed to addressing (in reverse chronological order) toys that emulate affection, the Internet, television, comic books, and whatever else people have held up as threats to human interaction over the years. IMO, these people are going after the wrong targets.


    Do you know how you can make a sociopath? Have a kid and give it no affection for the first 4 years of life; the resultant adult will exhibit complete indifference to the welfare of others.


    This is most likely pretty accurate. IMO, a good approach to solving this kind of problem is to encourage people to interact more with their children. Then it won't matter if the kid has a television set or a 'net connection or a robotic dog, because there will be something better available to them - a real person.


    If we could understand ourselves, we would be too simple to understand ourselves.


    Not strictly true. We can understand the principles upon which we operate, without having to hold in our minds every memory, association, and piece of data that is held in our minds (which is indeed impossible). IMO, an understanding of psychology adequate for building a true AI would consist of understanding thoroughly the principles upon which the mind operates, and understanding the more important association links that form in humans. Anything beyond that is IMO most likely extraneous.


    Now, this is much easier said than done, as it is extremely difficult to observe the mind in operation, and it is very complex. However, I'm addressing your point re. it being possible at all. We don't have to know everything that is in our minds to understand how our minds work, IMO.


    Just my two cents worth.

  • Place the design of a 686 on a 686 and the design of all the memory of the computer on that 686 with the designs for the hard disk, ethernet card, monitor, and leds. What? You're running out of memory to have the computer represent itself completely!!


    You can easily build a computer that has the ability to "understand" its own design, because a lot of the design _is_ redundant, or easily inferred from basic principles. You don't even need magical software to do this; look up integrated circuit CAD and synthesis tools on the web. Sure, a PII contains several million transistors in the CPU alone, but the VHDL code specifying how it works is much more compact. The implementation can be algorithmically generated by synthesis tools that understand the basic principles of how transistors can be used to implement logic.
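    As a toy illustration of that compact-description-versus-large-implementation asymmetry (a made-up Python "synthesizer", not a real CAD tool):

    ```python
    # Expand a compact specification (an n-bit ripple-carry adder)
    # into an explicit gate-level netlist. The generator stays about
    # ten lines long while its output grows linearly with n -- the
    # same asymmetry as between HDL source and synthesized gates.
    def synthesize_adder(n):
        netlist = ["WIRE c0 = 0"]  # carry-in tied low
        for i in range(n):
            netlist += [
                f"XOR t{i} = a{i}, b{i}",   # partial sum
                f"XOR s{i} = t{i}, c{i}",   # sum bit
                f"AND u{i} = a{i}, b{i}",   # carry generate
                f"AND v{i} = t{i}, c{i}",   # carry propagate
                f"OR  c{i+1} = u{i}, v{i}", # carry out
            ]
        return netlist

    gates = synthesize_adder(32)  # 161 netlist lines from ~10 of source
    ```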


    Now, you'd get a more efficient version of the chip by designing parts of it by hand, but the automatically generated version will still work, and will be able to run the synthesis tools itself.


    IMO, the mind is based on similarly simple principles; it just has a complex implementation.


    Picking apart the mind's inner workings will be like picking apart spaghetti code under the best of circumstances, because it was put together by more-or-less random processes working over a very long period of time and so is disorganized. That doesn't mean that we can't understand the compiler, to carry the programming analogy further, or the logic involved in the mind's construction.

  • [I missed a point, so I'm addressing it in a new message here.]


    "Now, this is much easier said than done, as it is extremely difficult to observe the mind in operation, and it is very complex. However, I'm addressing your point re. it being possible at all. We don't have to know everything that is in our minds to understand how our minds work, IMO."


    I still maintain that it is absolutely impossible to do this. We might make very gross generalizations that hold well for many situations, but these are models. Models do not represent reality; they represent a simplified reality so that the real reality can be better understood.


    We can come up with increasingly refined models. How refined a model we need depends on what we want to do with it. Understanding how something works does not require that we be able to predict with certainty how it will behave. In fact, for nondeterministic systems, we will never be able to predict behaviour. A coin toss is a good analogy here, though something truly random like quantum tunnelling would be a better example.


    Another point is that all depths of understanding are not equal. Understanding how our minds work well enough to create a true AI does not necessarily require that we understand all of the peculiar idiosyncrasies of human brains that have built up over several hundred million years of evolution. Only the critical and/or relevant points need be understood.


    Why do you feel that our minds must be based on principles too complex to be understood?

• What I'm betting is that a robotic dog will simulate being ecstatically, bump-into-furniture happy when you come home from a road trip. If that's what the consumer wants, anyhow. That's what is sinister about it.


    Why is this sinister?


    By similar arguments, Tamagotchis are unhealthy and stuffed animals shouldn't exist either.


    This isn't real, and the buyers fully know that it isn't real. Where is the problem? I'm having trouble seeing where you are coming from with this.

• Our population is now at about subsistence level; people get married less and have fewer kids. It is no longer common to have dinner parties, nor is it common to take a Sunday drive. It is common to drink beer on the sofa and watch football, though. Not that it is a bad thing, but we are certainly less interactive than we used to be. Do you even know your neighbors?


    I don't know about your friends, but most of mine are more socially active than the football-watcher that you describe.


    Do I know the people living next door to me? No.


    Do I know other people? Yes.


    Readily available transportation means that my circle of friends is scattered about the city (and beyond). I still have several close friends and dozens of more casual friends, whom I see and socialize with regularly. My friends have similarly large circles of friends.


    I think that you're being overly pessimistic about the amount of socialization that Joe Average participates in. Certainly, you _could_ just sit at home and meld with the TV or the computer in _all_ of your free time, but how many people do you know who actually _do_?

  • She also threw me out of the house when she opened my mail to find my lease in my new place in Back Bay in Boston, called my office, threatened my secretary, called the cops, claimed I was beating her, and threatened me repeatedly and stole $450 from me. According to the police, this happens to every tenant she's had - she's listed as "loony" at the station. She could become the norm if most interaction is done with things that don't give negative feedback. I worry about such situations.


    Insane people exist. IMO, it is better to recognize them and restrict their actions if necessary than to restrict the actions of *all* of the populace. This best preserves the freedom of individuals in general.


    However, as mentioned in another thread, I don't think that we're in danger of interacting primarily with robots any time soon. Every time a major entertainment fad has shown up, this kind of concern has been voiced, but so far it doesn't seem to have been warranted.


    Not that there aren't other problems that need fixing; see my previous messages.

  • "Let me give some context - I believe that most people in an environment similar to the one that most people presently are in have enough self control to (etc). The proof of this is simple. Look around you; most people interact adequately socially and spend an adequate amount of time in social interaction (note that I'm saying "adequate" instead of "wonderful" for both of these points)."


    You "believe". You could just as easily say that about an environment that has absolutely no social interaction. This is merely a value judgement of yours. It's an opinion, nothing more.


    Instead of berating me, how about responding to the proof that I presented above?


    We have growing amounts of violence among the young; that would seem to indicate that fewer children are receiving the social interaction they need to become functional adults. Do you have another theory as to why violence among the youth has risen? Look at the United States prison statistics.


    Prison statistics prove only that more people are put in prison. This reflects less on how many crimes are being committed than on what we consider a crime, and what we choose to put people in prison for.


    I've already addressed the topic of youth violence elsewhere.


    People are already ignoring people because the consequences are simply no longer there. Being ostracized today isn't what it used to be. If you are being ostracized by your family, fuck them, move to NYC and start over.


    And similarly alienate potential friends. If a person has no friends because they're a jerk, moving won't change this. They will either continue moving from place to place wallowing in bitterness, or learn not to be a jerk.


    "I'm not saying that introversion is a good thing - I'm just saying that we should try addressing it directly, as opposed to addressing (in reverse chronological order) toys that emulate affection, the Internet, television, comic books, and whatever else people have held up as threats to human interaction over the years. IMO, these people are going after the wrong targets."


    "I agree 100%"


    "Then why the strong objection to robotic dogs, as opposed to bad parenting?"


    Um, what? When was parenting mentioned? You think you can "fix" parenting? I have news for you: you can't. What are you going to do, tell parents what deserves positive feedback and what deserves negative feedback? You cannot control parents through legislation.


    If you agree that things like the 'net and robotic dogs are the wrong targets, then why do you attack them? You have stated elsewhere that you don't think either influence can be removed, so why not gripe about the real causes of social problems (bad parenting being one case)? Similarly futile, but at least you're complaining about the right thing.


    Personally, I feel that we should make an effort (through "parental education initiatives" or what-have-you) to make sure that parents understand that it is important for them to raise their children. Then, if they still don't bother, we can justifiably say that they brought the problems upon themselves when they try blaming something else.


    Endorsing the blaming of incorrect targets IMO does no good and potentially causes a fair bit of harm. If the 'net was shut down to prevent Joe Average Kid from searching for porn, then Joe Average Kid would go back to reading his friend's brother's magazines, and things like this discussion would suddenly become a lot harder.


    I'm pointing out that you cannot remove these (Internet, television, etc.) from society, because we are now dependent on them. What I'm trying to indicate is that there has already been damage done, and more is to come. If a parent doesn't see anything wrong with their child hanging out with the mechanical dog, like you don't, the parent will likely not spend as much time with the child.


    I see nothing wrong with a child of mine hanging out with a mechanical dog as long as they do spend some time hanging out with human friends as well.


    That is substantially different from your statement above, yet I stated it clearly elsewhere.


    The real problem is that our society is built on escapism. Perhaps we should figure out what we are trying to escape from and eliminate it. It is easy to see that the art of conversation is dead; it's very difficult to find anybody to talk to.


    If conversation is dead, then why is there traffic here?


    You never go out to socialize and chat with your friends?


    Eliminating what we're trying to escape from is impossible, because we're trying to escape from both human nature and the fundamental realities of the world around us. Life isn't always pleasant, so we retreat to fantasies that are more pleasant. Life will never always be pleasant, because jerks always exist and because resources are finite. IMO, the best that we can do is to teach our kids how to deal effectively with the real world, so that they have less of a need to escape. "Less of a need", not "no need".


    The higher the potential you reach, the happier you are; that's my theory, and the more I learn, the happier I am. I never want to stop learning, not because I find learning to be "fun", but because it always brings rewards later. I have been very lucky to experience so much because I've worked to do it. It didn't happen by fucking off; in fact, when I did fuck off in college (11 credits, 2.1 GPA semester), I was most depressed - but I smoked a lot of weed and drank a lot of alcohol and saw tons of movies, and had a lot of "fun" - of course it was a waste of a semester, but it was "fun".


    If you were depressed, it doesn't sound like you were enjoying yourself very much. I reiterate that I personally define "fun" to be "doing things that make me happy". If you drank and were still depressed, I question the logic of calling that "fun".


    There's also total integrated "fun" to consider. If I quit my job and smoked crack on the streets, I might be happy when I'm high, but the rest of the time I'd be in abject misery. Net fun lowers.


    Re. learning, I learn because I enjoy learning. If this brings me rewards later in life, so much the better.


    I do problem sets because they will bring me benefit and allow the potential for future happiness. Not because I enjoy them. If I didn't bother doing problem sets, I might have more time for happiness in the short term, but exam stress and future employment would be a lot worse, resulting in a net loss of happiness down the road. If I spent all of my spare time doing problem sets, I'd probably get perfect marks, but I'd be a lot less happy also.


    I consider work a means to an end, not an end in itself. If you actually enjoy your work, then great, but it is not and doesn't have to be the end-all and be-all for the rest of us.


    "My point is that parents can make themselves an influence in their childrens' lives, and that (if done right) this can counteract harmful effects from other influences under most conditions."


    Sure they "can", but keep in mind that people act almost exclusively on instinct. Take away the natural environment and they will no longer instinctively know how to solve the problem. We are far from being purely intellectual despite our assurances that we are "logical" - we aren't. Here we have video games, 24 hour cartoons, artificial friends, and unlimited access to information - you think a typical parent can handle this?


    Sure they can. Turn off the TV. Turn off the computer. Heck, sell the TV if neither you nor your kid benefits from it (but be prepared to provide alternate entertainment for your kid). Take your kid out to the museum or the zoo. Do something else with them.


    It isn't _trivial_ to make yourself a part of your kid's life, but it isn't fundamentally impossible, either.


    Ultimately the solution is for each parent to take the initiative, but the vast majority of the parents will not.


    I agree. However, I believe that it is a worthy goal to work towards, because some of the parents will take initiative, and we can fix the blame squarely where it belongs in most other cases, as they will now have known better.


    Which is more fun, to hang out with mom or dad or to play Quake III?


    Well, I'd like to get around to playing "Tribes" at some point, but playing games all of the time loses its charm after a while (I've tried it). I socialize with my friends because it _is_ more fun than doing most other things. Otherwise I wouldn't be doing it.


    I'm not saying all this is bad, but when we start producing interactive things that are meant to replace interaction with living things, I think we are in danger of crossing a line.


    IMO, it would take something along the lines of a true AI to be involving enough to replace interaction with living things, and that will neatly solve the problem (as interacting with a true AI is as socially fulfilling as interacting with a real person, because it _is_ a person). Now, there is the issue of ensuring that the AI itself isn't designed with harmful psychology, but it turns out that there are fairly straightforward ways of doing this (give an AI a drive for self-preservation, and it won't be happy when you beat up on it, for instance).


    However, I doubt that we'll see this for a few decades yet.


    I am not saying that robotic dogs will make humans more deterministic; I'm saying robotic dogs, cats, friends, etc., will make us fucked up because we aren't interacting with something that will give us not only positive feedback but negative feedback. Our brains aren't built to operate in this type of environment. You are free to treat a robotic dog in any manner you see fit, and probably with no guilt associated with it. You could "kill" a robotic dog and feel no remorse (maybe it was even enjoyable to do so). If the same person has interacted with machinery more than with humans and gives people the same value as machines, you can run into a serious problem. And this is entirely possible if they never receive negative feedback for doing "bad" things to this "living" machinery.


    As long as they still have to interact with other humans on a regular basis, there will still be negative feedback for antisocial behaviors. I've already covered why I think that they will still be exposed to humans. Re. making a masochistic robot dog, I agree that this would probably produce some harm (though I maintain that the majority of people would still turn out ok due to other social influences). IMO, the best way to curb this kind of harm is to legislate that toys of this type should not encourage abusive behavior. This can be enforced (the manufacturers are big enough targets).

  • "I see where you're coming from, but I think that human-to-human interaction is prominent enough in peoples' lives even with robotic dogs that this isn't a big threat."


    Back it up with evidence. We have more antisocial behavior than ever it seems.


    Back *that* up with evidence. The operative word here is "seems".


    Kids blowing away kids with guns, people getting mugged for sneakers, you name it.


    There was an interesting documentary a few years back about television and the Internet. It brought up headlines similar to the ones you describe, with 11-year-olds murdering 11-year-olds and similar atrocities.


    The newspaper headlines were from the first couple of decades of this century. The antisocial behavior was then blamed on comic books.


    A very large number of people are anti-social, criminal, selfish, stupid, or a combination of the above. This is true now, and it always _has_ been true. That's human nature. Instead of pointing fingers at the latest fashionable target for blame, try thinking about how to make society work despite the fact that such people exist. They certainly aren't going away.


    I don't think the average middle income white person has any idea how the average middle class black person lives - or cares. And it goes both ways. For all people, for all groups, income brackets, and education levels. When is the last time you talked to somebody between 30 and 40 who dropped out of high school? They make up a good section of our population. When is the last time they talked to you?


    You're presuming a lot about who I am and am not friends with. Speak only for yourself in this regard, please.


    Shit happens dude. And I think this is shit. I'm predicting that this will be ultimately negative for both human beings and for animals.


    People happen, "dude". I don't think that this will have any greater an effect on human nature than anything else this century has. What we do in any given decade may be different, but there are still nice people, not so nice people, and actively malevolent people.

  • And if it turns out that we are deterministic, why would it make a difference whether we knew about it or not? I'd _rather_ know, personally


    Did I say it would make a difference?


    Yes, you did:


    "The second we can deconstruct psychology to explain all aspects of human nature in minute detail is the second we cease to be non-deterministic."


    Whether we are deterministic or not (in the absolute sense) is a part of the nature of the universe, and is most certainly not influenced by whether or not we know that we are deterministic or non-deterministic.


    Similarly, as I've already shown elsewhere, you can completely understand something that is still non-deterministic. Non-deterministic systems can still operate on deterministic principles.


    As it is I believe we are deterministic, but I also know that it cannot be proven that we are - so for all practical purposes, we do have "free will", although in reality I believe that "free will" is an illusion. The "free will" model suits us well, because the system is so complex, it may as well be "free will".


    How does complexity prevent us from proving that we operate on deterministic principles?


    In fact, there's already a proof of this if you choose to accept the axiom that we are wholly embodied by our physical forms. Our physical forms obey deterministic physical laws (though with quantum effects, this actually only proves that we are probabilistic, with probability functions that are deterministic).


    Not that this particular proof yields much insight into our inner workings, but it still works as an argument for determinism as a whole.


    "No. The only thing that we know for certain is that we don't presently understand ants. Whether they can or cannot be understood is not directly proven by empyrical evidence, and there is strong circumstantial evidence that they can."


    The nervous system of an ant is hugely complex, although very simple compared to a dog's, yet we do not understand the mind of an ant.


    So what? I've already conceded that they are complex, and I've already mentioned that we don't _presently_ understand them in their entirety. How does this influence whether or not they _can_ be understood?


    You maintain that building a robotic dog and installing an artificial psychology in the mechanical dog will not affect the owner.


    I maintain that building a robotic dog of the type described will not _significantly_ adversely affect the psychology of its owner. Will it have _some_ influence? Of course it will. All stimuli affect us to some degree. Will it have the large, harmful effects that you fear? IMO, not for the vast majority of people.


    Certainly some people's lives are very much attached to their pets; it will happen with a robotic animal as well if the animal emulates a living thing well enough. The psychology of the robotic animal is likely to be flawed. This may very well (I would say most likely) cause the psychology of the owner to become flawed; it will certainly change it. It's just that simple.


    Some people will become attached to robotic pets. Some people are already attached to things like Tamagotchis. Some people, as you point out, are attached to stuffed animals to varying degrees (yes, I know exactly how far this can go in Furry circles).


    In all cases, this is due to people ascribing qualities to the devices (or plushies) that aren't actually there. There is a similar effect when people make up imaginary friends. Heck, it could be argued that religion is based on similar principles (with your chosen god being the "imaginary friend" in this case). (NOTE THE "COULD BE ARGUED", AND DON'T FLAME ME.) This is a very basic tendency, and as you point out with plushies, it will exist no matter what. IMO, as long as the manufacturers and parents take pains to point out that the dog (or the Tamagotchi) *isn't* *real*, and doesn't actually feel emotion, then the harm from this is not significantly more than the harm from anything else (the kid will still think up an imaginary friend or latch on to a stuffed animal if they feel that deeply that there is no human that can be friends with them).


    Ever hang out with somebody that was mentally unstable? I have, and it rubs off inevitably.


    Yes I do, fairly often. One of my good friends is mentally unstable.


    It is often stressful interacting with him, but I don't think that my own psychology has been adversely affected. I interact with many other people; the forces affecting my psychology are acting to make it easier for me to interact with _all_ of them, not just any one in particular.


    As long as a robotic toy isn't the _sole_ social influence in a kid's life, their psychology will _not_ be solely based on interactions with it. Logically enough.


    "So what? I never claimed that it was. I and the original poster were talking about understanding minds in general and the human mind in particular at this point in the thread. The robotic dog is not relevant for this discussion."


    I am the original poster, and the robotic dog is completely relevant to this discussion.


    Not to the thread debating whether or not human minds could be understood.


    This conversation contains easily dozens of individual threads that we are debating, on various topics. The point that you were responding to was a response to a statement about human minds in general - not the robotic dog or its impact on humans.


    I've started splitting my responses up, as this discussion is getting too large to manage in one message.


    Re. being the same poster, you seemed to exhibit inconsistency re. believing humans to be deterministic or non-deterministic. In retrospect, this was because you were using a different definition of "determinism" than I use. Choosing an alias would eliminate future misunderstandings, and would also prevent any Anonymous Coward from claiming that they were you (as I've seen happen in other threads with other people).


    You don't have to specify your normal email address; a temporary hotmail account can receive your password string as easily as any other account can.

  • The system of the brain must have some layer of stability even though it has functions that are chaotic. These underlying features of stability are completely and totally not understood on the physical level. We can only observe the outcome. We know where in the brain some functions are performed but the how the structure of the brain causes the function to be performed is still a complete mystery, and may be forever.


    Consult with a friend in neurobiology for more information on how much we do and do not understand of the brain on a physical level. Find a few good textbooks on neural networks for more information on how much we do and do not know about how the brain and similar structures can process information.


    We most certainly don't know everything about the brain, but neither is it a complete mystery, as you claim it to be. And our understanding continues to (slowly) improve.

  • It has to do with the fact that I'm a determinist, which means that I believe we ourselves are completely robotic in nature. I do not believe in "free will" as a concept. We are just a really complicated feedback loop. We are controlled 100% by what we interact with (from television to dear old Mom), and I don't think it's such a great idea to be interacting with something that is totally artificial and completely predictable. You may not be getting just standardized robotic dogs with standardized artificial psychologies but standardized human beings with artificial psychologies. I find it Orwellian to the nth degree.


    Well, bear in mind that there's a fair bit of random noise injected into our feedback loops, which makes our behaviour more probabilistic than strictly deterministic.. O:).


    I see where you're coming from, but I think that human-to-human interaction is prominent enough in peoples' lives even with robotic dogs that this isn't a big threat.


    If the Evil Empire locked us in a room from birth with only carefully programmed machines to interact with, then we would probably develop standardized personalities (to within the limits imposed by physical variations to our brains and noise perturbing our thoughts), but IMO anything short of such an impractical isolation would have a hard time standardizing people to the extent that you fear.


    Just another opinion.
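    The "random noise injected into our feedback loops" point can be illustrated with a minimal sketch (mine, not the poster's; the update rule and constants are arbitrary choices for illustration): a purely deterministic iterated rule reproduces itself exactly, while a vanishingly small noise term compounds under a chaotic update until two runs disagree.

```python
import random

def feedback_loop(x0, steps, noise=0.0, seed=None):
    """Iterate a simple nonlinear feedback rule, optionally perturbed by noise."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)             # deterministic update (logistic map)
        x += noise * (rng.random() - 0.5)   # tiny injected noise
        x = min(max(x, 0.0), 1.0)           # keep the state in [0, 1]
    return x

# Identical deterministic runs agree exactly, bit for bit.
assert feedback_loop(0.2, 50) == feedback_loop(0.2, 50)

# Add noise on the order of 1e-6 and two runs no longer agree.
a = feedback_loop(0.2, 50, noise=1e-6, seed=1)
b = feedback_loop(0.2, 50, noise=1e-6, seed=2)
assert a != b
```

    The same principle is why "restart the universe and get the same conversation" only holds if there is literally zero noise anywhere in the loop.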

  • of the Micros~1 Teletubbies that respond to coded TV transmissions.
  • "A consumer toy designed to sell will do what is necessary to make the customer happy."

    Then why did my tamagotchi (sp?) die? Because it was part of the experience for which I paid.

    I think it's interesting that something grossly simpler than a robotic dog could evoke any kinds of feelings at all. Yet, after feeding and taking care of my "virtual pet" for several weeks, I did feel bad when it got sick and died. It defied my internal logic and made me feel a little silly, but that still doesn't change the way I felt.

    I see a robotic dog as being no different than a doll. In some cases it may be better: if I beat a doll, it doesn't even have the opportunity to give negative feedback.

    Interesting discussion though.
  • AC: "It's a machine. It will never be happy to see you, it will never feel 'good' or enjoy 'praise', it will just go through the algorithms that makes you think it did."

    Please specifically define the terms/phrases:
    "be happy"
    "feel good"
    "enjoy praise"

    And are we really sure that these aren't just complex algorithmic responses from an analog computer?

    Is my dog really happy to see me? Or is he just brown-nosing for the alpha male? Is what we think is "happiness" in our dogs just an evolution conditioned response to keep the dog alive?

    Are we sure?
  • But it needs to have a little laser that comes out of its snout. And it has to call me "The Doctor".

    Seriously though - it seems like a neat idea. I wonder what people that are against the keeping of animals would think about it?

    -lx
  • Woof! Woof! Woof! Woof! Hello! I'm Rags!
    Woof! Woof! Woof! Woof! Hello! I'm Rags!
    Woof! Woof! Woof! Woof! Hello! I'm Rags!
    Woof! Woof! Woof! Woof! Hello! I'm Rags!
    Woof! Woof! Woof! Woof! Hello! I'm Rags!
    Woof! Woof! Woof! Woof! Hello! I'm Rags!
    Woof! Woof! Woof! Woof! Hello! I'm Rags!
    Etc. Etc. Etc.
  • yes, this announcement is serious. our robotics group at northwestern has been using them for robotics research for quite a while now, and they're very cool.

    btw, just look at the specifications [sony.com] (scroll all the way down that page) for more info...
  • and those of you who read memepool [memepool.org] knew about this a couple of months ago... :-)
  • > Sure the Heisenberg Uncertainty Principle makes life non-deterministic, but only to us. If you start out the universe in exactly the same way again, we would be having this conversation about 14 billion years

    Nope. QM is probabilistic and nondeterministic in nature. Heisenberg doesn't say "you can't measure position and momentum together"; it says "position and momentum only exist within a constraint; there is no such thing as complete precision in both". Throw in chaos, and strict determinism is out the window.
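    For reference (this equation is added for illustration, not part of the original comment), the constraint being paraphrased is the standard uncertainty relation between the standard deviations of position and momentum:

```latex
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}
```

    That is, it bounds how sharply defined position and momentum can simultaneously be in any quantum state; it is a property of the state itself, not a statement about clumsy measurement apparatus.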
  • > Replacing life with a simulation of life is sinister. It's just my opinion.

    Is it more sinister than forcing an animal to live in captivity, denying it the opportunity to reproduce, and eventually taking it to the vet to be involuntarily killed--just to satisfy your own misdirected parenting instincts? Arguably not.
  • > It's a machine. It will never be happy to see you, it will never feel "good" or enjoy "praise", it will just go through the algorithms that makes you think it did.

    How do you know your current pets aren't doing the same thing?
  • Well, that brings other issues... I for one think a lot about this shit, and it's delightfully complicated! =)
    The idea that the brain is a 'wet computer' seems to me to be absolutely correct; mine even has bugs =P~
    Seriously though, I'm certain that we can create "Life" from a machine, since there is nothing magical about us; there is no 'soul' or 'spirit' which we gotta have to truly 'live'.
    The question really is, what exactly does something have to do to be 'alive'?

    Your point (which I forgot about) is that you're not sure that mankind will be a kind ruler to the other beasts on our little planet.
    Well, that has nothing to do with living machinery. Making intelligent machines is a natural outgrowth of computers.
    Hell, maybe it's already here; I get the feeling my computer has a deep and awe-inspiring hatred of me.
  • Can you prove to me that you aren't merely a collection of algorithms and software to simulate what I think is humanity? For that matter, can I prove to you the same? Can you prove to me that you are conscious? How do I know that you feel "good" or enjoy "praise" and aren't simply going through the algorithms that make me think you do?

    I suppose we need to tell your friends to buy toasters and ditch you :-)
  • As was previously mentioned, the success and value of these things are not tied to their feelings or real-ness.

    People obviously get attached to cars, to computers, to tamagotchis, to stuffed animals... Imagine the level of attachment someone could get out of their robo-pet? What does it matter that it's programmed to 'like' you? How is a real dog any different?

    Robots are good for one thing: fulfilling a purpose. If it's to give someone a boost of joy or pleasure, then so be it. Slave labor is very overhyped in today's service society, especially with the value of entertainment and such.

    It's as much a waste of silicon as a car (in my case, a Yellow NeoBeetle) is a waste of iron and plastic. I'm very attached to it, and if you bad mouthed it or scratched it or something, I would get upset way beyond its monetary worth.

    AS
  • Dr. Who's companion, K-9.
  • It's using Sony's proprietary real-time OS. Do you think there'll be a Linux port to this platform? Just think of the Bay-Woof cluster you could have with a pack of these things
  • does it matter? If it's too inconvenient to take care of a real animal you can still heal the universal karma by nurturing an electric one.

    Philip K. Dick is cool... and so is Harrison Ford.
  • by Mr_44 ( 18214 )
    I love Brazil!

    ...whee...
  • Exactly what is the difference between an emotion and its equivalent reaction? If a programmed entity perfectly simulated every emotion all the time, on what basis could you claim the emotions aren't "real"? For any argument you give against emotions in preprogrammed entities, I could give you an equivalent argument claiming your emotions aren't real on the basis that you only carry the algorithms of emotional reaction in your genes/neural net.
  • Does anybody know if anyone has ever studied the use of a non-biological pet in a therapy setting? Probably not, but the possibilities are really interesting...For an elderly or disabled patient who can't take care of, or would be intimidated by, a real dog, a substitute might still improve their mood and help them with their need for companionship.

    Don't laugh--for someone who is lonely, any companionship or interaction, even with a machine, might be better than nothing at all. And to those who don't understand how a person could have feelings for something that's not alive, go watch a small child play with a stuffed animal or doll.
  • Yeah, yeah . . . take your "it has awareness" argument to a Skinnerian and see how far it gets you. :P

    Please note that I am NOT agreeing with the Skinnerian view of things. In fact, I think it's all hooey. Just playing devil's advocate. =)

    --
  • I wonder what kind of snap-on attachment the likes of Dr. John's will make for the pet-bots.

    Katz, you listening?

    --Corey
  • saw this (sort of) on Discovery or TLC last week - they had a show on Robots and while they didn't cover the pet-bot in detail they DID show it playing with a ball and getting rowdy.

    I thought it was extremely cute, nearly lifelike even. I'd get one just to see what it could do. :)

  • Hmm... Heheh... Well, at least that would be funny as hell, though...
  • A whole new meaning for beast-e-ality...
  • I want it to be as big as a Rott...=> Hmm.. How many hd's could I fit in that amount of room..
  • Ever heard of screwdrivers and patch cables? If you really wanted to, you could hack them....
  • Think we should have a fund to get Jon a hooker? J/k!!!
  • Hm... You die, you have no friends and stuff... And one day someone sees your skeleton and this dog doing housework.....
  • It doesn't have feelings, right? It doesn't come to you because it likes you, but rather because it is *programmed* to, right?

    Robots are good for one thing: slave labour. Make it pick up my laundry - then I'll be impressed... other than that, this is a waste of silicon.

    I'm not discounting the ability to create an AI in the future that is capable of emotion and thought, even if it's only at the level of a pet's intelligence; if so, put the AI in our little robot thingy here, and then I'll be interested.

    Maybe those Sony Engineers need to get out more. I find trees and birds and even grass far more stimulating than a robotic pet dog. :-)
  • That's good! I don't want a dog, but I'd like something running around that I could play with from time to time. Real animal pets cost a lot, take a lot of time to take care of, and can be a real hassle. I think this robot pet thing is really cool. If it can mimic how a real animal behaves, even somewhat, I want one.
  • >Another April Fools joke, have you NO SHAME!!

    Actually, I visited this site months ago. It's definitely no joke.
  • Yeah, this ain't an AFJ. In fact, one of my friends from high school is on the development team. (He never told me much of the detail, though...)

    Also, from the rumor I heard, it's going to cost something like 200,000 yen. About 2 grand (US)? Can't confirm this, but I feel that's about the right price.
  • K-9 rocked, but he was so damned slow.

    You'd think if they could come up with a laser for a sloppy wet nose
    they could have put a big ass motor in his rear...
  • Why would I waste time giving affection to a machine? I don't consider it a "waste of time". I have feelings about my car. If somebody gave me a different white Miata, it'd still not be the same. I have feelings about my computers. I like my Mac better than my PC's, even though it's slow and old and getting kinda clunky. I spend a lot of time with these machines. Isn't it natural that I attach emotional value to them? This certainly doesn't mean that I don't like people (I'm quite fond of them, 'specially the female variety), but that doesn't preclude my ability to be "attached" to machines. It's not rational, there's no REASON for me to "like" my Mac, but I do anyway. Why would a robot pet be any different?
  • I don't know about the rest of you, but I find it easier to relate to the binary boxes we call computers than most humans, especially women. I don't really care about affection, I just like stuff to do what I want it to do (call me a control freak).

    I don't like living dogs, and don't think I'd care for a robotic one, however something like a Gorilla... something big that I could have rip jocks limb from limb, that'd be cool.

  • I don't know, it's not really a pet if you can turn it off so it doesn't bug you, no?

    That said, I'd like to get one for my cat to "play" with...

  • Go to your local library newspaper morgue and read it. It doesn't just seem that we have more violence; we do have more violence. 3% of our population is in jail right now. Only because we have enough jail space (and no debtor's prisons :-). Look at older newspapers. Or historical texts.

    For that matter, during New Year's Day I heard concern about how violent and dangerous the USA is -- but this was in a city in Brazil which on that day had ten times the number of shooting deaths as my twice-as-large home city in the USA.

    Most "western" movies exaggerate the number of guns being worn, but when criminals got too uppity ranchers did start carrying around their weapons, and that's one reason they had the weapons. Romeo was a typical 13-year-old of the time, carrying around a sword. And at those times a minor stabbing could easily be a death sentence due to infection (doctors did not understand how to control infection at that time).

    For that matter, does "violence" include or exclude what the staff of the royalty and clerics are doing?

  • I guess if ppl are sick of cleaning up after their dogs, it's ok....otherwise, c'mon now...i ain't gonna wrestle with no machine.

    Well...mebbe I will :)
  • couldnt resist...
  • With Tomagotchis and Furbys, this story is just crazy enough to be true, even on April 1st. I think it is kind of odd that it was last modified yesterday, though....

    HEAD /robot/index.html HTTP/1.1
    Host: www.world.sony.com

    HTTP/1.1 200 OK
    Date: Thu, 01 Apr 1999 23:35:07 GMT
    Server: Apache/1.3.4 (Unix) mod_ssl/2.2.5 OpenSSL/0.9.1c
    Cache-Control: max-age=86400
    Expires: Fri, 02 Apr 1999 23:35:07 GMT
    Last-Modified: Wed, 31 Mar 1999 06:55:56 GMT
    ETag: "88e90-1cf5-3701c6fc"
    Accept-Ranges: bytes
    Content-Length: 7413
    Content-Type: text/html
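    Not part of the original comment, but the freshness check above can be reproduced offline by parsing the two timestamps from the transcript with Python's standard library (a sketch; the header values are just the ones shown in the HEAD response, and will differ today):

    ```python
    # Parse the Date and Last-Modified headers from the HEAD transcript
    # and compute how long before the request the page was last changed.
    from email.utils import parsedate_to_datetime

    date = parsedate_to_datetime("Thu, 01 Apr 1999 23:35:07 GMT")
    last_modified = parsedate_to_datetime("Wed, 31 Mar 1999 06:55:56 GMT")

    age = date - last_modified
    print(age)  # → 1 day, 16:39:11

    # The page was last modified on March 31st, the day before April 1st.
    assert last_modified < date
    ```

    Which supports the comment's point: the page predates April Fools' Day by well over a day.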
  • They had one of these "Pets" operating at AAAI last year. Although there was nothing technically brilliant about them, they stole the show (all the girls thought they were extremely cute). They are very programmable and (by default) include programming that allows the robot to right itself whenever it falls over (through a neat set of leg movements). As for price, I have heard figures of around $600 US. Almost everyone who has seen one in action wants one, and most would pay that price.
  • One might argue that if you think it's affectionate it doesn't matter.
    Can't buy 'em yet anyway, so I'll stick with my coffee maker, which really likes me and would probably be hurt if I brought home some competition.
  • It was in Popular Mechanics some time ago and has been demonstrated at various conventions. Unless they decided to spend hundreds of thousands of dollars developing and showing off one of these animals just for a prank. I doubt it.

    Incidentally, it can also behave like a monkey, a cat, and one other non-dog animal.
  • Well, haven't you heard of the Turing test? If it can exhibit all the qualities of a real dog and fool you into believing it was real, who is to say it's not?

    Not to get into a philosophical discussion, but aren't all of our behaviors just responses to stimuli, and in that case aren't we just going through certain algorithms?

    If they can do just a part of what they claim, it'll be huge. Come on, look at Furbys: those infernal little toys can say like 40 different things and everybody went nuts over them....

  • They're actually really smart (is that possible?). They have something in them that emulates emotion. If you hit it too much, it sulks and will eventually go to sleep. If you pet it, it will be happy and wag its tail.

    I will get one. Oh yes.

    Do you think that they will change the apartment restrictions to say: "No pets. Electronic or otherwise."?
  • I hate to have to break this to you dude, but any lifeform, sentient or not, is merely a biological machine, with an admittedly powerful and extensive neural network for a cpu, which is capable of learning (read: adding new algorithms).

    How do you know what a dog or cat really feels? Maybe it is just performing the logical outcome of its own neural algorithms.

    And who knows for sure, maybe one day some of these machines may evolve into more than just the sum of their parts?
  • I want to believe...







    heh
  • I forgot to qualify: A robot that provides no negative feedback, or acts happy when it's beaten, probably does not qualify as an intelligence approaching that of sophisticated life forms, which are "smart" enough to know to preserve and protect themselves.


  • I am an atheist. I understand that we are machines. I understand that happiness, feeling, and enjoyment are adaptations to our environment that make us better reproductive machines, and that all emotions are nothing but carry-overs from evolution's solution to staying alive.

    However, this is not comparable to life. It is a parody of life. It's a joke, a toy. It is 0.01% of a life form. I think that people are reclusive enough sitting in front of their TVs 24 hours a day. Social interaction with a robot isn't the same or as useful as social interaction with a living thing. You must realize that our programming, brought on by evolution, can be thwarted when feedback (which we are dependent on) is incorrect. For example, what if you made a robotic dog that exhibited "happiness" when you beat it? What would that do to the human being?

    Nobody's saying it's a life form of any sort. You buy a real, furry dog for fun, companionship, love, and in some cases, as in hunting dogs, fire dogs, and seeing eye dogs, to help you. Sony is selling its robotic dog for one reason only: entertainment. It's designed to make us laugh and intrigue us, get us thinking about robots and their possibilities, and not fool us into thinking it's a real animal. If someone ever builds a robot that's lifelike enough to be considered an intelligent life form of some sort, then we can begin to address these issues. If it approaches a human in intelligence, then what would it matter if one of our friends is flesh and blood or steel and silicon?


    Remember that we humans are adaptive machines, too, and if one of the changes we'll have to adapt to is having intellectual equals which are made of metal then so be it.

  • It's true. I've been hearing about this thing for some time now. And I've seen the robot on TV. It moves similarly to the way a real animal does!
  • by Anonymous Coward on Thursday April 01, 1999 @12:45PM (#1952983)
    Why would anybody waste time giving affection to a machine? This is merely a collection of algorithms and software to simulate a pet; it simply isn't the same thing. I think it will be a sad world when enough is understood about psychology that we will be able to simulate everything and do nothing. This is a step in that direction, in my opinion.

    It's a machine. It will never be happy to see you, it will never feel "good" or enjoy "praise"; it will just go through the algorithms that make you think it did. Buy a toaster; it would be cheaper and just as affectionate.
  • by Roundeye ( 16278 ) on Thursday April 01, 1999 @02:14PM (#1952984) Homepage
    For the marketing representatives of Sony reading
    this list I am officially making a proposal for
    experimentation with this groundbreaking AI
    technology and will require eleven (11) Sony(tm)
    Dogs(tm) in order to conduct the following
    important experiments:

    10 - "Identity Crisis" - one week of acclimation
    training to dog's given name "Rambo" with
    establishment of "his territory", "his chores",
    and "his spiked collar". After week one dog
    will be referred to by the name "Prissy", shown
    "her pink bow", and reprimanded on "violations of
    Rambo's territory." "Rambo is a good dog. Prissy
    is a bad dog." Prissy will have no "territory".
    Rambo's imminent angry return will be prophesied
    more and more frequently.

    9 - "Navigation" - Dog will be told that my
    apartment is "its wonderful home." Dog will
    be taken to the sidewalk and will be told to
    "come home!". On the next day dog will be taken
    to the end of the block and told to "come home!".
    On the third day Dog will be taken to the edge
    of the neighborhood and be told to "come home!".
    On the fourth day Dog will be placed in the
    luggage hold of a Greyhound bus bound for El Paso,
    TX and told to "come home!"

    8 - "New Dog, New Trick!" - Dog will be taught
    to fetch Heineken from refrigerator, being
    rewarded after each fetch. After one week, Dog's
    legs will be removed and replaced with standard
    grocery shopping cart wheels, floor will be waxed
    and Dog will be told to "fetch beer". Pictures
    of Dog from experiment #9 being placed under bus
    with "El Paso, TX" on destination placard will
    be shown for motivation.

    7 - "Schizoid" - Dog will be trained to go to
    front door upon utterance of phrase "go to front
    door", and trained to go to back door upon
    utterance of phrase "go to back door". Dog's
    vocal recorder will be programmed to play sound
    clip of experimenter saying "go to back door" when
    dog arrives at front door. Dog's vocal recorder
    will be programmed to play sound clip of
    experimenter saying "go to front door" when dog
    arrives at back door. Experimenter utters command
    "go to front door."

    6 - "Da pimp" - Dog will be shown selections from
    an extensive catalogue of bestiality films. Dog
    will be dressed in lingerie and placed on a
    street corner in a depressed local neighborhood.
    Dog will be instructed that "Pimp daddy betta
    get all his cash or you ain't nobody's bitch no
    more!"

    5 - "Doggy Style" - Dog will be shown old Lassie
    reruns where Lassie swims to save Timmy. Dog will
    be transported via pontoon boat with mannequin
    labelled "Timmy" to middle of sizeable community
    reservoir. Timmy will be thrown overboard.
    Dog will be thrown overboard. Pontoon boat will
    return to shore to record observations.

    4 - "Silicon brain, Iron Will" - Dog's mobility
    circuits will be disabled. Dog will be placed
    in front of a television, in an otherwise
    empty room, which will play a continuous tape-loop
    of "Young Einstein" starring Yahoo Serious.
    Dog will be wired to the charging system to
    provide continuous recharging.
    At the end of one month the Dog will remain
    locked in this room with mobility circuits
    re-activated.

    3 - "Nuremberg" - Dog will be placed on trial
    for "crimes against humanity" and sentenced
    to execution by Monster Truck. A lengthy but
    fruitless appeals process will be conducted
    with experimenter playing the roles of public
    defender and presiding judge. Dog will be
    executed after dramatic "Dead Dog Walking"
    march to driveway.

    2 - "Franken-tug" - Two Dogs will be trained
    to "fetch". Both Dogs will be disassembled
    and their front halves joined back to back.
    "Dog" will be reactivated and told to "fetch".

    1 - "MSCSE" - Dog will be trained to pass the
    Microsoft Certified Software Engineer Exam. After
    receiving certification, Dog will apply to
    Redmond, WA headquarters as a "Lead Developer".
    Salary and unavoidable performance bonuses will be
    contributed to the Free Software Foundation.

"Life sucks, but death doesn't put out at all...." -- Thomas J. Kopp
