Robotic Dogs 130
Brian Daniels
sent us a link to a crazy-looking
Sony Robotic Dog.
It doesn't have a price listed, but I'm willing to bet you
won't have to clean up after it. It's supposed to be configurable,
expandable, and fairly programmable. And it's cute, too.
Yeah, wait'll it dumps core on the rug. (Score:1)
heh.
"Honey, they cracked the dog!" (Score:1)
But wouldn't it be great to have wireless networking? Then they could communicate, be remotely signalled, programmed, debugged, etc. Some robot toys already have this, of course.
But just wait until some script kiddie cracks your dog. Now they can see and hear what's going on in your house, move around, kick things, push things, even try to escape. Just watch your pet jump out of the window in a bid for freedom, then some kid waiting with a laptop in the getaway car grabs it and drives away...
We live in interesting times.
roc@cs.cmu.edu
The best feature... (Score:1)
>(3)Expendability for Accommodation of New Functions
People get upset when you kill a real dog.
OPEN-R Linux! (Score:1)
This stuff's been around... (Score:1)
Manuela Veloso's Page [cmu.edu]
Misapplication of Milgram (Score:1)
Cute??? (Score:1)
It's a cyclops! I don't know about anyone else, but something with one eye kinda freaks me out...
I've seen 'em (Score:1)
(Also from CMU) The funniest part, I thought, was when I asked about marketing plans for them and was told that one of the biggest problems with them is that their heads still break off too easily. Oops.
The various motions they can be programmed to do seem fairly canned, unfortunately. Walking forward is one particular motion sequence, lying down is another, turning left is another, etc.
I don't know anything about the API, but I would think the first thing to work on would be more natural looking (and therefore more efficient) motion. Not sure what sort of sensors they have, I doubt they include joint torque sensors.
They definitely look fun, regardless.
Blade Runner (Score:1)
This is the beginning of all the Blade
Runner stuff.
Dogs Old News (Score:1)
The Robotic Dogs are actually old technology (by today's standards). The Computer Science department at Northwestern University has been doing programming and development on them since last school year (1997-98). They still have a ton of bugs in them.
As a personal project, I was planning on implementing a hearing system for the dogs. (3D sound localization) However, the head mounted microphones don't even have drivers yet. Not to mention that the quality of the microphones mounted on the head would have to be upgraded, and a few things would have to be added to the head to make sound localization a reality.
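For what it's worth, the heart of sound localization is estimating the time difference of arrival (TDOA) between microphones: the lag that maximizes the cross-correlation of the two signals tells you which mic heard the sound first. A minimal sketch in Python (the signals, sample rate, and function names are all made up for illustration; nothing here is the dogs' actual API):

```python
# Hypothetical two-microphone TDOA sketch: find the lag that best
# aligns the two recordings. Everything here is illustrative.

def estimate_delay(left, right, sample_rate, max_lag=20):
    """Return the delay (seconds) of `right` relative to `left`.
    Positive means the right mic heard the sound later,
    i.e. the source is off to the left."""
    def corr_at(lag):
        # Correlate left[i] against right[i + lag].
        return sum(left[i] * right[i + lag]
                   for i in range(len(left))
                   if 0 <= i + lag < len(right))
    best_lag = max(range(-max_lag, max_lag + 1), key=corr_at)
    return best_lag / sample_rate

# Simulate a click that reaches the right mic 5 samples after the left.
rate = 16000
left = [0.0] * 100
left[20] = 1.0
right = [0.0] * 100
right[25] = 1.0  # same click, 5 samples later

delay = estimate_delay(left, right, rate)  # 5 / 16000 seconds
```

With three or more microphones, the pairwise delays can be triangulated into a 3D direction, which is presumably where the head-mounted mic quality starts to matter.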
Long story short. Cool idea. They can't do much. Still a ton of bugs. They still have a long way to go before this would be a good product.
dear GOD (Score:1)
sigh...what a day
Tamagotchis? People felt for them when they died. (Score:1)
Which reminds me, I should put some new batteries in my Tamagotchi. It's been dead (literally) for some time now.
Battlestar Galactica (Score:1)
This isn't a dog (Score:1)
Why would anyone waste time giving affection to a
house or a country or a '68 'vette? We're
all made out of the same stuff. It's just our
turn to pay taxes, post emails and have names.
monk
Not AFD Joke (Score:1)
And to think what advertising potential Sony TV gives the thing...
- Chad
I've seen 'em (Score:1)
You'll have to trust me, they're _absolutely adorable_. They look around and follow the ball with their heads, then walk towards it and kick it with their front legs.
Apparently, debugging the things was a real pain (no good way to get data out). Dr. Veloso says they wagged the tail to produce debugging output.
Yeah, Right! Pull the Other One! (Score:1)
-Eric
No kidding! (Score:1)
That doesn't mean I wouldn't love to have one, though (and, for the record, I have a real, live, dog, two ferrets, and a couple parrots at home). I'm not going to expect a robotic dog to get ecstatically, bump-into-furniture happy when I come home from a road trip, but I'm a geek, dammit. I think robots are cool. I want one.
I don't think there's anything sinister about it.
Not AFD Joke (Score:1)
Northwestern University has done... (Score:1)
While I didn't take the course track that led to them this year (two C95 special topics classes, one fall, one winter quarter), a good number of people have.
This of course means they're programming them, and doing some rather complicated things with them. Sadly, I'm not sure what exactly, but I'll see if I can find anything else out.
AR Schleicher (Jerrith)
ars@nwu.edu
Yeah, but... (Score:1)
If so, I bet John Leeson is available to record the voice for it. If not, they could always get David Brierly.
I know some fellow slash-dotters (isn't that the foreign minister of Iceland?) know what I'm getting at. (Cue Grainer Theme 1...)
Bloody furby clones... (Score:1)
QT3..... (Score:1)
--
hacking the dog (Score:1)
Flashback. (Score:1)
Exterminate! Exterminate! Exterminate the canine!
--JT
QT3 (Score:1)
(or at least have a linux version)
I want to see that dog!
Sleeper (Score:1)
flashbacks galore...
Bloody furby clones... (Score:1)
Bet it would keep him busy for hours!
peter
This isn't a dog (Score:1)
Oh?
At most, you probably know about a thousand people. Out of these, a few will have views that are very similar to yours about a given topic. Now, picture a million times as many people. Out of that many, you will find a few that either match your views with fantastic precision on a given topic, or match your views and opinions reasonably well on several topics.
Now, consider all of history, past and present. That bumps up the sample size by a few more orders of magnitude.
If you define "the same" as being atom-for-atom identical, then you are indeed unique. If you define "the same" as being similar enough for most practical purposes, then you are no longer unique. It's a big planet. It's a big universe. There's a lot of history behind us and ahead of us.
And uniqueness is still not terribly relevant. If I could fire a duplicator gun at you and wind up with two identical copies, the copies would be no less human.
If human is "ambiguous" to you, you have a problem.
Ask a room full of philosophers what a human is. You will get an amazing variety of answers, because people tend to have very different opinions on what "being human" involves.
Use something more concrete than this if you are using "humanity" to justify arguments. Tell me *what* about what you consider humanity justifies your point, and tell me why you believe that it is so.
A machine "thinks" in a deterministic preprogrammed manner. The manner in which you think has been built evolutionarily and is constantly being modified throughout your entire lifetime. You are sentient (or at least about as sentient as I am) and you can express yourself uniquely and hold opinions and change them independently under your own volition - or illusion of volition. If I ask a machine the same question a year later, I will probably get the same answer phrased the same way; from you I will not.
Or, from me, you might. And from the machine, you might not. A strong AI would at the _very_ least base its answers upon everything that it has experienced. Over the course of the year, it will have been exposed to new things. Further, it is likely that any machine capable of demonstrating strong AI behaviour would have to be able to modify parts of its programming. The way in which it modifies itself may be deterministic, but the result won't be (as it is performing modifications in response to nondeterministic stimuli).
"I'm not trying to provoke; I'm trying to get you to think more carefully about your answer."
Please do not accuse me of being a haphazard thinker. I've thought my position over quite carefully. I can use psychology to back me up. I'm not making wild leaps of faith to support my conclusions.
My apologies. In that case, I am trying to get you to express your reasoning more clearly, as the justifications for some of your statements seem to be missing a few points, which I am pointing out in appropriate paragraphs elsewhere.
See above for my objections to the statement that I was referring to.
"Second of all, I am only "unique" if you use a very narrow set of criteria for recognizing me. Out of the six billion people on the planet, there are going to be many who are extremely similar to me. Expand your search to cover all places and all times, and you can probably find as close a match as you like."
Completely untrue. The criteria are extremely broad. You underestimate the diversity of people. [...] no two people will come to the same conclusions in the same way; this means that changing the same opinion in different people will require different solutions.
You underestimate the _number_ of people, or at least the full effects of this large a sample size. Will you find people who think similarly? That depends on the degree of similarity, but for just about all degrees, I'd say "yes".
You still haven't said why this is relevant, either. Why would it make a difference in either of our arguments if you could or could not find two people who reacted identically to things? Some people react similarly, some react differently. What of it?
Have you ever met two people that acted the same way in most situations? I haven't and I've met a lot of people.
You've never noticed similarities among your friends? Now, as with the example above, bump up the number of people by a few orders of magnitude, and you'll be able to find pairs with even closer similarity.
You cannot be studied so well that you become 100% predictable, or even 80%. When I use "machine" I mean a machine that is deterministic from the point of view of a human being, i.e. predictable. I wouldn't place any mammal in the category of machine because they aren't anywhere near 100% predictable, but a robotic dog is.
You've never known what a friend will do before they do it? I know that my friends occasionally complain about me being predictable. I know that I can in many cases predict what a friend will do.
Re. machines necessarily being deterministic, it's easy to build one that isn't. Just make some of its decisions random.
Re. being deterministic "from a human point of view" - why is this important? I've been using "deterministic" and "non-deterministic" in the absolute sense. What happens, in your argument, if you have an entity whose behaviour some people can see the pattern to but others can't? How does my personally being able or not being able to predict a robot's behaviour change the nature of the robot?
No offense, but please reread my last post. I was answering your question as to how talking to a human being on the Internet is significantly different than interacting with a machine controlled by a set of algorithms. Again, I use machine as in "predictable machine".
Then that is the source of one of our conflicts. It is easy to build machines that are non-deterministic, and it is easy to build deterministic machines that are complex enough to appear non-deterministic to a human (look at a standard pseudo-random number generator; that uses a deterministic formula).
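The pseudo-random number generator point is easy to make concrete. Below is a minimal linear congruential generator sketched in Python; the constants are the widely published Numerical Recipes values, and the whole thing is for illustration, not for any real use:

```python
# A linear congruential generator: a completely deterministic recurrence
# x -> (a*x + c) mod m whose output nevertheless looks random to a
# human observer. Constants are the well-known Numerical Recipes
# choices (illustrative only).
def lcg(seed, count, a=1664525, c=1013904223, m=2**32):
    x = seed
    out = []
    for _ in range(count):
        x = (a * x + c) % m
        out.append(x)
    return out

run1 = lcg(seed=42, count=5)
run2 = lcg(seed=42, count=5)
assert run1 == run2  # same seed -> identical "random" sequence
```

Given the seed, every number is perfectly predictable; without it, the stream passes casual inspection as random. Deterministic and apparently non-deterministic are not the same thing.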
Re. the social benefits or lack thereof of melding with the 'net, that depends on what you're reading. I've seen people have thought-provoking discussions on the 'net, but I've also seen people who log into chat rooms and react purely on reflex. Just because there's someone else on the end of the connection doesn't mean that they're making thought-provoking statements, or that you couldn't be having more fulfilling social interactions elsewhere.
Joe Average Kid in a chat room may be having a useful social interaction. Joe Average Kid surfing for porn or pirated games might as well be talking to the robotic dog.
There are plenty of anti-social distractions out there already. IMO, one more won't make that much of a difference.
I'm not going to explain myself down to a masonite block. If you cannot understand the conversation, then I'm no longer willing to converse. What am I getting out of this? What are you telling me?
I am poking holes in your arguments, and arguing for a different point of view. If you would rather leave than defend your statements, I of course have no way of stopping you.
What ideas have you given me so far? None. No offense, but I engage in conversation to get different points of view, I'm not here for your benefit, I'm here for mine. Likewise if you get nothing out of this, you may as well leave as you have no obligation to me either.
When I see someone make a statement that I believe to be incorrect, I have a strong urge to argue the point, so that they and whoever else is listening may be better informed. I am not always right - I don't claim to be infallible - but if I am right, and I argue the point, I have increased the amount of knowledge per capita by an insignificant fraction. If I'm wrong, then at least I've learned a bit more.
That is why I'm here, to answer your question.
You claim that I've presented no ideas? Then what the heck are you disagreeing with me about? I've certainly stated _something_, or else you wouldn't have had anything to reply to.
To recap, my main points are:
Some of these points are in direct conflict with yours, and so we each argue our cases, presenting evidence for our own arguments and poking holes in the arguments of each other.
At what point was I unclear about this?
This isn't a dog (Score:1)
You may not be a friend, but you are human. You are unique. You're not a machine.
First of all, I'd appreciate a definition of what "human" means in this context/what aspects of "humanity" you consider important in your statement above. As it stands, that point is too ambiguous to be very meaningful.
Why is my being "human" relevant to the discussion?
I'm not trying to provoke; I'm trying to get you to think more carefully about your answer.
Second of all, I am only "unique" if you use a very narrow set of criteria for recognizing me. Out of the six billion people on the planet, there are going to be many who are extremely similar to me. Expand your search to cover all places and all times, and you can probably find as close a match as you like.
Why is my being "unique" relevant?
I am not a machine? Actually, I believe that I am (in that I am a physical entity governed entirely by the physical laws of the universe, and could be duplicated perfectly if my physical form was duplicated perfectly).
Why is my being a machine or not a machine relevant to your point?
I get some idea of what you're getting at, but I need it phrased a bit more concretely if I am to debate it.
"Most people have the self-control needed to balance social and non-social behaviours in their life. Some people don't, but IMO they have bigger problems than robotic dogs."
This is an assertion that you will not be able to back up with evidence. Hardly anybody has the self-control that you think they do.
Let me give some context - I believe that most people in an environment similar to the one that most people presently are in have enough self control to (etc). The proof of this is simple. Look around you; most people interact adequately socially and spend an adequate amount of time in social interaction (note that I'm saying "adequate" instead of "wonderful" for both of these points).
You can of course produce extreme cases where a person will not balance these factors. IMO, this is not relevant, as such extreme cases are very unlikely to occur, even with things like robotic dogs around. Humans will always be around too, and ignoring them tends to have very negative consequences.
"I'm not saying that introversion is a good thing - I'm just saying that we should try addressing it directly, as opposed to addressing (in reverse chronological order) toys that emulate affection, the Internet, television, comic books, and whatever else people have held up as threats to human interaction over the years. IMO, these people are going after the wrong targets."
I agree 100%
Then why the strong objection to robotic dogs, as opposed to bad parenting?
"This is most likely pretty accurate. IMO, a good approach to solving this kind of problem is to encourage people to interact more with their children. Then it won't matter if the kid has a television set or a 'net connection or a robotic dog, because there will be something better available to them - a real person."
What a child might perceive as "good" is not necessarily good for the child. Things that bring a developing adult despair are just as important as things that bring a developing adult joy. This is a harsh reality, but happiness isn't attained by having fun; it's attained by work, and work is often not fun.
An absence of having fun tends to detract substantially from happiness, from what I can see. Work can sometimes lead to happiness, and is certainly a prerequisite for happiness in a capitalist society, but this is hardly the same as your claim above. In fact, I define "having fun" as "doing things that make me happy", regardless of whether those things are also considered "work" or not.
Re. what a child perceives as "good"... I'm having trouble seeing what you are getting at. If you're saying that a child might be attracted to behaviour that is ultimately harmful, then I'll agree. However, that doesn't relate directly to my point. My point is that parents can make themselves an influence in their children's lives, and that (if done right) this can counteract harmful effects from other influences under most conditions.
So, I don't see the problem with a robotic dog or a TV or a 'net connection if the kid is also properly exposed to other humans.
The second we can deconstruct psychology to explain all aspects of human nature in minute detail is the second we cease to be non-deterministic.
We will always be nondeterministic, because the stimuli that affect us are nondeterministic, and influence our "programming". Our "programming" will therefore change in nondeterministic ways even if the principles governing our minds are deterministic. At best we are probabilistic, and chaos severely limits how far into the future predictions can be usefully extended.
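The chaos point can be illustrated with the logistic map, a textbook deterministic system. This Python sketch shows only sensitivity to initial conditions, nothing about minds specifically; the starting values are arbitrary:

```python
# The logistic map x -> 4x(1-x) is fully deterministic, yet two
# trajectories starting a billionth apart diverge until they are
# effectively uncorrelated -- which is why chaos caps the horizon
# over which predictions stay useful.
def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.400000000, 50)
b = trajectory(0.400000001, 50)  # differs by one part in 400 million

early_gap = abs(a[5] - b[5])  # still tiny: short-term prediction works
max_gap = max(abs(x - y) for x, y in zip(a, b))  # grows by orders of magnitude
```

Knowing the rule exactly doesn't help: any uncertainty in the starting state, however small, is amplified until the forecast is worthless.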
And if it turns out that we are deterministic, why would it make a difference whether we knew about it or not? I'd _rather_ know, personally.
Ants are too complex for us to understand completely.
No. The only thing that we know for certain is that we don't presently understand ants. Whether they can or cannot be understood is not directly proven by empirical evidence, and there is strong circumstantial evidence that they can.
This "dog" isn't even close to a dog; it operates on completely different principles. It's not even close to being a life form.
So what? I never claimed that it was. I and the original poster were talking about understanding minds in general and the human mind in particular at this point in the thread. The robotic dog is not relevant for this discussion.
This isn't a dog (Score:1)
Similarly, you could be out socializing with friends instead of surfing on Slashdot reading this.
Most people have the self-control needed to balance social and non-social behaviours in their life. Some people don't, but IMO they have bigger problems than robotic dogs.
I'm not saying that introversion is a good thing - I'm just saying that we should try addressing it directly, as opposed to addressing (in reverse chronological order) toys that emulate affection, the Internet, television, comic books, and whatever else people have held up as threats to human interaction over the years. IMO, these people are going after the wrong targets.
Do you know how you can make a sociopath? Have a kid and give it no affection for the first 4 years of life; the resultant adult will exhibit complete indifference to the welfare of others.
This is most likely pretty accurate. IMO, a good approach to solving this kind of problem is to encourage people to interact more with their children. Then it won't matter if the kid has a television set or a 'net connection or a robotic dog, because there will be something better available to them - a real person.
If we could understand ourselves, we would be too simple to understand ourselves.
Not strictly true. We can understand the principles upon which we operate, without having to hold in our minds every memory, association, and piece of data that is held in our minds (which is indeed impossible). IMO, an understanding of psychology adequate for building a true AI would consist of understanding thoroughly the principles upon which the mind operates, and understanding the more important association links that form in humans. Anything beyond that is IMO most likely extraneous.
Now, this is much easier said than done, as it is extremely difficult to observe the mind in operation, and it is very complex. However, I'm addressing your point re. it being possible at all. We don't have to know everything that is in our minds to understand how our minds work, IMO.
Just my two cents worth.
Minds understanding themselves. (Score:1)
You can easily build a computer that has the ability to "understand" its own design, because a lot of the design _is_ redundant, or easily inferred from basic principles. You don't even need magical software to do this; look up integrated circuit CAD and synthesis tools on the web. Sure, a PII contains several million transistors in the CPU alone, but the VHDL code specifying how it works is much more compact. The implementation can be algorithmically generated by synthesis tools that understand the basic principles of how transistors can be used to implement logic.
Now, you'd get a more efficient version of the chip by designing parts of it by hand, but the automatically generated version will still work, and will be able to run the synthesis tools itself.
IMO, the mind is based on similarly simple principles; it just has a complex implementation.
Picking apart the mind's inner workings will be like picking apart spaghetti code under the best of circumstances, because it was put together by more-or-less random processes working over a very long period of time and so is disorganized. That doesn't mean that we can't understand the compiler, to carry the programming analogy further, or the logic involved in the mind's construction.
This isn't a dog (Score:1)
"Now, this is much easier said than done, as it is extremely difficult to observe the mind in operation, and it is very complex. However, I'm addressing your point re. it being possible at all. We don't have to know everything that is in our minds to understand how our minds work, IMO."
I still maintain that it is absolutely impossible to do this. We might make very gross generalizations that hold well for many situations, but these are models. Models do not represent reality; they represent a simplified reality so that the real reality can be better understood.
We can come up with increasingly refined models. How refined a model we need depends on what we want to do with it. Understanding how something works does not require that we be able to predict with certainty how it will behave. In fact, for nondeterministic systems, we will never be able to predict behaviour. A coin toss is a good analogy here, though something truly random like quantum tunnelling would be a better example.
Another point is that all depths of understanding are not equal. Understanding how our minds work well enough to create a true AI does not necessarily require that we understand all of the peculiar idiosyncrasies of human brains that have built up over several hundred million years of evolution. Only the critical and/or relevant points need be understood.
Why do you feel that our minds must be based on principles too complex to be understood?
Why is this sinister? (Score:1)
Why is this sinister?
By similar arguments, Tamagotchis are unhealthy and stuffed animals shouldn't exist either.
This isn't real, and the buyers fully know that it isn't real. Where is the problem? I'm having trouble seeing where you are coming from with this.
Socializing (Score:1)
I don't know about your friends, but most of mine are more socially active than the football-watcher that you describe.
Do I know the people living next door to me? No.
Do I know other people? Yes.
Readily available transportation means that my circle of friends is scattered about the city (and beyond). I still have several close friends and dozens of more casual friends, whom I see and socialize with regularly. My friends have similarly large circles of friends.
I think that you're being overly pessimistic about the amount of socialization that Joe Average participates in. Certainly, you _could_ just sit at home and meld with the TV or the computer in _all_ of your free time, but how many people do you know who actually _do_?
This isn't a dog (Score:1)
Insane people exist. IMO, it is better to recognize them and restrict their actions if necessary than to restrict the actions of *all* of the populace. This best preserves the freedom of individuals in general.
However, as mentioned in another thread, I don't think that we're in danger of interacting primarily with robots any time soon. Every time a major entertainment fad has shown up, this kind of concern has been voiced, but so far it doesn't seem to have been warranted.
Not that there aren't other problems that need fixing; see my previous messages.
This isn't a dog (Score:1)
You "believe". You could just as easily say that about an environment that has absolutely no social interaction. This is merely a value judgement of yours. It's an opinion, nothing more.
Instead of berating me, how about responding to the proof that I presented above?
We have growing amounts of violence among the young; that would seem to indicate that fewer children are receiving the social interaction they need to become functional adults. Do you have another theory as to why violence among the youth has risen? Look at the United States prison statistics.
Prison statistics prove only that more people are put in prison. This reflects less on how many crimes are being committed than on what we consider a crime, and what we choose to put people in prison for.
I've already addressed the topic of youth violence elsewhere.
People are already ignoring people because the consequences are simply no longer there. Being ostracized today isn't what it used to be. If you are being ostracized by your family, fuck them, move to NYC and start over.
And similarly alienate potential friends. If a person has no friends because they're a jerk, moving won't change this. They will either continue moving from place to place wallowing in bitterness, or learn not to be a jerk.
"I'm not saying that introversion is a good thing - I'm just saying that we should try addressing it directly, as opposed to addressing (in reverse chronological order) toys that emulate affection, the Internet, television, comic books, and whatever else people have held up as threats to human interaction over the years. IMO, these people are going after the wrong targets."
"I agree 100%"
"Then why the strong objection to robotic dogs, as opposed to bad parenting?"
Um, what? When was parenting mentioned? You think you can "fix" parenting? I have news for you, you can't. What are you going to do, tell parents what deserves positive feedback and what deserves negative feedback? You cannot control parents through legislation.
If you agree that things like the 'net and robotic dogs are the wrong targets, then why do you attack them? You have stated elsewhere that you don't think either influence can be removed, so why not gripe about the real causes of social problems (bad parenting being one case)? Similarly futile, but at least you're complaining about the right thing.
Personally, I feel that we should make an effort (through "parental education initiatives" or what-have-you) to make sure that parents understand that it is important for them to raise their children. Then, if they still don't bother, we can justifiably say that they brought the problems upon themselves when they try blaming something else.
Endorsing the blaming of incorrect targets IMO does no good and potentially causes a fair bit of harm. If the 'net was shut down to prevent Joe Average Kid from searching for porn, then Joe Average Kid would go back to reading his friend's brother's magazines, and things like this discussion would suddenly become a lot harder.
I'm pointing out that you cannot remove these (Internet, television, etc.) from society, because we are now dependent on them. What I'm trying to indicate is that there has already been damage done, and more is to come. If a parent doesn't see anything wrong with their child hanging out with the mechanical dog, like you don't, the parent will likely not spend as much time with the child.
I see nothing wrong with a child of mine hanging out with a mechanical dog as long as they do spend some time hanging out with human friends as well.
That is substantially different from your statement above, yet I stated it clearly elsewhere.
The real problem is that our society is built on escapism. Perhaps we should figure out what we are trying to escape from and eliminate it. It is easy to see that the art of conversation is dead; it's very difficult to find anybody to talk to.
If conversation is dead, then why is there traffic here?
You never go out to socialize and chat with your friends?
Eliminating what we're trying to escape from is impossible, because we're trying to escape from both human nature and the fundamental realities of the world around us. Life isn't always pleasant, so we retreat to fantasies that are more pleasant. Life will never always be pleasant, because jerks always exist and because resources are finite. IMO, the best that we can do is to teach our kids how to deal effectively with the real world, so that they have less of a need to escape. "Less of a need", not "no need".
The higher the potential you reach the happier you are, that's my theory and the more I learn, the happier I am. I never want to stop learning, not because I find learning to be "fun", but because it always brings rewards later. I have been very lucky to experience so much because I've worked to do it. It didn't happen by fucking off, in fact when I did fuck off in college (11 credits, 2.1 GPA semester), I was most depressed - but I smoked a lot of weed and drank a lot of alcohol and saw tons of movies, and had a lot of "fun" - of course it was a waste of a semester but it was "fun".
If you were depressed, it doesn't sound like you were enjoying yourself very much. I reiterate that I personally define "fun" to be "doing things that make me happy". If you drank and were still depressed, I question the logic of calling that "fun".
There's also total integrated "fun" to consider. If I quit my job and smoked crack on the streets, I might be happy when I'm high, but the rest of the time I'd be in abject misery. Net fun lowers.
Re. learning, I learn because I enjoy learning. If this brings me rewards later in life, so much the better.
I do problem sets because they will bring me benefit and allow the potential for future happiness. Not because I enjoy them. If I didn't bother doing problem sets, I might have more time for happiness in the short term, but exam stress and future employment would be a lot worse, resulting in a net loss of happiness down the road. If I spent all of my spare time doing problem sets, I'd probably get perfect marks, but I'd be a lot less happy also.
I consider work a means to an end, not an end in itself. If you actually enjoy your work, then great, but it is not and doesn't have to be the end-all and be-all for the rest of us.
"My point is that parents can make themselves an influence in their childrens' lives, and that (if done right) this can counteract harmful effects from other influences under most conditions."
Sure they "can", but keep in mind that people act almost exclusively on instinct. Take away the natural environment and they will no longer instinctively know how to solve the problem. We are far from being purely intellectual despite our assurances that we are "logical" - we aren't. Here we have video games, 24 hour cartoons, artificial friends, and unlimited access to information - you think a typical parent can handle this?
Sure they can. Turn off the TV. Turn off the computer. Heck, sell the TV if neither you nor your kid benefits from it (but be prepared to provide alternate entertainment for your kid). Take your kid out to the museum or the zoo. Do something else with them.
It isn't _trivial_ to make yourself a part of your kid's life, but it isn't fundamentally impossible, either.
Ultimately the solution is for each parent to take the initiative, but the vast majority of the parents will not.
I agree. However, I believe that it is a worthy goal to work towards, because some of the parents will take initiative, and we can fix the blame squarely where it belongs in most other cases, as they will now have known better.
Which is more fun, to hang out with mom or dad or to play Quake III?
Well, I'd like to get around to playing "Tribes" at some point, but playing games all of the time loses its charm after a while (I've tried it). I socialize with my friends because it _is_ more fun than doing most other things. Otherwise I wouldn't be doing it.
I'm not saying all this is bad, but when we start producing interactive things that are meant to replace interaction with living things, I think we are in danger of crossing a line.
IMO, it would take something along the lines of a true AI to be involving enough to replace interaction with living things, and that will neatly solve the problem (as interacting with a true AI is as socially fulfilling as interacting with a real person, because it _is_ a person). Now, there is the issue of ensuring that the AI itself isn't designed with harmful psychology, but it turns out that there are fairly straightforward ways of doing this (give an AI a drive for self-preservation, and it won't be happy when you beat up on it, for instance).
However, I doubt that we'll see this for a few decades yet.
I am not saying that robotic dogs will make humans more deterministic. I'm saying robotic dogs, cats, friends, etc., will make us fucked up because we aren't interacting with something that will give us not only positive feedback but negative feedback. Our brains aren't built to operate in this type of environment. You are free to treat a robotic dog in any manner you see fit, and probably with no guilt associated with it. You could "kill" a robotic dog and feel no remorse (maybe it was even enjoyable to do so); if that same person has interacted with machinery more than humans and gives people the same value as machines, you can run into a serious problem. And this is entirely possible if they never receive negative feedback for doing "bad" things to this "living" machinery.
As long as they still have to interact with other humans on a regular basis, there will still be negative feedback for antisocial behaviors. I've already covered why I think that they will still be exposed to humans. Re. making a masochistic robot dog, I agree that this would probably produce some harm (though I maintain that the majority of people would still turn out ok due to other social influences). IMO, the best way to curb this kind of harm is to legislate that toys of this type should not encourage abusive behavior. This can be enforced (the manufacturers are big enough targets).
Why is this sinister? (Score:1)
Back it up with evidence. We have more antisocial behavior than ever it seems.
Back *that* up with evidence. The operative word here is "seems".
Kids blowing away kids with guns, people getting mugged for sneakers, you name it.
There was an interesting documentary a few years back about television and the Internet. It brought up headlines similar to the ones you describe, with 11-year-olds murdering 11-year-olds and similar atrocities.
The newspaper headlines were from the first couple of decades of this century. The antisocial behavior was then blamed on comic books.
A very large number of people are anti-social, criminal, selfish, stupid, or a combination of the above. This is true now, and it always _has_ been true. That's human nature. Instead of pointing fingers at the latest fashionable target for blame, try thinking about how to make society work despite the fact that such people exist. They certainly aren't going away.
I don't think the average middle income white person has any idea how the average middle class black person lives - or cares. And it goes both ways. For all people, for all groups, income brackets, and education levels. When is the last time you talked to somebody that was between 30 and 40 and dropped out of high school? They make up a good section of our population. When is the last time they talked to you?
You're presuming a lot about who I am and am not friends with. Speak only for yourself in this regard, please.
Shit happens dude. And I think this is shit. I'm predicting that this will be ultimately negative for both human beings and for animals.
People happen, "dude". I don't think that this will have any greater an effect on human nature than anything else this century has. What we do in any given decade may be different, but there are still nice people, not so nice people, and actively malevolent people.
This isn't a dog (Score:1)
Did I say it would make a difference?
Yes, you did:
"The second we can decontruct psychology to explain all aspects of human nature in minute detail is the second we cease to non-deterministic."
Whether we are deterministic or not (in the absolute sense) is a part of the nature of the universe, and is most certainly not influenced by whether or not we know that we are deterministic or non-deterministic.
Similarly, as I've already shown elsewhere, you can completely understand something that is still non-deterministic. Non-deterministic systems can still operate on deterministic principles.
As it is I believe we are deterministic, but I also know that it cannot be proven that we are - so for all practical purposes, we do have "free will", although in reality I believe that "free will" is an illusion. The "free will" model suits us well, because the system is so complex, it may as well be "free will".
How does complexity prevent us from proving that we operate on deterministic principles?
In fact, there's already a proof of this if you choose to accept the axiom that we are wholly embodied by our physical forms. Our physical forms obey deterministic physical laws (though with quantum effects, this actually only proves that we are probabilistic, with probability functions that are deterministic).
Not that this particular proof yields much insight into our inner workings, but it still works as an argument for determinism as a whole.
"No. The only thing that we know for certain is that we don't presently understand ants. Whether they can or cannot be understood is not directly proven by empyrical evidence, and there is strong circumstantial evidence that they can."
The nervous system of an ant is hugely complex although very simple compared to a dog yet we do not understand the mind of an ant.
So what? I've already conceded that they are complex, and I've already mentioned that we don't _presently_ understand them in their entirity. How does this influence whether or not they _can_ be understood?
You maintain that building a robotic dog and installing an artificial psychology in the mechanical dog will not affect the owner.
I maintain that building a robotic dog of the type described will not _significantly_ adversely affect the psychology of its owner. Will it have _some_ influence? Of course it will. All stimuli affect us to some degree. Will it have the large, harmful effects that you fear? IMO, not for the vast majority of people.
Certainly some people's lives are very much attached to their pets, it will happen with a robotic animal as well if the animal emulates a living thing well enough. The psychology of the robotic animal is likely to be flawed. This may very well (I would say most likely) cause the psychology of the owner to become flawed, it will certainly change it. It's just that simple.
Some people will become attached to robotic pets. Some people are already attached to things like Tamagotchis. Some people, as you point out, are attached to stuffed animals to varying degrees (yes, I know exactly how far this can go in Furry circles).
In all cases, this is due to people ascribing qualities to the devices (or plushies) that aren't actually there. There is a similar effect when people make up imaginary friends. Heck, it could be argued that religion is based on similar principles (with your chosen god being the "imaginary friend" in this case). (NOTE THE "COULD BE ARGUED", AND DON'T FLAME ME.) This is a very basic tendency, and as you point out with plushies, it will exist no matter what. IMO, as long as the manufacturers and parents take pains to point out that the dog (or the Tamagotchi) *isn't* *real*, and doesn't actually feel emotion, then the harm from this is not significantly more than the harm from anything else (the kid will still think up an imaginary friend or latch on to a stuffed animal if they feel that deeply that there is no human that can be friends with them).
Ever hang out with somebody that was mentally unstable? I have, and it rubs off inevitably.
Yes I do, fairly often. One of my good friends is mentally unstable.
It is often stressful interacting with him, but I don't think that my own psychology has been adversely affected. I interact with many other people; the forces affecting my psychology are acting to make it easier for me to interact with _all_ of them, not just any one in particular.
As long as a robotic toy isn't the _sole_ social influence in a kid's life, their psychology will _not_ be solely based on interactions with it. Logically enough.
"So what? I never claimed that it was. I and the original poster were talking about understanding minds in general and the human mind in particular at this point in the thread. The robotic dog is not relevant for this discussion."
I am the original poster, and the robotic dog is completely relevant to this discussion.
Not to the thread debating whether or not human minds could be understood.
This conversation contains easily dozens of individual threads that we are debating, on various topics. The point that you were responding to was a response to a statement about human minds in general - not the robotic dog or its impact on humans.
I've started splitting my responses up, as this discussion is getting too large to manage in one message.
Re. being the same poster, you seemed to exhibit inconsistency re. believing humans to be deterministic or non-deterministic. In retrospect, this was because you were using a different definition of "determinism" than I use. Choosing an alias would eliminate future misunderstandings, and would also prevent any Anonymous Coward from claiming that they were you (as I've seen happen in other threads with other people).
You don't have to specify your normal email address; a temporary hotmail account can receive your password string as easily as any other account can.
The state of current knowledge (Score:1)
Consult with a friend in neurobiology for more information on how much we do and do not understand of the brain on a physical level. Find a few good textbooks on neural networks for more information on how much we do and do not know about how the brain and similar structures can process information.
We most certainly don't know everything about the brain, but neither is it a complete mystery, as you claim it to be. And our understanding continues to (slowly) improve.
Why is this sinister? (Score:1)
Well, bear in mind that there's a fair bit of random noise injected into our feedback loops, which makes our behaviour more probabilistic than strictly deterministic.. O:).
I see where you're coming from, but I think that human-to-human interaction is prominent enough in peoples' lives even with robotic dogs that this isn't a big threat.
If the Evil Empire locked us in a room from birth with only carefully programmed machines to interact with, then we would probably develop standardized personalities (to within the limits imposed by physical variations to our brains and noise perturbing our thoughts), but IMO anything short of such an impractical isolation would have a hard time standardizing people to the extent that you fear.
Just another opinion.
I'm more scared (Score:1)
This isn't a dog (Score:1)
Then why did my Tamagotchi die? Because it was part of the experience for which I paid.
I think it's interesting that something grossly simpler than a robotic dog could evoke any kinds of feelings at all. Yet, after feeding and taking care of my "virtual pet" for several weeks, I did feel bad when it got sick and died. It defied my internal logic and made me feel a little silly, but that still doesn't change the way I felt.
I see a robotic dog as being no different than a doll. In some cases it may be better; if I beat a doll it doesn't even have the opportunity to give negative feedback.
Interesting discussion though.
This isn't a dog (Score:1)
Please specifically define the terms/phrases:
"be happy"
"feel good"
"enjoy praise"
And are we really sure that these aren't just complex algorithmic responses from an analog computer?
Is my dog really happy to see me? Or is he just brown-nosing for the alpha male? Is what we think is "happiness" in our dogs just an evolution conditioned response to keep the dog alive?
Are we sure?
I want it! (Score:1)
Seriously though - it seems like a neat idea. I wonder what people that are against the keeping of animals would think about it?
-lx
Woof! Woof! Woof! Hello, I'm Rags! (Score:1)
Woof! Woof! Woof! Woof! Hello! I'm Rags!
Woof! Woof! Woof! Woof! Hello! I'm Rags!
Woof! Woof! Woof! Woof! Hello! I'm Rags!
Woof! Woof! Woof! Woof! Hello! I'm Rags!
Woof! Woof! Woof! Woof! Hello! I'm Rags!
Woof! Woof! Woof! Woof! Hello! I'm Rags!
Etc. Etc. Etc.
this is for real (Score:1)
btw, just look at the specifications [sony.com] (scroll all the way down that page) for more info...
Nope, it's NOT an April Fool's joke... (Score:1)
Why is this sinister? (Score:1)
Why is this sinister? (Score:1)
Is it more sinister than forcing an animal to live in captivity, denying it the opportunity to reproduce, and eventually taking it to the vet to be involuntarily killed--just to satisfy your own misdirected parenting instincts? Arguably not.
This isn't a dog (Score:1)
How do you know your current pets aren't doing the same thing?
No kidding! (Score:1)
The idea that the brain is a 'wet computer' seems to me to be absolutely correct; mine even has bugs =P~
Seriously though, I'm certain that we can create "Life" from a machine, since there is nothing magical about us; there is no 'soul' or 'spirit' which we gotta have to truly 'live'.
The question really is, what exactly does something have to do to be 'alive'?
Your point (which I forgot about) is that you're not sure that mankind will be a kind ruler to the other beasts on our little planet.
Well, that has nothing to do with living machinery. Making intelligent machines is a natural outgrowth of computers.
Hell, maybe it's already here; I get the feeling my computer has a deep and awe-inspiring hatred of me.
You're not a human (Score:1)
I suppose we need to tell your friends to buy toasters and ditch you
Bloody furby clones... (Score:1)
People obviously get attached to cars, to computers, to Tamagotchis, to stuffed animals... Imagine the level of attachment someone could get out of their robo-pet. What does it matter that it's programmed to 'like' you? How is a real dog any different?
Robots are good for one thing: fulfilling a purpose. If it's to give someone a boost of joy or pleasure, then so be it. Slave labor is very over-hyped in today's service society, especially with the value of entertainment and such.
It's as much a waste of silicon as a car (in my case, a Yellow NeoBeetle) is a waste of iron and plastic. I'm very attached to it, and if you bad-mouthed it or scratched it or something, I would get upset way beyond its monetary worth.
AS
Flashback. (Score:1)
Linux port? (Score:1)
Do Androids Dream of Electric Sheep? (Score:1)
Philip K. Dick is cool... and so is Harrison Ford.
Brazil (Score:1)
...whee...
Reaction vs. Emotion. Who cares? (Score:1)
Robopet Therapy (Score:1)
Don't laugh--for someone who is lonely, any companionship or interaction, even with a machine, might be better than nothing at all. And to those who don't understand how a person could have feelings for something that's not alive, go watch a small child play with a stuffed animal or doll.
This isn't a dog (Score:1)
Please note that I am NOT agreeing with the Skinnerian view of things. In fact, I think it's all hooey. Just playing devil's advocate. =)
--
crazy sickER (Score:1)
Katz, you listening?
--Corey
pretty cool... (Score:1)
I thought it was extremely cute, nearly lifelike even. I'd get one just to see what it could do.
"Honey, they cracked the dog!" (Score:1)
crazy sickER (Score:1)
I want it with... (Score:1)
QT3..... (Score:1)
Help Jon? (Score:1)
Not AFD Joke (Score:1)
Bloody furby clones... (Score:1)
Robots are good for one thing: slave labour. Make it pick up my laundry - then I'll be impressed... other than that, this is a waste of silicon.
I'm not discounting the ability to create an AI in the future that is capable of emotion and thought, even if it's only at the level of a pet's intelligence; if so, put the AI in our little robot thingy here, and then I'll be interested.
Maybe those Sony Engineers need to get out more. I find trees and birds and even grass far more stimulating than a robotic pet dog.
This isn't a dog (Score:1)
Yeah, Right! Pull the Other One! (Score:1)
Actually, I visited this site months ago. It's definitely no joke.
The price (Score:1)
A friend of mine from high school is in the development team. (He never told me much of the detail, though...) Also, from the rumor I heard, it's going to cost something like 200,000 yen. About 2 grand (US)? Can't confirm this, but I feel that's about the right price.
Flashback. (Score:1)
You'd think if they could come up with a laser for a sloppy wet nose they could have put a big-ass motor in his rear...
This isn't a dog (Score:1)
I want a Gorilla... (Score:1)
I don't like living dogs, and don't think I'd care for a robotic one, however something like a Gorilla... something big that I could have rip jocks limb from limb, that'd be cool.
I want it! (Score:1)
That said, I'd like to get one for my cat to "play" with...
Why is this sinister? (Score:1)
For that matter, during New Year's Day I heard concern about how violent and dangerous the USA is -- but this was in a city in Brazil which on that day had ten times the number of shooting deaths as my twice-as-large home city in the USA.
Most "western" movies exaggerate the number of guns being worn, but when criminals got too uppity ranchers did start carrying around their weapons, and that's one reason they had the weapons. Romeo was a typical 13-year-old of the time, carrying around a sword. And at those times a minor stabbing could easily be a death sentence due to infection (doctors did not understand how to control infection at that time).
For that matter, does "violence" include or exclude what the staff of the royalty and clerics are doing?
crazy sick (Score:1)
Well...mebbe I will
bayesian woof? (Score:1)
Serious...or not...? (Score:1)
HEAD
Host: www.world.sony.com
HTTP/1.1 200 OK
Date: Thu, 01 Apr 1999 23:35:07 GMT
Server: Apache/1.3.4 (Unix) mod_ssl/2.2.5 OpenSSL/0.9.1c
Cache-Control: max-age=86400
Expires: Fri, 02 Apr 1999 23:35:07 GMT
Last-Modified: Wed, 31 Mar 1999 06:55:56 GMT
ETag: "88e90-1cf5-3701c6fc"
Accept-Ranges: bytes
Content-Length: 7413
Content-Type: text/html
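Anyone who wants to repeat the check above can parse the raw response the same way. Here's a minimal sketch (the helper name is my own; the embedded response text is copied verbatim from the trace above) that splits the status line from the headers and pulls out the fields worth eyeballing:

```python
# Raw HEAD response, copied from the trace posted above.
RAW_RESPONSE = """\
HTTP/1.1 200 OK
Date: Thu, 01 Apr 1999 23:35:07 GMT
Server: Apache/1.3.4 (Unix) mod_ssl/2.2.5 OpenSSL/0.9.1c
Cache-Control: max-age=86400
Expires: Fri, 02 Apr 1999 23:35:07 GMT
Last-Modified: Wed, 31 Mar 1999 06:55:56 GMT
ETag: "88e90-1cf5-3701c6fc"
Accept-Ranges: bytes
Content-Length: 7413
Content-Type: text/html
"""

def parse_head_response(raw):
    """Split a raw HTTP response into its status line and a header dict."""
    lines = raw.strip().splitlines()
    status_line = lines[0]
    headers = {}
    for line in lines[1:]:
        # Split on the first ": " only, so dates with colons stay intact.
        name, _, value = line.partition(": ")
        headers[name] = value
    return status_line, headers

status, headers = parse_head_response(RAW_RESPONSE)
print(status)                     # HTTP/1.1 200 OK
print(headers["Content-Length"])  # 7413
```

A 200 status with a plausible Content-Length and a Last-Modified date *before* April 1st is about as much proof as you can ask for that the page isn't a one-day gag.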
Saw one of these running at AAAI (Score:1)
at AAAI last year. Although there was nothing technically brilliant about them, they stole the show (all the girls thought they were extremely cute). They are very programmable and (by default) include programming that allows the robot to right itself whenever it falls over (through a neat set of leg movements).
As for price, I have heard figures of around $600 US. Almost everyone who has seen one in action wants one, and most would pay that price.
This isn't a dog (Score:1)
Can't buy 'em yet anyway, so I'll stick with my coffee maker, which really likes me anyway, and would probably be hurt if I brought home some competition.
It's not an April Fool's day joke (Score:1)
Incidentally, it can also behave like a monkey, a cat, and one other non-dog animal.
This isn't a dog (Score:1)
Not to get into a philosophical discussion, but aren't all of our behaviors just responses to stimuli, and in that case aren't we just going through certain algorithms?
If they can do just a part of what they claim, it'll be huge. Come on, look at Furbys; those infernal little toys can say like 40 different things and everybody went nuts over them....
Entertainment Robot (Score:1)
I will get one. Oh yes.
Do you think that they will change the apartment restrictions to say: "No pets. Electronic or otherwise."?
News for you... (Score:1)
How do you know what a dog or cat really feels? Maybe it is just performing the logical outcome of its own neural algorithms.
And who knows for sure, maybe one day some of these machines may evolve into more than just the sum of their parts?
Not AFD Joke (Score:1)
heh
This isn't a dog (Score:1)
This isn't a dog (Score:1)
Nobody's saying it's a life form of any sort. You buy a real, furry dog for fun, companionship, love, and in some cases, as in hunting dogs, fire dogs, and seeing eye dogs, to help you. Sony is selling its robotic dog for one reason only: entertainment. It's designed to make us laugh and intrigue us, get us thinking about robots and their possibilities, and not fool us into thinking it's a real animal. If someone ever builds a robot that's lifelike enough to be considered an intelligent life form of some sort, then we can begin to address these issues. If it approaches a human in intelligence, then what would it matter if one of our friends is flesh and blood or steel and silicon?
Remember that we humans are adaptive machines, too, and if one of the changes we'll have to adapt to is having intellectual equals which are made of metal then so be it.
Not AFD Joke (Score:1)
This isn't a dog (Score:3)
It's a machine. It will never be happy to see you, it will never feel "good" or enjoy "praise"; it will just go through the algorithms that make you think it did. Buy a toaster; it would be cheaper and just as affectionate.
top 10 experiments to conduct on Sony Dog (Score:4)
With this list I am officially making a proposal for experimentation with this groundbreaking AI technology and will require eleven (11) Sony(tm) Dogs(tm) in order to conduct the following important experiments:
10 - "Identity Crisis" - one week of acclimation
training to dog's given name "Rambo" with
establishment of "his territory", "his chores",
and "his spiked collar". After week one dog
will be referred to by the name "Prissy", shown
"her pink bow", and reprimanded on "violations of
Rambo's territory." "Rambo is a good dog. Prissy
is a bad dog." Prissy will have no "territory".
Rambo's emminent angry return will be prophecied
more and more frequently.
9 - "Navigation" - Dog will be told that my
apartment is "it's wonderful home." Dog will
be taken to the sidewalk and will be told to
"come home!". On the next day dog will be taken
to the end of the block and told to "come home!".
On the third day Dog will be taken to the edge
of the neighborhood and be told to "come home!".
On the fourth day Dog will be placed in the
luggage hold of a Greyhound bus bound for El Paso,
TX and told to "come home!"
8 - "New Dog, New Trick!" - Dog will be taught
to fetch Heineken from refrigerator, being
rewarded after each fetch. After one week, Dog's
legs will be removed and replaced with standard
grocery shopping cart wheels, floor will be waxed
and Dog will be told to "fetch beer". Pictures
of Dog from experiment #9 being placed under bus
with "El Paso, TX" on destination placard will
be shown for motivation.
7 - "Schizoid" - Dog will be trained to go to
front door upon utterance of phrase "go to front
door", and trained to go to back door upon
utterance of phrase "go to back door". Dog's
vocal recorder will be programmed to play sound
clip of experimenter saying "go to back door" when
dog arrives at front door. Dog's vocal recorder
will be programmed to play sound clip of
experimenter saying "go to front door" when dog
arrives at back door. Experimenter utters command
"go to front door."
6 - "Da pimp" - Dog will be shown selections from
an extensive catalogue of bestiality films. Dog
will be dressed in lingerie and placed on a
street corner in a depressed local neighborhood.
Dog will be instructed that "Pimp daddy betta
get all his cash or you ain't nobody's bitch no
more!"
5 - "Doggy Style" - Dog will be shown old Lassie
reruns where Lassie swims to save Timmy. Dog will
be transported via pontoon boat with mannequin
labelled "Timmy" to middle of sizeable community
reservoir. Timmy will be thrown overboard.
Dog will be thrown overboard. Pontoon boat will
return to shore to record observations.
4 - "Silicon brain, Iron Will" - Dog's mobility
circuits will be disabled. Dog will be placed
in front of a television, in an otherwise
empty room, which will play a continuous tape-loop
of "Young Einstein" starring Yahoo Serious.
Dog will be wired to the charging system to
provide continuous recharging.
At the end of one month the Dog will remain
locked in this room with mobility circuits
re-activated.
3 - "Nuremberg" - Dog will be placed on trial
for "crimes against humanity" and sentenced
to execution by Monster Truck. A lengthy but
fruitless appeals process will be conducted
with experimenter playing the roles of public
defender and presiding judge. Dog will be
executed after dramatic "Dead Dog Walking"
march to driveway.
2 - "Franken-tug" - Two Dog's will be trained
to "fetch". Both Dog's will be disassembled
and their front halves joined back to back.
"Dog" will be reactivated and told to "fetch".
1 - "MSCSE" - Dog will be trained to pass the
Microsoft Certified Software Engineer Exam. After
receiving certification, Dog will apply to
Redmond, WA headquarters as a "Lead Developer".
Salary and unavoidable performance bonuses will be
contributed to the Free Software Foundation.