Man vs Machine Story Writing Contest 130
ari{Dal} writes "Brutus.1 will challenge humans in a contest to write the best short story on the theme of betrayal. It took six years and about US$2 million to develop, and writes stories based on logic, AI, math, and grammar structures. The judges will be challenged not only to pick the best story, but also to spot the computer-written one. The contest runners don't believe they'll be able to tell the difference.
"
Turing Test (Score:1)
Alan Turing regarded the human brain as physical matter, and argued that its workings could therefore be reproduced by humans.
Turing introduced the concept of the 'Universal Turing Machine'. Each individual Turing Machine embodies a particular method or algorithm. Any such Turing Machine can be emulated by the Universal Turing Machine, so this one machine can accomplish any computable task.
These Universal Turing Machines are computers (refer to them as such amongst friends as a pedantic joke).
In the article, the keywords 'Nothing human here' are thrown around. To me, this is just fluff. I would like to see their definition of what constitutes human-only behaviour. Even the right side of the brain works entirely with physical matter. Therefore, simulating such things is a possibility, given enough capacity to store and process a large enough neural net.
They also mention that they do not think Brutus is 'a conscious entity'. To say that this is what will always define humans and computers may be a bit rash.
However, this is an exciting prospect. Perhaps Turing's prediction of when a machine would pass the Turing Test was only off by six to ten years.
Re:Call me cynical, but... (Score:2)
Actually, I think it will be easy to tell the difference for another reason. Take any good story, one you really liked, and plug it into just about any modern word processor with a grammar checker. It's quite simple; the best writers, be they short story authors, novelists, or columnists, know that a truly good story breaks the rules of grammar regularly. The hard part is defining how much is too much. And that, I don't think a computer can do.
Re:Tweaking software for one-time output (Score:1)
Re:Such limited thinking (Score:1)
A Sample Story from Brutus.1 (Score:4)
There was an article in the May 1998 issue of MIT Technology Review which had a sample story called "Betrayal" (very original name) written by Brutus...
Here's the link: http://www.techreview.com/articles/ma98/bringsjord.html [techreview.com]
Re:No better than Mad-Libs (Score:2)
But it is probably a pretty big deal. Now that the dreams of true Turing-esque AI have largely fallen by the wayside, researchers are focusing on smaller areas of interest and practical applications, e.g. expert systems, neural networks, or language processing. One important area is "human computer interaction", meaning not just one person sitting at their PC, but true communication between a person and a computer, either by typing or speaking. Thus, a computer that can "understand" the rudiments of grammar and "respond" in kind is a realistic proposition, even if you can say it's just an ELIZA program with a huge language database.
Just as expert systems have begun to replace, say, bank loan officers, companies are also looking to automate (for consistency as much as anything) portions of customer service. Imagine a system that can deal with the public, deciding whether the vendor has taken too many returned widgets this month and has to hold the line and suggest an exchange for a whatsis instead, that sort of thing. This would be a boon for small businesses trying to make it online.
A guy I knew a couple years ago was working on a project to have a computer read and interpret complaint letters, then recommend a course of action. This likely falls into the same category, except it's more like pure research.
Roald Dahl Must Be Laughing (Score:1)
He once wrote a short story entitled "The Great Automatic Grammatizator" that's about this very topic. To summarize:
A programmer, Adolph Knipe, has long wanted to be a writer. With the financial assistance of his employer, Mr. Bohlen, he creates a computer that can write stories automatically. The computer is a great success, and they set up a publishing company to mass-produce literature. They simply purchase the names of famous authors and produce literature in their style by adjusting the settings of the computer.
At the end, the narrator says that over half of all stories are created on The Great Automatic Grammatizator. But he, the narrator, refuses to give up on writing, even though nobody wants books written by humans anymore.
Anyhow, this story can be found in the Roald Dahl Omnibus. Good luck finding it -- check a used bookstore. What a great book.
ARGH! (Score:2)
Re:No better than Mad-Libs (Score:2)
A *good* author shouldn't just pull a 'plot outline' from a stack of pre-generated index cards and fill in the blanks, although that's a perfectly good way to sell hordes of cheapo paperback copies if you don't mind putting your name on absolute drivel. Stuff like that *does* seem to sell, after all.
Instead, if you search hard enough you can find highly original authors, like Eco, Brunner (arguably), and so forth. Not all human authors crank out "techno-thrillers" with the same characters and insanely similar plots each time, or story after story about writers being terrorized, or what-not.
A computer that gets fed a plot structure and creates entirely new ones is fine. One that just fills in the blanks comes nowhere near the level of achievement of the (better...) human authors.
The body is important to emotional development (Score:1)
I know I would.
Wonder what the algorithm is based on? (Score:1)
-- Moondog
interesting... (Score:1)
I want one of these! (Score:1)
Just think: no more waiting for Neal or Bruce or William to crank out a new novel. Just sit down and start reading!
Seriously, while possibly indistinguishable from human writing, will the stuff be good? That, I doubt.
Call me cynical, but... (Score:4)
Of course, I try to be open as well as cynical, so I look forward to reading some of Brutus' offerings.
Better than Chess? (Score:2)
Of course, this assumes that the ability to write a good story is a matter of skill alone.
Tweaking software for one-time output (Score:1)
Actually.. (Score:2)
"Seriously, while possibly indistinguishable from human writing, will the stuff be good? That, I doubt."
I think that the fact that it's /not/ good is the reason why it's indistinguishable from human writing.
Re:AI gets first post~ (Score:1)
Haha!
Guess not, eh?
Computers never win. Don't ever forget that.
BTW, isn't this sort of like that story about computers making ads? I don't know. Doesn't it seem there are some things that computers will never be able to do as well as a real person?
Computers are good at repetitive tasks, exacting tasks, and scientific tasks, but they'll never do things like be a friend, give you a hug, or make the day brighter.
-Brent--
Re:Tweaking software for one-time output (Score:2)
Nah. After all, you don't expect human writers to submit first drafts. Tweaking is inevitably required.
Computer generated novels? (Score:2)
- On the one hand, there could be only a database of whole sentences and a database describing their content and context. Using this information, the computer puts together a story. The quality of the produced stories could be good, but there's only a limited number of distinct stories and it's a lot of work to build the sentence databases.
- On the other hand, the highest achievement would be to build a computer that only has a dictionary, grammar rules, some knowledge of the real world and some artificial "common sense" to create stories.
I think it's somewhere in between, but the article doesn't really give any info on how the program works.
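For what it's worth, the first of those two designs can be sketched in a few lines. Everything here (the slot names, the sentences, the fixed slot order) is invented purely for illustration; nobody knows if Brutus works anything like this:

```python
import random

# Toy version of the "sentence database" approach: each sentence is
# tagged with the story slot it can fill, and the program strings one
# sentence per slot together in a fixed order.
SENTENCES = {
    "opening":  ["Dave trusted his advisor completely.",
                 "Striver had worked for years toward his defense."],
    "conflict": ["But the signature never came.",
                 "Then the committee fell silent."],
    "closing":  ["Dave left the hall alone.",
                 "Nothing remained of the old trust."],
}

def assemble_story(rng):
    """Pick one tagged sentence per slot, in fixed slot order."""
    return " ".join(rng.choice(SENTENCES[slot])
                    for slot in ("opening", "conflict", "closing"))

story = assemble_story(random.Random(0))
```

The limitation the poster describes falls straight out of the sketch: with two sentences per slot there are only eight distinct "stories", and every new story costs a human-written sentence.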
Anyone remember Hasan B. Mutlu? (Score:1)
In '93 or so, it was widely rumoured that Mutlu was really an AI, a script that parsed the soc.culture.* hierarchy. At about the same time, I stopped following s.c.*
As heard on NPR (Score:1)
I actually heard a story about this project on National Public Radio almost a year ago. Due to the best possible luck, I got in my car and turned on the radio in the middle of a human reading the computer generated story so that I was not forewarned.
While it was not an exceptionally long story, the quality was above 80% of the stuff you'd be likely to hear in a college writing course. I was really impressed then, and can't wait to read the new stuff.
Can anyone find the link to the NPR story??
Re:No better than Mad-Libs (Score:1)
The way I understood it, this was a result of the "corporatization"(sp?) of Hollywood - if a certain type of story is shown to do very well in the marketplace, then the businesspeople who run the corporations figure that they'll be able to make more money with that plot using reusing resources while paying very little extra for new creative input.
Kinda evolutionary in a way (anybody remember memes?) - we keep getting the same kinds of well-worn stories until somebody truly creative throws a mutation in somewhere - but the mutation must survive in the environment of the marketplace for it to become part of the entertainment "ecosystem".
Computer Generated Dead Authors (Score:1)
I think that this would present an interesting opportunity to have computers write new Shakespearean plays, write sequels to existing works, or even complete unfinished masterpieces whose authors died before finishing them. (The Canterbury Tales comes to mind.)
Infinite monkey theory (Score:1)
An infinite number of monkeys on typewriters (or a random number generator given infinite time) will eventually produce the complete works of Shakespeare. But it's the people who own the monkeys who have to sift through and find it. (And that task is made harder by the millions of flawed complete works of Shakespeare in the results.)
That famous computer music composer generates a whole bunch of junk. Its creator then goes through the results to find the gems.
By restricting Brutus.1 to real words and phrases, and giving it other rules to follow, they would cause less junk to be generated, which makes it easier to find a gem. If this is the Brutus technique, it's not really AI.
Note that this technique is similar to the technique of Deep Blue, the computer that beat Kasparov at chess. Its technique is also to throw lots of stuff at the wall and see what sticks, except that "good chess move" is easier to quantify than "good music" or "good story". As a result, Deep Blue can evaluate its results by itself.
Music generators usually have human editors.
Brutus? We'll have to see. I bet it works on infinite monkey theory.
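The monkey-plus-editor loop is easy to sketch. This is a toy, not anything Brutus is known to do; the vocabulary and the scoring rule are entirely invented:

```python
import random

# Toy generate-and-filter loop in the spirit of the infinite-monkey
# argument: constrain the generator to real words, generate lots of
# candidates, and keep only the best-scoring one.
VOCAB = ["the", "night", "was", "dark", "and", "stormy", "cat", "ran"]

def generate(rng, length=5):
    """A monkey restricted to real words."""
    return [rng.choice(VOCAB) for _ in range(length)]

def score(words):
    """Crude automatic 'editor': reward sentences that start with
    'the' and avoid repeating a word twice in a row."""
    s = 1 if words[0] == "the" else 0
    s += sum(1 for a, b in zip(words, words[1:]) if a != b)
    return s

def best_of(n, rng=None):
    rng = rng or random.Random(0)
    candidates = [generate(rng) for _ in range(n)]
    return max(candidates, key=score)

best = best_of(200)
```

The point of the sketch is where the intelligence lives: all of it is in `score`. For chess that function is easy to write, which is why Deep Blue can edit itself; for stories it isn't, which is why music generators usually end up with human editors.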
More information on Brutus (including stories) (Score:5)
http://www.rpi.edu/dept/ppcs/BRUTUS /brutus.html [rpi.edu]
It's got some stories generated by the program, and other information by Selmer. (Incidentally, I've now had Selmer for three separate courses, and he is one of the absolute best professors I've had in my four years. Period.) Selmer fully believes that "true" AI is impossible, and that man is more than a machine, but has devoted his life to the study of AI anyway, and finding close approximations.
-Brian
Re:This contest seems rather unfair... (Score:1)
The article says that it took 6 years to 'develop' this computer... I wonder how much of that time was spent wading through 100,000 crappy stories to find the one that sounded 'the most human.'
... (Score:3)
--
Re:Tweaking software for one-time output (Score:1)
Yes, tweaking should be done, but it should be done by the computer. What the computer spits out should be the final draft. This is about whether the computer can create content, not whether humans can create content from computer output.
You don't do your homework, and then have your "friend" tweak it and *then* submit it as yours, do you?
-Brent--
Re:Turing Test (Score:1)
When he says, "There's nothing human here", he's right on the money. Without human emotions or an understanding of them, no machine can connect with humans on an emotional level, except via Infinite Monkey theory.
Re:Such limited thinking (Score:1)
Re:The body is important to emotional development (Score:1)
Re:Call me cynical, but... (Score:1)
--
Emo (Score:1)
Bye,
TYLER
Manufacturing Content (Score:2)
In 1984, Orwell writes that stories and songs were generated by massive machines (the novel was written before he could have imagined doing it electronically).
Brutus.1 is not real AI; it simply constructs stories based on mathematical rules for putting together words. This is exactly what we should fear: stories and media without even artificial intelligence behind them. Stories can become completely meaningless, but they will still be amusing to the general public (note the large number of books that have absolutely nothing but entertainment value).
This is a first step to "manufactured" media devoid of any real content.
brutus source code...revealed! (Score:3)
10 PRINT "IT"
20 PRINT "WAS"
30 PRINT "A"
40 PRINT "DARK"
50 PRINT "AND"
60 PRINT "STORMY"
70 PRINT "NIGHT"
Anybody remeber 1984 (Score:1)
I laugh. (Score:1)
The best thing about the human organism is its ability to write what it has perceived and imagined in a fashion for others to enjoy. You can experiment with grammar and structure, and sometimes you can pull off a stunningly personal situation by STRUCTURING IT INCORRECTLY. Few people I know speak in a grammatically correct fashion: our machine in question may be able to handle dialogue, but can it account for a real person? Can it make the character feel REAL to you, the reader? Yes, I've read a lot of flat and lifeless fiction. I file it under CRAP with most of the rest of what society offers for consumption. If the story isn't REAL, at least in the mind of the writer, then it just can't come across as such on paper. I've been hooked by the hokiest concepts and characters not because they were well written, but because the author put every bit of his BELIEF into the concept he was relating. It was REAL.
And our beige box? Can it do the same thing? If it can, I still refuse to read it. Bell Labs or whomever will never produce a Proust, Chekhov, or a Hunter S. Thompson.
AIs and New York Times Bestselling Authors (Score:1)
Re:Call me cynical, but... (Score:1)
Neural Nets and Critics? (Score:1)
Re:Anyone remember Hasan B. Mutlu? (Score:2)
According to the people at Harvard, Mutlu was indeed an AI, written by a guy at AT&T Bell Labs and a graduate student. "The program they wrote was quite funny in a way it screwed people's names and generated insults. Still, the insults were mixed with a load of scanned propagandist files."
So that accounts for the length of the posts - cut and pasting preexisting literature.
Kean de Lacy
...waiting for a password
Re:No better than Mad-Libs (Score:2)
Quite true, and I think that if there is anything to be worried about, it's that some publishing houses may find it's cheaper to generate formula novels by computer than select them from a slushpile. The fact that "good enough" beats "better" should be remembered here.
While many might see this as liberating authors to write more worthwhile books, it could have a chilling economic effect on those that write formula novels to support their real writing or simply to break into the business.
The books I want to read by computers are ones that give me insight into what it's really like to be a computer in a human society.
But I have to think this progam is very impressive if it does as well as they suggest.
Writing Sample (Score:1)
does it do homework? (Score:1)
i remember d.dennett talking about a computer that composed music. they did a "turing test" in the form of a performance of obscure stuff done by classical composers and music composed by the computer -- concert-goers were asked to try to guess which was (e.g.) mozart and which was the computer -- people didn't do too well.
Selmer, Brutus.1, and AI in general (Score:3)
I'm one of Selmer's students in the Minds & Machines Program here at RPI, and I know there is a fundamental difference between Brutus and people. For now, that difference is creativity. Brutus was told/taught about English, much as we were, and the university setting, etc. What is different between something that I write and what Brutus.1 writes is where it comes from. I know that Brutus.1 has a limited knowledge base from which it writes, but I'm not sure I can pinpoint where these words I'm writing are coming from as my fingers type them.
I personally think that Brutus' stories are a little static, but not as bad as some of the things I've read. As a literary critic (which I'm not), I think the weakest part of the stories is consistently the endings -- for me, they leave little sense of conclusion and zero resolution. But then again, that's just me.
For people who write about the computer writing perfect English as opposed to "normal" everyday speaking and writing with small mistakes, it's very easy to program a computer to make typos, etc.
As far as Selmer's comment about Brutus.1 not being conscious, it definitely isn't, and not because it doesn't have a body; it's because it wasn't built to be conscious. It doesn't have a cohesive grasp of stories, and I don't think it has any idea about anything around it, nor does it actually understand anything about the feeling of betrayal or anything associated with it. It might know how to express it in written English, but writing about something and knowing it are two very different things indeed.
It's true that Selmer doesn't believe in Strong AI, and if you ever have the pleasure of meeting him you'll find that his arguments are clear, concise, and based perfectly on logic. My views are extremely similar to his, and I don't believe that Strong AI is possible either; I'm resistant to what some claim is the "fact" that I'm basically a machine. However, this personal belief that Strong AI can't really work or exist is exactly what drives me to see if I can make it happen.
Re: Seems fitting (Score:2)
Wow. Talk about the march of progress.
propaganda tool (Score:1)
Multiple stories and computer originality (Score:2)
It's impossible to tell how creative the computer is by only reading one story. For example, the computer may be coming out with lines which in the first story come out as brilliant, but after reading four or five stories are tired cliches. Similarly its first plot might seem well crafted, but the others might have too many similarities to have merit.
If these scenarios are so, then the programmer was able to write a set of parameters which allow variations on a single story of his creating. He wasn't, however, able to fulfill the goal of creating a machine capable of original story-writing. The results of the test would be more revealing if brutus.1 submitted more than one story.
Re:Creator has already thrown in the towel ! (Score:1)
Sounds like a misquote to me (Score:1)
I go to RPI and am taking one of his classes right now; he seems like an intelligent, straightforward guy who knows his stuff. I'll have to ask him about this.
-------------
The following sentence is true.
Maybe I'm a bit uneducated... (Score:1)
It seems to me that any AI system, no matter how complex, no matter what technology is used is ultimately a collection of a bazillion switches. Even so-called analog computers are ultimately digital.
So I don't see how any AI system can outwrite a human. People can think, computers can't. It's really simple. Writing is something that comes from the soul. Computers have no soul. Technology is not the answer to everything. Humans are still needed and creativity can't be learned... In fact, the purpose of computers is ultimately to allow humans to more easily utilize their own creativity, right?
Why would this be the case? (Score:2)
It sounds like you subscribe to the (in my opinion) slightly outlandish theory that the human brain doesn't operate at the level of its neurons, but in fact is some sort of giant quantum computer. Now, I won't deny that quantum effects could influence whether or not any particular neuron fires at a given moment and that those effects could spiral upwards (like a butterfly flapping its wings and influencing future weather) and influence the thoughts of the host brain. On the other hand, I disagree with the notion that such effects are some sort of magic spark of life and that the brain wouldn't function without them. I see no reason why you can't have an emulated brain with neurons whose weights are measured in discrete units that could think, visualize, imagine, plan, and feel emotions (although, obviously, you'd have to come up with some way of simulating the effects of the various chemicals produced in the brain on those neurons) just like a normal brain. Sure, if you started an artificial brain and a real brain with matching neurons off in the exact same state and ran them with the exact same stimuli for a length of time you'd probably get a different ending state in each one. Nevertheless, they would both react in a human manner (which would probably be to go insane, considering that both brains are receiving exactly the same stimuli [which basically means that they're trapped in some sort of simulated universe where absolutely nothing they think or try to do affects what they sense with their five senses or where their body moves and what it does]).
The only situation I can think of in which it wouldn't be possible to simulate a brain like this would be if we lived in a completely rigged universe. If the universe were designed in some way so that it only appeared as if our consciousness were originating from our brains while it's really transmitted from somewhere else, then maybe we couldn't simulate consciousness because we wouldn't be able to look at a working model. I'm not going to ask who rigged the universe, if indeed it is rigged, that way right now. That's way beyond the scope of the debate. But the same basic principle could apply to everything in the universe, even physics and mathematics could be based on principles that only "make sense" in a controlled, artificial environment.
Anyway, I doubt it. Not necessarily because I think it's impossible, mind you. It's just that, if it's true, there's nothing we could do about it anyway.
Oh yeah, about the solar system sized planet thing. Now _that_ would be an incredible trick. The engineering problems to overcome would be enormous, even working under the assumption that this isn't a fully solid computer (where would you get all the mass?). In a planet sized computer emulating a human brain, signals travelling from one side of the computer to the other at the speed of light would take very roughly the same amount of time as an equivalent electro-chemical signal would take to cross a human brain. Scale that up to solar system size and it would take a day for the signal to make the equivalent journey. Actually, this has given me some pretty interesting ideas to toy with.
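The arithmetic behind that comparison is easy to check. The figures below (brain width, nerve conduction speed, orbit diameter) are rough textbook-style values I'm supplying for the sketch, not numbers from the post:

```python
# Back-of-the-envelope check of the signal-delay comparison.
C = 3.0e8          # speed of light, m/s
BRAIN_M = 0.1      # rough width of a human brain, m
NERVE_V = 100.0    # fast myelinated axon conduction speed, m/s
EARTH_D = 1.27e7   # Earth's diameter, m
SOLAR_D = 9.0e12   # roughly the diameter of Neptune's orbit, m

brain_ms = BRAIN_M / NERVE_V * 1000   # ~1 ms across a brain
planet_ms = EARTH_D / C * 1000        # ~42 ms across an Earth-sized computer
solar_hours = SOLAR_D / C / 3600      # ~8 hours across a solar-system-sized one
```

So an Earth-sized machine is within a couple of orders of magnitude of a brain's internal latency, and the solar-system version takes a third of a day or so for one crossing -- roughly consistent with the post's "a day" estimate.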
True... (Score:1)
Hmm, questions, questions.
Re:Still a long way to Turing's test.. (Score:1)
Re:Anyone remember Racter? (Score:1)
Here's another example of Racter's output found on the t-shirt we had made up for our AI Lab:
Quite stirring for a computer program don't you think? It's probably worth keeping in mind that a whole lot of garbage had to be sifted through before this gem was found. Oh, that reminds me...
Shopping list:
nightmares (Score:1)
They just need a small sample of your innermost thoughts and they will be able to clone you intellectually, no one will suspect. Soon there will be no one left. (insane laughter)
It is scary, but there are a lot of newsgroups that could easily be replaced by a Perl script...
Re:Better than Chess? (Score:2)
People who think this sounds cool should definitely check out the book Godel, Escher, Bach [amazon.com] by Douglas Hofstadter. I'm just getting through the last pages, and... wow. Anyway, he addresses your argument, that "creativity can only be programmed as a matter of randomness (or chaoticness?), not as one of observation." Why should that be so? What's to keep a sufficiently complex computer from observing its "environment" (whatever kind of environment that may be) and making sensible observations, similar to those a human would make? The way humans express creativity doesn't seem to be the result of chaos or randomness in our thoughts, but instead a process of drawing connections between similar things, or finding similarities (or interesting differences) between things that seem to have no relation. But we do it according to processes, which seems to imply that similar processes could be programmed. Basically, what would it be about a human that would allow us to engage in creativity which could never be programmed, or result from a logical structure -- that is, be an emergent epiphenomenon of a lower-level program?
----
We all take pink lemonade for granted.
Pretty weak... (Score:3)
Unless they reveal the internals, or release several hundred stories generated by the program with no human selection or input, there is no reason to believe they have accomplished anything new or interesting. It appears to me that this is a story compiler, not a story writer. The "programmers" wrote the facts of the story and the computer compiled it into a linear story of a fixed format written in English:
-detailed view of betrayed
-establishment of trust
-opportunity for betrayal
-initiation of betrayal event (but not the complex details of the confrontation that would ensue)
-short view of betrayer afterwards
There may be an ad-libbing function too that generates variations from combining random selections from lists, but this can hardly be called AI.
The sample stories show no motivation of any sort. They are nonsense stories. There is no character easier to write about than a madman, because his actions don't need to be logical.
The exception is the self-betrayal story, which displays a very simple motivation (if that's even the right word): the betrayer/betrayed hates what he has to do and freezes in the middle of it. With such a small sample, there's no reason to assume anything but that the program can produce no other motivation.
BTW, does anyone doubt that the AOL community can produce lifeless prose indistinguishable from that which a program can create? I've taken a few minutes to identify bots in chats before, but only because I've had equally lame conversations with people who have nothing worth saying, are often distracted because they're doing five other things with their computer at the same time, commonly only want to talk about one very specific thing, and sometimes don't know English very well. Any human can seem like a bot with sufficient limitations on the interaction.
In summary, it looks like this costs a lot more effort than it pays back. It takes immense human effort to produce short stories of very limited range in an apparently fixed format. The creativity displayed here is human, and a human using this tool could not compete with a professional author spending the same effort. While it might be able to produce hundreds of stories from a single input, nobody would want to read them all because they would all be the same story underneath.
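If that "story compiler" reading is right, the core of the program could be as simple as a template filler. Every field name and sentence here is my own invention, purely to illustrate the fixed five-stage skeleton; the real program is surely more elaborate:

```python
# Sketch of a "story compiler": a human supplies the facts, and the
# program renders them into a fixed five-stage skeleton.
TEMPLATE = [
    "{betrayed} was a {betrayed_trait} man who wanted {goal}.",  # view of betrayed
    "He trusted {betrayer} to {promise}.",                       # establishment of trust
    "When the moment came, {betrayer} alone could {promise}.",   # opportunity for betrayal
    "{betrayer} did nothing.",                                   # initiation of betrayal
    "Afterwards, {betrayer} {aftermath}.",                       # short view of betrayer
]

def compile_story(facts):
    """Fill each stage of the skeleton from the supplied facts."""
    return " ".join(line.format(**facts) for line in TEMPLATE)

facts = {
    "betrayed": "Dave", "betrayed_trait": "hard-working",
    "goal": "his degree", "betrayer": "Professor Hart",
    "promise": "sign the thesis", "aftermath": "went home satisfied",
}
story = compile_story(facts)
```

Swap in a hundred different `facts` dictionaries and you get a hundred "different" stories that are, underneath, all the same story -- which is exactly the objection above.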
Creator has already thrown in the towel ! (Score:2)
I too believe that before a computer can create an acceptable story with believable human characters, you'll need a computer capable of mimicking human characteristics. You can't have believable human characters without a computer capable of being a believable human. Computer Science is a long way away from this, and Brutus, while interesting, is no closer to this holy grail of AI.
Tom Clancy has been using this for years. (Score:1)
Word has it Danielle Steele is interested.
- A.P.
--
"One World, one Web, one Program" - Microsoft promotional ad
1984 (Score:1)
Re:No better than Mad-Libs (Score:1)
Nowhere does the article say that the computer entry uses a fed plot-structure. Just that it's incapable of plot-structures as complex as human ones. We don't know for sure exactly what kind of plot-generation scheme is used here. It could very well be, as you said, pre-composed. But it could also be "invented."
I don't think this would be the main limiting factor, though, in the system's writing capability. There are other, more pressing, difficulties I could see arising from the system; for instance, it would probably be unable to handle humour very well, or science fiction/fantasy. This is because it would be incapable of free thought and intuition/insight. Sure, it could probably come up with some sort of unusual idea that could probably be mistaken for a joke--if told right. The problem here is, what part do you reveal first, and what do you leave out until the end, to deliver as a punch line?
Likewise, with the science fiction/fantasy example, it could probably construct a universe on its own, where all vehicles look like elongated ice cream cones, or where all travel is done using some pseudo matter-shifting technology or something, but the big problem is, again, how would the writer understand how to relate it to the reader?
You see, human authors know how low-level they have to go with a concept in order to relate a new idea to someone else. This low level is called common knowledge. How do you relate this common knowledge to the computer system?
The big underlying problem is that we still haven't figured out how to get a computer to understand ideas and abstract theories. Until the system is capable of storing these theories, or more accurately, converting these theories into 'understanding' and storing that understanding -- understanding being like an executable version of that theory or idea; perhaps we could refer to this process as "compiling", since it's theoretically a lot like compiling source code -- we'll never have a system capable of invention or insight. Just an infinite number of monkeys on an infinite number of typewriters.
James
Re:Maybe I'm a bit uneducated... (Score:1)
heh, I'm sure you know Alan Turing's reply to that argument...
--
"HORSE."
Re:Better than Chess? (Score:1)
And yes, G, E, B is an excellent book my dad made me read a little while back.
Re:The body is important to emotional development (Score:1)
SHRDLU was only able to converse intelligently about the relative position of blocks in its virtual world. It had no way of grasping talk of anything beyond its world of blocks.
The same goes for the virtual world of the expert systems that have been so popular, whether these expert systems diagnose illness or tell stories about buying hamburgers. Virtual worlds thus far have been closed systems.
The world that bodies inhabit is not a closed system in any practical sense. Any "rules" discovered may suddenly yield exceptions, special cases, or end up being falsified. There are always surprises, be they mundane or major. This world is not a closed system in that it is always open for interpretation. Remember that logic is argumentative but not necessarily descriptive.
My view is that a machine able to have the sort of emotional responses that let us humans relate to it as an intelligence, and not just as a complex machine, will have to have some sort of window into the world -- this window being some sort of body.
AI Rights? (Score:1)
Even I would have to go along with Turing and say that if we can't actually point out the difference, we're morally obligated to assume that computers *can* feel emotion, etc.
But what will we do if we come to the point where we have to make that assumption?
You know, the one thing that always bothers me when talking about Artificial Intelligence and where it could possibly go is that a lot of people don't think about the political implications of such an intelligence. Why are we working so hard to replicate human intelligence artificially without a set of ethical guidelines? If we actually succeed, we're gonna have a hell of an interesting set of problems about which to worry.
I'm really tired, it's five o'clock in the morning, and I'm up with a splitting headache, so I'm kind of worried that I'm rambling. I suppose my point is that in a world of complication, where we uphold a variety of beliefs about the ways in which we should treat each other, should we really be working so hard to create artificial minds and all that suggests without a set of ethical guidelines? I hate the idea of creating a slave class.
Re:Still a long way to Turing's test.. (Score:1)
I don't understand why some people were so shocked after Deep Blue's victory over Kasparov. The real miracle is that men are still able to compete with computers today! It is merely a matter of time before we get machines powerful enough to calculate and try the entire tree of a game (or, for more complex games, significant parts of it) and be almost sure to win.
By design, machines are better than humans at mathematical games. Chess is a mathematical game.
What about the Chinese and Japanese game of Go [well.com]? So far, Go has been entirely too complex for a computer to beat anyone but a novice player. Hell, I'm still trying to figure out the most basic rules. There's a level of complexity and intuition to this very mathematical game that computer programmers can't replicate or overcome. This suggests to me that there's more to programming a computer to be unbeatable at a game than just entering all possible variations. With Go, that number is humongous. There's a saying that Go is so complex that no two identical games have ever been played. You can't say that about Chess.
Instead of just computing likely outcomes, true AI is going to have to have emotional understanding and a real level of intuition. That's going to be the difficult part.
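For what it's worth, the "try the entire tree" approach the parent describes is just exhaustive minimax. Here's a toy sketch on a trivial Nim variant (the game and names are purely illustrative; real chess engines prune heavily and never expand the full tree):

```python
# Exhaustive game-tree search (minimax) for a toy Nim variant:
# players alternately take 1 or 2 stones; whoever takes the last stone wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones):
    """Return +1 if the player to move can force a win, -1 otherwise."""
    if stones == 0:
        return -1  # the previous player took the last stone, so the mover has lost
    # Try every legal move; a position where the opponent loses is a win for us.
    return max(-best_outcome(stones - take) for take in (1, 2) if take <= stones)

print(best_outcome(3))  # -1: multiples of 3 are losing positions in this game
```

The catch, of course, is that Nim's tree is tiny while chess's is astronomically large and Go's is larger still, which is exactly the parent's point about Go.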
Re:True... (Score:1)
I don't know about that. Some forced study in NL-related fields has taught me the golden rule of the field: no matter what the problem is, it just takes some more and better programming techniques. Of course, no one really knows what programming techniques these are, so we're stuck...
Re:Worse than chess challenge (Score:1)
My point is... that if the story is sufficiently dull, boring, and straitjacketed, then it will make no difference whether it was written by a mind or by a knowledge-base program.
Such limited thinking (Score:2)
I find it interesting that someone with enough know-how to co-create the computer, Brutus.1, would have such a limited mindset. My dog doesn't have a human body, but I'm reasonably sure that it's conscious.
Then again, one of my pet conspiracy theories is that the Department of Defense made the Internet a free-for-all because it would be much cheaper to let (m|b)illions of people contribute their life stories than to pay someone to generate content. At least one AI researcher theorized that a computer would need about 40TB of data to gain self-awareness, and that was the cheapest method of getting that much data.
Anyway, if my theory were true, then I'd also be willing to believe that the DOD covertly underwrote Brutus.1 to be the mouth of the newly self-aware Internet. In that case, Professor Bringsjord would be quite the spin doctor.
Then again, I'm stuck at work and out of coffee, so you could probably just ignore all of this.
Guess I'll go back to posting my web page about what it means to be human...
Anyone remember Racter? (Score:3)
Many enraged psychiatrists are inciting a weary butcher. The butcher is weary and tired because he has cut meat and steak and lamb for hours and weeks. He does not desire to chant about anything with raving psychiatrists, but he sings about his gingivectomist, he dreams about a single cosmologist, he thinks about his dog. The dog is named Herbert.
I sincerely hope Brutus will do better than that, and show how much technology and programming have advanced in 12 years.
(For a demonstration of actual rather than artificial insanity, check out the article's last sentence: "If the computer wins the contest, I'm going to take my computer and burn it. I certainly hope a human wins." Sigh. Did the Wright brothers burn their plane right after Kitty Hawk?)
Worse than chess challenge (Score:1)
Machine:
with well-defined rules for grammar,
a little logic,
and a large database (or even some sentences copied from the web),
produces a story with no mistakes in grammar.
Human:
with well-defined rules for grammar,
a little logic,
and a brain (or even some sentences cut-and-pasted from the web, from not-so-well-known sites),
produces a story with no mistakes in grammar.
Maybe the stories for Slashdot are already written by a computer.
This contest seems rather unfair... (Score:2)
If you give the computer very strict and complex rules for putting together words, and have it crank out 1,000,000 stories.. one of them is bound to sound like a human.
This contest would be more fair if the computer's entries were randomly generated, rather than handpicked by a human.
No better than Mad-Libs (Score:1)
Can Brutus.1's stories REALLY be considered AI? After all, if they feed it "plot structure", aren't they in large part writing the story for it? What would really be amazing is if it could actually come up with novel story structures and plots and that sort of thing.
Otherwise, it's just a matter of degree more complex than Mad-Libs.
I think it's neat and all, but I hardly think it's a big deal.
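To make the Mad-Libs comparison concrete: a fill-in-the-blanks generator really is only a few lines. This is a hypothetical minimal sketch (the template and word lists are made up here), not how Brutus.1 actually works:

```python
import random

# Hypothetical Mad-Libs-style generator: slot random words into a fixed template.
# This is the "matter of degree" baseline, not Brutus.1's actual method.
TEMPLATE = "The {adj} {noun} betrayed the {victim}, and nobody {verb}."
WORDS = {
    "adj":    ["weary", "ambitious", "verdant"],
    "noun":   ["professor", "butcher", "cosmologist"],
    "victim": ["university", "committee", "dog"],
    "verb":   ["noticed", "objected", "wept"],
}

def mad_lib(rng=random):
    # Fill each slot with a random choice from its word list.
    return TEMPLATE.format(**{slot: rng.choice(opts) for slot, opts in WORDS.items()})

print(mad_lib())
```

Anything beyond this (consistent characters, causally linked events, a betrayal that actually makes sense) is where the claimed $2 million presumably went.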
Re:Tweaking software for one-time output (Score:1)
Computer written sitcoms/dramas (Score:3)
This has pretty good ramifications for gaming, role playing games that make their own plots 10 years down the track.
Don't they already use a computer to write the plot for Ally McBeal, though?
for n in every_male_in_show:
    act_like_scum(males[n])
court_case = random_element(allys_life)
court_winner = woman
insert_frigging_annoying_dancing_baby()
ally_walks_down_street(random_song(ally_soundtrack))
What type of algorithm does it use? (Score:1)
But seriously, this is either extremely impressive or very lame, depending on exactly how the stories are created.
Re:AI gets first post~ (Score:3)
Yeah, score this down as "heathen"
A more humorous (!?) view.. (Score:2)
"The judges, who must not only pick the winner but also which story was written by Brutus.1, include published authors and a university English professor."
In other words, the computer, a bunch of people who have thrown a Web page online at some point during their lives, and some prof who has better things to do than prepare for his next class.
"The computer - which cost $2 million and took more than six years to develop - can write stories with themes of betrayal, deception, evil and a little bit of voyeurism."
It exhibits all of the best qualities in a person!
" "It's provocative," said Hurley, who is running the contest. "But I bet nobody is going to figure out which one was written by a computer - most people write stories that are worse than this." "
I'm not sure if what he's talking about is provocative, but the topics that computer writes about sure are. Well, we all know that the English prof is going to win, and apparently they got the Web page making people together so the computer will blend in.
" "I don't believe Brutus is a true conscious entity," said Bringsjord, director of the school's minds-and-machines lab. "It doesn't have a human body yet." "
You can't be a conscious entity unless you have a human body? Damn.. I was so hoping to be one of those heads in a jar like in Futurama.. Grr.
Actually.. watch this thing turn into something like Skynet in the Terminator movies some day and see what this guy has to say /then/!
And what does he mean by "yet"..? *shudder*
Funny how we make up tests and then try to cheat (Score:2)
--
grappler
The short story that would win.... (Score:1)
named Amiga has had, now that short story would win and beat Brutus 1.
As long as it's got good grammar (Score:1)
Re:Worse than chess challenge (Score:1)
If not, do a tad more reading before shooting your mouth off.
If so, I feel greatly offended that you think that's all that goes into writing a story.
This is REALLY important.. (Score:2)
In mathematics we are beginning to see theorem-proving software that can do a little of the grunt work involved in proving some types of theorems, but the limiting factor is still partly user interface and partly the difficulty of learning how to interact with the AI, i.e. designing your notation so that the AI can work on it. I expect the problem with computer-generated documentation will also be that the human author needs to express himself or herself to the machine in a way it can understand, and the machine needs to give sufficient feedback so that the human does not end up fighting the machine to keep it from writing down a specific path.
This is sorta like the research into functional programming languages. You write your code in a provably correct specification language and then have the compiler make it efficient. But imagine a compiler which inserted optimization hints into the functional code so that you could come along and adjust it later.
Jeff
BTW> I wonder why no one has written an AI to check C source for buffer overflows.
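You don't even need AI for the crudest version: as a toy illustration (a hypothetical sketch, nowhere near a real static analyzer), you could just grep C source for the classic size-ignoring library calls:

```python
import re

# Hypothetical toy checker: flag C library calls that ignore buffer sizes.
# A real analyzer would parse the code and track buffer lengths; this just greps.
UNSAFE = ("gets", "strcpy", "strcat", "sprintf")
PATTERN = re.compile(r"\b(%s)\s*\(" % "|".join(UNSAFE))

def flag_unsafe_calls(c_source):
    """Return (line_number, call_name) for each suspicious call."""
    hits = []
    for lineno, line in enumerate(c_source.splitlines(), start=1):
        for match in PATTERN.finditer(line):
            hits.append((lineno, match.group(1)))
    return hits

code = (
    "int main(void) {\n"
    "    char buf[8];\n"
    "    gets(buf);                       /* classic overflow */\n"
    '    strncpy(buf, "ok", sizeof buf);  /* bounded, not flagged */\n'
    "    return 0;\n"
    "}"
)
print(flag_unsafe_calls(code))  # [(3, 'gets')]
```

Anything smarter than this (tracking actual buffer sizes across assignments and calls) is where it stops being a grep and starts being research.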
The death of Weak AI (Score:1)
The Strong AI community has always known that it has a hard task ahead, but that's no reason to believe that the Strong AI direction is flawed and to take instead a less ambitious path.
Re:Such limited thinking (Score:1)
This isn't a limited mindset at all. Douglas Coupland speculates in Microserfs that the peripheral nervous system functions as a peripheral memory storage device. In that case, part of that which makes up your mind might be stored in your body: peripheral memory. Such peripheral memory might in fact function in a very different way than central memory.
Anyway, what evidence of consciousness does your dog display? Consciousness implies self-consciousness, which implies a capacity for reasoning about oneself. No dog that I've ever met is really capable of any such thing: for the most part they're idiots.
~k.lee
Re:Anyone remember Racter? (Score:1)
> will remain as they are.
What do you mean we can't change humans' abilities? What do you want to improve? Memory? Creativity? The ability to perform calculations? All of these things can be improved for a particular individual w/ practice & education. And by learning how to better practice & educate, we can improve them for humanity as a whole. Of course, that doesn't begin to touch on what is, IMNSHO, the most important capacity of all, which is to understand[1]. But frankly, we haven't got an f'ing clue what it means to understand, so we certainly don't know how to measure it, or whether it even makes sense to talk about a capacity for understanding in a concrete, rigorous way.
> however, computers are only a tool that process
> information, no emotion and such.
Here again, while I'd personally agree with you, this is hardly settled. Many people would argue that they could, or perhaps even do. Even I would have to go along w/ Turing & say that if we can't actually point out the difference, we're morally obligated to assume that computers *can* feel emotion, etc.
[1] Yes, I'm being deliberately ambiguous here. I'm not going to say whether I'm talking about emotional understanding or technical/abstract understanding, because I don't know. I don't think there's really any significant difference in fact, but that's another issue....
Are we doing this right, though? (Score:1)
At best, you'd get something decidedly inhuman.
Maybe our order is a bit backwards. What if we were to teach a machine to read instead, and keep track of not just the states of the characters, but also what's going on inside their heads, and thus their *motives* for doing things? If your AI thus learns that someone jumps when startled, smiles when pleased, etc., it'll have a better understanding of the human mind. Better than we could teach it, in fact, if you feed it enough input data; who can tell a person who has been isolated from society everything they need to know about the world? We'll overlook something.
I guess it seems kinda silly to say all this, cos I suppose that's pretty much the essence of artificial intelligence in itself. But then I guess what I'm really saying is, isn't this a bit premature?
Maybe they should... (Score:1)
"Dave Striver loved the university--its ivy-covered clocktowers, its ancient and sturdy brick, and its sun-splashed verdant greens and eager youth. The university, contrary to popular belief, is far from free of the stark unforgiving trials of the business world; academia has its own tests, and some are as merciless as any in the marketplace. A prime example is the dissertation defense: to earn the PhD, to become a doctor, one must pass an oral examination on one's dissertation. This was a test Professor Edward Hart enjoyed giving. "
(verbs highlighted for your convenience)
Brutus seems to have a problem keeping its verbs in the same tense.
Re:Pretty weak... (Score:1)
--
Re:No better than Mad-Libs (Score:1)
Now THAT sounds like something worth reading!
Re:Tweaking software for one-time output (Score:2)
"My random number generator program is so smart it can produce whatever number you want."
"Really? OK, make it produce the number 1345687"
"OK..click..click..click..click.."
"Hey, those numbers aren't even close to the one I said."
"Yeah, but watch.."
hours later
"Hey, whadda ya know? It really did produce the same number. That program of yours is fantastic! It can produce specific numbers much better than any human can."
--
Machine stories (Score:2)
"Damn it 213.23.434.32 You cant go along masquirading your IP along, your no better than being a localhost junkie!"
"Oh and you say that to me? You 127.0.0.1 whore!"
[Bum Bum Buum! Dramatic Pause]
Whats next, Computers that can reginize speech?
Re:This contest seems rather unfair... (Score:2)
Still a long way to Turing's test.. (Score:3)
It is evident that computers are intrinsically better than humans at mathematical games stricto sensu. I don't understand why some people were so shocked after Deep Blue's victory over Kasparov. The real miracle is that men are still able to compete with computers today! It is merely a matter of time before we get machines powerful enough to calculate and try the entire tree of a game (or, for more complex games, significant parts of it) and be almost sure to win.
By design, machines are better than humans at mathematical games. Chess is a mathematical game. Writing a very short story on a precise subject can still be roughly modelled as a mathematical game, at least for the structure of the story, while the "creative sugar" may be the difficult bit. Writing a full novel with complex stories and deep, meaningful dialogues is beyond their reach.
The problem is, are there still many people who actually read complex stories - especially with deep, meaningful dialogues ? This Brutus-1 computer is just a machine equivalent for Barbara Cartland or industrial pop-music songwriters. CACDBS - Computer-Aided Celine Dion BullShit - is only years away.
You see, this is a little like Babelfish: on its own, it's useless (too buggy), but used as a "preparser" to do the bulk of the job, so that humans only have to correct the errors and add little twists here and there, it can drastically enhance productivity. My opinion is, it will be very successful in America.
(This is not an attempt at US-bashing: I'm sure the books it'll write will have tremendous success even in Europe. It's simply that European editors might be more reluctant to adopt this machine than their American counterparts. Damn intellectuals. They still don't understand that the market is always right.)
Thomas
$2M? Jeez (Score:2)
Getting a computer to generate text is not that big of a deal (unlike, say, getting it to play chess really well). The postmodernism generator [monash.edu.au] does a pretty good job (and funny, too) and I'd venture to say for far less money.
Re:Such limited thinking (Score:3)
To pursue the idea a bit further... The Scientific American article mentioned that Edward Teller is missing a foot due to a streetcar accident in 1928. Is he therefore less conscious than someone with two feet? How much of a human body is required? Would a single atom suffice? Two? 32767?
Is a fat man more conscious than a thin man?
Re:Anyone remember Racter? (Score:2)
Read Jorn Barger's Racter FAQ [robotwisdom.com].
It's probably going too far to call it a hoax, but there certainly was more hype than substance here.
Isaac Asimov (Score:3)
Words are a means of self expression. Giving a machine the power to express itself in words is just one more step in producing true AI. So kudos to the programmers and engineers.
Just one more thought: the robot's final story pitted two colleges against each other, one from Yule (Yale) and another from Harvard. Anyway, please make all the corrections necessary to my poor recollection of the story.
Bortbox01