Technology

Man vs Machine Story Writing Contest

ari{Dal} writes "Brutus.1 will challenge humans in a contest to write the best short story on the theme of betrayal. It took six years to develop at a cost of about 2 million dollars US, and writes stories based on logic, AI, math, and grammar structures. The judges will be challenged, not only to pick the best story, but also to spot the computer written one. The contest runners don't believe they'll be able to tell the difference. "
  • This reminds me of the Turing test, which Alan Turing proposed in 1950. Turing predicted that by the year 2000, man and machine alike could take his intelligence test, and there would be no distinguishable outcome.

    Alan Turing looked at the human brain as matter, and said that it could be reproduced by humans.
    Turing introduced the concept of the 'Universal Turing Machine'. Each Turing Machine embodies a particular method or algorithm. These Turing Machines could be simulated by the Universal Turing Machine, so this one machine could accomplish any task.

    These Universal Turing Machines are computers (refer to them as such amongst friends as a pedantic joke :) ), and the Turing Machines that run inside them are the programs we run. Alan Turing is seen as the grandfather of all modern Computer Science.

    In the article, the keywords 'Nothing human here' are thrown around. To me, this is just fluff. I would like to see their definition of what constitutes human-only behaviour. Even the right side of the brain works entirely with physical matter. Therefore, simulating such things is a possibility, given enough capacity to store and process a large enough neural net.

    They also mention that they do not think Brutus is 'a conscious entity'. To say that this is what will always separate humans from computers may be a bit rash.

    However, this is an exciting prospect. Perhaps Turing's predictions about the outcome of the Turing Test's completion date were only off by six to ten years.
  • OK, Cynical.

    :)

    Actually, I think it will be easy to tell the difference for another reason. Take any good story, one you really liked, and plug it into just about any modern word processor with a grammar checker. It's quite simple: the best writers, be they short story authors, novelists, or columnists, know that a truly good story breaks the rules of grammar regularly. The hard part is defining how much is too much. And that, I don't think a computer can do.

  • A fair way to run the contest would be to turn the program over to a third party. THEN choose the subject of the story. Otherwise you could just hard-code a story into the program.
  • I seriously hope the DOD is working on an AI like you described. And I really, really hope I get a chance to meet it. I'd absolutely love to meet something with the Internet for a mind. It'd be truly bizarre. It'd be like you used Dejanews as a training file for MegaHal!
  • by ASCIIMan ( 47627 ) on Saturday September 18, 1999 @05:32AM (#1675033)

    There was an article in the May 1998 issue of MIT Technology Review which had a sample story called "Betrayal" (very original name) written by Brutus...

    Here's the link: http://www.techreview.com/articles/ma98/bringsjord.html [techreview.com]

  • I don't believe the creators (researchers at IBM's T.J. Watson Research Center) consider this to be true Artificial Intelligence.

    But it is probably a pretty big deal. Now that the dreams of true Turing-esque AI have largely fallen by the wayside, researchers are focusing on smaller areas of interest and practical applications, e.g. expert systems, neural networks, or language processing. One important area is "human-computer interaction", meaning not just one person sitting at their PC, but true communication between a person and a computer, either by typing or speaking. Thus, a computer that can "understand" the rudiments of grammar and "respond" in kind is a realistic proposition, even if you can say it's just an ELIZA program with a huge language database.

    Just as expert systems have begun to replace, say, bank loan officers, companies are also looking to automate (for consistency as much as anything) portions of customer service. Imagine a system that can deal with the public, deciding whether the vendor has taken too many returned widgets this month and has to hold the line and suggest an exchange for a whatsis instead, that sort of thing. This would be a boon for small businesses trying to make it online.

    A guy I knew a couple years ago was working on a project to have a computer read and interpret complaint letters, then recommend a course of action. This likely falls into the same category, except it's more like pure research.
  • I can't help but feel that Roald Dahl (Author of Willy Wonka, BFG, Danny, etc.) is appreciating this moment, even though he's been dead for nearly a decade.

    He once wrote a short story entitled "The Great Automatic Grammatizator" that's about this very topic. To summarize:

    A programmer, Adolph Knipe, has long wanted to be a writer. With the financial assistance of his employer, Mr. Bohlen, he creates a computer that can write stories automatically. The computer is a great success, and they set up a publishing company to mass-produce literature. They simply purchase the names of famous authors and produce literature in their style by adjusting the settings of the computer.

    At the end, the narrator says that over half of all stories are created on The Great Automatic Grammatizator. But he, the narrator, refuses to give up on writing, even though nobody wants books written by humans anymore.

    Anyhow, this story can be found in the Roald Dahl Omnibus. Good luck finding it -- check a used bookstore. What a great book.
  • I could've so used one of these in English writing classes. I bet that's why these doods originally started writing this software.
  • 'coz there's a difference in "learning by example" and "filling in the gaps".

    A *good* author shouldn't just pull a 'plot outline' from a stack of pre-generated index cards and fill in the blanks, although that's a perfectly good way to sell hordes of cheapo paperback copies if you don't mind putting your name on absolute drivel. Stuff like that *does* seem to sell, after all.

    Instead, if you search hard enough you can find highly original authors, like Eco, Brunner (arguably), and so forth. Not all human authors crank out "techno-thrillers" with the same characters and insanely similar plots each time, or story after story about writers being terrorized, or what-not.

    A computer that gets fed a plot structure and creates entirely new ones is fine. One that fills in the blanks is coming nowhere near the level of achievement of the (better...) human authors.
  • Not necessarily. The more we learn about humans, the more connections we find between brain and body. While it might be possible to create an AI by simulating a human brain, I suspect that, without a body, it would quickly go insane.
    I know I would.
  • See this could be really cool or no big deal. Just because they spent $2 mil doesn't really mean squat. There could be a bunch of cool ai stuff, or it could just be a "choose-your-own-adventure" on a more general level. I hope they don't have lots of pieces of stories that are just thrown together using rules that require them to make sense.

    -- Moondog
  • Really nice to see what they're coming up with. Though I'd still like to see development of the plot, manipulation of the different levels of interest, and in-depth development of the characters before I'd even consider a machine as a writer. Of course, I'd like to see some humor too... am I asking too much?
  • Just think: no more waiting for Neal or Bruce or William to crank out a new novel. Just sit down and start reading!

    Seriously, while possibly indistinguishable from human writing, will the stuff be good? That, I doubt.

  • by rde ( 17364 ) on Saturday September 18, 1999 @04:36AM (#1675044)
    Okay, within a 500-word constraint it might do well, but that's only because in a story that short there's no room for development. I've always felt that computers would (some day) do a decent job at drabbles (stories of exactly 100 words), but anything over a couple of thousand words is bound to be distinguishable from the work of a hume.
    Of course, I try to be open as well as cynical, so I look forward to reading some of Brutus' offerings.
  • Depending on the coding techniques used (hopefully not full sentences in lookup tables ;), this would be a much better AI accomplishment than a good Chess computer. No offense, Big Blue. :)

    Of course, this assumes that the ability to write a good story is a matter of skill alone ... since creativity can only be programmed as a matter of randomness (or chaos?), not as one of observation. The "observation" that good writers use to form their stories has had to be inserted somehow ... we'll see I guess.
  • The article says the entry has already been written. I wonder how long the software was tweaked and how often it was run to get one reasonable story as output. To make that contest useful, they should run the program once and take whatever comes out.
  • "Seriously, while possibly indistinguishable from human writing, will the stuff be good? That, I doubt."

    I think that the fact that it's /not/ good is the reason why it's indistinguishable from human writing.

  • First post generated by a computer!!!!!

    Haha!
    Guess not, eh?

    Computers never win. Don't ever forget that.

    BTW, isn't this sort of like that story about computers making ads? I don't know. Doesn't it seem there are some things that computers will never be able to do as well as a real person?

    Computers are good at repetitive tasks, exacting tasks, and scientific tasks, but they'll never do things like be a friend, give you a hug, or make the day brighter.

    -Brent
    --
  • To make that contest useful, they should run the program once and take whatever comes out.
    Nah. After all, you don't expect human writers to submit first drafts. Tweaking is inevitably required.
  • I think the interesting question here is, how much of the story is created by the computer, how much is part of the poetry program. There are 2 extremes that I can think of:

    - On the one hand, there could be only a database of whole sentences and a database describing their content and context. Using this information, the computer puts together a story. The quality of the produced stories could be good, but there's only a limited number of distinct stories and it's a lot of work to create the stories.

    - On the other hand, the highest achievement would be to build a computer that has only a dictionary, grammar rules, some knowledge of the real world, and some artificial "common sense" to create stories.

    I think it's somewhere in between, but the article doesn't really give any information on how the program works.
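The two extremes sketched in the comment above are both easy to prototype. Here is a minimal version of the second approach: a phrase-structure grammar plus a tiny vocabulary, recursively expanded at random. Every rule and word below is invented for illustration; a real generator like Brutus would add world knowledge and plot constraints on top of the syntax.

```python
import random

# A toy phrase-structure grammar: each symbol expands to one of several
# alternatives; anything not in the table is a terminal word.
# All rules and vocabulary here are made up for illustration.
GRAMMAR = {
    "STORY": [["SENTENCE", "SENTENCE", "SENTENCE"]],
    "SENTENCE": [["NP", "VP", "."]],
    "NP": [["the", "NOUN"], ["NAME"]],
    "VP": [["VERB", "NP"], ["VERB"]],
    "NOUN": [["professor"], ["student"], ["thesis"]],
    "NAME": [["Hart"], ["Striver"]],
    "VERB": [["betrayed"], ["trusted"], ["abandoned"]],
}

def expand(symbol):
    """Recursively expand a grammar symbol into a list of words."""
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word
    production = random.choice(GRAMMAR[symbol])
    words = []
    for part in production:
        words.extend(expand(part))
    return words

def tell_story():
    """Expand the start symbol and tidy up the punctuation."""
    return " ".join(expand("STORY")).replace(" .", ".")

print(tell_story())
```

Scaling this up means adding nonterminals for plot structure (setup, betrayal, aftermath) rather than just syntax, which is roughly the gap between babble and story.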
  • Mutlu was a regular poster in several of the soc.culture.* newsgroups, back in '92 or so... he would post dozens of really long posts per day on certain subjects and would reply to any post that touched on these subjects...

    In '93 or so, it was widely rumoured that Mutlu was really an AI, a script that parsed the soc.culture.* hierarchy. At about the same time, I stopped following s.c.* ... Anybody know what came of that?

  • by Anonymous Coward

    I actually heard a story about this project on National Public Radio almost a year ago. Due to the best possible luck, I got in my car and turned on the radio in the middle of a human reading the computer generated story so that I was not forewarned.

    While it was not an exceptionally long story, the quality was above 80% of the stuff you'd be likely to hear in a college writing course. I was really impressed then, and can't wait to read the new stuff.

    Can anyone find the link to the NPR story??

  • I thought that "filling in the gaps" was a long-standing Hollywood tradition for coming up with "new" stories?

    The way I understood it, this was a result of the "corporatization"(sp?) of Hollywood - if a certain type of story is shown to do very well in the marketplace, then the businesspeople who run the corporations figure that they'll be able to make more money with that plot by reusing resources while paying very little extra for new creative input.

    Kinda evolutionary in a way (anybody remember memes?) - we keep getting the same kinds of well-worn stories until somebody truly creative throws a mutation in somewhere - but the mutation must survive in the environment of the marketplace for it to become part of the entertainment "ecosystem".
  • I seem to recall a similar story four or five years ago, about a computer that had been programmed to write stories in a similar vein to a specific author, who I believe was deceased. (I unfortunately don't remember any further information such as the author's name or the name of the computer, sorry)


    I think that this would present an interesting opportunity to have computers write new Shakespearean plays, write sequels to existing works, or even complete unfinished masterpieces whose authors died before they were finished. (The Canterbury Tales comes to mind)
  • The article does not make clear whether they reviewed many possible entries and picked a winner, but. . .

    An infinite number of monkeys on typewriters (or a random number generator given infinite time) will eventually produce the complete works of Shakespeare. But it's the people who own the monkeys who have to sift through and find it. (And that task is made harder by the millions of flawed complete works of Shakespeare in the results.)

    That famous computer music composer generates a whole bunch of junk. Its creator then goes through the results to find the gems.

    By restricting Brutus.1 to real words and phrases, and giving it other rules to follow, they would cause less junk to be generated, which makes it easier to find a gem. If this is the Brutus technique, it's not nearly AI.

    Note that this technique is similar to the technique of Deep Blue, the computer that beat Kasparov at chess. Its technique is also to throw lots of stuff at the wall and see what sticks, except that "good chess move" is easier to quantify than "good music" or "good story". As a result, Deep Blue can evaluate its results by itself.

    Music generators usually have human editors.
    Brutus? We'll have to see. I bet it works on infinite monkey theory.
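The generate-and-filter technique described above can be sketched in a few lines: produce many random candidates and keep whichever scores highest under some crude fitness function. The scorer below (a word-list lookup) is a made-up stand-in for the human editor; the point is only that tightening the generator's rules, as Brutus reportedly does, shrinks the junk pile someone has to sift.

```python
import random
import string

# Hypothetical "recognizable English" word list, standing in for an editor.
TARGET_WORDS = {"betrayal", "trust", "night", "the", "was", "dark"}

def random_text(length, rng):
    """One 'monkey at a typewriter': random letters and spaces."""
    alphabet = string.ascii_lowercase + " "
    return "".join(rng.choice(alphabet) for _ in range(length))

def score(text):
    """Crude editor: count recognizable words. Restricting generation to
    real words and grammatical phrases would raise this score by
    construction, which is exactly the Brutus-style shortcut."""
    return sum(1 for w in text.split() if w in TARGET_WORDS)

def best_of(n, length=60, seed=0):
    """Generate n candidates and keep the highest-scoring one."""
    rng = random.Random(seed)
    candidates = (random_text(length, rng) for _ in range(n))
    return max(candidates, key=score)

print(best_of(10_000))
```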
  • by osmanb ( 23242 ) on Saturday September 18, 1999 @06:28AM (#1675056)
    For those that are interested, the official homepage of BRUTUS is located at:

    http://www.rpi.edu/dept/ppcs/BRUTUS/brutus.html [rpi.edu]

    It's got some stories generated by the program, and other information by Selmer. (Incidentally, I've now had Selmer for three separate courses, and he is one of the absolute best professors I've had in my four years. Period.) Selmer fully believes that "true" AI is impossible, and that man is more than a machine, but has devoted his life to the study of AI anyway, and finding close approximations.

    -Brian

  • The contest doesn't seem to be about what is the best story... it seems to be about whether or not some English professor can tell the computer-generated story from the many human-generated stories.

    The article says that it took 6 years to 'develop' this computer.. I wonder how much of that time was spent wading through 100,000 crappy stories to find the one that sounded 'the most human.'


  • by Signal 11 ( 7608 ) on Saturday September 18, 1999 @06:39AM (#1675058)
    Why do I get the distinct feeling the entry from the machine will be 'First post!'....

    --
  • To make that contest useful, they should run the program once and take whatever comes out.
    Nah. After all, you don't expect human writers to submit first drafts. Tweaking is inevitably required.

    Yes, tweaking should be done, but it should be done by the computer. What the computer spits out should be the final draft. This is about whether the computer can create content, not whether humans can create content from computer output.

    You don't do your homework, and then have your "friend" tweak it and *then* submit it as yours, do you?

    -Brent
    --
  • I took an excellent course in SF from Prof. Ian Lancashire, who you quote. He's not the sort to dismiss SF concepts out-of-hand.

    When he says, "There's nothing human here", he's right on the money. Without human emotions or an understanding of them, no machine can connect with humans on an emotional level, except via Infinite Monkey theory.
  • Ugh... can you imagine the kind of AI that would be produced with the average quality of data on the web, not to mention the newsgroups? Fully 40% of its thoughts would be of hot, big breasted teens ready to get it on with you LIVE... etc. You get the idea. :)
  • Of course, you've had quite a while to grow accustomed to having a few kilos of red meat around, connected to you. This computer would grow up without that encumbrance.
  • So that's why your comment is so short? :-)
    --
  • The difference between a machine and man is that a human has emotions... but what are emotions? Biologically, emotions are created through a synapse either firing or not... hmmmm, sound familiar? Binary code 011101, on or off. Just a thought, and by the way I'm only 15 and don't have too much knowledge in the field of biology, but I read a book on consciousness that related the mind to a binary computer...

    Bye,
    TYLER
    "People have a skepticism for manufactured culture," he added.
    I beg to differ: people seem all too willing to embrace "manufactured" culture these days. Culture is becoming what corporate media wants it to become.

    In 1984, Orwell writes that stories and songs were generated by massive machines (the novel was written before he could have imagined doing it electronically).

    Brutus.1 is not real AI; it simply constructs stories based on mathematical rules for putting together words. This is exactly what we should fear: stories and media without even artificial intelligence behind them. Stories can become completely meaningless, but they will still be amusing to the general public (note the large number of books that have absolutely nothing but entertainment value).

    This is a first step to "manufactured" media devoid of any real content.
  • by fritter ( 27792 ) on Saturday September 18, 1999 @07:05AM (#1675066)
    10 PRINT "IT"
    20 PRINT "WAS"
    30 PRINT "A"
    40 PRINT "DARK"
    50 PRINT "AND"
    60 PRINT "STORMY"
    70 PRINT "NIGHT"
    ...
  • IIRC, in 1984 the fiction stories were written by machines to prevent human emotion from leaking into them and possibly inciting the public masses.
  • I can understand a computer doing advertising: that stuff is blocked out like so much optic fecal matter that I don't even notice it anymore. I know Slashdot has an ad at the top, so my eyes start scanning a few pixels lower. But a 'pute writing fiction? Stuff it.
    The best thing about the human organism is its ability to write what it has perceived and imagined in a fashion for others to enjoy. You can experiment with grammar and structure, and sometimes you can pull off a stunningly personal situation by STRUCTURING IT INCORRECTLY. Few people I know speak in a grammatically correct fashion: our machine in question may be able to handle dialogue, but can it account for a real person? Can it make the character feel REAL to you, the reader? Yes, I've read a lot of flat and lifeless fiction. I file it under CRAP with most of the rest of what society offers for consumption. If the story isn't REAL, at least in the mind of the writer, then it just can't come across as such on paper. I've been hooked by the hokiest concepts and characters not because they were well written, but because the author put every bit of his BELIEF into the concept he was relating. It was REAL.
    And our beige box? Can it do the same thing? If it can, I still refuse to read it. Bell Labs or whoever will never produce a Proust, Chekhov, or a Hunter S. Thompson.


  • If this could happen, then authors like King and Clancy could get easily replaced, or they could just switch to a partially modified AI that receives params on subjects, characters etc... and that might become a thing that might distinguish these authors from their machine counterparts (the selection of characters, setting, plot etc...) as the machine-generated ones might be terribly cliched or lack originality. Basically, a machine generating a story of a quality of writing equivalent to Stephen King or Tom Clancy is definitely feasible, but who actually takes these authors seriously? I personally find myself avoiding the #1 New York Times Bestsellers because most of them are a waste of time. It could definitely be possible for the AI to write a story equivalent to the level of someone like Tolkien or Card, or possibly even someone like Orwell or Burgess, but the AI's "pool" of different things to choose from would have to be incredibly vast and well organized, truly a venture that would take decades.
  • I'm sorry, but that looks like another one of those "I don't think a computer can do that - oh no, wait, I'm sorry, I take that back" things. Having a computer break the rules of grammar is about as easy as having it follow them.
  • by Anonymous Coward
    Couldn't you combine the current program with a neural net that takes feedback from human critics to become better? Have it generate a few thousand scripts, rate them, and let the text generator learn. Also: does anyone remember that big project (in Texas, I believe) a number of years back where they were feeding thousands of pages of history and philosophy books into a computer for some reason or another. The system would churn and analyze all night and ask questions the next morning to clarify its understanding (like "By freeing the slaves, did Abraham Lincoln drive down the price of slavery?") I think this might have been on Nova. What was the outcome or purpose of that project?
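The critic-feedback loop proposed above could be prototyped without a neural net at all; a multiplicative-weights update over story templates shows the shape of the idea. The template names and the rating scheme here are hypothetical stand-ins, not anything Brutus actually does.

```python
import random

# Hypothetical story templates the generator can choose from.
TEMPLATES = ["revenge plot", "betrayal plot", "redemption plot"]

class FeedbackGenerator:
    """Pick templates at random; critic ratings shift the sampling
    weights toward better-liked templates."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.weights = {t: 1.0 for t in TEMPLATES}

    def generate(self):
        templates = list(self.weights)
        w = [self.weights[t] for t in templates]
        return self.rng.choices(templates, weights=w)[0]

    def rate(self, template, rating):
        """rating in [0, 1]; a multiplicative update, the simplest
        possible stand-in for training a model on critic scores."""
        self.weights[template] *= 1.0 + rating

gen = FeedbackGenerator()
for _ in range(200):
    t = gen.generate()
    # Simulated critic: strongly prefers betrayal stories.
    gen.rate(t, 0.9 if t == "betrayal plot" else 0.1)

print(max(gen.weights, key=gen.weights.get))
```

Swapping the weight table for a trained model is the step the parent comment suggests; the loop structure stays the same.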
  • by Anonymous Coward
    A quick search on Google shows this page: http://www.math.harvard.edu/~verbit/scs/Mutlu_whatis.html

    According to the people at Harvard, Mutlu was indeed an AI, written by a guy at AT&T Bell Labs and a graduate student. "The program they wrote was quite funny in a way it screwed people's names and generated insults. Still, the insults were mixed with a load of scanned propagandist files."

    So that accounts for the length of the posts - cut and pasting preexisting literature.

    Kean de Lacy
    ...waiting for a password

  • "A *good* author shouldn't just pull a 'plot outline' from a stack of pre-generated index cards and fill in the blanks, although that's a perfectly good way to sell hordes of cheapo paperback copies if you don't mind putting your name on absolute drivel. Stuff like that *does* seem to sell, after all."

    Quite true, and I think that if there is anything to be worried about, it's that some publishing houses may find it's cheaper to generate formula novels by computer than select them from a slushpile. The fact that "good enough" beats "better" should be remembered here.

    While many might see this as liberating authors to write more worthwhile books, it could have a chilling economic effect on those that write formula novels to support their real writing or simply to break into the business.

    The books I want to read by computers are ones that give me insight into what it's really like to be a computer in a human society.

    But I have to think this program is very impressive if it does as well as they suggest.
  • This writeup [techreview.com] has a sample of Brutus.1's work. It does read pretty well. From this and several other examples, though, it appears that Brutus only knows one plotline.
  • If I used this to write something for English Comp 101, would that be considered plagiarism?

    I remember D. Dennett talking about a computer that composed music. They did a "Turing test" in the form of a performance of obscure pieces by classical composers and music composed by the computer -- concert-goers were asked to try to guess which was (e.g.) Mozart and which was the computer -- people didn't do too well.
  • by Napalm Boy ( 17015 ) on Saturday September 18, 1999 @12:33PM (#1675078)
    So much to say here...

    I'm one of Selmer's students in the Minds & Machines Program here at RPI, and I know there is a fundamental difference between Brutus and people. For now, that difference is creativity. Brutus was told/taught about English, much as we were, and the university setting, etc. What is different between something that I write and what Brutus.1 writes is where it comes from. I know that Brutus.1 has a limited knowledge base from which it writes, but I'm not sure I can pinpoint where these words I'm writing are coming from as my fingers type them.

    I personally think that Brutus' stories are a little static, but not as bad as some of the things I've read. As a literary critic (which I'm not), I think the weakest part of the stories is consistently the endings -- for me, they leave little sense of conclusion and zero resolution. But then again, that's just me.

    For people who write about the computer writing perfect English as opposed to "normal" everyday speaking and writing with small mistakes, it's very easy to program a computer to make typos, etc.
    As far as Selmer's comment about Brutus.1 not being conscious, it definitely isn't, and not because it doesn't have a body; it's because it wasn't built to be conscious. It doesn't have a cohesive grasp of stories, and I don't think it has any idea about anything around it, nor does it actually understand anything about the feeling of betrayal or anything associated with it. It might know how to express it in written English, but writing about something and knowing it are two very different things indeed.

    It's true that Selmer doesn't believe in Strong AI, and if you ever have the pleasure of meeting him you'll find that his arguments are clear, concise, and based perfectly on logic. My views are extremely similar to his, and I don't believe that Strong AI is possible either; I'm resistant to what some claim is the "fact" that I'm basically a machine. However, this personal belief that Strong AI can't really work or exist is exactly what drives me to see if I can make it happen.
  • Oh great! In the future computers will indulge in sexual fantasies for us. No longer will we be held back by the dull drudgery of fantasizing about naked women! Gone will be the days when we have to glance down to check a hot chick's cleavage! Using advanced Artificial Intelligence algorithms and a state-of-the-art $25 million computer system that could fill an entire room, Dr. Smith has created a system that perfectly emulates the human mind's capacity for smutty thoughts. Right now it's working on a top secret project for the United States Military, but who knows, maybe one day a machine like this will spend all day visualizing that hot number next door naked so you don't have to.

    Wow. Talk about the march of progress.
  • This is an interesting idea...a government, say Serbia, writing a newsgroup bot designed to look for discussions of the Balkans and spread the Serbian government's viewpoint. Of course, it would be a lot more successful if it was subtle, rather than bombastic.
  • If the computer can only write one unique story then the credit for writing the story should go to the programmer.

    It's impossible to tell how creative the computer is by only reading one story. For example, the computer may be coming out with lines which in the first story come out as brilliant, but after reading four or five stories are tired cliches. Similarly, its first plot might seem well crafted, but the others might have too many similarities to have merit.

    If these scenarios are so, then the programmer was able to write a set of parameters which allow variations on a single story of his creating. He wasn't however, able to fulfill the goal of creating a machine capable of original story-writing. The results of the test would be more revealing if brutus.1 submitted more than one story.
  • The contest would also seem silly to me were it not for the fact that I am content to view it with a narrower focus. A computer cannot effectively mimic human characteristics inasmuch as it cannot mimic them all. It can, however, mimic a given subset of human characteristics (i.e., the ones effectively programmed into it). And then, the more interesting material will probably be secondary to its shortcomings anyway.
  • Limited mindset? That quote isn't even sequential! Having a human body and being conscious are completely unrelated things, and I'm sure Mr. Bringsjord knows that.

    I go to RPI and am taking one of his classes right now, he seems like an intelligent, straightforward guy, who knows his stuff. I'll have to ask him about this :)

    -------------
    The following sentence is true.
  • about AI but...

    It seems to me that any AI system, no matter how complex, no matter what technology is used is ultimately a collection of a bazillion switches. Even so-called analog computers are ultimately digital.

    So I don't see how any AI system can outwrite a human. People can think, computers can't. It's really simple. Writing is something that comes from the soul. Computers have no soul. Technology is not the answer to everything. Humans are still needed, and creativity can't be learned... In fact, the purpose of computers is ultimately to allow humans to more easily utilize their own creativity, right?

  • Why would a computer need to be as big as the solar system to emulate a human brain with a neural net? The neurons in the human brain aren't _that_ small. I don't see any reason why a piece of hardware smaller than the human brain couldn't handle the job, since an artificial neuron could probably be made a lot smaller than a biological one (not to mention thousands of times faster).
    It sounds like you subscribe to the (in my opinion) slightly outlandish theory that the human brain doesn't operate at the level of its neurons, but in fact is some sort of giant quantum computer. Now, I won't deny that quantum effects could influence whether or not any particular neuron fires at a given moment, and that those effects could spiral upwards (like a butterfly flapping its wings and influencing future weather) and influence the thoughts of the host brain. On the other hand, I disagree with the notion that such effects are some sort of magic spark of life and that the brain wouldn't function without them. I see no reason why you can't have an emulated brain, with neurons whose weights are measured in discrete units, that could think, visualize, imagine, plan, and feel emotions (although, obviously, you'd have to come up with some way of simulating the effects of the various chemicals produced in the brain on those neurons) just like a normal brain. Sure, if you started an artificial brain and a real brain with matching neurons off in the exact same state and ran them with the exact same stimuli for a length of time, you'd probably get a different ending state in each one. Nevertheless, they would both react in a human manner (which would probably be to go insane, considering that both brains are receiving exactly the same stimuli [which basically means that they're trapped in some sort of simulated universe where absolutely nothing they think or try to do affects what they sense with their five senses or where their body moves and what it does]).
    The only situation I can think of in which it wouldn't be possible to simulate a brain like this would be if we lived in a completely rigged universe. If the universe were designed in some way so that it only appeared as if our consciousness were originating from our brains while it's really transmitted from somewhere else, then maybe we couldn't simulate consciousness because we wouldn't be able to look at a working model. I'm not going to ask who rigged the universe, if indeed it is rigged, that way right now. That's way beyond the scope of the debate. But the same basic principle could apply to everything in the universe, even physics and mathematics could be based on principles that only "make sense" in a controlled, artificial environment.
    Anyway, I doubt it. Not necessarily because I think it's impossible, mind you. It's just that, if it's true, there's nothing we could do about it anyway.

    Oh yeah, about the solar-system-sized computer thing. Now _that_ would be an incredible trick. The engineering problems to overcome would be enormous, even working under the assumption that this isn't a fully solid computer (where would you get all the mass?). In a planet-sized computer emulating a human brain, signals travelling from one side of the computer to the other at the speed of light would take very roughly the same amount of time as an equivalent electro-chemical signal would take to cross a human brain. Scale that up to solar-system size and it would take a day for the signal to make the equivalent journey. Actually, this has given me some pretty interesting ideas to toy with.
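The timing claims above are easy to sanity-check. A quick sketch, where every distance and speed is a rough, assumed figure rather than a measurement:

```python
# Back-of-the-envelope check of signal-crossing times for brains and
# giant computers. Every constant here is a rough assumption.

C = 3.0e8                      # speed of light, m/s
BRAIN_WIDTH = 0.15             # ~15 cm across a human brain, in m
NERVE_SPEED = 100.0            # fast myelinated axon, ~100 m/s
EARTH_DIAMETER = 1.3e7         # m
SOLAR_SYSTEM_WIDTH = 9.0e12    # m, roughly the diameter of Neptune's orbit

def crossing_time(distance_m, speed_m_s):
    """Seconds for a signal to traverse a structure at a given speed."""
    return distance_m / speed_m_s

brain_t = crossing_time(BRAIN_WIDTH, NERVE_SPEED)   # ~1.5 ms
planet_t = crossing_time(EARTH_DIAMETER, C)         # ~43 ms
solar_t = crossing_time(SOLAR_SYSTEM_WIDTH, C)      # ~8.3 hours

print(f"brain:  {brain_t * 1e3:.1f} ms")
print(f"planet: {planet_t * 1e3:.1f} ms")
print(f"solar:  {solar_t / 3600:.1f} h")
```

With these assumed numbers, a planet-sized machine is within an order of magnitude or two of brain-crossing time, and a solar-system-sized one comes out at hours rather than a full day, which is the same ballpark either way.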
  • ... in fact, having the computer output complete gibberish would be simplicity itself. I think what SpaceCadet was saying, however, is that it would be extremely hard to program a computer to break the rules of grammar well. In order to do that, the program would really and truly have to understand what it was writing. It seems it would be difficult to do that without somehow giving the program true consciousness. On the other hand, if you did that, then wouldn't you basically be forcing a slave to write short stories for you?
    Hmm, questions, questions.
  • Millimetering towards a Turing test is more like it. I mean, we are missing several key elements of the Turing test here. The biggest two are:

    1. Interactivity: Turing's test specifies that the sessions be interactive. Emulating human output in the sense of writing a sci-fi book is (as Thomas Miconi pointed out) a mathematical game. In terms of complexity of the AI problem, it barely scratches the surface. Until a human is able to interact with two test subjects (one human and one AI candidate) and find herself/himself unable to distinguish between the two, we are nowhere near AI. In fact, I would submit that, as these things go, Deep Blue is possibly further along the track towards a real Turing test: it plays the same kind of mathematical game but for a different problem (chess vs. grammatical construction), and it has been agreed that it does so as well as an acknowledged human master, whereas the samples from Brutus.1 that I have so far seen are pretty craptacular.
    2. Totally free range of topics: Until we are able to query the writers and have them change subjects during one of these competitions (see interactivity, above), we are seeing no real advance here with respect to the Turing test. The difficulty of the TT comes not from being good at one tiny area - that is only a really tiny part of the problem. The real difficulty comes from being able to seem human in a conversation without limitations on the subject matter.
    Brutus.1 is moving in the direction of the TT. But it is not as much of a breakthrough as we would like it to be...
  • Here's another example of Racter's output found on the t-shirt we had made up for our AI Lab:

    More than iron, more than lead

    more than gold I need
    electricity

    I need it more than lamb or pork
    or lettuce or cucumber
    I need it for my dreams

    Quite stirring for a computer program don't you think? It's probably worth keeping in mind that a whole lot of garbage had to be sifted through before this gem was found. Oh, that reminds me...

    Shopping list:

    • Infinity x bananas
    • Infinity x typewriter ribbons
    mysta ...///...
  • Now i'm going to have nightmares that this is all a shadow conspiracy to abduct slashdotters and replace them with perl scripts!

    They just need a small sample of your innermost thoughts and they will be able to clone you intellectually, no one will suspect. Soon there will be no one left. (insane laughter)

    It is scary but there are a lot of newsgroups that could easily be replaced by a Perl script...
  • I agree that this would be a great AI accomplishment, and it's been "being worked on" for quite some time.

    People who think this sounds cool should definitely check out the book Godel, Escher, Bach [amazon.com] by Douglas Hofstadter. I'm just getting through the last pages, and... wow. Anyway, he addresses your argument, that "creativity can only be programmed as a matter of randomness (or chaoticness?), not as one of observation." Why should that be so? What's to keep a sufficiently complex computer from observing its "environment" (whatever kind of environment that may be) and making sensible observations, similar to those a human would make? The way humans express creativity doesn't seem to be the result of chaos or randomness in our thoughts, but instead a process of drawing connections between similar things, or finding similarities (or interesting differences) between things that seem to have no relation. But we do it according to processes, which seems to imply that similar processes could be programmed. Basically, what would it be about a human that would allow us to engage in creativity which could never be programmed, or result from a logical structure, that is, be an emergent epiphenomenon of a lower-level program?

    ----
    We all take pink lemonade for granted.

  • by TheDullBlade ( 28998 ) on Saturday September 18, 1999 @09:31AM (#1675095)
    It doesn't look very impressive. From the sample stories, it appears to be capable of nothing more than very short, simple stories with a very limited range.

    Unless they reveal the internals, or release several hundred stories generated by the program with no human selection or input, there is no reason to believe they have accomplished anything new or interesting. It appears to me that this is a story compiler, not a story writer. The "programmers" wrote the facts of the story and the computer compiled it into a linear story of a fixed format written in English:
    -detailed view of betrayed
    -establishment of trust
    -opportunity for betrayal
    -initiation of betrayal event (but not the complex details of the confrontation that would ensue)
    -short view of betrayer afterwards

    There may be an ad-libbing function too that generates variations from combining random selections from lists, but this can hardly be called AI.

    The sample stories show no motivation of any sort. They are nonsense stories. There is no character easier to write about than a madman, because his actions don't need to be logical.

    The exception is the self-betrayal story, which displays a very simple motivation (if that's even the right word): the betrayer/betrayed hates what he has to do and freezes in the middle of it. With such a small sample, there's no reason to assume anything but that the program can produce no other motivation.

    BTW, does anyone doubt that the AOL community can produce lifeless prose indistinguishable from that which a program can create? I've taken a few minutes to identify bots in chats before, but only because I've had equally lame conversations with people who have nothing worth saying, are often distracted because they're doing five other things with their computer at the same time, commonly only want to talk about one very specific thing, and sometimes don't know English very well. Any human can seem like a bot with sufficient limitations on the interaction.

    In summary, it looks like this costs a lot more effort than it pays back. It takes immense human effort to produce short stories of very limited range in an apparently fixed format. The creativity displayed here is human, and a human using this tool could not compete with a professional author spending the same effort. While it might be able to produce hundreds of stories from a single input, nobody would want to read them all because they would all be the same story underneath.
  • Well this contest seems silly to me. If you read the link [techreview.com] provided by an earlier comment, you'll see that the creator of Brutus.1 already believes his machine will lose. The author states:
    ...It seems pretty clear that computers will never best human storytellers in even a short story competition. It is clear from our work that to tell a truly compelling story, a machine would need to understand the "inner lives" of his or her characters.... For example, a person can think experientially about a trip to Europe as a kid, remember what it was like to be in Paris on a sunny day with an older brother, smash a drive down a fairway, feel a lover's touch, ski on the edge, or need a good night's sleep. But any such example, I claim, will demand capabilities no machine will ever have.
    I too believe that before a computer can create an acceptable story with believable human characters, you'll need a computer capable of mimicking human characteristics. You can't have believable human characters without a computer capable of being a believable human. Computer Science is a long way away from this, and Brutus, while interesting, is no closer to this holy grail of AI.
  • He's got it down to a "create new novel" macro in Word '97. :)

    Word has it Danielle Steele is interested.

    - A.P.
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • by el_ted ( 61073 )
    That reminds me of Orwell's 1984... where the novels were all machine written... cold... strange...
  • Nowhere does the article say that the computer entry uses a fed plot-structure. Just that it's incapable of plot-structures as complex as human ones. We don't know for sure exactly what kind of plot-generation scheme is used here. It could very well be, as you said, pre-composed. But it could also be "invented."

    I don't think this would be the main limiting factor, though, in the system's writing capability. There are other, more pressing, difficulties I could see arising from the system; for instance, it would probably be unable to handle humour very well, or science fiction/fantasy. This is because it would be incapable of free thought and intuition/insight. Sure, it could probably come up with some sort of unusual idea that could probably be mistaken for a joke--if told right. The problem here is, what part do you reveal first, and what do you leave out until the end, to deliver as a punch line?

    Likewise, with the science fiction/fantasy example, it could probably construct a universe on its own, where all vehicles look like elongated ice cream cones, or where all travel is done using some pseudo matter-shifting technology or something, but the big problem is, again, how would the writer understand how to relate it to the reader?

    You see, human authors know how low-level they have to go with a concept in order to relate a new idea to someone else. This low level is called common knowledge. How do you relate this common knowledge to the computer system?

    The big underlying problem is that we still haven't figured out how to get a computer to understand ideas and abstract theories. Until the system is capable of storing these theories, or more accurately, converting these theories into 'understanding' and storing that understanding, (Understanding being like an executable version of that theory or idea. (Perhaps we could refer to this process as "compiling?" It's theoretically a lot like compiling source code..)) we'll never have a system capable of invention or insight. Just an infinite number of monkeys on an infinite number of typewriters.


    James

  • So I don't see how any AI system can outwrite a human. People can think; computers can't. It's really simple. Writing is something that comes from the soul.

    heh, I'm sure you know Alan Turing's reply to that argument...
    --
    "HORSE."

  • Interestingly enough, I understood the concept of the computer observing its surroundings, but didn't state my assumptions. My assumptions include a) that this computer does -no- observation except that which it is manually fed and b) no computer in the next few years will be able to look outside and comment to itself that beautiful days bring out the nice sounding birds and write a poem about it from the inspiration ... without some degree of chaotic programming system (since our brains are considered complex systems for now until we understand the mind more).

    And yes, G, E, B is an excellent book my dad made me read a little while back.
  • The virtual world idea may work, but keep in mind that, as a general rule, every time AI researchers tried that idea they ended up against a plutonium door.

    SHRDLU was only able to converse intelligently about the relative position of blocks in its virtual world. It had no way of grasping talk of anything beyond its world of blocks.

    The same goes for the virtual world of the expert systems that have been so popular, whether these expert systems diagnose illness or tell stories about buying hamburgers. Virtual worlds thus far have been closed systems.

    The world that bodies inhabit is not a closed system in any practical sense. Any "rules" discovered may suddenly yield exceptions, special cases, or end up being falsified. There are always surprises, be they mundane or major. This world is not a closed system in that it is always open for interpretation. Remember that logic is argumentative but not necessarily descriptive.

    My view is that the machine that will be able to have the sort of emotional responses that we humans can relate to it as an intelligence and not just as a complex machine will have to have some sort of window into the world-- this window being some sort of body.
  • Even I would have to go along w/ Turing & say that if we can't actually point out the difference, we're morally obligated to assume that computers *can* feel emotion, etc.

    But what will we do if we come to the point where we have to make that assumption?

    You know, the one thing that always bothers me when talking about Artificial Intelligence and where it could possibly go is that a lot of people don't think about the political implications of such an intelligence. Why are we working so hard to replicate human intelligence artificially without a set of ethical guidelines? If we actually succeed, we're gonna have a hell of an interesting set of problems about which to worry.

    1. I've been involved in human rights activism for years. We in the human rights community usually have a set of standards which are the minimum standard by which we feel all people should be treated. This includes basic freedoms like the rights to free speech, freedom of religion, free assembly, the right to not be tortured, etc. If we ever succeed in creating a true artificial intelligence, are we (and this is a collective "we," not just one encompassing the human rights community) going to be willing to fight for the ai's rights? We can't even work out standards for ourselves.
    2. What about self-determination? Are we going to try to create intelligences with Asimov's slave codes (the three laws), or will they "bop" like Rudy Rucker's robots?
    3. What if an artificially intelligent computer chooses to follow a religion or make one up? Will its creators determine what beliefs the computer will be allowed to keep, or will the threat of reprogramming or deactivation be held over the intelligence?
    4. About Brutus and his potential progeny: What if we do succeed in actually creating a computer that writes stories. I mean, good, original stories. What if the programmers input the widest possible amount of genre specifications &, say, background information. What if (and this would depend on the computer actually being "I passed the Turing test" intelligent) the computer decided to write radically political allegorical stories that suggested that its creators were a bunch of wankers & should be shut down for crimes against humanity or something like that. What if the computer decided to write highly literate & sensitive erotica? Dirty limericks? Surrealistic fiction? Cheesy romance novels? Xena: Warrior Princess or Pokemon fan fiction? Would its creators (and we're talking about Brutus' descendants, here) shut it down for being a rebellious child?

    I'm really tired, it's five o'clock in the morning, and I'm up with a splitting headache, so I'm kind of worried that I'm rambling. I suppose my point is that in a world of complication, where we uphold a variety of beliefs about the ways in which we should treat each other, should we really be working so hard to create artificial minds and all that suggests without a set of ethical guidelines? I hate the idea of creating a slave-class.

  • I don't understand why some people were so shocked after Deep Blue's victory over Kasparov. The real miracle is that men are still able to compete with computers today! It is merely a matter of time before we get machines powerful enough to calculate and try the entire tree of a game (or, for more complex games, significant parts of it) and be almost sure to win.

    By design, machines are better than humans at mathematical games. Chess is a mathematical game.

    What about the Chinese and Japanese game of Go [well.com]? So far, Go has been entirely too complex for a computer to be any good at beating anyone but a novice player. Hell, I'm still trying to figure out the most basic rules. There's still a level of complexity and intuition to this very mathematical game that computer programmers can't replicate or overcome. This suggests to me that there's more to programming a computer to be unbeatable at a game than just entering all possible variations. With Go, that number is humongous. There's a saying that Go is so complex, no two identical games have ever been played. You can't say that about Chess.

    Instead of just computing likely outcomes, true AI is going to have to have emotional understanding and a real level of intuition. That's going to be the difficult part.
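The scale gap the comment above points at can be made concrete with commonly quoted ballpark figures for branching factor and game length (the numbers below are rough assumptions, not exact counts):

```python
import math

# Ballpark game-tree comparison for chess vs. Go. Branching factors
# and game lengths are commonly quoted rough figures, not exact values.

def tree_leaves(branching_factor, plies):
    """Approximate leaf count of a full game tree."""
    return branching_factor ** plies

chess = tree_leaves(35, 80)   # ~35 legal moves per position, ~80 plies per game
go = tree_leaves(250, 150)    # ~250 legal moves per position, ~150 plies per game

print(f"chess: ~10^{math.log10(chess):.0f} positions")
print(f"go:    ~10^{math.log10(go):.0f} positions")
```

Exhaustive search was never literally on the table for either game, but a gap of well over two hundred orders of magnitude is one way to see why the brute-force tricks that served Deep Blue buy so little in Go.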

  • In order to do that, the program would really and truly have to understand what it was writing.

    I don't know about that. Some time of forced study in NL-related fields has taught me the golden rule of the field: no matter what the problem is, it just takes some more and better programming techniques. Of course, no one really knows what programming techniques these are, so we're stuck...
  • Minority's post inspires a telling point. Minority defines literature the way I suppose an engineer would. The set story premises themselves are technical and ... well boring. There are already story writing aids out for the two big platforms.

    My point is... that if the story is sufficiently dull, boring, and straitjacketed, then it will make no difference if it was written by a mind or a knowledge-base program.
  • "I don't believe Brutus is a true conscious entity," said Bringsjord, director of the school's minds-and-machines lab. "It doesn't have a human body yet."

    I find it interesting that someone with enough know-how to co-create the computer, Brutus.1, would have such a limited mindset. My dog doesn't have a human body, but I'm reasonably sure that it's conscious.

    Then again, one of my pet conspiracy theories is that the Department of Defense made the Internet a free-for-all because it would be much cheaper to let (m|b)illions of people contribute their life stories than to pay someone to generate content. At least one AI researcher theorized that a computer would need about 40TB of data to gain self-awareness, and that was the cheapest method of getting that much data.

    Anyway, if my theory were true, then I'd also be willing to believe that the DOD covertly underwrote Brutus.1 to be the mouth of the newly self-aware Internet. In that case, Professor Bringsjord would be quite the spin doctor.

    Then again, I'm stuck at work and out of coffee, so you could probably just ignore all of this.

    Guess I'll go back to posting my web page about what it means to be human...

  • by CJ Hooknose ( 51258 ) on Saturday September 18, 1999 @04:49AM (#1675112) Homepage
    If I recall correctly, there was a program called Racter back in the mid '80s that simulated a mental patient in much the same way ELIZA simulated a psychiatrist. Correct me if I'm wrong, but I believe someone compiled a bunch of Racter's utterances into a book called "The Policeman's Beard is Half-Constructed." It was gibberish, but it was interesting gibberish. Here's a sample:

    Many enraged psychiatrists are inciting a weary butcher. The butcher is weary and tired because he has cut meat and steak and lamb for hours and weeks. He does not desire to chant about anything with raving psychiatrists, but he sings about his gingivectomist, he dreams about a single cosmologist, he thinks about his dog. The dog is named Herbert.

    I sincerely hope Brutus will do better than that, and show how much technology and programming have advanced in 12 years.

    (For a demonstration of actual rather than artificial insanity, check out the article's last sentence: "If the computer wins the contest, I'm going to take my computer and burn it. I certainly hope a human wins." Sigh. Did the Wright brothers burn their plane right after Kitty Hawk?)
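For the curious, output in the Racter vein is easy to get from a tiny phrase-structure grammar. Racter's actual engine was more elaborate than this, so treat the sketch, word lists and all, as a purely illustrative assumption:

```python
import random

# A minimal template-grammar text generator in the spirit of the Racter
# excerpt above. The grammar and word lists are invented for illustration;
# this is not how Racter itself worked.
GRAMMAR = {
    "S": ["The NOUN is ADJ because he has VERBED NOUN and NOUN for TIME."],
    "NOUN": ["butcher", "psychiatrist", "cosmologist", "dog"],
    "ADJ": ["weary", "enraged", "tired"],
    "VERBED": ["cut", "chanted about", "dreamed of"],
    "TIME": ["hours", "weeks", "decades"],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into a string of words."""
    template = rng.choice(GRAMMAR[symbol])
    words = []
    for token in template.split():
        bare = token.strip(".,")  # peel punctuation off the token
        if bare in GRAMMAR:
            words.append(token.replace(bare, expand(bare, rng)))
        else:
            words.append(token)
    return " ".join(words)

print(expand("S", random.Random(0)))
```

As with Racter, the interesting sentences come from running it many times and throwing most of the output away.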

  • There is no difference between a machine-written story and a human-written one.

    Machine:
    with well-defined rules for grammar,
    a little logic,
    using a large database, or even copying some sentences from the web,
    the result is a story with no mistakes in grammar.

    Human:
    with well-defined rules for grammar,
    a little logic,
    using a brain, or even directly cutting and pasting some sentences from the web (from not-well-known sites),
    the result is a story with no mistakes in grammar.

    Maybe the stories for Slashdot are already written by computer.
  • "Brutus.1's contest entry, which is already written, has some bumpy parts but other components make it hard to distinguish from a human effort, says Dan Hurley, founder and creative director of the Amazing Instant Novelist site. "

    If you give the computer very strict and complex rules for putting together words, and have it crank out 1,000,000 stories.. one of them is bound to sound like a human.

    This contest would be more fair if the computer's entries were randomly generated, rather than handpicked by a human.
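That objection is easy to make concrete: generating candidates is the cheap part, and whatever does the picking is where the creativity lives. A toy sketch, in which the templates and the scoring rule are invented purely for illustration:

```python
import random

# Generate-many-then-select, the setup the comment above suspects.
# Templates and scoring are invented stand-ins for illustration only.
ACTORS = ["Brutus", "the professor", "a butcher"]
DEEDS = ["betrayed", "deceived", "spied on"]
VICTIMS = ["his mentor", "the committee", "an old friend"]

def make_story(rng):
    """One mechanical 'story': a random fill of a fixed template."""
    return f"{rng.choice(ACTORS)} {rng.choice(DEEDS)} {rng.choice(VICTIMS)}."

def score(story):
    # Stand-in for the human hand-picker: a crude, arbitrary heuristic.
    return len(set(story.split()))

rng = random.Random(42)
candidates = [make_story(rng) for _ in range(1000)]
best = max(candidates, key=score)
print(best)
```

Swap score() for a human editor and you have the suspected arrangement: a million mechanical drafts, one human act of taste.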


  • Can Brutus.1's stories REALLY be considered AI? After all, if they feed it "plot structure", aren't they in large part writing the story for it? What would really be amazing is if it could actually come up with novel story structures and plots and that sort of thing.

    Otherwise, it's just a matter of degree more complex than Mad-Libs.

    I think it's neat and all, but I hardly think it's a big deal.
  • Sure, when a human writer writes a story, he will modify it until he thinks it's good. But when they claim a computer wrote a story, I expect it to come out of the computer "as is", without some operators' help and intervention. Otherwise, they can just write the story themselves, add a "printf" to every line and, voila, a computer-written story.
  • by bug_hunter ( 32923 ) on Saturday September 18, 1999 @04:51AM (#1675117)
    Will Shakespeare was an abacus.

    This has pretty good ramifications for gaming, role playing games that make their own plots 10 years down the track.

    Don't they already use a computer to write the plot for Ally McBeal tho?

    for male in every_male_in_show:
        act_like_scum(male)

    court_case = random_element_of(allys_life)
    court_winner = "woman"
    insert_frigging_annoying_dancing_baby()
    have_ally_walk_down_street(song=random_element_of(ally_soundtrack))
  • Since it took 6 years to develop, couldn't the programmer have written a couple thousand short stories himself, and then have the computer randomly select one? :)

    But seriously, this is either extremely impressive or very lame, depending on exactly how the stories are created.
  • by Joheines ( 34255 ) on Saturday September 18, 1999 @04:53AM (#1675119)
    If you could make a computer complex enough so it could emulate a human brain with all its neurons and currents, it could be a friend or make your day brighter, because it would be exactly like a real human. Its thinking and feeling would be as artificial as yours - after all, all we think our emotions are is a bunch of electrical currents flowing around our brain. As long as you don't believe in some higher instance (for example, God), it is possible to build a computer that's indistinguishable from a human - in every aspect. In its highest form of complexity, it *would* be a human because it would have the same molecules in the same spot as a naturally-born human would have.
    Yeah, score this down as "heathen" :-)
  • "The judges, who must not only pick the winner but also which story was written by Brutus.1, include published authors and a university English professor."

    In other words, the computer, a bunch of people who have thrown a Web page online at some point during their lives, and some prof who has better things to do than prepare for his next class.

    "The computer - which cost $2 million and took more than six years to develop - can write stories with themes of betrayal, deception, evil and a little bit of voyeurism."

    It exhibits all of the best qualities in a person!

    " "It's provocative," said Hurley, who is running the contest. "But I bet nobody is going to figure out which one was written by a computer - most people write stories that are worse than this." "

    I'm not sure if what he's talking about is provocative, but the topics that computer writes about sure are. Well, we all know that the English prof is going to win, and apparently they got the Web page making people together so the computer will blend in.

    " "I don't believe Brutus is a true conscious entity," said Bringsjord, director of the school's minds-and-machines lab. "It doesn't have a human body yet." "

    You can't be a conscious entity unless you have a human body? Damn.. I was so hoping to be one of those heads in a jar like in Futurama.. Grr.

    Actually.. watch this thing turn into something like Skynet in the Terminator movies some day and see what this guy has to say /then/!

    And what does he mean by "yet"..? *shudder*

  • Seems like we keep coming up with AI "tests" to see if a machine can be considered "intelligent", and then we immediately turn around and try to hack something together that might pass the test, albeit barely. We say that writing a story would need insight into the human condition, which we plainly cannot program into a computer right now. Then, we write a program that can, in a limited sense, tell a story. Then we debate whether the program, because it can tell a short story, has insight into the human condition. To do that, I think you would need a neural net with a complexity on the order of a human brain, and give it several years of training, just like a human brain.

    --
    grappler
  • If someone could write a short story about the many betrayals the computer named Amiga has had, now that short story would win and beat Brutus.1.
  • I think the best practical spinoff would be to use it to create a better grammar-checker for MS Word, et al.
  • You realize that the story is prose, not journalism, right?

    If not, do a tad more reading before shooting your mouth off.

    If so, I feel greatly offended that you think that's all that goes into writing a story.
  • ..since it can help you and me in our everyday lives. How many of you had to suffer through writing long lab reports in your engineering classes.. half of which was simply a question of sticking to a predetermined form. This technology can be developed to the point where you can outline what you want to say and have a machine actually say it. Just imagine if the free software projects could document themselves merely by making detailed notes of why they did what they did with the source code. What needs to be worked on here is interaction between human and computer.. humans do the initial layout and then the computer does the grunt work while the human adjusts what it has to say. The issue here as I see it is really one of user interface. AIs are nice, but an AI that does what you probably would have done anyway and then lets you fix it would be awesome.

    In mathematics we are beginning to see theorem-proving software that can do a little bit of the grunt work involved in proving some types of theorems, but the limiting factor is still partly user interface and partly the difficulty of learning how to interact with the AI, i.e. designing your notation so that the AI can work on it. I expect the problem in computer-generated documentation will also be that the human author needs to express himself or herself to the machine in a way it can understand.. and the machine needs to give sufficient feedback so that the human does not end up fighting the machine to keep it from writing down a specific path.

    This is sorta like the research into functional programming languages. You write your code in a provably correct specification language and then have the compiler make it efficient.. but imagine a compiler which inserted optimization hints into the functional code so that you could come along and adjust it later.

    Jeff

    BTW> I wonder why no one has written an AI to check C source for buffer overflows.
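On that closing aside: you don't need anything resembling AI to start flagging likely buffer overflows. Even a naive scan for the classically unsafe C library calls catches the easy cases; real checkers add data-flow analysis, and the function list here is just the usual suspects:

```python
import re

# Toy scanner for notoriously unsafe C library calls. A real checker
# would do data-flow analysis; this is only a pattern match.
UNSAFE_CALLS = ("gets", "strcpy", "strcat", "sprintf")

def flag_unsafe_calls(c_source):
    """Return (line_number, call_name) pairs for known-unsafe calls."""
    pattern = re.compile(r"\b(" + "|".join(UNSAFE_CALLS) + r")\s*\(")
    hits = []
    for lineno, line in enumerate(c_source.splitlines(), start=1):
        for match in pattern.finditer(line):
            hits.append((lineno, match.group(1)))
    return hits

example = 'char buf[8];\ngets(buf);\nstrncpy(buf, src, sizeof buf);\n'
print(flag_unsafe_calls(example))  # -> [(2, 'gets')]
```

Note the word-boundary match deliberately leaves the bounded strncpy call alone; deciding whether a *bounded* call is still wrong is where the hard (arguably AI-ish) part begins.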
  • I found it hilarious that a proponent of Weak AI should come out with a conclusion that, in effect, is a tacit (but unspoken) admission that only Strong AI can produce really worthwhile results.

    The Strong AI community has always known that it has a hard task ahead, but that's no reason to believe that the Strong AI direction is flawed and to take instead a less ambitious path.
  • I find it interesting that someone with enough know-how to co-create the computer, Brutus.1, would have such a limited mindset. My dog doesn't have a human body, but I'm reasonably sure that it's conscious.

    This isn't a limited mindset at all. Douglas Coupland speculates in Microserfs that the peripheral nervous system functions as a peripheral memory storage device. In that case, part of that which makes up your mind might be stored in your body: peripheral memory. Such peripheral memory might in fact function in a very different way than central memory.

    Anyway, what evidence of consciousness does your dog display? Consciousness implies self-consciousness, which implies a capacity for reasoning about oneself. No dog that I've ever met is really capable of any such thing: for the most part they're idiots.

    ~k.lee
  • > we cannot change humans' capabilities, so they
    > will remain as they are.

    What do you mean we can't change humans' abilities? What do you want to improve? Memory? Creativity? Ability to perform calculations? All of these things can be improved for a particular individual w/ practice & education. And by learning how to better practice & educate, we can improve them for humanity as a whole. Of course, that doesn't begin to touch on what is, IMNSHO, the most important capacity of all, which is to understand[1]. But frankly, we haven't got an f'ing clue what it means to understand, so we certainly don't know how to measure it, or even whether it even makes sense to talk about a capacity for understanding in a concrete, rigorous way.

    > however, computers are only a tool that process
    > information, no emotion and such.

    Here again, while I'd personally agree with you, this is hardly settled. Many people would argue that they could, or perhaps even do. Even I would have to go along w/ Turing & say that if we can't actually point out the difference, we're morally obligated to assume that computers *can* feel emotion, etc.

    [1] Yes, I'm being deliberately ambiguous here. I'm not going to say whether I'm talking about emotional understanding or technical/abstract understanding, because I don't know. I don't think there's really any significant difference in fact, but that's another issue....
  • You know.. the thought occurred to me, while I was reading this stuff.. I wonder if maybe we're going about this all backwards. Should we really be trying to get a computer to write in a language it knows nothing about, using only pre-programmed thoughts and phrases? Many people talk about how the machine needs to understand us before it can write about us. But do you really expect someone who has never had exposure to human culture to write a believable story, if all you teach them is how to write?

    At best, you'd get something decidedly inhuman.

    Maybe our order is a bit backwards. What if we were to teach a machine to read instead, and keep track of not just the states of the characters, but also what's going on inside their heads, and thus their *motives* for doing things? If your AI thus learns that someone jumps when startled, smiles when pleased, etc., it'll have a better understanding of the human mind. Better than we could teach it, in fact, if you feed it enough input data; who can tell a person who has been isolated from society everything they need to know about the world? We'll overlook something.

    I guess it seems kinda silly to say all this, cos I suppose that's pretty much the essence of artificial intelligence in itself. But then I guess what I'm really saying is, isn't this a bit premature?

  • ... teach Brutus.1 how to do such basic things as verb tense agreement. To quote from "Betrayal":

    "Dave Striver loved the university--its ivy-covered clocktowers, its ancient and sturdy brick, and its sun-splashed verdant greens and eager youth. The university, contrary to popular belief, is far from free of the stark unforgiving trials of the business world; academia has its own tests, and some are as merciless as any in the marketplace. A prime example is the dissertation defense: to earn the PhD, to become a doctor, one must pass an oral examination on one's dissertation. This was a test Professor Edward Hart enjoyed giving. "

    (verbs highlighted for your convenience)

    Brutus seems to have a problem keeping its verbs in the same tense. :) Worse, its stories read like a typical math textbook story problem. "Beth wants to make cookies, but only has 2 eggs. Eggs cost $0.25 each at the local supermarket. How much will a dozen eggs cost?" Needless to say, if Brutus wins, it's just more proof that the American public education system stinks.
  • I'd take the thing about chatbots a little farther, and say that many AOLers would fail the Turing Test.
    --
  • >>The books I want to read by computers are ones that give me insight into what it's really like to be a computer in a human society.

    Now THAT sounds like something worth reading! :)
  • I disagree. If the program is run multiple times and the best entry is then chosen, the contest would be similar to this:
    "My random number generator program is so smart it can produce whatever number you want."
    "Really? OK, make it produce the number 1345687"
    "OK..click..click..click..click.."
    "Hey, those numbers aren't even close to the one I said."
    "Yeah, but watch.."
    hours later
    "Hey, whadda ya know? It really did produce the same number. That program of yours is fantastic! It can produce specific numbers much better than any human can."

    --
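The joke above can be made literal: rerunning a generator until the output happens to match a target measures only your patience, not the program. A toy sketch (the numbers are arbitrary):

```python
import random

def rerun_until(target: int, upper: int, seed: int = 0) -> int:
    """Keep rerunning a uniform generator until it emits `target`.

    Returns the number of attempts needed. The count grows with the
    size of the output space, which is exactly why "run it many times
    and keep the best" proves nothing about the generator itself.
    """
    rng = random.Random(seed)
    attempts = 1
    while rng.randrange(upper) != target:
        attempts += 1
    return attempts

print(rerun_until(7, upper=10))      # small space: a handful of tries
print(rerun_until(7, upper=10_000))  # larger space: far more tries, on average
```

The same selection effect applies to picking the one good story out of Brutus.1's discarded output.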

  • Will the stories be about computers that betray each other or something? (See Futurama, when they show Robot Dramas)
    "Damn it, 213.23.434.32! You can't go around masquerading your IP, you're no better than a localhost junkie!"
    "Oh, and you say that to me? You 127.0.0.1 whore!"
    [Bum Bum Buum! Dramatic Pause]


    What's next, computers that can recognize speech?
  • The people competing against the computer probably have more than one story... they probably show their stories to someone and ask for their opinion. They pick the best one, and change things in it to make it better... I think, on balance, it's pretty fair. I seriously doubt they looked at 1000000 stories and picked out the best one... when was the last time you sat down and read a million pages? A thousand? Three hundred?
  • by Thomas Miconi ( 85282 ) on Saturday September 18, 1999 @05:18AM (#1675148)
    Since the novel is very short and has very precise themes, we are still in the field of mathematical games - that is, arranging a finite number of elements according to a finite set of rules in order to reach an arbitrary kind of configuration.

    It is evident that computers are intrinsically better than humans at mathematical games stricto sensu. I don't understand why some people were so shocked by Deep Blue's victory over Kasparov. The real miracle is that men are still able to compete with computers today! It is merely a matter of time before we get machines powerful enough to calculate and try the entire tree of a game (or, for more complex games, significant parts of it) and be almost sure to win.

    By design, machines are better than humans at mathematical games. Chess is a mathematical game. Writing a very short story on a precise subject can still be roughly modelled as a mathematical game, at least for the structure of the story, while the "creative sugar" may be a difficult bit. Writing a full novel with complex stories and deep, meaningful dialogues is beyond its reach.

    The problem is, are there still many people who actually read complex stories - especially ones with deep, meaningful dialogues? This Brutus.1 computer is just a machine equivalent of Barbara Cartland or industrial pop-music songwriters. CACDBS - Computer-Aided Celine Dion BullShit - is only years away.

    You see, this is a little like Babelfish: on its own, it's useless (too buggy), but used as a "preparser" to do the bulk of the job, so that humans only have to correct the errors and add little twists here and there, it can drastically enhance productivity. My opinion is, it will be very successful in America.

    (This is not an attempt at US-bashing: I'm sure the books it'll write will have tremendous success even in Europe - it's simply that European editors might be more reluctant to adopt this machine than their American counterparts. Damn intellectuals. Still don't understand that the market is always right.)

    Thomas
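The exhaustive tree search this comment describes can be shown on a genuinely trivial "mathematical game" (a sketch, not Deep Blue): take-1-or-2 Nim, where the player who takes the last object wins.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(pile: int) -> bool:
    """True if the player to move wins take-1-or-2 Nim (taking the last object wins).

    Searches the entire game tree - exactly the brute-force certainty
    that makes computers intrinsically better at such games.
    """
    if pile == 0:
        return False  # no move left: the previous player took the last object
    # A position is winning if any move leaves the opponent a losing position.
    return any(not wins(pile - take) for take in (1, 2) if take <= pile)

print([wins(n) for n in range(7)])
# → [False, True, True, False, True, True, False]
```

The pattern (losing exactly when the pile is a multiple of 3) falls out of the search; for chess the tree is astronomically larger, but the principle is the same.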
  • Mentioning the $2 million sounds more like an effort to stir up interest in this lame "challenge".

    Getting a computer to generate text is not that big of a deal (unlike, say, getting it to play chess really well). The postmodernism generator [monash.edu.au] does a pretty good job (and is funny, too), and I'd venture to say for far less money.
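In the same spirit as that generator (the real one uses recursive grammar rules; this is just an illustrative toy with made-up word lists), template-filling text generation takes only a few lines:

```python
import random

# Hypothetical word lists - structure without understanding.
SUBJECTS = ["the text", "society", "narrativity", "the reader"]
VERBS = ["deconstructs", "problematizes", "interrogates", "recontextualizes"]
OBJECTS = ["hegemony", "the signifier", "neocapitalist discourse", "meaning"]
TEMPLATE = "In a sense, {s} {v} {o}."

def generate(seed: int = 0, sentences: int = 3) -> str:
    """Fill the template with randomly chosen words, `sentences` times."""
    rng = random.Random(seed)
    return " ".join(
        TEMPLATE.format(s=rng.choice(SUBJECTS),
                        v=rng.choice(VERBS),
                        o=rng.choice(OBJECTS))
        for _ in range(sentences)
    )

print(generate())
```

Grammatical output, zero comprehension - which is the commenter's point about cost.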
  • by Black Parrot ( 19622 ) on Saturday September 18, 1999 @05:19AM (#1675150)
    > I find it interesting that someone with enough know-how to co-create the computer, Brutus.1, would have such a limited mindset. My dog doesn't have a human body, but I'm reasonably sure that it's conscious.

    To pursue the idea a bit further... The Scientific American article mentioned that Edward Teller is missing a foot due to a streetcar accident in 1928. Is he therefore less conscious than someone with two feet? How much of a human body is required? Would a single atom suffice? Two? 32767?

    Is a fat man more conscious than a thin man?

  • Racter, alas, wasn't the breakthrough it was claimed to be. The sentences and word choices were done by computer, but the database that produced the sentence structure and word database (with heavily preselected word connections for "strangeness") was built by a man with a highly idiosyncratic style to begin with, and as in the Brutus.1 case, there was more useless output thrown away than we'll ever know. The public software probably wasn't capable of generating the stories in the book.

    Read Jorn Barger's Racter FAQ [robotwisdom.com].

    It's probably going too far to call it a hoax, but there certainly was more hype than substance here.
  • by bortbox ( 77540 ) on Saturday September 18, 1999 @05:23AM (#1675155)
    You know, Isaac Asimov wrote a short story about this some years ago. I remember reading it in grade school. An author had a robot that did chores and such and followed the Three Laws of Robotics (Asimov's rules). Well, the author kept paying a technician to upgrade the robot, first with grammar, then a better dictionary, then "senses" such as irony, etc. Well, to make things more interesting, as time went on the robot would create better and better stories, till one of them was good enough to cause the author to want to shut the robot off. The story ends like this: the robot kills the author and runs off with the technician. This is all too scary for me, really. I mean, how many of Asimov's predictions have already come true?
    Words are a means of self expression. Giving a machine the power to express itself in words is just one more step in producing true AI. So kudos to the programmers and engineers.

    Just one more thought: the robot's final story pitted two colleges against each other, one from Yule (Yale) and another from Harvard. Anyway, please make all the corrections necessary to my poor recollection of the story.

    Bortbox01
