GUI Software Technology

Executive Secretary In Every Computer 320

An anonymous reader writes "BusinessWeek Online just ran an interview with a researcher from Sandia National Labs whose team has developed an alternative approach to artificial intelligence. They have come up with a software program that models a computer user's behavior and gives the user advice, corrects his errors, or saves files according to the user's own logic. The idea is for computers to learn how to work with users -- instead of vice versa. The software has already been tested with air traffic controllers."

  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Wednesday August 27, 2003 @09:33AM (#6804145)
    Comment removed based on user account deletion
  • by tds67 ( 670584 ) on Wednesday August 27, 2003 @09:34AM (#6804148)
    What happens when the user is a sick, twisted, and sadistic person? Will the computer adapt to that kind of user?
  • Nightmare Scenario! (Score:3, Interesting)

    by CmdrGravy ( 645153 ) on Wednesday August 27, 2003 @09:34AM (#6804151) Homepage
    Great, so now Technical Support / Helpdesk staff will have to learn the individual way everyone's PC has decided to work when talking people through how to do things!
  • by zonix ( 592337 ) on Wednesday August 27, 2003 @09:38AM (#6804192) Journal
    The idea is for computers to learn how to work with users -- instead of vice versa. The software has already been tested with air traffic controllers.

    Not exactly comforting, if you ask me! I expect air traffic controllers to know their systems and how to use them. What happens when this software has learned to compensate for one traffic controller's particular errors, and then suddenly another traffic controller takes over his/her station?

    z
  • by Znork ( 31774 ) on Wednesday August 27, 2003 @09:44AM (#6804250)
    Indeed, this sounds exactly like Clippy. I read an article on Clippy a few years ago. Clippy was a great idea that was supposed to help in just these ways, and during R&D it worked very well.

    Then MS marketing got involved. They decided that Clippy didn't get activated enough. Clippy in its research version might have popped up once a month, when a user really needed help. But once a month would not justify the expense of development and marketing, nor could it be hailed as a great new feature if users almost never saw it.

    Enter the new, marketing-improved Clippy that every MS Office user over the last decade has had the misfortune to experience. Junk the I part of AI and just make an annoying paperclip instead of a helpful tool. I can only imagine how the researchers felt about having their nice idea turned into what Clippy became.

    Maybe we'll see a real implementation of this kind of technology at some point. But I'll bet any commercial application is more likely to get written by pop-up ad companies, and jog the ATC guy's elbow by suggesting which airline he should be using or something...
  • by handy_vandal ( 606174 ) on Wednesday August 27, 2003 @09:47AM (#6804283) Homepage Journal

    Remember oliver, the electronic personality extender predicted by Alvin Toffler in "Future Shock" ...?

    There's an interesting passage about olivers in John Brunner's excellent novel, "The Shockwave Rider":

    "... so-called olivers, electronic alter-egos designed to save the owner the strain of worrying about all his person-to-person contacts. A sort of twenty-first-century counterpart to the ancient Roman nomenclator, who discreetly whispered data into the ear of the emperor and endowed him with the reputation of a phenomenal memory." (pp. 41-42)
  • a few aspects (Score:3, Interesting)

    by jlemmerer ( 242376 ) <xcom123@SLACKWAREyahoo.com minus distro> on Wednesday August 27, 2003 @09:53AM (#6804333) Homepage
    Wired News [wired.com] has a similar article. Maybe you could just combine the new AI with the cute exterior of Clippy [microsoft.com]. On the other hand, it would be interesting to know how much space you have to allocate for the AI database. As far as I remember, A.L.I.C.E. [alicebot.org] needed a quite large AIML file to be just somewhat intelligent. If the computer should now also remember patterns in behavior and not just talk to you (Alice is a pure chatbot), then in my opinion you need quite large amounts of data to be stored. This could be useful for larger companies with a dedicated AI server to help their employees (if we're talking about AI in a network, why not call it SKYNET), but on a normal desktop? I think that's too much.

    And to focus on another problem: if this thing learns about your behavior, don't you mind about your privacy? We are all paranoid about cookies and other spyware, and then some people actually want us to deliberately install something like this? Just imagine: your boss is standing next to you because you want to show him something, and then the computer asks: "Hi XY, you haven't visited /. today; normally you surf it for hours during work. Can I help you get there?"
  • by Anonymous Coward on Wednesday August 27, 2003 @09:55AM (#6804344)
    His name is Clippy, and I hate him.
    This got modded as funny, but it would've been better modded as insightful. Nothing slows a salty computer user down more than a computer that stops every eight seconds to ask him a question or, worse, starts some processor-intensive image manipulation when said user is trying to get actual work done.

    What would really be useful is an OS where everything is controlled through scripts I write myself. Applications, through the OS, would be controlled by scripting too. Then I can tell the computer how I want it to act, instead of it having to learn what I'll probably want and then guess at it (see the sketch at the end of this comment).

    It scares me that this sort of software is needed for air traffic controllers. Those guys should know the software they're using inside and out, frontwards and backwards. I expect an ATC to be able to fix any problems with the computer (even though the better solution is to move the ATC to another machine and have a tech come in and repair). The stupidity of the average computer user is infecting all levels of software design :(
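
    A rough sketch of the kind of user-written rule script I have in mind -- the folder names and the extension-to-folder rules here are purely made-up examples, not anything from the Sandia software:

        import shutil
        from pathlib import Path

        # My rules, written by me, in plain sight: file extension -> where it belongs.
        RULES = {
            ".mp3": Path.home() / "music",
            ".jpg": Path.home() / "pictures",
            ".pdf": Path.home() / "docs",
        }

        def tidy(inbox=Path.home() / "downloads"):
            """Move files out of the inbox according to MY rules -- no guessing involved."""
            for f in inbox.iterdir():
                target = RULES.get(f.suffix.lower())
                if target and f.is_file():
                    target.mkdir(parents=True, exist_ok=True)
                    shutil.move(str(f), str(target / f.name))

        if __name__ == "__main__":
            tidy()

    The point being: the behavior is spelled out by the user once, and the machine never has to "learn" or guess anything.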
  • Fun parts... (Score:3, Interesting)

    by RyoSaeba ( 627522 ) on Wednesday August 27, 2003 @10:03AM (#6804421) Journal
    Thanks to our software, when you stop the simulation and ask the computer and the operator, "What do you think is going on right now?" about 90% of the time you get the same answer from both.
    I don't know about you, but I think 90% is way too low for anything good to happen... Imagine spam filters with only 90% success, missing 10% of spam... no fun, eh?
    The systems we're building now require rigorous collection of data from a person to create a model
    Another way of saying they can't yet analyse what a user is doing unless s/he does it in precise ways... So the user will have to adapt for the software to learn :-)
  • From the Article (Score:3, Interesting)

    by barryfandango ( 627554 ) on Wednesday August 27, 2003 @10:06AM (#6804438)

    "some fear that the concept suggests an ominous encroachment out of a sci-fi movie. Cognitive psychologist Chris Forsythe, who leads the Sandia team, insists that the machines are designed to augment -- not replace -- human activity.

    This sort of writing is the result of either a sensationalist and poorly informed writer, or a company hyping its product way beyond its capabilities. AI has not even reached the Bronze Age yet, and the idea that a concept like this threatens to make humans obsolete is laughable.

  • by Lodragandraoidh ( 639696 ) on Wednesday August 27, 2003 @10:06AM (#6804440) Journal
    The next killer app, in my opinion, is the application that lets you save not only content but also the context -- or contexts, even; human beings don't keep things in their heads under one strict association, there are multiple pointers to the same information -- behind that Word doc, picture, etc.

    I would love to be able to quickly find items I saved years ago. Almost every day I have to find such things on my disk, and a searchable interface that actually works (particularly for binary files such as executables or graphics, which have little searchable text inside them) would save hours every week.

    Instead of having only a limited amount of information -- filename and directory -- you would be able to search over multiple hierarchies as well as descriptive text, even for binaries. This would put the user in the driver's seat, allowing her to build relationships within the data that have meaning to her.
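
    A minimal sketch of the kind of context store I'm describing, using a tiny SQLite tag table (the table, column, and file names are just made up for illustration):

        import sqlite3

        # One file can carry many "contexts" (tags); the same tag can point at many files.
        conn = sqlite3.connect("contexts.db")
        conn.execute("CREATE TABLE IF NOT EXISTS tags (path TEXT, tag TEXT)")

        def tag_file(path, *tags):
            # Record every context the user associates with this file.
            conn.executemany("INSERT INTO tags VALUES (?, ?)", [(path, t) for t in tags])
            conn.commit()

        def find(tag):
            # Every file saved under a given context, wherever it lives on disk.
            rows = conn.execute("SELECT DISTINCT path FROM tags WHERE tag = ?", (tag,))
            return [row[0] for row in rows]

        tag_file("/home/me/pics/logo_v3.png", "acme-project", "logo", "2003")
        tag_file("/home/me/docs/acme_contract.doc", "acme-project", "legal")
        print(find("acme-project"))  # both files, regardless of directory or file type

    Nothing clever or predictive about it -- the user supplies the associations, the machine just remembers them.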
  • by tuffy ( 10202 ) on Wednesday August 27, 2003 @10:09AM (#6804467) Homepage Journal
    I don't understand the whole line of research that believes computers need to be more "clever". Perhaps the assumption is that the user is an idiot, won't be getting any smarter, ever, and could use a bit of patronizing hand-holding in order to get anything done. But my thinking is that if such a "clever" system is necessary, the computer system hasn't been designed correctly to begin with.

    I want my computers to present me with clear and unambiguous output. In return, I will give them as much unambiguous input as is needed to get the job done. Save the "clever" AI for Doom 3 and let me get back to work.

  • Re:think lewinsky (Score:3, Interesting)

    by BCSEiny ( 656170 ) on Wednesday August 27, 2003 @10:17AM (#6804528)
    Doesn't anyone see what is going to happen? Those who have read Dune (which is a lot of people on Slashdot, I would bet) will know this exact same thing happened in the Dune universe. Eventually humanity got tired of having computers take over everything and destroyed all the computers. The same thing will eventually happen if we do not stop the complete integration of computers into our daily lives. It is my opinion that young kids should not be allowed to use a computer (or calculator) until about high school, and then only sparingly. The inability to do math in our heads will be the beginning of our downfall. I have an unbelievable number of friends who are willing to turn over all tasks to a computer. I find this very sad. "Fear is the mind-killer..."
  • by bruce_the_moose ( 621423 ) on Wednesday August 27, 2003 @10:18AM (#6804539)

    MS has been trying to add "helpful features that learn to adapt to how the user works" for years, Clippy being the most notorious example. I hate them all. Many times my colleagues have heard me yell at some Office program, "Don't be so damn helpful!" I really don't want everything I type that has an @ in it turned into a clickable email link.

    This company will likely be purchased by MS shortly, and their overhelpful time-wasters incorporated into the operating system (along with a few egregious security holes, of course). And once that happens, my first question to MS tech support will be, "How do I turn this useless feature off?" Shortly after Win2K machines started becoming common in my office, a "how to turn off personalized menus" FAQ became very popular.

    Whyizzit smart people are wasting time and money on projects like these? Computers should behave like Forrest Gump and do "whatever the hell it is I tell them to" and no more.

  • Re:Scary ... (Score:4, Interesting)

    by bamurphy ( 614233 ) on Wednesday August 27, 2003 @10:22AM (#6804574) Homepage
    I remember reading a while ago about the comparison between the computer-learns-human style of doing things vs. human-learns-computer.

    The examples, I believe, were the current Palm OS with its logical if somewhat odd "Graffiti" system, compared to the old Newtons, which attempted to learn the user's handwriting, as well as the new tablet PCs.

    Basically, the long and short of it was that accuracy ran Newton < tablet < Palm. Although the tablet PCs do a pretty good job of interpreting, they still "make mistakes" when someone's writing gets really sloppy. On the other hand, after a minimum of time the average user can use Graffiti with a high level of accuracy, and can understand the malformations of a stroke that might produce an error while it is being made.

    All in all, though, it seems most of these attempts to "learn" what a user may do are misplaced. I try to keep my "websites" directory very well organized, as well as my "print work" directory, but the two vary in structure from each other, even before my own mistakes and idiosyncratic files. And my applications directory is a completely different story... and let's not even get started on consumer media. Shouldn't this all be handled by XML soon anyway?

    We've still got the world's best massively parallel computers in our noggins. Pattern recognition OWNZ.

  • Not new. (Score:5, Interesting)

    by teamhasnoi ( 554944 ) * <teamhasnoi AT yahoo DOT com> on Wednesday August 27, 2003 @10:27AM (#6804605) Journal
    Open Sesame (1993!) by Charles River Analytics [cra.com] for the Mac did stuff like this: it would 'learn' when you did things and open programs for you, note where you saved files, track how often you rebuilt the desktop, etc.

    You could also direct it by voice command. I had this program back in the day, heady stuff at the time.

    Here's a pile of other stuff on Software Assistants. [nec.com]

  • by sholden ( 12227 ) on Wednesday August 27, 2003 @10:27AM (#6804608) Homepage
    I take it you have personally advanced the state of the art more than this [microsoft.com].
  • by Anonymous Coward on Wednesday August 27, 2003 @10:37AM (#6804683)
    I totally agree. I hate this concept in ANY of its various manifestations, in other products like cars, or in the idea of personalized or predictive marketing. Humans are too changeable, and it will never work. Here's a simple but real example: let's say I bought a pair of black slacks last week. What am I now more or less likely to buy next? More black slacks, because I bought one pair and obviously must like them, so maybe I want to buy more? Or anything BUT a pair of black slacks, because I already own them -- I just bought a pair! Then, maybe the week after I bought them, I noticed that white fuzz shows up dramatically on black clothing, and vowed never to buy black clothing again. Or not.

    Now, go ahead, use all the predictive marketing tools at your disposal and predict what I'm going to do/buy next. You can't, there is no way you can ever guess with accuracy.

    Clippy, or zippy AI, all of it is useless and annoying. I'll do something different just to spite them. Anything which aims to be predictive is doomed to failure and a bad idea.
  • More crap... (Score:1, Interesting)

    by biz0r ( 656300 ) on Wednesday August 27, 2003 @10:55AM (#6804815) Homepage
    Do people REALLY think this will work for the day-to-day user? I think not... this is simply another layer of fog between you and the operating system that may very well not match your needs/wants. Personally, I prefer to know exactly where I place files... it kind of helps with knowing where they are. Also, pop-ups are annoying... no matter what form they come in, they always get in your way somehow. What if a user changes his habits? Then the software is no longer working for you, but against you. What if someone else jumps on your computer for a day while you are out sick? It will merge some of their habits with yours... creating more crap to deal with.

    Really, this is similar to adding another abstraction layer into software... another source of error, except in this case it's definitely prone to error... causing myself and countless other admins/software engineers lots and lots of headaches.

    Rather than working to make computers work with the users -- which is ass-backwards, creating all sorts of nasty problems IMO -- how about we make users learn to use the technology properly... like it should be?

    I should also note that there are several bottlenecks in implementing this across all software, since all software works in different ways... that means each developer will have to write this 'AI' crap into it... I don't know about other developers, but I say hell no to more cruft.

    Humans are not predictable enough in their habits for something like this to work...even the same person changes their habits over time...which will make keeping up with what data is where even more difficult.

    Thanks but....I think I'll pass on this one.
  • by danila ( 69889 ) on Wednesday August 27, 2003 @10:58AM (#6804851) Homepage
    Maybe someone can write a module for Alice [alicebot.org] integration with Slashdot. I think the dialog-based parsing engine would work just great after some tweaking.

    An alternative approach would be to first parse the Slashdot archives to get a lot of posts, articles, and moderation data, and then use Bayesian theory to decide which sentences/keywords should be included to produce the highest moderation based on the words in the blurb (or the linked article, but parsing that would be against /. spirit).

    It could be further enhanced using the poetry evolution [slashdot.org] engine. If we limit the system to very short posts (cliche jokes or smartass one-liners), it might work quite well (feedback, of course, would be the moderation).

    Any volunteers?
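
    For the Bayesian step, something as crude as this keyword-odds scorer might be a starting point (the "training data" and the score cutoff below are completely made up for illustration):

        import math
        from collections import defaultdict

        # Pretend archive data: (post text, moderation score) pairs.
        history = [
            ("in soviet russia the computer uses you", 5),
            ("imagine a beowulf cluster of these", 4),
            ("i for one welcome our new ai overlords", 5),
            ("first post", -1),
        ]

        # Count word occurrences in well-moderated (>= 3) vs poorly-moderated posts,
        # with add-one smoothing so unseen words don't blow up the log.
        up, down = defaultdict(lambda: 1), defaultdict(lambda: 1)
        for text, mod in history:
            bucket = up if mod >= 3 else down
            for word in text.split():
                bucket[word] += 1
        up_total, down_total = sum(up.values()), sum(down.values())

        def score(candidate):
            # Naive-Bayes-style log-likelihood ratio: higher means the candidate
            # looks more like posts that historically got modded up.
            return sum(math.log((up[w] / up_total) / (down[w] / down_total))
                       for w in candidate.split())

        candidates = ["i for one welcome our new executive secretary overlords",
                      "first post from my shiny new ai assistant"]
        print(max(candidates, key=score))

    Feed the winning post back in with the moderation it actually receives, and there's your feedback loop.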
  • Re:From the Article (Score:3, Interesting)

    by dasmegabyte ( 267018 ) <das@OHNOWHATSTHISdasmegabyte.org> on Wednesday August 27, 2003 @11:03AM (#6804894) Homepage Journal
    Why is it laughable?

    Robots have replaced workers in factories.

    Dictation programs have replaced secretaries and typists.

    Tools like Google, SQL and mapping software do a better job of researching information than people do.

    Machines perform very well in tasks where we boss them around. They don't perform as well when they have to do a lot of decision making on their own. This is an attempt to bring them to a more passable level. And since technology is always replacing people, I think designing technology with the vision of augmenting a person's computer usage is very noble. That's something very important to point out when we've got doom-and-gloom pundits everywhere.
