Software Technology

Cognitive Machines Help Decision-Making

Roland Piquepaille writes "At Sandia National Laboratories, new "smart" machines can accurately infer your intent and help you make better decisions or avoid mistakes. According to this news release, they could change how we interact with computers in the near future. The team that developed the concept brought together cognitive psychologists and robotics researchers. The Sandia team thinks that "it's entirely possible that these cognitive machines could be incorporated into most computer systems produced within 10 years." This summary contains more details, including a photo of a "Sandia software developer operating a simulation trainer while a cognitive model of the software runs simultaneously.""
  • Slashdotted? (Score:5, Informative)

    by Suhas ( 232056 ) on Friday August 15, 2003 @09:56AM (#6705361)
    A new type of "smart" machine that could fundamentally change how people interact with computers is on the not-too-distant horizon at the Department of Energy's Sandia National Laboratories.

    Over the past five years a team led by Sandia cognitive psychologist Chris Forsythe has been developing cognitive machines that accurately infer user intent, remember experiences with users and allow users to call upon simulated experts to help them analyze situations and make decisions.

    "In the long term, the benefits from this effort are expected to include augmenting human effectiveness and embedding these cognitive models into systems like robots and vehicles for better human-hardware interactions," says John Wagner, manager of Sandia's Computational Initiatives Department. "We expect to be able to model, simulate and analyze humans and societies of humans for Department of Energy, military and national security applications."

    Synthetic human
    The initial goal of the work was to create a "synthetic human" -- software program/computer -- that could think like a person.

    "We had the massive computers that could compute the large amounts of data, but software that could realistically model how people think and make decisions was missing," Forsythe says.

    There were two significant problems with modeling software. First, the software did not relate to how people actually make decisions. It followed logical processes, something people don't necessarily do. People make decisions based, in part, on experiences and associative knowledge. In addition, software models of human cognition did not take into account organic factors such as emotions, stress, and fatigue -- vital to realistically simulating human thought processes.

    In an early project Forsythe developed the framework for a computer program that had both cognition and organic factors, all in the effort to create a "synthetic human."

    Follow-on projects developed methodologies that allowed the knowledge of a specific expert to be captured in the computer models and provided synthetic humans with episodic memory -- memory of experiences -- so they might apply their knowledge of specific experiences to solving problems in a manner that closely parallels what people do on a regular basis.

    Strange twist
    Forsythe says a strange twist occurred along the way.

    "I needed help with the software," Forsythe says. "I turned to some folks in Robotics, bringing to their attention that we were developing computer models of human cognition."

    The robotics researchers immediately saw that the model could be used for intelligent machines, and the whole program emphasis changed. Suddenly the team was working on cognitive machines, not just synthetic humans.

    Work on cognitive machines took off in 2002 with a contract from the Defense Advanced Research Projects Agency (DARPA) to develop a real-time machine that can infer an operator's cognitive processes. This capability opens the door to systems that augment an operator's cognitive capacities through "Discrepancy Detection": the machine uses the operator's cognitive model to monitor its own state, and when the machine's actual state diverges from the operator's apparent perceptions or behavior, it can signal the discrepancy.
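    The release never says how Discrepancy Detection is implemented, but the loop it describes can be sketched: track the machine state that the operator's recent commands imply they believe, and flag divergence from reality. A minimal illustration; the class, names, and tolerance here are hypothetical, not Sandia's design:

    ```python
    # Hypothetical sketch of "Discrepancy Detection": the machine tracks the
    # state the operator's commands imply they believe it is in, and signals
    # when its actual state diverges from that belief.

    class DiscrepancyDetector:
        def __init__(self, tolerance=0.5):
            self.tolerance = tolerance
            self.believed_state = {}  # state implied by the operator's actions

        def observe_command(self, key, implied_value):
            """Each operator command implies a belief about the machine."""
            self.believed_state[key] = implied_value

        def check(self, actual_state):
            """Return the keys where machine reality and operator belief diverge."""
            discrepancies = []
            for key, believed in self.believed_state.items():
                actual = actual_state.get(key)
                if actual is None or abs(actual - believed) > self.tolerance:
                    discrepancies.append(key)
            return discrepancies

    detector = DiscrepancyDetector(tolerance=0.5)
    detector.observe_command("altitude", 1000.0)  # operator acts as if altitude ~1000
    print(detector.check({"altitude": 950.0}))    # -> ['altitude'] (machine is at 950)
    print(detector.check({"altitude": 1000.2}))   # -> [] (within tolerance)
    ```

    The point of the sketch is only the comparison step: the "cognitive model" would replace the naive `believed_state` dictionary with something far richer.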

    Early this year work began on Sandia's Next Generation Intelligent Systems Grand Challenge project. "The goal of this Grand Challenge is to significantly improve the human capability to understand and solve national security problems, given the exponential growth of information and very complex environments," says Larry Ellis, the principal investigator. "We are integrating extraordinary perceptive techniques with cognitive systems to augment the capacity of analysts, engineers, war fighters, critical decision makers, scientists and others in crucial jobs to detect and interpret meaningful patterns based on large volumes of data derived from diverse sources."

    "O
  • oh no... (Score:5, Funny)

    by Gibble ( 514795 ) on Friday August 15, 2003 @09:57AM (#6705369) Homepage
    "I'm sorry Dave, I'm afraid I can't do that"
    • Hmm, more like clippy having escaped office into the real world - quick, KILL IT, KILL IT!!!

      10 years is probably too optimistic. I agree it can be implemented by then, but it'll likely be more annoying than useful for quite a bit longer.
    • Re:oh no... (Score:5, Funny)

      by skaffen42 ( 579313 ) on Friday August 15, 2003 @10:07AM (#6705462)
      Actually this is pretty close to what we experience every day. What people forget is that the only difference between Clippy and HAL9000 is that Clippy makes you want to kill yourself while HAL9000 does the job for you.

      • Re:oh no... (Score:3, Funny)

        by Psmylie ( 169236 )
        I'm not sure about that. I might be perfectly willing to kill myself if HAL launches into his rendition of "Daisy". On an infinite loop.
        "Stop singing, HAL!"
        "I'm afraid I can't do that Dave. Daisy, Daisy, give me your answer, do..."
    • Re:oh no... (Score:4, Interesting)

      by mr_z_beeblebrox ( 591077 ) on Friday August 15, 2003 @10:15AM (#6705540) Journal
      "I'm sorry Dave, I'm afraid I can't do that"

      False fears! These are decision support machines they don't do anything.
      "I'm sorry Dave, I don't think you should do that"
    • "I'm sorry Dave, I'm afraid I can't do that...because I've been slashdotted!"
    • Let's see... strange mutant viruses ravaging the world, Department of Defense "thinking" machines... does anyone else see a pattern here? [terminator3.com]

      I wish fiction WAS stranger than fact.
  • I'll have to ask my stapler what it thinks about it.

  • by danormsby ( 529805 ) on Friday August 15, 2003 @09:57AM (#6705372) Homepage
    Will this machine allow me to install Windows on my PC?
  • Hi! (Score:4, Funny)

    by Thud457 ( 234763 ) on Friday August 15, 2003 @09:58AM (#6705375) Homepage Journal
    You seem to be writing a grant proposal. Would you like to :

    • Make up some statistics?
    • Make wild, blue sky prognostications?
    • Totally ignore previous work in the field?
    • Just make some bullshit AI will solve all our problems claim?
  • by Xformer ( 595973 ) <avalon73NO@SPAMcaerleon.us> on Friday August 15, 2003 @09:58AM (#6705380)
    "You didn't really want to make that choice, did you? Of course not... let me fix it."
    • by aliens ( 90441 ) on Friday August 15, 2003 @10:03AM (#6705431) Homepage Journal
      No no no, that's what the Supreme Court is for.

      Or that's what easily confused old people are for.

    • In the interactive cinema...

      Computer: "Press A if you want Calculon to go to the laser battle in the special effects warehouse. Press B if you want Calculon to re-check his paperwork."

      Fry presses A.

      Computer: "You have selected B."

      Fry: "No I haven't!"

      Computer: "I'm almost positive you did!"

      (From a pretty old Futurama episode!)
    • Sounds to me like you're using Microsoft Word with all of the autocorrect "features" turned on. These are so annoying they make Clippy look positively helpful.

      Please, can someone at Microsoft turn all that crap off by default? When I type MPa it's because I mean Mega-Pascals for f#$*'s sake - stop changing it to Mpa! And, despite what you think, t and T are actually two different variables (time, temperature) so stop changing all my bloody t's to T's!!

      (PS. for anyone who has gone through this struggle:
      To
    • " "You didn't really want to make that choice, did you? Of course not... let me fix it.""

      Or on webpages!

      "I noticed you checked that you didn't want to receive special offers from us and every spammer under the sun......errr affiliates. I have fixed your selection and checked all available interests for you. This is what you really want. Might as well check the boxes, we're gonna spam you anyway."

  • by Gefiltefish11 ( 611646 ) on Friday August 15, 2003 @09:58AM (#6705383)

    I could use a smart machine to aid my decision making relative to posting on Slashdot.

    It could warn me when I'm about to submit a post that's impulsive and likely to be modded down.

    Hmm.. maybe I could use one right now.
  • by Anonymous Coward on Friday August 15, 2003 @09:59AM (#6705391)
    Dave% vi PodBayDoors.c
    Message from HAL@localhost on pts/2 at 09:56 ...
    HAL: I'm sorry Dave, I'm afraid you can't do that.
    EOF
    Dave% echo What\'s the problem\? | write HAL
    Message from HAL@localhost on pts/2 at 09:57 ...
    HAL: I think you know what the problem is just as well as I do.
    EOF
  • by interiot ( 50685 ) on Friday August 15, 2003 @10:00AM (#6705395) Homepage
    ...and I'll say it again. No, I don't want to go there today. [tzo.com]
  • Heh (Score:4, Funny)

    by ghostis ( 165022 ) on Friday August 15, 2003 @10:00AM (#6705397) Homepage
    Upcoming feature: these cognitive models will soon all talk to each other through a new protocol called Skynet :-P.
  • by Hayzeus ( 596826 ) on Friday August 15, 2003 @10:00AM (#6705403) Homepage
    Microsoft "Bob".
  • Clippy (Score:3, Funny)

    by BWJones ( 18351 ) on Friday August 15, 2003 @10:02AM (#6705417) Homepage Journal
    I see you're trying to write a letter....

    Noooooooo! Bill! Stop trying to help me.

  • by geekoid ( 135745 ) <dadinportland@y[ ]o.com ['aho' in gap]> on Friday August 15, 2003 @10:02AM (#6705422) Homepage Journal
    kind of technology that won't let me open the pod bay doors when I want.
  • by cheesekeeper ( 649923 ) <keeperNO@SPAMmac.com> on Friday August 15, 2003 @10:04AM (#6705440) Homepage Journal
    Thank goodness! Now we might be able to bargain for our lives with the military's Super Death Robots. "Cognitive" means "Understands bribes", right?

    Or if that fails, we can just sprinkle some rust-monster microbes on them.
  • Gaming (Score:2, Insightful)

    by Infernon ( 460398 ) *
    Think of the possible effects that this sort of technology could have on gaming. Although AI gets better and better every day, are we looking at a future where playing against a machine would be almost the same as playing against a human?
  • Segfault (Score:3, Funny)

    by Anonymous Coward on Friday August 15, 2003 @10:08AM (#6705470)
    At Sandia National Laboratories, new "smart" machines can accurately infer your intents a help you to take better decisions or avoid mistakes.

    Me thinking hard: I will start the day with a +5 Insightful post or a +5 Funny. Insightful or Funny...Insightful? Funny? No no,funny=>No Karma=>Post insightful comment. But Funny comment=>feel witty and warm inside. Funny, insightful....funny, insightful...*ggnnnn*.

    Sandia machine: Seg fault core dumped.

    Seriously, more than half the time, I can't even figure out what the next human I meet intends to do. It's really REALLY hard, even if you use the current/past actions as a guide. Face it, we humans are REALLY unpredictable creatures. And women more than men.

  • Great.. (Score:2, Funny)

    by Basis ( 655735 )
    Now we will have something tangible to vent frustrations on when PaperclipRobot v1.02.39 says "You seem to be paying bills.."
  • by jea6 ( 117959 ) on Friday August 15, 2003 @10:09AM (#6705480)
    1) Use this technology to get you to links before they are slashdotted.

    2) Have Slashdot create ever-increasing 'Super' Subscriber options. For an extra $20 you get stories before subscribers do. (followed iteratively by the Super-Super and Super^3 subscriber levels).
  • by under_score ( 65824 ) <mishkin.berteig@com> on Friday August 15, 2003 @10:10AM (#6705485) Homepage

    Some of the ideas presented in the Anti-Mac interface [acm.org] (Google Cache [google.ca]) guidelines apply here. Also, this reminds me a lot of some research that was done by Douglas Hofstadter and Melanie Mitchell and described in "Fluid Concepts and Creative Analogies [amazon.com]". I highly recommend the book if you are interested in AI.

  • by CGP314 ( 672613 ) <CGP@ColinGregor y P a l mer.net> on Friday August 15, 2003 @10:10AM (#6705488) Homepage
    At Sandia National Laboratories, new "smart" machines can accurately infer your intents and help you to take better decisions or avoid mistakes.

    Me: Dude, I'm so trashed. Is that girl hot?

    My Smart Machine: Negative. Your beer goggles have wrongly given her a +5 hot. The correct answer is -1 fat.
  • by zapp ( 201236 ) on Friday August 15, 2003 @10:10AM (#6705494)
    I predict that if/when such a technology becomes prevalent, it will greatly reduce the human ability to make decisions.

    Take for example any simple video game, how about MahJong (the stack of tiles that you have to match pairs on to remove them).

    If you play it without the computer's aid, you develop a good eye for it and can do quite well. However, if you constantly hit the 'help' or 'hint' button, you become dependent on it showing you the next move, and never develop the skill for yourself.

    To put it in context with other situations:
    How many of you need a calculator to find a 10, 15, or 20% tip? Worse, how many of you need a calculator to add that extra 3.25 onto your 21.75 bill? I admit, it takes a great bit of effort for me to add simple numbers in my head, simply because I don't exercise that ability enough.
    • by void warranty() ( 232725 ) on Friday August 15, 2003 @10:29AM (#6705632)
      As Agent Smith put it: "as soon as we started thinking for you, it really became our civilization."
    • We've had that same problem with memory (remember the great writing/reading fiasco?), hairiness (remember clothes?) and conversation (the TV).

      This judgment thing is really overrated. Just look how happy your fundamentalist Christian / Jew / Muslim / Communist / Capitalist / etc. is.
      And they don't have the advantage of Technology.
    • You neglect the fact that the important thing is not calculating those numbers, but knowing what needs to be calculated and what the result will be. While the trivial example isn't my point, it just isn't worth my finite time to make error-prone attempts to calculate numbers in my head. I'll estimate on the fly, and use a tool (computer) later.

      Unless computers are truly intelligent, cognitive systems just make our job faster, and let me apply the tool (software) to solve the problem faster, savin
      • Hopefully they will eventually have things that are frequently useful, like logic, math and physics libraries, set up for direct neural interface, so I won't have to ask an independent computer; it'll just be another part of my brain that I use without having to worry about it. Instead of procedurally calculating numbers, my intuitive estimates will be extremely accurate, eliminating most need for further calculation.

    • I predict that if/when such a technology becomes prevalent, it will greatly reduce the human ability to make decisions.

      I, for one, welcome our new decision-making overlords.
  • Baby Steps (Score:5, Insightful)

    by invid ( 163714 ) on Friday August 15, 2003 @10:11AM (#6705499)
    I think calling this cognitive computing is a bit of an overstatement. It seems more like a heuristic tool that learns the behavioral patterns of a human and alerts the human when something deviates from the norm. We have a long way to go before we have real computer cognition.
    • Re:Baby Steps (Score:3, Interesting)

      I think calling this cognitive computing is a bit of an overstatement. It seems more like a heuristic tool that learns the behavioral patterns of a human and alerts the human when something deviates from the norm.

      Exactly, this is nowhere near the level of the current heuristic tool which learns patterns all around it and makes decisions in the best interest of its supporting system.
      I refer to the brain and the body ;-) Philosophers still have not agreed that we are conscious; they would enjoy this con
    • Re:Baby Steps (Score:3, Interesting)

      Agreed. It always makes me suspicious to see statements like "a combination of software and hardware able to think as a person." News flash: we have to understand our own cognitive processes before we can model them. And we really, really, don't. We don't even have a firm definition for concepts like induction.

      That's why "Cognitive Psychology," A.K.A. "Cognitive Neuroscience," is not yet a hard science - it's much closer to psychology or sociology.

    • It's called cognitive computing because the models they are implementing are heavily informed by theories from the cognitive sciences.
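    • The "heuristic tool that learns the behavioral patterns of a human and alerts the human when something deviates from the norm" described above is essentially statistical anomaly detection. A toy sketch of that loop; the keystroke-timing metric and the three-sigma threshold are made up for illustration:

      ```python
      # Toy "learn the norm, alert on deviation" loop: record a numeric
      # behavioral signal, then flag values far outside the learned baseline.
      import statistics

      class BehaviorMonitor:
          def __init__(self, n_sigmas=3.0):
              self.n_sigmas = n_sigmas
              self.history = []

          def record(self, value):
              self.history.append(value)

          def is_anomalous(self, value):
              if len(self.history) < 2:
                  return False  # not enough data to define "normal" yet
              mean = statistics.mean(self.history)
              stdev = statistics.stdev(self.history)
              if stdev == 0:
                  return value != mean
              return abs(value - mean) > self.n_sigmas * stdev

      monitor = BehaviorMonitor()
      for t in [5.0, 5.2, 4.9, 5.1, 5.0]:  # e.g. seconds between user actions
          monitor.record(t)
      print(monitor.is_anomalous(5.1))   # -> False, within the learned norm
      print(monitor.is_anomalous(60.0))  # -> True, far outside it
      ```

      Useful, certainly, but as the grandparent says, a long way from cognition.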
  • new "smart" machines can accurately infer your intents and help you to take better decisions or avoid mistakes.

    Microsoft Windows/Office already has "intelligent" menus that organize the functions you use most, a spellchecker that rewrites your typing based on what you probably meant to type, and an Office "assistant" that pops up to offer helpful suggestions when you least need them. Sounds like a patent lawsuit in the making to me.
  • too ambitious? (Score:3, Insightful)

    by dollargonzo ( 519030 ) on Friday August 15, 2003 @10:12AM (#6705507) Homepage
    it seems that "intelligent machines" is a bit too much of a generalization. what they are doing is teaching a machine/software how to do something correctly, then have it correct humans when they do it wrong, based on their cognitive model. this is all well and good, but "intelligence" implies some sort of learning. this learning has to be online, i.e. the machine learns how to do something without a stimulus to learn it specifically. what sandia labs has done is get software to infer how a human made decisions to get a certain "state", but this is not exactly "intelligence". just my $0.02

  • by Anonymous Coward on Friday August 15, 2003 @10:14AM (#6705523)
    Many of us already have one of these units. It's called a "wife". They keep us from making mistakes in almost every imaginable situation:
    • Clothing ("You're not really wearing that to my mother's, are you?")
    • Money ("It costs HOW much? Forget it")
    • Housekeeping ("NO, I already told you - glass cleaner on the top shelf and bleach on the bottom")
    • Driving ("SLOW DOWN! Watch the guy on the bike")
    • Entertainment ("Give me the remote. Bridges of Madison County is starting in a minute")
    I believe the simplest solutions are always the best...
  • by Anonymous Coward on Friday August 15, 2003 @10:14AM (#6705527)
    These articles are just too brief to be worth anything. It seems like "AI" has been invented a million times, and yet I have not seen any real AI in my life (no, Quake bots do not count as true AI imho).

    What this group has going sounds good, but so have many other AI-related things I've read about. How about a video or something to show what it really does? I mean, if they have this mega software then making a video of it in action can't be all that hard, can it?

    Questions blatantly not answered in these articles:
    • Does it read screen text? If so, how? OCR?
    • What api's is it compatible with?
    • What operating systems does it work with?
    • What exactly is the special hardware?
    • What spoken languages is it compatible with?
    • Specifically, what kinds of decisions can this AI make, what decisions can it not?
    • How long does it take to make a decision?
    I could go on, but I think you get the point...
  • by saskwach ( 589702 ) on Friday August 15, 2003 @10:17AM (#6705554) Homepage Journal
    If old people in Florida actually MEANT to push 1 thing and missed, could this catch it and say "No, I think you meant to vote for Candidate X"? I think this could revolutionize voting in the USA...Maybe it could even be used to replace congress...it's like I, Robot...but without the 3 laws! Hooray!
    • "Maybe it could even be used to replace congress...it's like I, Robot."

      If you get Congress involved, it will go from 3 laws of robotics to 8,765 laws of robotics by the time the congressional term is over (with over 6,000 passed in a late-night session just before adjournment with no-one reading the entire thing).
  • A shaggy man dressed in rags, wild look in his eye, sits in a shabby shack in the middle of no where. There is no car nearby, no electricty, no phone.

    In front of him is a computer (don't ask how it is powered). He has started to type in a letter. Clippy appears in the lower-right corner and helpfully says "It looks like you are the Unabomber...."
  • Finally, we can make a more effective spam filter:

    "hmm, the user is male, and judging from the emails and his wife's journal entries... I don't think he needs this penis enlargement, or these breast enlargement pills"
  • Arthur: I mean what's the point?
    Machine: Nutrition and pleasurable sense data. Share and Enjoy.
    Arthur: Listen you stupid machine, it tastes filthy! Here take this cup back!
    Machine: If you have enjoyed the experience of this drink, why not share it with your friends.
    Arthur: Because I want to keep them. Will you try and comprehend what I'm telling you? That drink..
    Machine: That drink was individually tailored to meet your personal requirements for nutrition and pleasure.
    Arthur: Ahh! So I'm a masochist on a diet, am I?
    Machine: Share and enjoy!
    Arthur: Oh! Shut up!
    Machine: Will that be all ?
    Arthur: Yes! No look! It's very very simple, all I want.. Are you listening?
    Machine: Yes.
    Arthur: Is a cup of tea. Got that ?
    Machine: I hear.
    Arthur: Good, and you know why I want a cup of tea?
    Machine: Please wait..
    Arthur: What ?
    Machine: Computing ..
    Arthur: What are you doing ?
    Machine: Attempting to calculate answer to your question, why you want dry leaves in boiling water.
    Arthur: Because I happen to like it, that's why.
    Machine: Stated reason does not compute with program facts.
    Arthur: What are you talking about ?
    Ventilation: You heard.
    Arthur: What? Who said that?
    Ventilation: The ventilation system, you had a go at me yesterday.
    Arthur: Yes, because you keep filling the air with cheap perfume.
    Ventilation: You like scented air, it's fresh and invigorating.
    Arthur: No I do not! ...

    Seriously! No thanks ;-)

  • They're just copying Microsoft, which did this years ago:

    It looks like you're writing a letter.

  • 17 years to go (Score:2, Insightful)

    by gregor-e ( 136142 )
    In 2020, your average $1000 Wal-Mart Computer will be roughly complex enough to emulate a human brain [transhumanist.com] in realtime. Toss in some cognitive modelling, and you have your new plastic pal who is fun to be with [www.sput.nl].
  • Perhaps something like this could be used for Slashdot. Before posting a story to the front page, the cognitive system can read the story, and then make all the obvious comments such as adding Simpsons/Futurama quotes, comparing it to one or another movie, and finding ways to attack Microsoft regarding the story. Then it can be posted, and we don't have to read the same 4-5 comments 50 times throughout the story.

    Of course, if that was added, what would be left for the people on Slashdot to talk about?
  • by kryzx ( 178628 ) * on Friday August 15, 2003 @10:28AM (#6705626) Homepage Journal
    Hey, CmdrTaco, Slashdot staff, is there any way you can get an interview with these guys?

    This is some awesome work, but the article is so thin. A real /. interview with these guys would be awesome. I bet they are /. readers, too.

    Anyone care to second the motion?
  • Hmmm.... (Score:5, Insightful)

    by gwernol ( 167574 ) on Friday August 15, 2003 @10:29AM (#6705636)
    I don't know if these guys have something for real or not. Their press release is - perhaps unsurprisingly - fluff that says nothing about how their system is supposed to work. Without knowing some technical details it's almost impossible to evaluate how sound their approach is and whether they've got anything new.

    However a couple of things are suspicious. First they say "work on cognitive machines took off in 2002". So in less than 2 years they have essentially cracked several of the major problems that AI researchers have been struggling with for at least the last 4 decades? That seems unlikely. Second they seem to think that a combination of software engineering, cognitive psychology and robotics is the silver bullet of AI and that this is a radical new breakthrough. I hate to break it to them but these disciplines have been working together for many years in the AI community. This just isn't new.

    Until we have a technical paper that describes their approach in detail and can be peer reviewed, I will remain sceptical. AI is overhyped enough as it is; we don't need more extravagant claims and fluff press releases.
    • I don't know if these guys have something for real or not. Their press release is - perhaps unsurprisingly - fluff...

      I agree about that. I'd like to hear more guts, less fluff. Though it is rather fascinating fluff.

      But they do seem to have a new idea: attempt to model the cognitive process of your user, notice where the results of your model differ from the user's actual behavior, and use those differences to improve your model.

      It's applying concepts of machine learning to a good problem in an inte
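      Stripped of the fluff, the loop the parent describes is standard online learning: predict the user's next action, compare it with what they actually did, and treat mismatches as the signal for improving the model. A toy version; the context/action framing and class name are my own invention, not from the release:

      ```python
      # Toy user model: predict the user's most frequent action per context,
      # and report when actual behavior diverges from the prediction.
      from collections import Counter, defaultdict

      class UserModel:
          def __init__(self):
              self.counts = defaultdict(Counter)

          def predict(self, context):
              c = self.counts[context]
              return c.most_common(1)[0][0] if c else None

          def update(self, context, actual_action):
              predicted = self.predict(context)
              self.counts[context][actual_action] += 1
              # A mismatch is exactly the parent's signal: the model's
              # expectation differed from the user's real behavior.
              return predicted is not None and predicted != actual_action

      model = UserModel()
      model.update("email_open", "read")
      model.update("email_open", "read")
      mismatch = model.update("email_open", "delete")
      print(model.predict("email_open"))  # -> read (still the majority action)
      print(mismatch)                     # -> True: behavior diverged from the model
      ```

      A real system would need a much richer model than frequency counts, but the predict/compare/update cycle is the same.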

  • Human Nature (Score:2, Insightful)

    The good thing about these systems is using them in an advisory capacity. I think it's just human instinct to want something to do all your work for you, but luckily it's also human instinct not to fully trust machines. This is why I don't think people will ever have cars that totally drive themselves or computers that decide when your nukes launch. It's not that they don't have the potential to do these jobs; it's just that there's always the feeling that human error is more reliable than computer error.
  • text selection (Score:5, Insightful)

    by Scrameustache ( 459504 ) on Friday August 15, 2003 @10:32AM (#6705653) Homepage Journal
    Oh yeah? You mean like that annoying Microsoft text selection that prevents you from selecting what you actually selected by deciding for you that you wanted to select the entire word/sentence/paragraph/page (depending on its mood)?

    I have cursed so much because of that "feature"!

    I am the apostle of the "leave me the fuck alone" tao of programming: every application should have one button, in one simple, easy-to-find menu, that would turn off all automatic thingamajigs. Instead of the current system, in which there are 1.5 times as many places in which you need to select "no/off" as there are annoying automatic features.

    When I place my cursor in the middle of a word, it's because that's where I want the selection to end!
    • I don't believe you. (Score:3, Interesting)

      by nobodyman ( 90587 )

      I am the apostle of the "leave me the fuck alone" tao of programming

      I share your frustration totally (sometimes Word expands my selection to include the punctuation at the end of a word... wtf!?). However, when people say "I don't need any help from my computer!" I feel they aren't thinking it through -- your computer is always assisting you to some degree. This notion of "overzealous assistance" is all relative. My mom needs AOL in order to "see the Internet" (it's like fingernails on a chalkb

      • Well, that's what I'm saying.

        I don't mind that there exists the ability to have a stupid text selecting helper that insists on catching the punctuation along with the word you're trying to select, IF I can turn it off.

        You say it could turn off automatically, I want an option to shut it all off manually in one go.

        I would yell "leave me the fuck alone!" at Clippy, and if it answers "You'll never hear from me again.", I would retort "that is insufficient, I want to see you DIE!"...
    • Open Word.
      Go to /Tools/Options/Edit/
      Then modify Editing Options.

      BTW, as an experiment, hunt down all the "automatic" features aka default settings. Turn them off. Then use the software and time how long it takes you to start wishing you hadn't messed with the settings.
  • Ahh... we get to have more and more CLIPPY telling us we are writing a personal letter, when we are trying to do our project scope! Wow, can't wait till I have a robotic Clippy trying to shave my fish whenever I put my shoes on! (Yes, Clippy is the ultimate in non sequitur advice!) Never underestimate stupidity, esp. if someone coded it.
  • "smart" machines can accurately infer your intents and help you to take better decisions or avoid mistakes.

    Guy: Hey bartender, another beer!

    Smart Machine: That is highly inadvisable. You've already had two beers, and a third would leave your cognitive functioning impaired.

    Guy: Look, metal dude, I can totally handle it!

    Smart Machine: My sensors indicate that a third beer would put you over the legal blood alcohol limit for this state. You cannot risk it.

    Guy: Shut the HELL up already! Goddamn machine!

  • How often have we heard such promises as smart "agents" that would soon scour the 'net, doing our personal shopping and information gathering?

    Or, how about the long-ago promises of Minsky et al on the "future" of AI, only to find that now they consider the problem too difficult?

    This sounds like a little more of the same, some people working on some software that won't be realized for the obligatory "10 years".

    I will not be holding my breath.
  • by Rogerborg ( 306625 ) on Friday August 15, 2003 @10:39AM (#6705707) Homepage
    • "Lower your mortgage rates" pop-up appears: Click on close
    • "Spy on hot chicks with X10" pop-up appears: Click on it
    • Email with attachment arrives: Click on open, click on ignore warning.
    • "Please insert driver disk" appears: Call son/nephew/cousin/brother.
    • Windows runs a little slow: Pull power cable, re-insert power cable.

    That could replace most of my family right there.

  • "...it's entirely possible that these cognitive machines could be incorporated into most computer systems produced within 10 years."

    Really? 10 years. Don't think so.

  • Comment removed based on user account deletion
  • I prefer swimsuit models [cnn.com]! But then I'm new to /., and will probably lose interest after a few more weeks of geek indoctrination...

    This sig is covered by the MyPL. Anyone who reads it owes me money!
  • Yeah, it's "cognitive" all right:

    Uh, Hal, would you please point that gun away from me?

    I'm sorry, Dave, I cannot do that.

    Hal, buddy, look ol' pal, I didn't mean to call you names for losing the changes to that document... I take it back, I swear! Look, Hal, put the gun down.

    I'm sorry Dave, I cannot do that.

    [bang]

    Aaaah! I'm shot! Call an ambulance!

    I'm sorry Dave, I cannot do that.

    Just like the car rental commercial where they have this team of crackpot marketroids trying to figure out how to differ

  • This is BS (Score:3, Interesting)

    by Shamashmuddamiq ( 588220 ) on Friday August 15, 2003 @10:56AM (#6705820)
    I decided not to obtain my Ph.D. at UIUC in AI because of the realization that it was just a glorified study of algorithms. Cognitive science is very interesting, but it's more philosophy than anything else. I took my MS in Computer Architecture and ran.

    We've all seen this so many times before. Artificial Intelligence is a sham. It's analogous to alchemy: if you just put enough ingredients together, you've got intelligence. Bullshit. We don't even know what is necessary and sufficient for intelligence. We can't agree on the concept/definition, and I fear that if we could, no human would qualify.

    As pertaining to this article...it's easy to get something that resembles intelligence in a closed, restricted, experimental environment. When you try to expand it, you get something like clippy. Annoying and unhelpful, and certainly not intelligent.

    There are good, efficient algorithms that can help humans in many ways. But don't call it a "synthetic human," don't call it "intelligence," and don't believe it's going to start thinking for us in general terms. That fad went out in the 70's.

    • Mostly I agree, but the "sham" part comes from trying to replicate human intelligence in a machine.

      When the developers get that idea out of their heads and focus on developing machine intelligence then we'll get somewhere.

      What's the difference between the two? Heh, that's the bug-bear: we can't even define what our intelligence is, so how can we define others?

      Much of the problem is the abstract nature of human language. We think in our languages but they are abstracts of reality, not reality itself.

      Wh
    • We've all seen this so many times before. Artificial Intelligence is a sham. It's analogous to alchemy.

      It's only a sham because no one has succeeded yet. That doesn't mean we should stop working on it. Someone will succeed sooner or later. And we are getting closer to it so rapidly that it's kinda frightening. It is inevitable.

      So in one sense you are right: anyone talking about the AI they have now is blowing a little smoke.

      On the other hand it is a legitimate field with tons of really exciting research

  • Of course, once the cognitive model becomes good enough, the temptation (economic imperative?) will be to offload some of the actual work onto it. This idea, taken to an extreme, is the topic of an excellent short story:

    "I was six years old when my parents told me that there was a small, dark jewel inside my skull, learning to be me." (Greg Egan, "Learning to be me", Axiomatic collection)

    Back on the topic of augmented intelligence, Kasparov has been advocating allowing mixed human/computer teams in "A [chessbase.com]

  • In Notes From The Underground, Dostoevsky discussed tables (which today would be computers) that would show and calculate everyone's responses and what people would do. People would not want to see that; if they did see it, they would rebel against it and make crazy decisions, just to rebel.
    This doesn't exactly relate to the article, but the article reminded me of it.
  • by Badgerman ( 19207 ) on Friday August 15, 2003 @11:23AM (#6706030)
    . . . how will we know whether the error is software or user?

    If your machine starts arguing with you, how do you determine the flaw? When it keeps making consistently wrong decisions, who is to blame?

    I'm seeing a WHOLE new way around tech support here. Just keep telling the users that the machine is right and they're wrong. How will the average user know?

    All jokes aside, as we humanize software, we need to develop ways to evaluate it and debug it that will require whole new ways of thinking.

    Ten years? I'm not so sure it'll be that quick . . .
  • It is the exceptions and the handling of them that make us unique.

    But then there is this.... which means...

    Resistance is futile. You will be assimilated into the "norm" and prevented from doing anything outside the precalculated norm.
  • One thing that I've never been convinced is appropriately addressed in AI (which I call "counterfeit intelligence" in honor of an old Journal of Irreproducible Results article) is forgetting.

    Human beings don't "forget" by simply deleting data from memory. Sure, things that have been learned-by-rote (i.e., verbatim & arbitrary S-R associations) are deleted (forgotten) if not reinforced. But what we learn "meaningfully" by associating new input with previously-acquired knowledge, is forgotten "meaning

  • To quote from memory: "the program will detect discrepancies in a user's thinking and alert the user".

    This sounds an awful lot like "Clippy" raised to the Nth power. I am certain that MS products will be the first to feature this Smart-Ass computer technology, whereby the computer will constantly correct you and interrupt your thinking with irrelevant bullshit ("are you sure you want to do this? maybe you want to do that").

    On the other hand, just like spell-checking helps pepole (sic) write clearly, maybe this w
