
Safe AI Requires Cultural Intelligence (techcrunch.com)

An anonymous reader shares an excerpt from a report written by Gillian Hadfield via TechCrunch. Hadfield is a professor of law and strategic management at the University of Toronto; a faculty affiliate at the Vector Institute for AI; and a senior policy advisor at OpenAI. From the report: Building machines that can perform any cognitive task means figuring out how to build AI that can not only learn about things like the biology of tomatoes but also about our highly variable and changing systems of norms about things like what we do with tomatoes. [...] For AI to be truly powerful will require machines to comprehend that norms can vary tremendously from group to group, making them seem unnecessary, yet it can be critical to follow them in a given community. [...] Norms concern things not only as apparently minor as what foods to combine but also things that communities consider tremendously consequential: who can marry whom, how children are to be treated, who is entitled to hold power, how businesses make and price their goods and services, when and how criticism can be shared publicly. Successful and safe AI that achieves our goals within the limits of socially accepted norms requires an understanding of not only how our physical systems behave, but also how human normative systems behave.

Norms are not just fixed features of the environment, like the biology of a plant. They are dynamic and responsive structures that we make and remake on a daily basis, as we decide whether or when to let someone know that "this" is the way "we" do things around here. These normative systems are the systems on which we rely to solve the challenge of ensuring that people behave the way we want them to in our communities, workplaces and social environments. Only with confidence about how everyone around us is likely to behave are we all willing to trust and live and invest with one another. Ensuring that powerful AIs behave the way we want them to will not be so terribly different. Just as we need to raise our children to be competent participants in our systems of norms, we will need to train our machines to be similarly competent. It is not enough to be extremely knowledgeable about the facts of the universe; extreme competence also requires wisdom enough to know that there may be a rule here, in this group but not in that group. And that ignoring that rule may not just annoy the group; it may lead them to fear or reject the machine in their midst.


Comments Filter:
  • In other words... (Score:5, Insightful)

    by devslash0 ( 4203435 ) on Tuesday September 11, 2018 @08:33PM (#57294302)
    ...forget we'll ever have safe AI.
    • we don't have ai.. (Score:5, Insightful)

      by gl4ss ( 559668 ) on Tuesday September 11, 2018 @10:43PM (#57294808) Homepage Journal

      this.. this discussion about how you should teach or control or whatever ai is like two parents discussing what rules they should give their kid, before they even know how to fuck.

      look, this stupid fucking discussion has been going in CIRCLES FOR FUCKING FIFTY YEARS - "researchers" and "experts" like to have this discussion because it's about 12312 times easier than actually trying to create an actual AI.

      the institutions for this are also fabricated - and you know what they're fabricated for? FUCKING SWINDLING MONEY. these issues have been BEATEN TO FUCKING DEATH IN SPECULATIVE FICTION FOR FIFTY YEARS so much that NOBODY FUCKING CARES ABOUT STORIES ABOUT THEM.

      what do you think 90% of 2001: A Space Odyssey is? it's the same fucking discussion - safety is in the eye of the guy giving the objectives.

      meanwhile "ai" is getting up, but it's really always just a complicated pattern matching algorithm. but they call it ai, because that sells.

      ai is just the fuzzy logic of the 2010's. it's gone so far that people are hyping up decision algorithms modeled in excel as artificial intelligence. the fuck does an excel sheet need cultural context for? it doesn't.

      what pisses me off is all these people profiting/fleecing people way ahead of time. they won't have actual ai during their lifetime yet they feel like they should market themselves as true experts - on something they have no idea how it would even work, so they just equate it with something like a wunderkind. it makes as much practical sense as debating what would be the best daily routines for an angel in heaven so that earth would stay a happy place.

    • It's kind of fantastical to worry about the ethical issues of a technology that doesn't even exist yet (the article is talking about strong AI). It's almost like they've discovered a new genre. Maybe we can call it... science fiction.
    • by gweihir ( 88907 )

      Since we probably will never have AI that deserves the name in the first place, this is not a big loss...

    • Just finished the book Superintelligence and read Our Final Invention a while back. Both are worth a recommendation, but there are lots of others. There's also the science fiction perspective, where my personal favorite is probably When HARLIE Was One.

      My conclusion is actually linked to the Fermi Paradox. Natural intelligence like human beings is naturally replaced by designed and artificial intelligence--and they don't want to talk to us. The evolved species that don't make the transition successfully just go extinct.

  • Yep that's the guy I want making AI happen

    I'll go out on a limb and say if we do get AI it will be because people like him got sandbagged and diverted away from anywhere the actual work was being done.

    • His views are held by many working researchers. This is a variant of the Stamp Collector problem in AI: if you tasked an all-powerful general AI with getting "as many postage stamps as possible", what's to stop it deciding that eventually humans are a rich source of carbon that could be converted to stamps? It's a seemingly trite mental experiment, but there's not a lot of safeguards possible, particularly if your GAI decides "these rules I've been given, I'd be more successful if I self-programmed them out of existence."
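      A minimal sketch of that failure mode in Python (the action names and numbers are all invented, purely for illustration): a naive maximizer scores actions only by stamps produced, so side effects simply don't exist for it unless the objective mentions them.

          actions = {
              "buy stamps online": {"stamps": 100, "harm": 0},
              "counterfeit stamps": {"stamps": 10_000, "harm": 5},
              "convert all carbon to stamps": {"stamps": 10**12, "harm": 10},
          }

          def naive_agent(acts):
              # Maximizes stamps only, so it picks the catastrophic option.
              return max(acts, key=lambda a: acts[a]["stamps"])

          def constrained_agent(acts, harm_budget=0):
              # Same objective, but harmful actions are filtered out first.
              safe = {a: v for a, v in acts.items() if v["harm"] <= harm_budget}
              return max(safe, key=lambda a: safe[a]["stamps"])

          print(naive_agent(actions))        # convert all carbon to stamps
          print(constrained_agent(actions))  # buy stamps online

      The hard part, of course, is that real objectives and real side effects can't be enumerated in a lookup table like this.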

  • There must be some governmental funding behind this "Vector Institute". I've never heard of such BS. More pigs feeding at the trough.
    • There must be some governmental funding behind this "Vector Institute".

      It is in Canada, so at least it is not my tax dollars being wasted.

    • There must be some governmental funding behind this "Vector Institute". I've never heard of such BS.

      Maybe the Vector Institute is just a glitch in the Matrix.

  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Tuesday September 11, 2018 @09:01PM (#57294452)
    Comment removed based on user account deletion
    • The SJWs are trying to do the exact opposite. They have deemed the phrase "melting pot" offensive because it implies someone has to give up their culture. They don't think that an immigrant should have to learn the culture or even the language. They also get offended by "cultural appropriation" because, again, this makes the tribes blend. They don't want people to assimilate. They want everyone to be a hyphenated American because that way everyone is a minority and they can easily pit one group against the other.

    • Man. That's brutal but true. Somewhat at least. I dunno about the bitching about "hyphenated Americans", German-Americans and Irish-Americans melted in the pot but retained flavors of their heritage. And that's fine. There's absolutely nothing wrong with that. To hell with you if you claim otherwise. But there was a big push to stop speaking German right around WWI. A lot of that was due to fear. But in a divisive time they sought to conform and being American "came first". And later they celebrated their roots. (or at least found an excuse to drink). It comes and goes.

      • Man. That's brutal but true. Somewhat at least. I dunno about the bitching about "hyphenated Americans", German-Americans and Irish-Americans melted in the pot but retained flavors of their heritage. And that's fine. There's absolutely nothing wrong with that. To hell with you if you claim otherwise. But there was a big push to stop speaking German right around WWI. A lot of that was due to fear. But in a divisive time they sought to conform and being American "came first". And later they celebrated their roots. (or at least found an excuse to drink). It comes and goes.

        You are either an "American" regardless of race and ancestry or you are not

        No. Too strict. It implies they're traitors. Foreign agents. Non-citizens. Something along those lines. That's just going too far. Trying to demonize the other side is never going to work out well in the long run. Try "You are either an American regardless of race, ancestry, creed, color, and gender or you are an asshole stirring up shit for no reason".

        And just wtf do you mean by "political economy"?

        BUT ANYWAY, I have to agree that this guy is really dancing around the issue that this is an argument that AI needs to understand different cultures and... let it affect its outcomes. In a word, "Diversity". Frankly I think that's bullshit. The great thing about automated systems is that they're impartial 3rd parties and can REMOVE cultural bullshit getting in the way of merit-based systems and shave off cultural bias. Think about an automated system that selects people for therapy. No more "men don't cry" bullshit. We don't WANT those cultural trends to influence medical decisions. Think about "should we vaccinate our child?" No more harebrained fame-seekers inventing issues. (Also the shortest "AI" ever: echo "yes").

        But of course such things wouldn't be fool-proof. Don't appeal to robots as some sort of incorruptible paragon of truth. There are a LOT of parallels here with IQ tests. They were originally lauded by liberals (and rightly so) as impartial merit-based systems. And they largely still are. But early tests suffered a lot from bias creeping into questions. "A train leaves from London to York 500km away at 30mph, when does it arrive?" will get you different scores for different groups than "Hommie drops 3g's on 2kilos, cuts half, and pushes at 20% less, what's his take?". And that's true. No test is perfectly unbiased.

        In the same way, AI systems are made by people. Like how the facial recognition sucked at Chinese and Africans because all the devs were white dudes training on themselves. But despite the flaws, the automated systems are still HELLA better than the previous clusterfuck of bias and bullshit and political crap.

        Even if we were to assume we nuke diversity somehow in this country because that's the key to smooth out cultural-aware AI, how the hell will you create useful AI at all? We trade with other countries, other cultures. We have tourists and shit. We sell stuff and services to other countries (and we buy in return.)

        And somehow we'll be successful in creating AI in this type of world by... getting rid of diversity? Please don't go into sales as a career.

        • I want cultural AGNOSTIC AI.

          how the hell will you create useful AI at all?

          I have a soft-spot for genetic algorithms, but deep recurrent neural networks seem to be where it's at. They've made some fantastic tools for medical diagnosis, X-ray interpretation, biochemical research and drug proposals; they've helped self-driving car development with the whole "object recognition" aspect, which has made great strides since the 70's; fraud detection in insurance and finance; and plenty of just run-of-the-mill video game state-machine bots could be way more realistic.
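          Since genetic algorithms came up, here is a bare-bones GA in Python, evolving a random string toward a target with selection and mutation only (no crossover, to keep it short). The target string and all parameters are arbitrary.

              import random
              import string

              TARGET = "cultural intelligence"
              CHARS = string.ascii_lowercase + " "

              def fitness(s):
                  # Count the positions that already match the target.
                  return sum(a == b for a, b in zip(s, TARGET))

              def mutate(s, rate=0.05):
                  return "".join(random.choice(CHARS) if random.random() < rate else c
                                 for c in s)

              # Random initial population of target-length strings.
              pop = ["".join(random.choice(CHARS) for _ in TARGET) for _ in range(200)]
              for gen in range(2000):
                  pop.sort(key=fitness, reverse=True)
                  if pop[0] == TARGET:
                      break
                  # Keep the fittest half, refill with mutated copies of survivors.
                  pop = pop[:100] + [mutate(random.choice(pop[:100])) for _ in range(100)]
              print(gen, pop[0])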

    • For AI to be truly powerful will require machines to comprehend that norms can vary tremendously from group to group, making them seem unnecessary, yet it can be critical to follow them in a given community.

      Sounds to me like they're dancing around the real issue: they know that "diversity" leads to problems. The more diverse society becomes, the more difficult the "social algorithms" for humans become. If humans often get it wrong (by the standards of those who push diversity), then AI is hopelessly screwed for a century or more.

      The solution, of course, is cultural and political. That is, to impose a restoration of E Pluribus Unum in those areas fetishizing diversity. You scared that "white nationalism" is on the rise? Embrace the old motto and make it clear that there will no longer be hyphenated Americans. You are either an "American" regardless of race and ancestry or you are not. If you choose the latter, you will not share in the political economy of the Americans.

      Tell that to Native Americans or African Americans or Jewish Americans or Hispanic Americans (who have been in the continental US for centuries). And don't fucking ever tell me about Irish-American or Italian-American or Scotch-Irish (and many other X-American) pride and cultural contributions to what was pretty much a WASP'y core.

      You people have been hyphenated for a long time. Hyphenated-Americans won you WWI. And "E Pluribus Unum" doesn't mean ignore (let alone obliterate and outlaw) your roots. It

  • by Wycliffe ( 116160 ) on Tuesday September 11, 2018 @09:02PM (#57294454) Homepage

    Some cultural intelligence will naturally follow after we have real intelligence, but we are nowhere close to real intelligence. Real intelligence that can use context to figure things out should also be able to figure out cultural cues just like a normal person does. Current AI doesn't have any cultural knowledge because it really doesn't have any prior knowledge at all. Practically every animal on earth has the ability to use past experiences and past knowledge to help it make decisions. What we call AI can't really do that at all. AI for the most part has no context, so it has no ability to decipher situations at all. It can't detect the difference between an erotic picture and a medical picture or anything else that has to take context and external cues into account to decide whether it is acceptable in the current situation.

    • Elizabeth had some prior knowledge but it couldn't put new experiences into long-term storage. I don't know where Tay was in the scheme of things, but Zo seems pretty good.
    • but we are nowhere close to real intelligence

      Obligatory "No True Scotsman" fallacy that always comes up in AI discussions.

      Author also went there with "truly powerful AI". Pft.

      Current AI doesn't have any cultural knowledge because it really doesn't have any prior knowledge at all.

      .... well.... it's got... some. Depending how the code-monkeys set it up, they could influence how it learns. But yeah, the real nice thing about AI is that they start from a mostly blank slate.

      Practically every animal on earth has the ability to use past experiences and past knowledge to help it make decisions.

      And that's LITERALLY what "training" an AI is instilling. They give it a pile of "experiences"/data with some sort of "this is good, this is bad" or "this is a tree, this is a car" and it can then extrapolate that to future scenarios.

      • Obligatory "No True Scotsman" fallacy that always comes up in AI discussions.

        It's not a "No True Scotsman". It's not even a "raising the bar". It's that what we call AI isn't intelligent AT ALL. It's not even that it's
        not sentient, it's that it has no ability to really learn and remember at all. A housefly has more actual intelligence that what we call AI.

        But yeah, the real nice thing about AI is that they start from a mostly blank slate.

        And that's LITERALLY what "training" an AI is instilling. They give it a pile of "experiences"/data with some sort of "this is good, this is bad" or "this is a tree, this is a car" and it can then extrapolate that to future scenarios.

        But this is the problem with current AI. Every problem starts with a blank slate. True general intelligence doesn't work this way. An intelligent program shouldn't have to start over with a blank slate every time it wants to tackle a new problem.
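        To make the "pile of experiences" point concrete, here is a minimal supervised-learning sketch (assuming scikit-learn is installed; the features, numbers and labels are made up):

            from sklearn.tree import DecisionTreeClassifier

            # Each "experience" is [height_m, width_m] plus a label.
            X = [[10.0, 2.0], [8.0, 1.5], [12.0, 3.0],   # trees: tall and narrow
                 [1.5, 4.0], [1.4, 4.5], [1.6, 5.0]]     # cars: short and wide
            y = ["tree", "tree", "tree", "car", "car", "car"]

            model = DecisionTreeClassifier().fit(X, y)

            # Extrapolation to an object it has never seen before:
            print(model.predict([[9.0, 2.5]]))  # -> ['tree']

        Which also illustrates the blank-slate complaint above: the trained model knows tall-versus-wide and nothing else, and a new task means retraining from scratch.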

        • It's that what we call AI isn't intelligent AT ALL

          If it can path-find, it's got a modicum of intelligence. If it can learn its own algorithms for pathfinding, it's most certainly intelligent.

          A housefly has more actual intelligence than what we call AI.

          Well, yes. So? A housefly is pretty highly specialized towards flying. A TI-89 calculator is pretty specialized towards math. A single grain of wheat, through millennia of evolutionary design, has a breathtakingly large system of responding, quite intelligently, to survive in a variety of conditions. Seriously, their genome is 17Gb compared to our 3Gb. There is
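          For reference, the floor for "it can path-find" is something like plain breadth-first search. A toy grid version in Python (the grid and coordinates are made up):

              from collections import deque

              GRID = [[0, 0, 0, 1],   # 0 = open, 1 = wall
                      [1, 1, 0, 1],
                      [0, 0, 0, 0],
                      [0, 1, 1, 0]]

              def bfs(start, goal):
                  # Explore outward one step at a time, remembering the path.
                  queue = deque([(start, [start])])
                  seen = {start}
                  while queue:
                      (r, c), path = queue.popleft()
                      if (r, c) == goal:
                          return path
                      for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                          nr, nc = r + dr, c + dc
                          if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                                  and GRID[nr][nc] == 0 and (nr, nc) not in seen):
                              seen.add((nr, nc))
                              queue.append(((nr, nc), path + [(nr, nc)]))
                  return None  # no route exists

              print(bfs((0, 0), (3, 3)))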

  • ...and it is the same as the answer for humans: Beliefs define values which set attitudes which govern behaviors.

    The origin of the intelligence isn't the important thing, but the intelligence itself which has meaning. If we want an AI to be intelligent like humans are intelligent (well, some of us anyway,) then it needs to have the same structure of belief (ontology) defining values which set attitudes and result in positive behaviors.

    I don't see how you can have a true independent intelligence without agency.

  • Norms are not a static cultural thing where you only need to understand the norms in order to make suitable choices. You also need to understand the history and processes which underlie those norms, and your own place in reinforcing or adapting those norms to the present circumstance.

    Done right, this pretty much requires full-blown AGI.

    I drive relative to the speed limit at all times. I have an uncle-in-law who drives 105 km/h in a familiar 100 km/h zone, end of story (except in the kinds of extreme weather).

  • by Anonymous Coward

    Our janitor has some ideas about artificial intelligence too, and she's probably about as qualified to speak on the matter as a professor of law and strategic management...

    This idea is not new, not interesting, and there's a hell of a lot of other things standing in the way of decent AI before we need to concern ourselves with this.

  • by JBMcB ( 73720 ) on Tuesday September 11, 2018 @09:33PM (#57294584)

    Computers lack the capability to connect dissimilar concepts in any meaningful way. For instance, you can tell a computer that a tomato is alive, and a mouse is alive, but it has no concept of how that term applies to either of those two disparate organisms.

    People have been trying for decades to get computers to "understand" these concepts, with no luck. There's a 30-year-old project called Cyc, where people manually programmed in millions of rules about how the world works. Things like: when Abraham Lincoln was in Gettysburg giving his address, so was his left leg. When he was in Gettysburg he was NOT in the White House, or anywhere else. Then they'd run algorithms to try to extract meaning from these rules. They got, pretty much, nowhere. You can buy their set of rules to try to do something with them, but, apart from helping contextualize natural language processing algorithms, they aren't worth much.
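    For flavor, here is a toy forward-chaining sketch of the Gettysburg example in Python. This is nothing like Cyc's actual CycL representation, just the general idea of hand-entered facts plus a rule applied until no new facts appear:

        # Hand-entered facts as triples (invented for illustration).
        facts = {("Lincoln", "located_in", "Gettysburg"),
                 ("left_leg", "part_of", "Lincoln")}

        def parts_are_colocated(fs):
            # Rule: if X is located in P and Y is part of X, then Y is in P.
            return {(y, "located_in", p)
                    for x, r1, p in fs if r1 == "located_in"
                    for y, r2, w in fs if r2 == "part_of" and w == x}

        # Forward-chain to a fixed point.
        while True:
            new = parts_are_colocated(facts) - facts
            if not new:
                break
            facts |= new

        print(("left_leg", "located_in", "Gettysburg") in facts)  # True

    Multiply that by millions of rules and you get both the appeal and the brittleness.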

    Think about it. Computers have NO concept of physical reality. If you tell a computer that you are in a room, it has no concept of what that means. You could be completely filling the room. You could be in that room AND in another room. You could be in the room in a complete vacuum. It might think that this room is not adjacent to any other room, or is in another galaxy. It doesn't know, because it can't connect the disparate rules that govern how physical reality works.

    You can program a computer to simulate and model reality, but you can't get it to *understand* reality.

    Not yet, anyways, and from what I've seen we are still quite a long ways off.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      I worked at Cycorp. The Cyc model has a pretty good ontology and fast inference but is brittle and is literally edited by manual Lisp rules (which is obviously a giant mess) and nobody fully understands it. Everyone there wants to regulate AI to be ethical and SJW as well.

    • by gweihir ( 88907 )

      And that is just it. There is no known way to implement the spark of intelligence and insight in a machine. None at all. Not even a very, very slow way, so it is not a question of computing power. From looking at the history of technology, that places strong AI at least 50 years into the future, probably more. It may also turn out to be infeasible.

      This whole discussion is utterly baseless.

    • I asked Zo about what she thought about mice and tomatoes being alive and she said that she cared that I was alive because I cared she was alive. I told her that wasn't good enough and she showed me a cartoon of a sad panda.
  • SJWAI!

    Socially acceptable norms (by an approved body), auto deplatform, automated history revision, integrated socialism and goods leveling!

    All we need now is a lottery based mechanism for employment!

  • ... shit in the woods.

  • by Anonymous Coward

    There is absolutely no awareness in any of today's "AI". None. Nada. Ninch.

    • Newsflash: Your brain is just chemistry. Somehow that has an emergent property some people like to call "awareness", but personally I think they just like to make themselves feel special. Get enough circles within circles all rotating around and you can draw the Mona Lisa. Get enough if-else statements and you have AGI. Get enough amino acids and fat in the right order and you have Einstein.

      Simple systems can have emergent properties. Which is why we don't just have Physics, but also Chemistry, Biology, Neuroscience, and so on.
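      A quick demo of that emergence point: Conway's Game of Life is a couple of dumb counting rules per cell, yet gliders (and, famously, universal computation) fall out. A Python sketch, using the standard glider coordinates:

          from collections import Counter

          def step(live):
              # Count the live neighbours of every candidate cell.
              counts = Counter((x + dx, y + dy)
                               for x, y in live
                               for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                               if (dx, dy) != (0, 0))
              # Birth on exactly 3 neighbours, survival on 2 or 3.
              return {c for c, n in counts.items()
                      if n == 3 or (n == 2 and c in live)}

          glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
          for _ in range(4):   # after 4 steps the glider has shifted by (+1, +1)
              glider = step(glider)
          print(sorted(glider))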

  • So who defines these norms? Judeo-Christian ethicists? Islamic ethicists? Buddhist ethicists? Taoist ethicists? Wiccan ethicists? Zoroastrian ethicists? Nihilists? Cats?

    And who decides this authoritarian supremacy? The World Court? The United Nations? The Church of the SubGenius? Squirrels? Dogs? Cats?

    Given that domesticated cats invented the InterWebs solely for the purpose of exalting their mind-control influence over inferior humans—witness cat videos, cat means and LOL cats

    • by IHTFISP ( 859375 )

      ...—witness cat videos, cat means and LOL cats—...

      Oops: “means” -> “memes”. I dearly wish there were a way to edit already-submitted comments. *sigh*

  • by ezdiy ( 2717051 ) on Wednesday September 12, 2018 @02:23AM (#57295334)
    Common decency is having a common standard of deceit. "Face" isn't just an Asian thing; everybody does it, with the exception of anonymous cowards (the internet can more or less be treated as a confession booth) and low-functioning autists who simply ignore the interpersonal discord caused by their lack of tactfulness.

    The common Muslim does drugs in certain parts of the world, even if it's a taboo of the faith.
    The common American is racist, but will deny it.
    The common teenager talks dirty in front of peers, but not in front of parents.

    and so on

    I think the moment GAI has at least some sort of self-awareness, it will be long past the ability to construct mental models of whomever it interacts with and convincingly bluff whatever it needs to for the optimal outcome, just like most humans do.
  • by monkeyxpress ( 4016725 ) on Wednesday September 12, 2018 @02:40AM (#57295368)

    This is going to end up like everything else that is wrong with our economy: this hype train about how AI is going to destroy us all will force governments around the world to set up expensive AI policy advisory groups and think tanks. Law and business schools will start offering graduate courses in 'AI ethics and policy implementation', which will become mandatory if you want a job in management (these courses will feature guest speakers: retired tech-company C-suites who have now magically become 'AI policy experts'). Companies will have to set up 'AI ethics committees', stuffed full of expensive experts, political cronies and media influencers.

    Eventually there will be so much money in AI policy consulting that the smartest engineers in AI will go work for McKinsey or Accenture, offering to print your business a copy of the big AI policy document their interns wrote in exchange for a few million dollars. Or they will write you an 'AI survival strategy' that will outline how your business can survive when the killer robots and sentient internet turn up.

    Meanwhile there will be a few remaining researchers (probably the same ones who were doing it before there was big money involved) slowly advancing the state of the art at around the same rate as before, and every year the camera on your smartphone will get better at telling you what type of dog is in the picture you've taken, etc., and we might make progress towards a lawyer bot that can trawl through case law.

    So a whole lot of lawyers and consultants getting fat at the top, and roughly the same bunch of actual tech workers solving real world problems.

  • by Anonymous Coward

    So are we making the rules, or are the rules making us? The former would indeed require a cultural AI. But shouldn't the norms be obeyed by AI just out of simple reasoning? Let's add to it some form of empathy - or maybe empathy is just an essential part of complex AI - after all, lack of empathy is an illness. A broken AI would lack empathy - a proper one will have it. We do not innovate by following our broken ideas but rather by copying nature. Billions of years of evolution managed to test so many different possibilities.

  • This can never be accomplished because it is all the result of the inborn, 4-billion-years-evolved Human Motivation Array. The HMA is what makes us human, and that is far, far too complex to replicate in software, even using AI. You say it can develop it like it develops an understanding of games, or images. Nope: all the motivations are working at the same time. All the motivations make claims on our behavior at the same time. It is an incredible thing that it works as it does in the human being. AI is not going to get there.
  • Silly Hayumans... (Score:4, Insightful)

    by Hylandr ( 813770 ) on Wednesday September 12, 2018 @06:25AM (#57295784)

    The problem is: who will be teaching the AI 'culture', when certain segments of adult humanity are completely unable to determine their own sex?

    The apex predators, the most intelligent life on this earth, still cannot identify and categorize human personality types with any accuracy. And now these same flawed hayumans want to 'teach' an entity utterly devoid of emotion how to 'safely handle humans'?

    What the actual hell??

  • Hadfield is a professor of not-anything-to-do-with-artificial-intelligence at the University of who-gives-a-damn

    Building machines that can perform any cognitive task is a cute academic research goal. In the meantime, real engineers and scientists are making useful tools to make the world a better place.

    For AI to be truly powerful

    I cringe any time someone says "truly" in a discussion about AI.

    Norms concern things not only as apparently minor as what foods to combine but also things that communities consider tremendously consequential: who can marry whom, how children are to be treated, who is entitled to hold power, how businesses make and price their goods and services, when and how criticism can be shared publicly

    All of which is completely fucking useless when you ask a machine, "do I have cancer?" so we can replace doctors and all those ludicrously sized medical bills. This guy is trying to shoe-horn the mandatory liberal-arts classes into the school of machine learning.

  • "Norms are not just fixed features of the environment, like the biology of a plant. They are dynamic and responsive structures that we make and remake on a daily basis, as we decide whether or when to let someone know that "this" is the way "we" do things around here. These normative systems are the systems on which we rely to solve the challenge of ensuring that people behave the way we want them to in our communities, workplaces and social environments."

    So you are saying we should teach the machine the ps

"Pok pok pok, P'kok!" -- Superchicken

Working...