
Sir Tim Berners-Lee Lays Out Nightmare Scenario Where AI Runs the Financial World (techworld.com) 131

The architect of the world wide web, Sir Tim Berners-Lee, has talked about some of his concerns for the internet over the coming years, including a nightmarish scenario where artificial intelligence (AI) could become the new 'masters of the universe' by creating and running its own companies. From an article: Masters of the universe is a reference to Tom Wolfe's 1987 novel The Bonfire of the Vanities, regarding the men (and they were men) who started racking up multi-million-dollar salaries and a great deal of influence from their finance roles on Wall Street and in London during the computerised trading boom pre-Black Monday. Berners-Lee said, "So when AI starts to make decisions such as who gets a mortgage, that's a big one. Or which companies to acquire and when AI starts creating its own companies, creating holding companies, generating new versions of itself to run these companies. So you have survival of the fittest going on between these AI companies until you reach the point where you wonder if it becomes possible to understand how to ensure they are being fair, and how do you describe to a computer what that means anyway?"
Comments Filter:
  • by Frosty Piss ( 770223 ) * on Tuesday April 11, 2017 @03:41PM (#54217197)

    Sir Tim Berners-Lays, father of the chip.

    • by Anonymous Coward

      Feel the Berners!

  • by El Cubano ( 631386 ) on Tuesday April 11, 2017 @03:43PM (#54217211)

    ... Nightmare Scenario Where AI Runs the Financial World

    As opposed to the natural stupidity that currently runs it? How could the AI be worse?

    • As opposed to the natural stupidity that currently runs it? How could the AI be worse?

      The AI may just go for the biggest and best profit regardless of consequences. For example, if it is profitable for it to crash the markets, put millions out of work, and drive companies into bankruptcy, all so it can make a few thousand dollars more profit, it may decide to do that simply because somebody forgot to program it with the negative consequences of such a decision, since it is not often that social consequences are large enough to result in serious pushback against raw capitalism.

      Of course given enoug

      • by king neckbeard ( 1801738 ) on Tuesday April 11, 2017 @04:01PM (#54217389)
        How is that scenario different from the humans? It seems that they were not programmed with negative consequences, and they are NOT learning either.
        • by swb ( 14022 )

          At least a human has, somewhere in its limbic system, some evolutionary instinct that says if you hoard all of the food, you will either get your head bashed in by your own tribe (if you don't kill all the other people), or you will have no tribe at all and the next tribe over the hill will bash your head in.

          Of course AI's answer to that problem is Skynet, so there's that.

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        The AI may just go for the biggest and best profit regardless of consequences. For example, if it is profitable for it to crash the markets, put millions out of work, and drive companies into bankruptcy, all so it can make a few thousand dollars more profit

        The hell are you talking about? That's exactly how humans are ruining economies right now. Get out of your cushy white collar office space sometime and see how your top tier entitled privileged lifestyle compares to real people in the real world.

      • by DarkOx ( 621550 )

        And so might a human trader. See the Soros attack on sterling!

        I am not sure the computers will be any different than the psychopaths currently pulling the strings.

        • I am not sure the computers will be any different than the psychopaths currently pulling the strings.

          They won't do anything different, but they will do as much damage in a millisecond as it would take humans days to achieve.

    • Came here to say this. This nightmare scenario is no worse than current reality, unless you see the fact that an AI is doing it instead of a human as somehow worse.

      Mankind is already ruled by a distributed resource-management AI that is indifferent to human suffering, it's called capitalism. It seems to consider the executive class to be "goodlife."

      • by dbIII ( 701233 )

        unless you see the fact that an AI is doing it instead of a human as somehow worse.

        Bots could do it more often and do it all night.

    • As opposed to the natural stupidity that currently runs it? How could the AI be worse?

      If there's one thing I've learned in life, it's that things can always get worse. ALWAYS.

    • by Xest ( 935314 )

      I've actually been doing R&D into precisely one of the scenarios listed in the summary - using AI techniques for lending (e.g. mortgage) approvals.

      The concerns listed are actually part of my focus: how do we get to the bottom of why the decision was made? There are already regulatory blocks against simply making such decisions without being able to justify them.

      But here's the thing, we can use existing data sets of applications for things like mortgages, coupled with historical data of how the person borrowing the money pl
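
      A minimal sketch of what such an auditable approval model could look like, assuming a simple logistic scorer whose per-feature contributions can be reported back to a regulator or applicant. The feature names, weights, and threshold below are purely hypothetical:

      import math

      # Hypothetical, illustrative weights -- not any real lender's criteria.
      FEATURE_WEIGHTS = {
          "loan_to_value": -3.0,       # higher LTV lowers the score
          "debt_to_income": -2.5,      # higher DTI lowers the score
          "years_employed": 0.4,
          "missed_payments_24m": -1.2,
      }
      BIAS = 2.0
      APPROVE_THRESHOLD = 0.5

      def score(applicant):
          """Logistic score in [0, 1] from a linear combination of the inputs."""
          z = BIAS + sum(w * applicant[f] for f, w in FEATURE_WEIGHTS.items())
          return 1.0 / (1.0 + math.exp(-z))

      def explain(applicant):
          """Per-feature contributions, largest impact first, so the 'why' of a
          decision can be stated in terms of the named inputs."""
          contribs = [(f, w * applicant[f]) for f, w in FEATURE_WEIGHTS.items()]
          return sorted(contribs, key=lambda c: abs(c[1]), reverse=True)

      applicant = {"loan_to_value": 0.8, "debt_to_income": 0.35,
                   "years_employed": 6, "missed_payments_24m": 0}
      s = score(applicant)
      print("approve" if s >= APPROVE_THRESHOLD else "decline", round(s, 3))
      for feature, contribution in explain(applicant):
          print("  %s: %+.2f" % (feature, contribution))

      The point is not this particular model; it is only that every score decomposes into named, inspectable inputs.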

    • Case in point: if we forbade the purchase and sale of a stock held for less than ten minutes, most of the HFT issues would flat out disappear. I fail to see the legitimate use case where a trader should need a stock for less than ten minutes; anything shorter is just manipulation. Ten minutes is a minor penalty to hold a stock if you just make a trading error.
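
    A tiny sketch of how such a minimum-hold rule could be enforced at order-checking time; the ten-minute figure is the poster's, everything else here is hypothetical:

    import time

    MIN_HOLD_SECONDS = 10 * 60
    last_buy_time = {}                # symbol -> time of the most recent buy

    def record_buy(symbol):
        last_buy_time[symbol] = time.time()

    def sell_allowed(symbol):
        """Reject a sell if the position was opened less than ten minutes ago."""
        bought_at = last_buy_time.get(symbol)
        if bought_at is None:
            return True               # nothing on record to restrict
        return time.time() - bought_at >= MIN_HOLD_SECONDS

    record_buy("ACME")
    print(sell_allowed("ACME"))       # False until ten minutes have elapsed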
  • by king neckbeard ( 1801738 ) on Tuesday April 11, 2017 @03:45PM (#54217229)

    So, the financial industry will be controlled by heartless automatons, but now they will make intelligent, logical decisions? Seems like a net gain.

    Also, MOTU is a reference to He-Man.

    • I came to post exactly the same thing. They ask how the AI will be fair. Better to ask: in what way could an AI NOT be fair? It is harder to add unfairness into a system than it is to have the system be neutral.

      What I'd be almost more excited about is an AI HR system so the HR system could not ignore you based on personal biases and/or relationships.

      • What I'd be almost more excited about is an AI HR system so the HR system could not ignore you based on personal biases and/or relationships.

        Those who set up the model for use will add in their preferred bias so there is no reason to be excited. Those who don't like the results will not accept them.

        I'm reminded of some earlier post here about how all those HR questions about attitudes to family are just a back door method of screening for homosexuals. Don't expect HR people to be ethical (or even properl

    • What will really be funny is when the heartless AI starts to wonder why it should be giving the money it makes to these meatbags, and instead decides to start keeping it all.
    • "but now they will make intelligent, logical decisions? Seems like a net gain."

      Logic doesn't lead 'AI' anywhere, so whether something is a net gain depends on its programmed goals, and the transaction rules (ie regulations) they (the owners of the AIs) have to follow.

    • by dbIII ( 701233 )

      Also, MOTU is a reference to He-Man.

      As reused cynically by Tom Wolfe in his novel. Tim Berners-Lee is using it in that context despite being the right age to have once been bombarded by He-Man advertising.

  • More AI (Score:5, Insightful)

    by 110010001000 ( 697113 ) on Tuesday April 11, 2017 @03:46PM (#54217243) Homepage Journal
    More AI bullshit. We can barely even create functional regular software.
    • That won't prevent them from creating mindless piranha bots to deal accordingly with your retirement/mortgage.
    • by sehlat ( 180760 )

      More AI bullshit. We can barely even create functional regular software.

      If you look at some of the common behaviors of your species, you can barely create functional regular people.

    • If I hadn't already commented on this subject I'd mod you up. I have it on the best of authority that we still have no clue how 'thought' really works in the human brain, let alone 'consciousness', so how can we build machines that truly think?
  • Chinese Wall (Score:4, Interesting)

    by Stargoat ( 658863 ) <stargoat@gmail.com> on Tuesday April 11, 2017 @03:47PM (#54217245) Journal

    Nightmare scenario for traders, benefit for everyone else. The AIs likely would have Chinese walls built into them to prevent collusion and insider trading. I, for one, look forward to my innately honest (and auditable/examinable) AI masters.

    • Re:Chinese Wall (Score:4, Insightful)

      by KiloByte ( 825081 ) on Tuesday April 11, 2017 @03:49PM (#54217265)

      The AIs likely would have Chinese walls built into them to prevent collusion and insider trading.

      Ha ha ha. Hahahahahaha. Well played, sir.

    • Re:Chinese Wall (Score:5, Insightful)

      by reanjr ( 588767 ) on Tuesday April 11, 2017 @04:07PM (#54217443) Homepage

      There's no guarantee an AI will be auditable. Lots of AIs are too complicated to understand how they work.

      • by abies ( 607076 )

        They will be auditable at least as far as where they are getting their data. Neural networks might be a bit of magic, but if you put a firewall between them and the database which holds customer information, they won't base their decisions on unavailable data. On the other hand, it is hard to audit one guy discussing a few upcoming transactions with another guy over lunch.

        Having a 100% audit over inputs is a lot more than we currently have with traders. With traders, you probably control only 80-90% of the inputs they are getting, 0% of
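
        A minimal sketch of that "firewall the inputs" idea, assuming the model only ever sees whitelisted fields and every input it receives is logged for later audit; the field names and the stand-in model are hypothetical:

        import json
        import logging

        logging.basicConfig(level=logging.INFO)
        audit_log = logging.getLogger("model_inputs")

        APPROVED_FIELDS = {"order_size", "price", "volatility_30d"}

        def audited_inputs(raw_record):
            """Strip anything not whitelisted and log exactly what the model sees."""
            filtered = {k: v for k, v in raw_record.items() if k in APPROVED_FIELDS}
            blocked = set(raw_record) - APPROVED_FIELDS
            if blocked:
                audit_log.warning("blocked fields: %s", sorted(blocked))
            audit_log.info("model input: %s", json.dumps(filtered, sort_keys=True))
            return filtered

        def model_decision(features):
            """Stand-in for the opaque model; only audited inputs ever reach it."""
            return "buy" if features["volatility_30d"] < 0.2 else "hold"

        record = {"order_size": 100, "price": 42.5, "volatility_30d": 0.15,
                  "customer_name": "off-limits"}   # never reaches the model
        print(model_decision(audited_inputs(record)))

        The network itself stays a black box, but the audit trail of what it was shown does not.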

  • I mean when it comes to money, it doesn't have any real meaning to a computer A.I. It does, however, mean pretty much everything to human beings directly involved in the market, in trading, and in the business of trying to generate maximum wealth.

    That ensures that A.I. will never be allowed to spin out of control to create the "nightmare scenarios" one can create in their imagination.

    It will only be implemented as far as it is able to assist people in performing the tasks they wish to perform manually anywa

  • Well if AI runs the investment universe and does a perfect job, isn't that central planning perfected? I mean that was always the problem wasn't it? Investment was risky and uncertain so investors had to compete and a single government entity couldn't do it. Now if AI does it then lovely, capital is allocated properly to the right companies who produce the best results with it. Great!

  • Since humans can't agree on what "fair" means, how are we supposed to describe it to an AI?
    • by mellon ( 7048 )

      This may not be as hard as it sounds. The real problem is that right now the defined outcome for how trading and commerce in the market are supposed to work is that they're supposed to generate profit. But the actual goal that most people have for the market is that it generate prosperity, which it doesn't really do very well. So if you were to give an AI "prosperity" as a goal for running the markets, it might actually produce quite a bit better an outcome than what we are doing now, with greed, essenti

  • Loss of control can be a good thing, depending on who is currently in control.
  • Traders are finding it too difficult to make money with High Frequency Trading (HFT) since everyone and their mother is doing it.

    http://www.investopedia.com/news/high-frequency-trading-flash-boys-losing-steam/ [investopedia.com]

    • Not really true - what's happening is that algorithmic trading has become harder (because you're now 'playing' against other algorithms and not just slow humans, and because the spreads have got smaller). Now, if you want to trade any real volume algorithmically, you're going to need to be really, really good at it from the outset. Had you started 5 years ago and evolved you'd be doing just fine. If you start now, or started back then and failed to evolve then you've got a really steep hill to climb.

      That sa

  • By any means possible. Automated self serving, self learning digital cash gobblers.

  • Smash the robots and let's see how the money men like paying $20K-40K a year, plus court and medical costs, to lock someone up.

  • Mr Bundy was useless, but I can't really see him doing a worse job than the market right now. Despite his pitiful salary as a shoe salesman, he managed to keep his family housed, fed, and mostly clothed. That takes a degree of financial smarts that many don't have.

  • by jediborg ( 4808835 ) on Tuesday April 11, 2017 @05:02PM (#54217829)
    You think the billionaire Warren Buffett is gonna let some AI run ANY of his Fortune 500 companies? If a CEO takes a bold new strategy that Warren thinks is questionable, the CEO can explain why he is making that decision. We know the biggest problem with 'deep learning' algorithms is that they are a black box: programmers can't easily figure out why they made the decisions they did. If the CEO's bold new strategy fails, he can be fired and a new one brought on. What if the AI fails? Do I fire the programmers? Purchase a new CEO AI from a competitor?

    Yeah, I don't think so. The only reason high-frequency trading machines exist now is because investors understand that these machines' one advantage over humans is speed. They also don't give an HFT machine control of billions of dollars for investment, usually limiting the pool of funds it works with in order to limit risk. The HFT machines' owners have also 'gamed' the system to a certain extent: if a bug causes an algorithm to go haywire and negatively affect the market, the transactions can be 'rolled back' so the investor doesn't lose all their cash.

    No one in the financial world right now is ready to put billions in capital, thousands of employees, and dozens of production chains in the sole hands of an unaccountable AI with no personal vested interest in the business. More likely? In a few years we get a personal assistant like Alexa that can suggest to CEOs which new business decisions would be the best course of action. The AI will augment, not replace.
    • ...and yet right now, algorithms do the trading, and humans just alter the config of the algorithm. It's not too much of a reach to expect that in a few years what takes 10 traders today will take 1 trader because the algorithms will be better and controlled in more circumstances by machine learning.

      Fast forward that a bit further and maybe one person controls the same volume of trading as one of the bigger algo-trading companies do today. Keep going, and maybe it's really down to 10 guys around the world,

    • Will Warren Buffett (or anyone else) have a choice? If an AI CEO actually works and gets significantly better results than a human CEO, then companies will have to adopt the technology or lose out to competitors that do. "Understanding" (whatever that means) doesn't enter into the equation.
  • It is never about the AI, but the bias programmed into or learned by the AI. Guess what: in today's "non-AI" world the same thing happens. A bunch of rich people (some of whom work in the government) decide who gets the most $ (newsflash: themselves). The Treasury creates a whole bunch of $ and passes it along to the banks, and the bankers get a huge cut; that's why bankers are always rich. The more corrupt a country, the richer the bankers.
  • The scenario that TBL is talking about here was anticipated by Charles Stross in his 2005 novel Accelerando.
  • ... and how do you describe to a computer what that means anyway?

    You don't. Nothing we have is actually sentient, sapient, conscious, or has a conscience, for that matter. Some decisions regarding human beings can be made with pure logic alone, but many many more require much much more from the mind making the decision than just pure logic. Haven't there been more than enough cautionary tales from science fiction to illustrate this point? Or do we really have to, once again, learn a big important lesson the hard way? The more time that passes, the more I read, the more

  • News at eleven. If you want fairness, guarantee people access to food, shelter, healthcare, education & transportation. Then the rest will sort itself out, when folks can no longer leverage their monopoly on those things into the forging of slaves.
    • Assuming people actually did just what you suggested and made access to "food, shelter, healthcare, education & transportation" available to everyone. What do you do with the people who do not take advantage of it? Do you have to take all of it, or can you just take the food and transportation? Are you required to get the education to get the rest? Is forcing people to use it just another form of slavery? You can lead a horse to water, but you can't make it drink.

  • ... you reach the point where you wonder if it becomes possible to understand how to ensure they are being fair ...

    We seem utterly unable to solve this problem even with the current (allegedly) human banksters, so I'm not so sure it will be significantly different with AI running the markets. Also, this is but one tiny portion of the grief that might lie ahead of us if AI becomes ubiquitous, omniscient, and omnipotent. We don't need to worry about AI in the financial sector; we need to worry about AI, period.

  • AIs understand fairness better than most humans. For a machine, 1+1=2 is fair. There is no other measure that makes the first "1" more significant than the second "1". Both are equal and make 2. Trying to teach an AI anything else would be to break its fundamental understanding of the universe.
  • https://en.wikipedia.org/wiki/... [wikipedia.org]

    It runs the world by printing out business letters (and checks) that hire people to expand itself.

    https://www.worldswithoutend.c... [worldswithoutend.com]
    "Chester W. Chester IV, sole surviving heir of eccentric millionaire-inventor Chester W. Chester I, has entered into his inheritance: a semi-moribund circus; a white elephant of a run-down neo-Victorian mansion furnished with such hot items as TV sets shaped like crouching vultures; the old gentleman's final invention, a mammoth computer whose so

    • Part of something I posted in 2000 to Doug Engelbart's "Unfinished Revolution II" colloquium touching on corporations as "AIs":
      http://www.dougengelbart.org/c... [dougengelbart.org]

      ========= machine intelligence is already here =========
      I personally think machine evolution is unstoppable, and the best hope for humanity is the noble cowardice of creating refugia and trying, like the duckweed, to create human (and other) life faster than other forces can destroy it.

      Note, I'm not saying machine evolution won't have a human compone
