
AI Is in a 'Golden Age' and Solving Problems That Were Once Sci-fi, Amazon CEO Says (yahoo.com) 99

An anonymous reader writes: Artificial intelligence (AI) development has seen an "amazing renaissance" and is beginning to solve problems that were once seen as science fiction, according to Amazon CEO Jeff Bezos. Machine learning, machine vision, and natural language processing are all strands of AI that are being developed by technology giants such as Amazon, Alphabet's Google, and Facebook for various uses. These AI developments were praised by the Amazon founder. "It is a renaissance, it is a golden age," Bezos told an audience at the Internet Association's annual gala last week. "We are now solving problems with machine learning and artificial intelligence that were in the realm of science fiction for the last several decades. And natural language understanding, machine vision problems, it really is an amazing renaissance." Bezos called AI an "enabling layer" that will "improve every business."

Comments Filter:
  • Kindly do the needful and rephrase the headline as a question.

  • In order to really take off, AI needs hardware improvements. Right now, most of it runs on GPUs, and requires lots of them. GPUs weren't really made for that task and there is potential for efficiency gains. Things like the Google TPU are delivering much better performance per watt, but sadly, right now Google keeps the TPU to itself, not making it available to anyone else. Sort of reminds me of that bitcoin ASIC manufacturer that ran the ASICs its customers had ordered for itself for a while before

    • Re:AI hardware (Score:5, Interesting)

      by Tough Love ( 215404 ) on Monday May 08, 2017 @06:15PM (#54381041)

      Right now, most of it runs on GPUs, and requires lots of them. GPUs weren't really made for that task and there is potential for efficiency gains.

      No, GPUs were made for a very similar task: parallel image processing. It is similar in the same way that your visual cortex, used for input, is arranged much as a GPU is, for output. The surface of the potential efficiency gains from using GPUs better has barely been scratched. Sure, some special-purpose code like s-expression evaluation can be accelerated by an ASIC, but it would be a serious mistake to assume that is all there is to AI.
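
      Concretely, the mapping works because a neural-network layer is just a big batched matrix multiply, the same dense, data-parallel arithmetic a GPU already does for pixels. A minimal sketch, assuming PyTorch and an optional CUDA device (purely illustrative, not anything from the story):

        import torch

        # One fully connected layer over a batch of inputs: exactly the kind of
        # dense, data-parallel arithmetic GPUs were built to do for images.
        device = "cuda" if torch.cuda.is_available() else "cpu"

        x = torch.randn(256, 4096, device=device)   # batch of activations
        w = torch.randn(4096, 4096, device=device)  # layer weights

        y = torch.relu(x @ w)  # a matrix multiply plus a nonlinearity
        print(y.shape, device)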

  • by sethstorm ( 512897 ) on Monday May 08, 2017 @05:05PM (#54380513) Homepage

    For most of us, AI is trouble ahead that needs to be stopped and slowed down.

    Perhaps Mr. Bezos should read a bit of Herbert's Dune to know what happens when you let technological progress go unchecked. The end result is worse than it would be if humanity were included.

    • by gweihir ( 88907 )

      There is no risk of that happening anytime soon. At this time, it is entirely unclear whether "thinking machines" are even possible in this universe.

      • The key takeaway is not whether we'll have thinking machines, but the consequences of unchecked automation.

        • by Anonymous Coward

          Go live with the fucking Amish if you're so goddamn worried about machines (like the fucking computer you're on) making your life too easy. And while you're at it, why not throw a spanner into an automated loom?

        • by gweihir ( 88907 )

          Well, yes. And what mechanisms society comes up with to keep itself going and to make sure enough wealth is shared to keep society stable. It will be an interesting next few decades. There are just too many people that can be replaced by automation job-wise for this to not be a huge change.

      • by Anonymous Coward

        At this time, it is entirely unclear whether "thinking machines" are even possible in this universe.

        Implying that human brains are not in fact "thinking machines"?

        I suppose I'll agree with that.

        • by Hartree ( 191324 )

          "I suppose I'll agree with that."

          It doesn't matter if you think that or not. It only matters what the AI running you as a simulation thinks.

      • by Xest ( 935314 ) on Tuesday May 09, 2017 @03:55AM (#54382881)

        "At this time, it is entirely unclear whether "thinking machines" are even possible in this universe."

        Um, what do you think we as humans are? Magic?

        • Negative, I am a meat popsicle.

        • by gweihir ( 88907 )

          Find an explanation for consciousness in physics and then come back to me. And no, "it is an illusion" does not count, because that one is self-referential.

          • by Xest ( 935314 )

            I think you're conflating separate issues: a definition of consciousness is quite different from whether the universe physically supports it. The latter is obviously true, because we're conscious, and if we're conscious, then so could machines be, because there's nothing magical about us; we just don't have sufficiently advanced technology to implement it yet.

            • by gweihir ( 88907 )

              And you have probably missed where your reasoning assumes an invalid ground truth. Basically, you assume that because everything must be physical, so must be whatever thinks and is conscious. And that is a scientific fail. Science makes no such claim. The only valid scientific claim is that there are interface observations of thinking and conscious entities in this universe; everything else is open.

              BTW, your assumption is called "physicalism" and it is a variant of religion, mostly because it assumes some "

              • by Xest ( 935314 )

                Okay - I didn't realise you were the god-botherer type who believes in the unprovable. Had I known that, I wouldn't have wasted my time. Keep believing what you want to believe; it has no relevance to actual scientific discussion, though, and it doesn't change the fact that you're completely wrong.

      • At this time, it is entirely unclear whether "something that is entirely undefined" is even possible in this universe.

        FTFY.
        And no, you have to define something (sans hand-waving generalities) before you can even consider whether it will ever exist.

    • by Anonymous Coward

      First of all, AI is nowhere near the level he seems to think. If "golden age" is referring to "the earliest age" then sure. If it's referring to the best, absolutely not.

      Second, stopping technology is never the answer. We just have to manage resources differently than we have in the past. "Redistribution of wealth" should not be seen as a bad thing when it's a mathematical certainty that wealth will automatically concentrate in the hands of a few people. There's literally no way around it

    • Though I wouldn't mind riding a sandworm, I do think our technology is evolving faster than our ethics, and it is causing a ton of problems that most only see when living outside the grip of social Darwinism. I've always seen Facebook as a huge problem, but combine AI with quantum computing and we are all data cattle, forced into outrageously priced colleges that plunge us all into more debt. Those with an IQ of less than 115 won't be able to take full advantage of capitalistic gain unless they know someone.
    • Which Herbert do you mean? Because Frank Herbert never clearly stated what happened with the thinking machines, and in Brian Herbert's shitty books the AIs go out of their way to be as sadistic as possible, which never made sense to me - it all sounded like a propaganda piece. Besides, even there the actual cause of all that was a bunch of cyborgs who became enforcers for that particular AI, not the AI itself.

      For most of us, AI is trouble ahead that needs to be stopped and slowed down.

      Perhaps Mr. Bezos should read a bit of Herbert's Dune to know what happens when you let technological progress go unchecked. The end result is worse than it would be if humanity were included.

      Progress is checked, just for you, mind.

      There's the quantum mechanical uncertainty principle. One of the problems is being unable to tell exactly both the momentum and the position of a particle.

      Then there's the speed of light capping everything, like a salary cap in the NHL. Can't buy yourself a Stanley Cup.

      To top it off, dark energy is making the universe expand. One day, we'll just be cold and alone. Isn't that something you want technologists to get working on posthaste?

      Without progress, we'd just be protoplasm.

    • 'Dune' did precisely nothing to show that. It assumed that bad things had happened already, that a war had been fought long ago, and that the outcome was a veto on AIs. When the story begins, the status quo is that AIs are outlawed, but we are never shown why, or what would happen if they existed. The lack of AIs isn't even central to the plot; it is just a convenient excuse to combine starships with people who live in caves and fight with swords in one story.

      Maybe he should play Mass Effect instead. Although

  • by phantomfive ( 622387 ) on Monday May 08, 2017 @05:05PM (#54380515) Journal
    Once again, someone is talking about soft AI, and the reporter interprets it as hard AI, and mass confusion results. Expect follow-up stories about how AI will take over the world.
    • Re: (Score:3, Interesting)

      by Cipheron ( 4934805 )
      Soft AI is still capable of building deathbots and the like, but it would lack consciousness. You can't reason with an anthill. Soft AI could well be more dangerous than hard AI.
    • by gweihir ( 88907 )

      Indeed. It seems tech reporters are severely lacking in real intelligence when it comes to reporting on AI.

    • Thanks to machine learning and A.I., voice assistants can accurately convert your speech into text. Unfortunately, they are still dumber than dirt and have trouble COMPREHENDING things that even a 2-year-old could. It's not artificial intelligence, it's artificial stupidity.
      • by Kjella ( 173770 )

        Thanks to machine learning and A.I., voice assistants can accurately convert your speech into text. Unfortunately, they are still dumber than dirt and have trouble COMPREHENDING things that even a 2-year-old could. It's not artificial intelligence, it's artificial stupidity.

        But is that the fault of the machine or the people programming it? We barely have the faintest idea of how to extract what the brain does and put it in a computer; deciphering it is still cutting-edge, Nobel-prize-worthy research. But a two-year-old doesn't go to class or have it zapped into their brain; they learn. And when we can hardly explain how we do it, we certainly can't explain how we learned it or what enabled us to learn it. They say the brain is super powerful, but with petaflops of CPU processing

  • This might be the Stone Age, or at the most the Bronze Age, but we are nowhere near the Golden Age or the Renaissance.

  • The golden age of AI was in the early 1980s, when I read all about it in Byte magazine.
  • ... it would actually be exciting. Unfortunately, it is complete and unmitigated bullshit. There still is zero understanding in machines and the only form of "AI" we have is weak AI, i.e. the "AI" that actually has no intelligence and can only fake it in very limited circumstances. Properly, this is called "automation" and anybody thinking of a mechanical process is right on the mark.

  • This is just more hype, ignore it.
  • His assertions stand in stark contrast to what I've heard about developing for Alexa. As I understand it, it performs the speech-to-text translation for you, but when it comes to parsing the text and interpreting what it means, you're on your own.
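
    For concreteness, here is a minimal sketch of where the skill author takes over once speech has become text. It assumes a plain AWS Lambda handler and the standard Alexa Skills Kit request/response JSON; the intent and slot names are invented for illustration:

      def lambda_handler(event, context):
          request = event["request"]

          if request["type"] == "IntentRequest":
              intent = request["intent"]
              if intent["name"] == "OrderPizzaIntent":
                  # Alexa has already turned speech into text and matched an intent,
                  # but deciding what the slot value actually means is your problem.
                  size = intent.get("slots", {}).get("Size", {}).get("value", "medium")
                  speech = f"Ordering a {size} pizza."
              else:
                  speech = "Sorry, I didn't understand that."
          else:
              speech = "Welcome to the demo skill."

          return {
              "version": "1.0",
              "response": {
                  "outputSpeech": {"type": "PlainText", "text": speech},
                  "shouldEndSession": True,
              },
          }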

  • than the same claims of the '70s and '80s... the first being that you're only as good as the data you provide, and the second that the OOP models used to do deep learning basically plateaued no matter how much compute power you threw at them.

    The more current models for deep learning seem to scale much better, and the sheer amount of data collection (not to mention data storage/cost) is also why you are seeing people so jazzed about this.

    Here is the rub, though... you still need the "right kind" of data to correctly train today's d

  • People keep referring to data mining and algorithms as artificial [human] intelligence. It ain't. We are nowhere near that point.
  • What is being lauded as A.I. is not the sort of A.I. described in most S.F. books and Bezos knows it. He has a product to sell so that's why he's embiggening A.I.

  • In another posting, people were complaining about how hard it is to write skills for the Amazon Echo when you try to write the skill so it can catch every possible phrase spoken. They concluded, "this is not A.I." Well, technically, A.I. is a massively complex decision tree, so, yes, it is A.I.

    On that note, Amazon Lex is one of the services you should be looking at if you want to know more about A.I. on AWS; it lets you create artificial conversations without writing out every possible phrase.
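
    As a toy illustration of that difference (purely illustrative, not how Lex is actually implemented): enumerating exact phrases breaks on any wording you didn't anticipate, while scoring an utterance against a few sample utterances per intent degrades more gracefully.

      # Brittle approach: enumerate every phrase you expect to hear.
      EXACT_PHRASES = {
          "turn on the lights": "LightsOn",
          "lights on please": "LightsOn",
          "switch the lights on": "LightsOn",
      }

      # Intent-style approach: score free-form text against sample utterances.
      SAMPLES = {
          "LightsOn": ["turn on the lights", "switch the lights on"],
          "LightsOff": ["turn off the lights", "switch the lights off"],
      }

      def classify(utterance):
          words = set(utterance.lower().split())
          best_intent, best_score = "Unknown", 0
          for intent, samples in SAMPLES.items():
              for sample in samples:
                  score = len(words & set(sample.split()))
                  if score > best_score:
                      best_intent, best_score = intent, score
          return best_intent

      print(EXACT_PHRASES.get("please turn the lights on", "Unknown"))  # -> Unknown
      print(classify("please turn the lights on"))                      # -> LightsOn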
