
Google Researchers Created An Amazing Scene-Rendering AI (arstechnica.com)

Researchers from Google's DeepMind subsidiary have developed deep neural networks that "have a remarkable capacity to understand a scene, represent it in a compact format, and then 'imagine' what the same scene would look like from a perspective the network hasn't seen before," writes Timothy B. Lee via Ars Technica. From the report: A DeepMind team led by Ali Eslami and Danilo Rezende has developed software based on deep neural networks with these same capabilities -- at least for simplified geometric scenes. Given a handful of "snapshots" of a virtual scene, the software -- known as a generative query network (GQN) -- uses a neural network to build a compact mathematical representation of that scene. It then uses that representation to render images of the room from new perspectives -- perspectives the network hasn't seen before.

Under the hood, the GQN is really two different deep neural networks connected together. On the left, the representation network takes in a collection of images representing a scene (together with data about the camera location for each image) and condenses these images down to a compact mathematical representation (essentially a vector of numbers) of the scene as a whole. Then it's the job of the generation network to reverse this process: starting with the vector representing the scene, accepting a camera location as input, and generating an image representing what the scene would look like from that angle. The team used the standard machine learning technique of stochastic gradient descent to iteratively improve the two networks. The software feeds some training images into the network, generates an output image, and then observes how much this image diverged from the expected result. [...] If the output doesn't match the desired image, then the software back-propagates the errors, updating the numerical weights on the thousands of neurons to improve the network's performance.
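To make the two-network structure concrete, here is a minimal sketch of the same idea in Python/PyTorch: a representation network that folds (image, camera) pairs into a single scene vector, a generation network that decodes that vector plus a query camera into an image, and one stochastic-gradient-descent step. The module names, layer sizes, and fully connected layers are illustrative assumptions, not DeepMind's actual architecture; the real GQN uses convolutional encoders and a recurrent latent-variable generator.

# Illustrative sketch of the GQN idea (not DeepMind's code): a representation
# network summarizes (image, camera) pairs into one scene vector, and a
# generation network maps (scene vector, query camera) to a predicted image.
import torch
import torch.nn as nn

class RepresentationNet(nn.Module):
    def __init__(self, repr_dim=256):
        super().__init__()
        # Flattened 32x32 RGB image plus a 7-number camera pose -> scene vector
        self.encoder = nn.Sequential(
            nn.Linear(32 * 32 * 3 + 7, 512), nn.ReLU(),
            nn.Linear(512, repr_dim),
        )

    def forward(self, images, cameras):
        # images: (batch, views, 3072), cameras: (batch, views, 7)
        per_view = self.encoder(torch.cat([images, cameras], dim=-1))
        return per_view.sum(dim=1)  # aggregate all context views into one vector

class GenerationNet(nn.Module):
    def __init__(self, repr_dim=256):
        super().__init__()
        # Scene vector plus a query camera pose -> predicted image
        self.decoder = nn.Sequential(
            nn.Linear(repr_dim + 7, 512), nn.ReLU(),
            nn.Linear(512, 32 * 32 * 3), nn.Sigmoid(),
        )

    def forward(self, scene_vec, query_camera):
        return self.decoder(torch.cat([scene_vec, query_camera], dim=-1))

# One stochastic-gradient-descent step on toy random data.
rep, gen = RepresentationNet(), GenerationNet()
opt = torch.optim.SGD(list(rep.parameters()) + list(gen.parameters()), lr=0.01)

images = torch.rand(8, 3, 32 * 32 * 3)    # 8 scenes, 3 context views each
cameras = torch.rand(8, 3, 7)
query_cam, target = torch.rand(8, 7), torch.rand(8, 32 * 32 * 3)

pred = gen(rep(images, cameras), query_cam)    # render the queried viewpoint
loss = nn.functional.mse_loss(pred, target)    # how far the output image diverges
loss.backward()                                # back-propagate the errors
opt.step()                                     # update the network weights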

Comments:
  • by DogDude ( 805747 ) on Friday June 29, 2018 @08:25PM (#56869052)
    Everything that's called "AI" today is just advanced pattern recognition. I hope that the /. editors quit using the term "AI" so frequently. It's a dumb thing to do for a "news for nerds" web site. You might as well talk about "cyber", if you're going to continue to use "AI" for things that are clearly not "AI".
    • by Pulzar ( 81031 ) on Friday June 29, 2018 @09:22PM (#56869192)

      I don't understand the fixation on the terminology, while ignoring the interesting aspects of what it does.

      The terminology is very well defined in the industry, and accepted by most who participate. Deep neural networks are part of the family of machine learning algorithms (https://en.wikipedia.org/wiki/Deep_learning), which is, in turn, a subset of the field of artificial intelligence (https://en.wikipedia.org/wiki/Machine_learning).

      The key difference from what you call "pattern recognition" is that there is no explicit coding of the algorithm; instead, the algorithm is "learned" through examples.

      Nobody is saying that the machine is intelligent. You'd do yourself good to look past the disagreement with the established terminology and look at the technology itself. You might find it interesting.

      • by Anonymous Coward

        I agree with parent that ML is more than just pattern recognition, but I wouldn't go so far as to say that any type of algorithm was learned.
        Instead, it's more like an adaptive lossy compression + extraction algorithm that may or may not give the results you want, even after training.

        Essentially it works by curve-fitting a sum of shifted (in space and/or time) exponential S-curve terms (which you could think of as CDFs of logistic random variables), and by using feedback (in the engineering sense) to update
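        To make that concrete, here is a toy example of my own (not the GQN; the target function and all sizes are arbitrary assumptions): a handful of shifted, scaled sigmoid terms fit to a curve by gradient descent on the squared error, with the error acting as the feedback signal.

        import torch

        x = torch.linspace(-3, 3, 200).unsqueeze(1)    # inputs
        y = torch.sin(x)                                # arbitrary target curve

        # Parameters of 8 shifted/scaled S-curve (sigmoid) terms
        w = torch.randn(1, 8, requires_grad=True)
        b = torch.randn(8, requires_grad=True)
        a = torch.randn(8, 1, requires_grad=True)

        opt = torch.optim.SGD([w, b, a], lr=0.05)
        for step in range(2000):
            pred = torch.sigmoid(x @ w + b) @ a         # sum of S-curve terms
            loss = ((pred - y) ** 2).mean()             # error signal ("feedback")
            opt.zero_grad()
            loss.backward()                             # propagate error to parameters
            opt.step()                                  # nudge the S-curves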

        • by Pulzar ( 81031 )

          I agree with parent that ML is more than just pattern recognition, but I wouldn't go so far as to say that any type of algorithm was learned.
          Instead, it's more like an adaptive lossy compression + extraction algorithm that may or may not give the results you want, even after training.

          The learning was both in the building of the 3D model and in the learning of the extraction algorithm. The extent to which the algorithm was manually programmed is in the constraints that forced the data compression followed by uncom

        by David_Hart ( 1184661 )

        I don't understand the fixation on the terminology, while ignoring the interesting aspects of what it does.

        The terminology is very well defined in the industry, and accepted by most who participate. Deep neural networks are part of the family of machine learning algorithms (https://en.wikipedia.org/wiki/Deep_learning), which is, in turn, a subset of the field of artificial intelligence (https://en.wikipedia.org/wiki/Machine_learning).

        The key difference from what you call "pattern recognition" is that there is no explicit coding of the algorithm; instead, the algorithm is "learned" through examples.

        Nobody is saying that the machine is intelligent. You'd do yourself good to look past the disagreement with the established terminology and look at the technology itself. You might find it interesting.

        I think that the fixation on the terminology is due to two simple concepts. First, we've seen terminology used as marketing speak for both vaporware and for products that are much more limited than suggested (i.e. the devil is in the details). Second, hardly anyone cares how a particular subset of a field of research defines localized terminology except people within that field. Redefining the term AI to mean less than the general usage seems to be just plain silly. It's like calling an apartment a house

      • I don't understand the fixation on the terminology, while ignoring the interesting aspects of what it does.

        Because it's misleading. This is weak AI, but the researchers never say that when talking to the media (they don't need to). The media misunderstands, and thinks it's strong AI, because that's the only thing they know. Then people read it and think, "Oh no, this AI is going to conquer humanity and enslave us."

        Of course, this AI is not going to do that; it's just weak AI. It has no possibility of evolving into strong AI. So it's worth mentioning, because many people are misled.

        • In this case, the article seems fairly level-headed. How about reserving the pedantry for the cases where it's actually needed?

      • Thank you. The constant "THIS IS NOT AI" crap on Slashdot is getting really, really, really old.
        I can enjoy a good amount of pedantry, but this shit adds absolutely nothing.

      • by swb ( 14022 )

        I think part of the problem is that the "that's not AI" camp uses human-like intelligence as the normative standard for what "AI" should be, without considering whether there might be other models of intelligence that aren't human like.

        I think it's possible that we might create an unusually powerful AI and not realize it because we're stuck in a paradigm that says it has to mimic human behaviors and thought patterns.

        It's almost a kind of cultural bias, like assuming a society has to be organized like ours w

      • So much this!

        Also, there is an astounding number of memes floating around making fun of the fact that A.I. is just a lot of 'if statements'. Well, no shit, Sherlock - it is implemented on a Turing Machine.
        But this has always been the curse of AI: as soon as some level is reached (basically since Deep Blue), the goalposts get shifted way out again.
    • by Ungrounded Lightning ( 62228 ) on Friday June 29, 2018 @09:48PM (#56869264) Journal

      Everything that's called "AI" today is just advanced pattern recognition. I hope that the /. editors quit using the term "AI" so frequently. ...

      For decades "AI was a failure". But that was because intelligence seems to involve a number of different components, and every time AI researchers got one of the components working and useful, somebody gave it a name, stopped calling it AI, and the field of "AI" shrunk to exclude it, leaving only the problems not yet solved.

      It's nice to finally see some of the pieces retain the "AI" label once they're up and running well enough to be impressive.

      Sure it's not the whole of "intelligence". But it's obviously a part of it - or (if not the SAME thing that our brains do), at least a part of something that, once more pieces are added, would be recognized as "intelligence" in a Turing test.

    • Isn't this a major step in AI? I don't mean "we programmed these patterns and it recognizes them", I mean "we kept feeding patterns in until the program recognized patterns it never saw before". Pattern recognition is one of the first things babies learn. Our AIs might be at that stage, but that's still frighteningly impressive.
    • Everything that's called "AI" today is just advanced pattern recognition

      That's what intelligence is all about: the act of recognizing patterns and applying them in different ways. After that, it's just a matter of how complicated the patterns are.

    • Based on the way most people learn concepts from examples and slowly develop abstract rules for them, I wouldn't be surprised if we're mostly a combination of "simple" pattern recognition machine learning algorithms and hard-coded rules that in turn were derived from such algorithms.

    • Not exactly - most of what the press has decided to call AI is machine learning of some form or another - usually either some form of supervised learning or reinforcement learning.

      It's reasonable to characterize some simple supervised learning applications (image recognition, speech recognition) as pattern recognition, but generative models where the program is *doing* something such as generating speech, or playing Go, or, as here, imagining/predicting what a scene would look like from a novel viewpoint, o

      • Not exactly - most of what the press has decided to call AI is machine learning of some form or another

        That's just because ML is a popular way to implement AI. Nearly all AI advances in the last couple of years come from machine learning, so it's not strange that the press calls it AI. If researchers had produced AI using different methods, the press would be calling that AI.

        • Well, my real point is that you can't dismiss all of today's "AI" advances as pattern recognition. Predicting the future consequences of your actions based on past experience (i.e. reinforcement learning) is a lot more than pattern recognition.

          The press no doubt will call things whatever they want in an effort to garner eyeballs and sell advertising, but note that the researchers making these ML/neural net advances themselves call it ML - the AI hype is thanks to the press, not researchers overselling their

    • by HiThere ( 15173 )

      What grounds do you have for believing that intelligence is anything besides advanced (well, quite advanced) pattern recognition?

      Actually, there clearly are a few additional features, but can you identify them?

  • No joke. AI-based monitoring software. Two years ago it was worthless. They just replaced the whole team with it. It's not some knee-jerk thing either. They've been testing it for months and it's more accurate than people. That didn't used to be true. It used to be that if you just ran monitoring scripts, you were asking for trouble. You needed somebody to watch the script. Not anymore.

    This next step here is getting AI to imagine. To think through problems. 20 years from now IT will be gone. The old timer's re
  • by Anonymous Coward

    Amazing! Everything is so blurry, it's so realistic!

  • Can it color B&W movies better than the ludicrous methods used till now?

    • No, and neither can it cook you breakfast... because it's - very specifically - a program to predict what multi-object scenes look like from viewing angles it hasn't seen before.

      FWIW there are now programs (also machine-learning based, and otherwise entirely unrelated to the program being discussed) that do a very good job of colorizing B/W photos.

  • With deep fakes and this kind of "alternate reality" viewpoint - how much longer will it be until we cannot believe a digital image?
