Netflix Uses AI in Its New Codec To Compress Video Scene By Scene (qz.com) 67
An anonymous reader shares a Quartz report: Annoying pauses in your streaming movies are going to become less common, thanks to a new trick Netflix is rolling out. It's using artificial intelligence techniques to analyze each shot in a video and compress it without affecting the image quality, thus reducing the amount of data it uses. The new encoding method is aimed at the growing contingent of viewers in emerging economies who watch video on phones and tablets. "We're allergic to rebuffering," said Todd Yellin, a vice president of innovation at Netflix. "No one wants to be interrupted in the middle of Bojack Horseman or Stranger Things." Yellin hopes the new system, called Dynamic Optimizer, will keep those Netflix binges free of interruption when it's introduced sometime in the next "couple of months." He was demonstrating the system's results at "Netflix House," a mansion in the hills overlooking Barcelona that the company has outfitted for the Mobile World Congress trade show. In one case, the image quality from a 555 kilobits per second (kbps) stream looked identical to one on a data link with half the bandwidth.
Re:What exactly is Netflix doing? (Score:5, Insightful)
What I think is that they devised an algorithm, probably based on neural networks, that is particularly good at estimating the perceived quality of the picture.
This data is then used to adjust the level of compression of each part of the picture, so that the least important parts of the picture get compressed more aggressively to save bitrate for the more important parts.
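The idea in the two comments above can be sketched in a few lines. This is a hypothetical illustration, not Netflix's actual allocator: regions a quality model judges more important get a proportionally larger share of the frame's bit budget. All names and numbers here are invented.

```python
# Hypothetical sketch of perceptually weighted bitrate allocation:
# regions judged more important by some quality model receive a
# larger share of the frame's total bit budget.

def allocate_bits(importance, total_bits):
    """Split total_bits across regions proportionally to importance."""
    total = sum(importance)
    return [total_bits * w / total for w in importance]

# Four regions of a frame; the third (say, a face) matters most.
budget = allocate_bits([1.0, 1.0, 4.0, 2.0], 8000)
```

In a real encoder the "importance" scores would come from the learned perceptual model, and the allocation would feed into per-block quantizer selection rather than a flat split.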
This is nothing new really, the idea of using AI techniques and perceived quality to help with compression is decades old. The interesting part here is that it is done on a commercial scale.
Re:What exactly is Netflix doing? (Score:5, Insightful)
Re: (Score:3)
This may well be DRM, not just over-engineering.
I wonder if Netflix worry about a situation where you can dump a movie to disk by doing a save-state on your VM. (See also: their offline-viewing feature is only available on locked-down devices.)
I've seen Chromecast cache over 3 minutes of Netflix, but I'm fairly sure that's about as high as it gets.
Re: What exactly is Netflix doing? (Score:4, Informative)
Could be, but when AAC came around for audio compression, the interesting concept was that a psychoacoustic model was applied to identify which of (at the time 7) multiple compression model paths would provide the best perceived result for the given audio samples. Better encoders would choose better paths for encoding. So, the best encoders would identify which compression method to use for each audio segment and how long said audio segment should be.
H.264 and H.265 offer a massive amount of opportunities to tweak encoding of macro blocks. There are spatial considerations (block size), temporal considerations, motion considerations, etc... each individual block can be a different type (I, P, B, etc...). Each block can be compressed relative to another block considering time and space. Each block can select a different set of coefficients for frequency identification (DCT for example) as well as gradients (quantization). Each block can be stored for optimal management of loss related to congestion (NAL).
To be fair, if I covered every possible case in H.265, I can be here a long time.
I used to write encoders for these standards and I would often target optimal allocation of bitrate relative to PSNR and SSIM. These are great metrics for attempting to model optimal quality following decompression. Unfortunately, I was too early to also optimize bitrate allocation for perceived quality in specific areas of interest, which is something we can do today by applying computer vision modules that simulate what is likely to be most interesting to humans and draw their attention. For example, in "Back to the Future", when watching the scene where Doc types the date into the car computer, a computer can now identify that a human would most likely be drawn to watch the LED digits most closely. So, allocating a greater bitrate there would be better than spending it on Doc's fingers and head movement.
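For readers unfamiliar with the PSNR metric mentioned above, here is a minimal stdlib-only sketch of how it's computed between a reference and a decoded signal. The sample pixel values are invented for illustration.

```python
import math

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists.
    Higher is better; identical signals give infinity."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / mse)

# Reference pixels vs. a lightly degraded decode (made-up values).
ref = [52, 55, 61, 59, 70, 61, 76, 61]
deg = [50, 56, 60, 60, 68, 62, 75, 63]
```

SSIM is more involved (it compares local luminance, contrast, and structure over windows), but the bitrate-allocation idea is the same: spend bits where the metric says the loss would hurt.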
Modern machine learning methods such as those used by Google to recognize a red dress in a photo and catalog it appropriately can easily be used for this type of encoding process and as Audun Mattias Øygard has been publishing on his blog, these algorithms are public and well known today.
I experimented with this technology for identifying optimal image compression some time back. The tech wasn't ready yet.
Bitrate allocation relative to areas of interest could easily allow for 50% bitrate savings. As the tech improves, I could see a great deal more especially in the area of quantization.
Now, if we used more AI for CABAC and CAVLC to precondition the dictionaries, we may have a great model for H.266.
Psychovisuals (Score:2)
Yup, it's a psychovisual model.
As has been used in video compression for quite some time.
There is a primary source link [netflix.com] mentioned elsewhere in this thread.
The novelty is that this one uses machine learning (an SVM, according to the source).
(As opposed to the older psychoacoustic models used in compressing MP3, Vorbis, etc., which were based on explicit rules, such as "a loud beat from a drum will mask whatever was playing the main melody".)
This one learns automatically based on crowd-sourced quality evaluations.
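The Netflix source describes fitting an SVM to crowd-sourced quality scores; as a stdlib-only stand-in, this sketch fits a one-feature linear model (bitrate vs. mean opinion score) by least squares. The data points are invented for illustration.

```python
# Toy version of "learn quality from viewer ratings": fit a line
# mapping bitrate (kbps) to mean opinion score.  A real system would
# use many features and a proper model (the source mentions an SVM).

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

bitrates = [250, 500, 1000, 2000]   # invented test encodes
scores = [2.0, 3.0, 4.0, 4.5]       # invented viewer ratings (1-5)
slope, intercept = fit_line(bitrates, scores)
```

Once such a model predicts ratings well enough, it can replace the human panel for rapid iteration, which is the point the later "For all the people saying this isn't AI" comment makes.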
Re: (Score:1)
These links are also relevant:
http://techblog.netflix.com/20... [netflix.com]
http://techblog.netflix.com/20... [netflix.com]
Re: (Score:1, Interesting)
My guess is it's the codec equivalent of "See that tree in the background? Yeah, that's going to be there for a while so just draw it once and leave it there until we tell you otherwise and we'll only send you the data about the stuff that's actually moving."
Comment removed (Score:5, Interesting)
Re: (Score:1)
That's what modern video compression already does. What is new here is they have an AI that makes a determination on what constitutes a noticeable loss in video quality and then seems to use that for more intelligently chosen keyframes (ie, when the "this bit is going to stay here for a while" is declared) and perhaps providing metadata on how far you need to buffer to get a "scene" so that they know when it may be safe to stutter for a second if needed without being noticed.
Re: (Score:2)
It sounds like they've basically just written a better psychovisual engine for driving variable bitrate encoding. Since their work was done in concert with some universities, it's possible that we could see it make its way into x264 and x265 in the future, if they publish their work.
Re: (Score:2)
Optical, POTS to optical, coax. Is the national internet heavily shared and slow most nights?
Once that regional data is sorted, movies can be made ready for local conditions. What is the actual connection from an ISP to the user when movies are being requested?
Is the connection good, but the bandwidth shared with a lot of other providers due to cost or other per-nation issues?
Once local conditions are finally understood, a movie c
AI? (Score:3, Insightful)
Why are they calling it AI? That's silly.
It's just an improved encoding scheme with better algorithms.
Nothing new to see here. We've been improving video encoding schemes since we started encoding video.
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
In addition to AI being trendy, there are robots, specifically the idea that robots are going to replace jobs.
Machines and automation have been replacing jobs since the dawn of the Industrial Revolution, and the pace has only accelerated since the invention of the microprocessor.
For some reason people seem to think that there's a new phenomenon where Lt. Commander Data and Bender are going to be replacing jobs.
Re: (Score:3, Informative)
Re:AI? (Score:4, Funny)
Let's call it : Algorithms Intelligence. :)
Re: (Score:2)
Well, no, that is not silly at all. Modern video codecs are in fact a toolbox containing different techniques, each better suited to a particular type of scene:
- action scenes would favor motion over definition
- still sequences
- sequences where only part of the scene is moving.
Using a deep neural network to
1- identify which type of scene and adjust codec settings on the fly
2- compare the rendering to the original uncompressed version
3- readjust if necessary and learn from the situation
would be a breakthrough.
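The three-step loop described above can be sketched as a closed-loop encoder that refines its settings until a quality target is met. Everything here (the fake encode, the crude quality score, the quantizer ladder) is invented for illustration:

```python
# Sketch of the classify/encode/measure/readjust loop from the comment
# above.  The "encoder" is a fake: coarser quantization loses more detail.

def encode(frame, quant):
    """Fake lossy encode: snap each pixel to a multiple of quant."""
    return [round(v / quant) * quant for v in frame]

def quality(original, decoded):
    """Crude quality score in (0, 1]: 1 / (1 + mean absolute error)."""
    err = sum(abs(a - b) for a, b in zip(original, decoded)) / len(original)
    return 1 / (1 + err)

def encode_to_target(frame, target=0.5):
    """Start coarse, refine the quantizer until quality is acceptable."""
    for quant in (32, 16, 8, 4, 2, 1):
        out = encode(frame, quant)
        if quality(frame, out) >= target:
            return quant, out
    return 1, frame

frame = [17, 130, 92, 200, 45, 61]   # made-up pixel row
quant, out = encode_to_target(frame)
```

The learned part would replace the hand-written `quality()` with a model trained to match human judgments, so the loop converges on what viewers actually notice rather than on raw pixel error.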
Re: (Score:2)
Most of which is already done and none of which requires AI.
Re: (Score:2)
The improved algorithms are being driven by machine learning. They trained it to recognise when a scene needs a higher bitrate to look good, so that humans don't have to guide the encoder.
I don't necessarily agree that all machine learning or neural networks merit the term "AI", but nor do I class software that must be trained on a dataset to function as an "algorithm", except in the broadest sense.
It's just (Score:4, Funny)
You say "AI"? (Score:4, Insightful)
I don't think that word means what you think it means...
GPU Transcoders (Score:2)
VBR isn't rocket science, and not new. Great that they're using it. GPU transcoding is really helping these days.
Re: (Score:2)
Exactly. DISH Network and DirecTV have been using this since the early 1990s and big-dish C-Band has used it even longer.
I fail to see what is new here, and, from what I understand, I'm even more surprised that Netflix wasn't aware of this technology from the very beginning.
Re: (Score:2)
VBR within a frame is relatively new or not talked about much if it isn't. This is talking about which macroblocks to give more bandwidth to based on their content and relative importance within the frame.
For all the people saying this isn't AI (Score:5, Informative)
Netflix got around the problem by using machine learning to teach a computer when video quality looked good. They had a bunch of people watch videos with different compression and rate the quality, then told the AI that their ratings were gospel. It then analyzed the different videos and decided for itself which features were associated with good quality. Once the computer was generating the same video ratings as people, they had a rapid way to do A/B testing. That allowed them to optimize their compression algorithm in much less time than using humans to rate video quality.
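The A/B-testing speedup described above boils down to: once a model predicts viewer ratings, comparing two candidate encodes is instant. A stdlib sketch with invented features and weights (not Netflix's actual model):

```python
# Sketch of using a learned quality proxy for fast A/B testing.
# The feature names and weights are invented stand-ins for whatever
# the trained model actually learned from viewer ratings.

def proxy_score(features):
    """Predict a viewer rating from encode features."""
    weights = {"sharpness": 2.0, "blockiness": -3.0}
    return sum(weights[k] * v for k, v in features.items())

def ab_test(encode_a, encode_b):
    """Pick the encode the proxy model predicts viewers would prefer."""
    return "A" if proxy_score(encode_a) >= proxy_score(encode_b) else "B"

a = {"sharpness": 0.8, "blockiness": 0.3}   # sharper but blockier
b = {"sharpness": 0.7, "blockiness": 0.1}   # softer but cleaner
```

With humans in the loop, each comparison costs a viewing panel; with a trusted proxy, the encoder can search thousands of setting combinations per title.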
I'm not sure why Summary links to some popular news article which talks in general about Netflix using AI, instead of linking to the actual Netflix page describing exactly what they did [netflix.com]. This used to be the sort of technical detail you'd expect from slashdot submissions.
Re: (Score:2)
http://s2.quickmeme.com/img/e8... [quickmeme.com]
Re: (Score:3)
This is why /. should allow some comments to become 'top comment'.
Yours is more valuable than the submission, TFA and all other comments combined.
Re: (Score:2)
i wonder if they took into account viewing medium. i have a hard time comparing video details on a smaller screen vs a larger screen.
what looks good on a monitor/tv looks bad on a 160" 4k projector... to me
netflix does a decent job mostly though.
Call it what you want... (Score:2)
Call it anything you want: "Netflix uses bagels to compress video" I don't really care. I just wish they would take a closer look at the darkest parts of a scene and stop compressing the hell out of them. Visible gradients ruin every single scene, every time.
Re: (Score:3)
Maybe your display brightness/contrast settings are wrong?
Re: (Score:2)
I view streaming content on a variety of devices off of a perfectly acceptable cable internet connection and I still see the compression, but the worst of it is seen on the "main" family TV. Netflix offers the best experience (followed by Amazon Video, followed by the truly horrific Google Play), but it's still there.
I fully admit that I am not a hardcore video guy and not obsessed with tweaking a bunch of TV settings so there is indeed room to make adjustments. That said, I'm very happy with up-scaled DV
Re: (Score:2)
Maybe their new CODEC guided by human votes will use more bandwidth for dark scenes from now on.
Re: (Score:2)
they do a better job on their 4K stuff for sure, still no comparison to 4k media. i wonder if they compress based on BW availability.
i beg to differ... (Score:1)
"No one wants to be interrupted in the middle of Bojack Horseman or Stranger Things."
Actually, if I am ever watching Bojack Horseman... interrupt me any way possible. Use bullets if necessary.
Netflix Could Have Been Good (Score:1, Redundant)
Re: (Score:1)
Re: (Score:1)
Why the fuck do you post in monospace?
The Future of Netflix (Score:3)
Compression won't solve buffering. (Score:2)
Except in edge cases, videos don't stutter because they take slightly more bandwidth than you have available. They stutter because the buffers aren't deep enough to overcome network jank, and my understanding is that streaming providers use shallow buffers for content-protection reasons (it's not like you're going to suddenly switch streams 45 minutes into a movie).
Put another way, the difference between a 500 kbps stream and a 250 kbps stream isn't going to improve your rebuffering experience on a link wi
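The parent's claim, that stalls come from shallow buffers meeting network jitter rather than from average bitrate, can be shown with a toy playback simulation. The arrival pattern and units are invented:

```python
# Toy playback simulation: each tick the player wants 1 unit of video;
# delivery is bursty (same average rate, uneven timing).  Whether we
# stall depends on how much was buffered before playback started.

def rebuffer_events(arrivals, buffer_start):
    """Count ticks where the buffer ran dry (a visible stall)."""
    buffered, stalls = buffer_start, 0
    for got in arrivals:
        buffered += got
        if buffered >= 1:
            buffered -= 1        # play one unit
        else:
            stalls += 1          # nothing to play: rebuffer
    return stalls

bursty = [2, 0, 0, 2, 0, 2, 0, 2]   # average 1 unit/tick, but janky
```

With `buffer_start=0` the dry spell at tick 3 causes a stall; pre-buffering just 2 units absorbs the same jank entirely, without touching the bitrate.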
Re: (Score:2)
In any event, it makes missing a bit of dialog a frustrating experience -- I'd love a "skip back ten seconds and turn on subtitles temporarily" button, with all the content already buffered...
It's a kind of censoring (Score:2)
More marketingspeak. (Score:2)
>> compress it without affecting the image quality,
If the compression used is in any way lossy, affecting image quality is by definition inevitable.
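The point above in miniature: any quantization step greater than 1 maps distinct inputs to the same output, so the original can never always be recovered. A tiny stdlib demonstration with invented pixel values:

```python
# Lossy by definition: quantization collapses nearby values together.

def quantize(values, step):
    """Snap each value to the nearest multiple of step."""
    return [round(v / step) * step for v in values]

pixels = [100, 101, 102, 103]   # four distinct input values
coded = quantize(pixels, 4)     # fewer distinct output values
```

What a perceptual model can do is steer *where* that inevitable loss lands, so it falls in the parts of the image viewers notice least, not eliminate the loss itself.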
Please stop calling simple algorithms AI (Score:1)
Buffer Interruptus (Score:3)
"We're allergic to rebuffering," said Todd Yellin, a vice president of innovation at Netflix. "No one wants to be interrupted in the middle of Bojack Horseman or Stranger Things."
Or porn. "Yes, yes, yes..." (buffering ...) [ Nooooooooooooooo.... ]