Will Compression Be Machine Learning's Killer App? (petewarden.com) 59

Pete Warden, an engineer and CTO of Jetpac, writes: When I talk to people about machine learning on phones and devices I often get asked "What's the killer application?" I have a lot of different answers, everything from voice interfaces to entirely new ways of using sensor data, but the one I'm most excited about in the near term is compression. Despite being fairly well-known in the research community, this seems to surprise a lot of people, so I wanted to share some of my personal thoughts on why I see compression as so promising.

I was reminded of this whole area when I came across an OSDI paper on "Neural Adaptive Content-aware Internet Video Delivery". The summary is that by using neural networks they're able to improve a quality-of-experience metric by 43% if they keep the bandwidth the same, or alternatively reduce the bandwidth by 17% while preserving the perceived quality. There have also been other papers in a similar vein, such as this one on generative compression [PDF], or adaptive image compression. They all show impressive results, so why don't we hear more about compression as a machine learning application?

All of these approaches require comparatively large neural networks, and the amount of arithmetic needed scales with the number of pixels. This means large images or video with high frames-per-second can require more computing power than current phones and similar devices have available. Most CPUs can only practically handle tens of billions of arithmetic operations per second, and running ML compression on HD video could easily require ten times that. The good news is that there are hardware solutions, like the Edge TPU amongst others, that offer the promise of much more compute being available in the future. I'm hopeful that we'll be able to apply these resources to all sorts of compression problems, from video and images to audio, and even more imaginative approaches.
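The scaling claim above can be sanity-checked with a back-of-the-envelope calculation. The resolution and frame rate below are ordinary 1080p/30fps figures; the per-pixel cost is a purely hypothetical round number, not a measurement of any particular network:

```python
# Rough estimate of the arithmetic a neural codec would need for HD video.
# ops_per_pixel is an assumed figure for illustration only.
width, height, fps = 1920, 1080, 30   # 1080p at 30 frames per second
ops_per_pixel = 1000                  # assumed multiply-adds per pixel

ops_per_second = width * height * fps * ops_per_pixel
print(f"{ops_per_second / 1e9:.0f} billion ops/sec")  # -> 62 billion ops/sec
```

Even with that modest per-pixel budget, the total already lands above the "tens of billions of operations per second" that the article says current CPUs can practically sustain.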

  • by Ecuador ( 740021 ) on Wednesday October 17, 2018 @09:54AM (#57492152) Homepage

    I thought the Pied Piper platform already uses ML to improve compression, right?

  • by Anonymous Coward

    Perhaps you should get a patent for this vague description of a mathematical expression. We have learned that, with the right Supreme Court justices, you can circumvent established case law.

    • Say what now (Score:3, Interesting)

      by SuperKendall ( 25149 )

      I think you are more than a little confused if you think the Supreme Court has anything to do with patents...

      This is what partisanship does to your brain, kids. Don't be partisan, learn how systems actually work, and offer thoughtful critiques instead of running around in a blind panic saying things that are outright wrong everywhere you go.

      • Wow, you didn't know SCOTUS has anything to do with anything? Yikes. Imagine how long you've been on the internet, too. And yet.

        • Wow, you didn't know SCOTUS has anything to do with anything?

          Here's how that read to normal or informed people:

          "Yarble blargh FRONOB FRONNOB FORNNOB!!!!!!!!!!!!!!!!"

          Yeah, keep working at it man. Someday you'll be intelligible, even if still ignorant of actual law.

          I'll let you have the last response because FORMNOB!

          • Here's how that read to normal or informed people:

            That's a nasty type mismatch error. Are you saying that compiles for you, or do you just not understand your own words? That would certainly be average, or informed to the average level. But yeah.

            Protip: People who don't understand the basic vocabulary of the subject... are not well informed. And no, I really do not care if people who wear their base ignorance on their sleeve can understand my words. I don't type it out for them, I type it out for people who matter. It doesn't bother me if it is only two or

  • Re: (Score:2, Interesting)

    Comment removed based on user account deletion
  • NB (Score:2, Informative)

    A quick remark: this could theoretically work for content delivery, but it will be unsuitable for video archival, where picture fidelity and lossless transfer are paramount.
  • So the article claims. Where is it improving rapidly? I interact with the Google Assistant and Alexa on a regular basis, and they seem to be just as limited and non-discerning as they have always been. I still have to speak slowly to them, while articulating carefully. And it still is the case that it does not take much in the way of background sound to throw them out of kilter. Thus, where is voice recognition improving rapidly?
    • To rival humans at voice recognition, these assistants would need to do at least two things that actual humans already do:

      1: Constantly listen in on your life (and possibly watch it with a camera), so that it can maintain a real-time context to interpret any ambiguous verbal information. Waking up the assistant only after it hears its name is not sufficient because without context you need to be extremely clear to establish what you're talking about. I've noticed that even talking to people, when you switc

    • I still have to speak slowly to them, while articulating carefully.
      You make the same mistake Newton testers made.

      You try to adapt till the machine "understands" you ... but the machine is actually trying to understand you without adapting. With your "articulate carefully" (and changing that articulation daily) you are just a moving target for the machine.

  • by QuietLagoon ( 813062 ) on Wednesday October 17, 2018 @10:16AM (#57492334)
    Preserving perceived quality by which metrics? Comcast recently moved to downgrading the quality of the HD cable programming it provides. Some people see a significant problem with the downgraded video, especially during action video such as sports events. Yet Comcast says that "the perceived quality is the same." So I ask again, how is "perceived quality" going to be measured? And by whom? By those who want to push out the new technology for monetary gain, or by those who are subject to the inferior results of the new technology?
    • by Matheus ( 586080 )

      Reality of the business world: Yes of course they are going to try to deliver less for the same or more $. Capitalism blah blah blah...

      That being said: Although the article uses the phrase "perceived quality" they aren't necessarily differentiating between that and "real" quality. The sentences work just fine when the compression is lossLESS meaning they can accomplish that increase in quality (resolution / bit depth) at same bandwidth or reduce bandwidth for same quality by that definition.

      That doesn't mea

    • by antdude ( 79039 )

      So, what did Comcast say after those?

  • Options (Score:4, Interesting)

    by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Wednesday October 17, 2018 @10:57AM (#57492638) Homepage Journal

    1. Caching.

    Remember, all that matters is the bandwidth of the most constrained point you traverse. And that's typically going to be near the servers, as point-to-point connections are highly inefficient.

    If you place frequently-accessed content downstream, much nearer the recipient, you eliminate the need to use any of the pipes along the constricted regions.

    Experiments I did on this in the 1990s, when the problem was at its worst, showed that you could get a 60-fold improvement in quality of experience, well beyond anything compression can achieve.

    2. Multicast

    Most people are used to a few seconds' delay before streaming starts. If N people request the same content over a 5-second period, then delaying the first person by 5 seconds won't be perceived as abnormal. You're now transmitting one copy per path. This is an ideal way to populate the aforementioned caches; it would be useful for server-based content only if no caches exist.
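    The batching idea above can be sketched in a few lines. This is a toy model: the 5-second window comes from the comment, while the function name and request format are hypothetical.

```python
# Sketch of request coalescing for multicast: requests for the same
# content arriving within one batching window share a single transmission.
WINDOW = 5.0  # seconds, per the comment's example

def batch_requests(requests):
    """requests: list of (arrival_time, content_id) tuples.
    Returns the number of transmissions needed when same-content
    requests inside an open window are coalesced into one."""
    transmissions = 0
    window_start = {}  # content_id -> start time of its open window
    for t, content in sorted(requests):
        if content not in window_start or t - window_start[content] >= WINDOW:
            transmissions += 1          # start a new transmission/window
            window_start[content] = t
    return transmissions

reqs = [(0.0, "video-a"), (2.0, "video-a"), (4.9, "video-a"), (6.0, "video-a")]
print(batch_requests(reqs))  # -> 2: one batch at t=0, a fresh one at t=6
```

    Four requests collapse into two transmissions here; as N grows within a window, the savings approach N-to-1 per constrained path.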

    3. Fractal compression

    Technically, wavelet. Used in the BBC's Dirac codec (via the Schrodinger implementation). Produces far better compression than typical codecs.

    4. Better pipes

    Most of the rest of the world is already operating at bandwidths between 100 and 10,000 times those common in the U.S., while U.S. cable companies have either prosecuted those offering higher speeds or driven heavy equipment through their cables. It is time to stop accepting this as the cost of doing business.

    The U.S. should mandate 50 Gbps to the home, the highest speed available elsewhere on a large scale. If you're going to be the best, you have to be the best. The U.S. should not be an also-ran. A fat tree is impossible at those speeds. Realistically, I doubt you could get a block to handle more than 200 Gbps. Which is adequate.

    Tier 1 is fine, just have a mesh network. Current SDM transmission rates are 111 Tbps, which means you can support 555 blocks of houses off a single seven-core cable. GMING can supply the info on how you'd actually get that to work on a metro level.
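    The 555-block figure follows directly from the numbers the comment gives (taking the units as stated, with no allowance for overhead):

```python
# The comment's arithmetic: 111 Tbps of trunk capacity divided among
# blocks of houses provisioned at 200 Gbps each.
trunk_tbps = 111
block_gbps = 200
blocks = trunk_tbps * 1000 // block_gbps  # 1 Tbps = 1000 Gbps
print(blocks)  # -> 555
```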

    You're really not going to need to do a whole lot of compression at those speeds.

    • The problem with upstream caching is that it is a type of man-in-the-middle attack, and there is a strong tendency to abuse it.

    Compression is good for various other reasons, like storage; there is a trade-off between CPU and network bandwidth between sender and receiver. It's related to trading code versus data (generator vs. look-up table). If you can have a small algorithm to generate the nth prime number, you don't have to carry around a huge table of primes.
      For various applications the inter-node data traversal is the costly operation (fancy eg an intelligence in a far away star wants to send information, it's best it compresses it an
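      The generator-versus-table trade-off above can be made concrete. A minimal sketch (the function name and trial-division approach are illustrative, not from the comment):

```python
# Code versus data: a few lines of algorithm stand in for an
# arbitrarily large look-up table of primes.
def nth_prime(n):
    """Return the nth prime (1-indexed) by simple trial division."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        # prime if no divisor in [2, sqrt(candidate)]
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

print(nth_prime(10))  # -> 29
```

      The function is a few dozen bytes of logic, whereas a table covering the same range of n grows without bound; the price is CPU time per lookup, which is exactly the trade-off the comment describes.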
    • > all that matters is the bandwidth of the most constrained point you traverse. And that's typically going to be near the servers, as point-to-point connections are highly inefficient.

      What universe do you live in? The bottleneck is always at the user's end. Businesses, enthusiastic fans, just sites in general have no problem with bandwidth. They can pay for bandwidth 10 times over with a few ads. But the consumers are often stuck with 10 GB per month plans. It does not matter where the data is coming from,

    • > The U.S. should mandate 50 gbps to the home

      I don't understand why bandwidth keeps increasing so fast. Who needs to watch 2000 YouTube videos simultaneously? Outside of owning some business or operating some site, what is the point of anything over a couple Mbps? Are sites even compatible with 50 Gbps? Will even the behemoth that is Steam allow you to download the latest 100 GB game in 16 seconds?
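      The 16-second figure in the comment does check out, assuming decimal gigabytes and a fully saturated link with no protocol overhead:

```python
# Time to pull a 100 GB game over a saturated 50 Gbps link.
size_gb = 100      # gigabytes (decimal)
link_gbps = 50     # gigabits per second
seconds = size_gb * 8 / link_gbps  # 8 bits per byte
print(seconds)  # -> 16.0
```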

  • by ColaMan ( 37550 ) on Wednesday October 17, 2018 @04:04PM (#57494652) Journal

    Most CPUs can only practically handle tens of billions of arithmetic operations per second,

    Got to wonder how a computer scientist from 30 years ago would react to this casual statement. I'm going with something along the lines of, "Great Scott!"
