Next-Gen Video Encoding: x265 Tackles HEVC/H.265

An anonymous reader writes "Late last night, MulticoreWare released an early alpha build of the x265 library. x265 is intended to be the open source counterpart to the recently finalized HEVC/H.265 standard, which was approved back in January, much in the same way that x264 is used for H.264 today. Tom's Hardware put x265 through a series of CPU benchmarks and then compared x265 to x264. While x265 is more taxing in terms of CPU utilization, it affords higher quality at any given bit rate, or the same quality at a lower bit rate than x264." (Reader Dputiger points out a comparison at ExtremeTech, too.)
  • This is great news! (Score:5, Interesting)

    by ShooterNeo ( 555040 ) on Tuesday July 23, 2013 @12:20PM (#44362731)

    25-35% less file size for the same quality is an incredible advance. Obviously the difficulty of improving compression algorithms is going to ratchet up enormously as the files get smaller and their entropy gets higher. I'm in fact amazed that an advance this big is even possible; apparently x264 is nowhere near the theoretical limits for (lossy) video compression.

    • by Anonymous Coward

      Storage isn't a problem, it's the cheapest part of the equation. Energy consumption is the biggest technical challenge due to the global domination of mobile devices and the current limitations in energy storage.

      • by gl4ss ( 559668 ) on Tuesday July 23, 2013 @12:48PM (#44363003) Homepage Journal

        Storage isn't a problem, it's the cheapest part of the equation. Energy consumption is the biggest technical challenge due to the global domination of mobile devices and the current limitations in energy storage.

        _transferring_ is still very much an issue though.

      • by jedidiah ( 1196 ) on Tuesday July 23, 2013 @12:49PM (#44363027) Homepage

        Storage isn't even the problem when it comes to file size, network bandwidth is. The generally poor quality of broadband and even cable streams ultimately comes down to the size of the file. Network performance and bandwidth caps are the real choke point.

        Streams get overcompressed to the point that even an aggressively transcoded DVD beats the snot out of them in terms of quality. Forget about a raw BluRay stream.

        • Ultimately video over IP (which sounds like a bad plan to start off with) is all about the connection - modern broadcasters use adaptive streaming - the same video is encoded at a variety of bitrates and resolutions and made available to playback clients. The client assesses live buffer fill and decides dynamically between low bitrate/poor quality and high bitrate/quality depending on how the link is performing, fetching small/large files off the server as appropriate. It works very well, and the user is rarely even aware of the switching.
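          A minimal sketch of that buffer-driven decision, with a hypothetical bitrate ladder and thresholds (illustrative only, not any real player's logic):

              BITRATE_LADDER = [400_000, 1_200_000, 3_000_000, 6_000_000]  # renditions, bits/sec

              def pick_rendition(buffer_seconds, throughput_bps):
                  """Pick the highest rendition the link can sustain, dropping to
                  the lowest one when the playback buffer is close to empty."""
                  if buffer_seconds < 5.0:           # close to a stall: play it safe
                      return BITRATE_LADDER[0]
                  usable = throughput_bps * 0.8      # keep 20% headroom
                  feasible = [b for b in BITRATE_LADDER if b <= usable]
                  return max(feasible) if feasible else BITRATE_LADDER[0]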

          • by jedidiah ( 1196 )

            It isn't even really here yet.

            You know what's really here to stay? MPEG2. Stuff lingers because content is available in that format. Until content is widely available in the new format, fannish pronouncements are all premature.

            • MPEG2 lingers because it costs a lot to replace millions of existing set-top boxes, however that's only for video delivery to the home. A lot of backhaul/contribution/distribution is H264, which then gets transcoded at the edge of the broadcasters' networks. All the modern delivery-over-the-internet systems are H264 because there's no legacy technology to replace and bandwidth is at a premium. HEVC will get adopted both within broadcaster systems and for new domestic systems (4K being the obvious example).

      • by NFN_NLN ( 633283 ) on Tuesday July 23, 2013 @12:54PM (#44363091)

        Storage isn't a problem, it's the cheapest part of the equation. Energy consumption is the biggest technical challenge due to the global domination of mobile devices and the current limitations in energy storage.

        The summary talks about an increase in CPU usage. If they use dedicated H.265 chips in the future (much like they use H.264 chips now), can they not optimize the hardware to minimize CPU and power use? I'm just wondering, from the perspective of mobile/phone users, whether H.265 is going to dominate or H.264 will remain the standard for mobile.

        • I think there will be a race to adopt this and to flood the market with h265 material - the newer machines will crunch through those videos while the older ones will suffer. A big push for updating those toys.

          Personally I am using xvid4 for my stuff; at high bitrates it is not much worse than h264, and I have fewer patent concerns. I wish somebody came up with cams sporting Dirac or Theora encoding. Most source material is now compressed in mpeg2/4/10 formats, and transcoding makes those codecs perform worse.

          • the newer machines will crunch through those videos while the older one will suffer. Big push for updating those toys.

            It took YEARS after the release of H.264 before ANY CPUs could do realtime playback of high-def videos. That's why there was such a big push for H.264 decoding in GPUs, but that came years later.

              • That's not true: the spec was finalised in 2003 and CPUs were capable of playback in 2004 (at admittedly high usage). The first MOBILE chips capable of hardware playback arrived around 2006, if I recall.

                However, this time around is different. Qualcomm has already demonstrated h.265 playback on an S4 Pro earlier this year. (For those not familiar with mobile processors, the S4 Pro is essentially last year's model; the 600 featured in the HTC One and Galaxy S4 is much faster - and the 800 coming in devices this year is faster still.)

              • Video codecs DON'T WORK THAT WAY! It's idiotic to claim CPUs were fast enough on $DATE.

                How powerful of a processor you need to decode a video (in realtime) depends heavily on the video parameters. Resolution, bitrate and other parameters chosen (e.g. number of B-frames, GOP size, etc.), as well as client-side issues such as tuning of the codec and performance of the player software, not to mention whether the decoder is skipping steps like in-loop deblocking to cheat, all have a huge impact.

                And no, CPUs in 2004 could not handle every H.264 stream in realtime.
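                To make the point concrete, a deliberately crude toy model (every weight here is invented for illustration; no real decoder is this simple):

                    def relative_decode_cost(width, height, fps, b_frames=0, deblocking=True):
                        """Toy model: decode cost scales with pixel rate and climbs with
                        features like B-frames and in-loop deblocking."""
                        cost = width * height * fps        # baseline: pixel throughput
                        cost *= 1.0 + 0.15 * b_frames      # invented per-B-frame overhead
                        if deblocking:
                            cost *= 1.3                    # invented deblocking overhead
                        return cost

                    # A heavily-tooled 1080p30 stream vs. a plain 480p30 one:
                    # relative_decode_cost(1920, 1080, 30, 3) / relative_decode_cost(854, 480, 30)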

        • Hardware Decode (Score:5, Insightful)

          by LordCrank ( 74800 ) on Tuesday July 23, 2013 @02:31PM (#44364267)

          If it's anything like H.264/x264 then I expect to have the hardware to decode H.265/x265 in my laptop about 2 years after movies and tv shows are being distributed in this format, but 2 years before there are any linux drivers for the hardware decoders.

          • by jedidiah ( 1196 )

            By the time I had any need of a hardware h264 decoder in Linux, the drivers were readily available and used by all of the relevant software.

            The notable slacker here was Adobe. These are the losers that whine about clanlib while the "hobbyists" just take care of business.

            The community and (at least a subset of) the hardware vendors were well prepared in ample time.

            The performance of current entry-level CPUs may already make the point moot.

            • I had a laptop that could barely do HDTV, and couldn't handle 720p content without significant stuttering. I believe the issue was that I had done enough research to know its graphics card supported decoding, but not enough to know that the particular brand making the graphics card had a long history of terrible linux support.

              As far as whether hardware decoding is even necessary, it's something that I'd look at as an issue for set top boxes/low power media centers. It seems XBMC has generally been able to lean on hardware decoding there.

            • By the time I had any need of a hardware h264 decoder in Linux, the drivers were readily available and used by all of the relevant software.

              So? You were a VERY LATE ADOPTER of H.264, which doesn't remotely represent the rest of the world. H.264 has been around since 2003, and people have been struggling to watch H.264 videos in realtime on top-of-the-line CPUs from the start, and failing miserably for years. It took several years to get it decoded by GPUs even on Windows, and longer still before Linux could do the same.

            • By the time I had any need of a hardware h264 decoder in Linux, the drivers were readily available and used by all of the relevant software.

              Hardware-accelerated video is still a huge problem on Linux if you don't own one of the few supported brands of graphics cards (NVIDIA VDPAU with proprietary drivers, and Intel HD Graphics VAAPI open-source drivers). Recently I wanted to purchase an odroid-u2 [archlinuxarm.org] ARM computer with a Mali-400 graphics accelerator, but the driver support is not there, and unfortunately that's no exception.

        • It's very hard to break away from the lowest-common-denominator. Just look at MP3.

          Smartphones didn't have a common standard video format before H.264 came along. H.265 is going to require much more computation, which will be possible in a few years, but I would fully expect H.264 to stay entrenched for a long, long time to come, with H.265 being an also-ran technology that some high-end phones will start supporting and us techies will rant about, while nobody else really cares...

          Besides that, Google is now the one pushing VP9 as a royalty-free alternative...

        • The biggest compression gains for video involve improving motion compensation, which may raise the decoder's memory requirements, and entropy coding. Entropy coding is inherently single-threaded, and the complexity of this process will impose a lower bound on the clock speed of your decoder, while the rest of the decoding process can be done in parallel on lower-powered circuits.
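          To illustrate the serial bottleneck: in an arithmetic coder, decoding symbol i needs the interval state left behind by symbol i-1. A toy binary decoder (much simplified, not the actual CABAC of H.264/HEVC):

              def decode_bits(value, n_symbols, p_zero=0.6):
                  """Toy binary arithmetic decoder. Each decision narrows the
                  interval left by the previous one, so the loop cannot be
                  parallelized across symbols."""
                  low, high = 0.0, 1.0
                  out = []
                  for _ in range(n_symbols):
                      split = low + (high - low) * p_zero
                      if value < split:
                          out.append(0)
                          high = split               # state carried to next symbol
                      else:
                          out.append(1)
                          low = split
                  return out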
          • Entropy coding is inherently single-threaded, and the complexity of this process will impose a lower bound on the clock speed of your decoder, while the rest of the decoding process can be done in parallel on lower-powered circuits.

            Well, how about SHA-1 encoding? Each frame is an unpacked bitmap image, giving them a fixed size in bits. The encoder generates every frame which hashes to the same SHA-1 as the target, sorts them by their contents, and notes the ordinal of the target. The final packet consists of just the SHA-1 hash and that ordinal.
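            For anyone tempted to take this literally, a sketch of the joke (runnable in form only; the loop is astronomically infeasible, which is rather the point):

                import hashlib

                def sha1_encode(frame):
                    """The 'compressed' form is the SHA-1 hash plus the target's
                    ordinal among same-length collisions. Enumerating all
                    2**(8*len(frame)) candidates outlives the universe."""
                    target_hash = hashlib.sha1(frame).digest()
                    ordinal = 0
                    for i in range(2 ** (8 * len(frame))):   # do not run on real frames
                        candidate = i.to_bytes(len(frame), "big")
                        if hashlib.sha1(candidate).digest() == target_hash:
                            if candidate == frame:
                                return target_hash, ordinal
                            ordinal += 1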

      • by dj245 ( 732906 ) on Tuesday July 23, 2013 @01:00PM (#44363213) Homepage

        Storage isn't a problem, it's the cheapest part of the equation. Energy consumption is the biggest technical challenge due to the global domination of mobile devices and the current limitations in energy storage.

        But there are bandwidth limitations on many devices. Limitations which are generally fine now for 1080p, but could be a problem with "something better", or with multiple streams of "something better".

        Plus, this article deals with the compression part of the video encoding. Most media is decompressed many, many times, but only compressed once. It is reasonable to assume that decompression will be more taxing with x265 compared to x264, but that isn't a part of this article. How much more CPU is required for decompressing x265 compared to x264? That isn't so clear at the moment, and since the code isn't finalized, results today may have no bearing on tomorrow anyhow.

      • by Nemyst ( 1383049 )
        As others have said, bandwidth is an extremely important element. The thing to remember is that for the specific task of video compression and decompression, hardware vendors aren't shy about integrating specialized hardware that can run the algorithm at a much higher speed and at a much lower energy cost than general purpose processors. While it's too early to say whether H.265 will take more CPU power to compute (this is, after all, an alpha implementation), even if it were, it probably wouldn't matter that much once dedicated hardware arrives.
      • The power cost of transferring a single bit has not really changed over the years (if anything, it's increased). The power cost of decoding a set of bits, however, constantly decreases.

        It's always been more power efficient to employ more complex compression (more CPU work) than to transfer more bits. Even for non-specialized CPUs this is true, never mind when you have hardware decoders.
        • by Bengie ( 1121981 )

          It's always been more power efficient to employ more complex compression (more CPU work) than to transfer more bits.

          I agree. Wireless eats a lot of power, and newer wireless devices shut down and go idle much more quickly, with even quicker-idling wireless in the pipeline.

    • 25-35% less file size for the same quality is an incredible advance.

      No, it really isn't.

      First off, we're just talking about PSNR per bitrate, which is pretty meaningless. They even say in TFA that they could have used the "--psnr" option to increase the PSNR, at the expense of video that looks like crap. At least SSIM would have been less easy to fool than PSNR. At the end of the day, there's nothing better than a subjective expert human visual comparison.

      Secondly, look at a few of those charts. The specific encoder you use makes a HELL of a lot more difference than the codec/format. Other H.264 encoders don't come close to x264. If x265 doesn't get the same kind of open source development boost, x264 will continue to improve, and probably outperform the newer format, as proprietary codec developers just haven't shown themselves willing or able to do a good job of perceptual encoding, yet that's where the bulk of our non-pirated content comes from...
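      For reference, PSNR is nothing more than log-scaled mean-squared error, which is exactly why it's so easy to game; a minimal computation, assuming two 8-bit frames held as numpy arrays:

          import numpy as np

          def psnr(reference, encoded, peak=255.0):
              """PSNR in dB between two frames. An encoder tuned purely for
              this metric can look worse to human eyes than one tuned for
              perceptual quality."""
              diff = reference.astype(np.float64) - encoded.astype(np.float64)
              mse = np.mean(diff ** 2)
              if mse == 0:
                  return float("inf")
              return 10.0 * np.log10(peak ** 2 / mse)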

      • The specific encoder you use makes a HELL of a lot more difference than the codec/format. Other H.264 encoders don't come close to x264. If x265 doesn't get the same kind of open source development boost, x264 will continue to improve, and probably outperform the newer format, as proprietary codec developers just haven't shown themselves willing or able to do a good job of perceptual encoding, yet that's where the bulk of our non-pirated content comes from...

        Wow ... just wow!

        I'll assume by "proprietary codec developers" you actually mean developers who have closed-source implementations of the encoder. In which case, the statement may not be completely unreasonable, i.e. closed-source encoders have not kept pace with this open-source implementation due to less comprehensive (and less costly) implementation of the encoding tools.

        The rest is complete rubbish though ...

        So what if the bulk of non-pirated content comes from closed source encoders ... what does that have to do with x264 or x265 or H.265 development?

        • So what if the bulk of non-pirated content comes from closed source encoders ... what does that have to do with x264 or x265 or H.265 development?

          It means the actual delivered content won't be any better than what we've got now. Quite possibly worse. It's quite a common phenomenon when new lossy formats come out. You'd know that if you had a clue about the subject.

          Do you understand that H.265 is essentially a superset of H.264? You know, as in H.264 + extensions.

          You could say that all modern video codecs are essentially supersets of their predecessors...

    • Um, the theoretical limit for lossy video compression is 1 bit. You can store any video with just that single bit if you are willing to put up with some rather obvious artifacts.

      I guess, technically, it's just a single artifact that you have to put up with.

  • This is basically just the HEVC reference encoder right now. Expect it to be slow and inefficient.
    • by StreamingEagle ( 1571901 ) on Tuesday July 23, 2013 @12:47PM (#44362983)
      The HM reference encoder takes roughly 40 seconds to encode one frame of 1080P video on a dual Xeon (16 core) server. x265 can encode 1080P at roughly 11 frames per second today. The project is still early in development, and there are many features (lookahead, B-frames, rate control, etc) and efficiency/performance optimizations left to be done, but we are making good progress. I would encourage you to try it before reaching any conclusions.
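      (For scale: 40 seconds per frame is 0.025 fps, so 11 fps already amounts to a speedup of roughly 440x over the HM reference encoder.)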
      • How amenable is H.265 to vector operations? Nobody would expect a 3d game to run at an acceptable rate without a GPU nowadays, and video encoding may fall in the same category. Serial processing hasn't been getting significantly faster for years now.
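        Much of an encoder's inner loop is exactly the kind of data-parallel arithmetic that vector units eat up; for instance, the sum-of-absolute-differences used in motion search (an illustrative numpy sketch, not any particular encoder's code):

            import numpy as np

            def sad(block_a, block_b):
                """Sum of absolute differences between two pixel blocks - the
                workhorse metric of motion search, and trivially vectorizable."""
                return np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum()

            # Motion search evaluates one block against many candidate positions;
            # each SAD is independent, so the work maps well onto SIMD units and GPUs.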
      • Just curious... Have you tested x265 scalability on any big iron? (e.g. eight socket Xeon server)
        • I've tested it on a 32 core machine, but not a 64 core machine. It scales well, in part through the brute-force technique of GOP level parallelization. x265 is far more parallelized than the HM reference encoder, but we still have a way to go before we have it fully parallelized. The project is being run by MulticoreWare, and as the name implies, parallel processing is our core competence.
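          A sketch of what GOP-level parallelization amounts to, with a hypothetical encode_gop stand-in (not x265's actual API):

              from multiprocessing import Pool

              def encode_gop(gop_frames):
                  """Stand-in for a real encoder call. Closed GOPs share no state,
                  so each one can be compressed independently."""
                  return b""                         # hypothetical bitstream chunk

              def encode_video(frames, gop_size=60, workers=8):
                  gops = [frames[i:i + gop_size] for i in range(0, len(frames), gop_size)]
                  with Pool(workers) as pool:
                      chunks = pool.map(encode_gop, gops)   # one GOP per worker
                  return b"".join(chunks)

          The trade-off is memory and latency: whole GOPs must be buffered per worker, which is fine for offline transcodes but less attractive for low-latency streaming.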
  • by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Tuesday July 23, 2013 @12:25PM (#44362773)

    Not even just that it's almost certainly covered by a pile of patents, but unlike H.264, there isn't any clarity yet about which ones, and what the licensing terms will be like. Will the categories of royalty-free use granted to H.264 codecs also be applied to H.265? Nobody seems to know. MPEG-LA hasn't issued an update since June 2012 [mpegla.com], at which point they were still at the stage of calling for patent-holders to submit claims.

    • This is a software implementation. Commercial companies must license necessary patents separately, just as they do when they implement MPEG-2, H.264, etc.
      • by Trepidity ( 597 )

        At the moment, even noncommercial users who download the software and use it to encode videos are in a murky situation, since patents don't by default have an exception for noncommercial use; simply encoding personal videos via a patented method constitutes "practicing" the invention. The H.264 license specifically gives a royalty-free patent grant for noncommercial use (as well as a few other types of use), which clears up that case. What would be helpful is if MPEG-LA came out and said whether it plans to do that with H.265 also.

        • by Kjella ( 173770 )

          What would be helpful is if MPEG-LA came out and said whether it plans to do that with H.265 also.

          Don't forget that MPEG LA isn't the one holding the patents, it's just a mouthpiece for all the companies involved that deals with collecting and distributing royalty fees. Until they've got deals signed with every patent holder, they can't really say anything on behalf of all of them. That said, if they want any H.265 adoption to take place the terms should probably not be significantly worse than H.264...

      • by tlhIngan ( 30335 )

        This is a software implementation. Commercial companies must license necessary patents separately, just as they do when they implement MPEG-2, H.264, etc.

        No, companies doing MPEG-2, h.264, etc just buy a blanket patent license from MPEG-LA - a set fee schedule basically means you pay like 25 cents per device and you've licensed all the patents.

        Problem is, h.265's patent pool is still being created and the rates are still being negotiated (including how much of that fee each patent holder gets).

        So it's similar to implementing, say, a cellular standard before the patent pool and licensing terms are settled.

        • Of course, it also shows how much marketing (yes, marketing) needs to go into VP9 - h.264 is too established for VP8 to be of any threat. But the next-gen codec field is wide open - will it be h.265? Or VP9? Or something else? (All it would take is Google to heavily promote and advertise VP9, and grease enough people so that MPEG-5 can include VP9 as a codec in part of the standard).

          Google doesn't need the MPEG to get VP9 into widespread use. They've got their video and audio codecs fully developed (VP9/Opus) and can push them through Chrome and YouTube.

  • by Anonymous Coward

    Has anyone found any information on decode performance? Apparently "H.265 is known for taking more horsepower to encode and decode than H.264" but I only see numbers provided for encode performance. It seems like decode performance is more important in most cases.

    • by khelms ( 772692 )
      Why did the parent get voted down? This seems like a reasonable question to me. I sometimes have trouble getting smooth playback of H264 on my Linux systems due to the GPU driver not being as advanced as the Windows version. It sounds like H265 will require even more powerful graphics processing. Will I need a newer, more powerful video card? Will I be able to play this format on Linux at all, or will I have to wait a couple of years till the driver support for hardware acceleration catches up?
    • x265 is an HEVC encoder implementation only. To answer the general question, H.265 is more compute intensive to decode than H.264, but the compute requirements for HD decoding are not unreasonable. Software decoding of HD HEVC is possible on a dual-core ARM system, and x86 systems will not have a problem. Of course, as with any video codec, hardware manufacturers will implement hardware decoders. Some have been announced. Expect more announcements in the coming months.
    • About 25-50% higher according to most estimates. There aren't any optimised decoders to really compare real-world performance though.

  • Note that none of the main x264 developers is involved with the project -> lame marketing stunt.
    • Not true.
      • by Selur ( 2745445 )
        What?
        - the author committed some patches to x264, quite a while back
        - the author is not a main contributor to x264
        - the x264 team is not involved in the x265 development
    • I'm also annoyed about the name, and we may now end up in a confusing situation with two independent x265 programs if the x264 developers eventually start working on an h265 encoder too.

  • The question is whether HEVC will be of use on mobile devices - that is where an increasing amount of video viewing is being done, and the area where bandwidth is most in demand.

    Existing smartphones have hardware support for H.264 - it is unclear how soon they will have hardware support for HEVC, which of course is even more computationally intensive.

    • Does "hardware support for H.264" refer to dedicated silicon that does nothing but AVC decoding, or does it refer to implementing AVC's inner loops in a programmable digital signal processor such as a GPU? If the latter, it may be reprogrammable for HEVC. In fact, with HEVC increasing opportunities for parallelism, it might be easier to bring the GPU in.
    • Happy to be proven wrong, but I'd imagine fairly soon.

      H.265 is a superset of H.264, so much of the same hardware could probably be used. As for complexity, it seems H.265 is around 25-50% more complex for DECODING, so certainly not unreasonable for current mobile devices.

      The current Samsung Galaxy S4 advertised both HEVC support and Full HD (1080p) playback - not sure if both apply together, however.

  • h264 good enough? (Score:2, Interesting)

    by Blaskowicz ( 634489 )

    The thing is h264 is maybe too entrenched; it took many years to have many millions of devices supporting it, and that gives you a big install base that people don't necessarily want to replace (not everyone is the middle-class American with a "drawer full of smartphones"). And even then many old PCs aren't quite up to the task yet - the kind that barely manage youtube 480p or even 360p.

    MP3 files are still commonly used, even though they're clearly inferior and switching to AAC or OGG would be a comparatively much simpler upgrade...

    • The thing is h264 is maybe too entrenched, it took many years to have many millions of devices supporting it

      The same was true of MPEG-2 hardware (DVD players) and MPEG-4 ASP hardware (DVD players with DivX). It took Blu-ray and smartphones to get AVC hardware into users' hands. People replace smartphones regularly. And just as Blu-ray Disc with AVC replaced DVD-Video with MPEG-2, whatever optical disc format replaces Blu-ray Disc is likely to include HEVC as an option for 4K video.

      • by jedidiah ( 1196 )

        We don't even seem to have a standard media format for 4K video yet. While 25% improvement is nice, it really isn't that spectacular when you consider the likely escalation of stream size.

        25% smaller versus 400% larger.

        • Agreed but 4x pixels does not equate to 4x bitrate for a given codec - typically more like 2x.

          so maybe something like

          4K H.265 = 75% * 200% * 1080p H.264 = 150% of 1080p H.264

          So it's not as bad as you'd think, and a lot of current content is still MPEG-2 as well, so there's even more scope for improvement!

      • And just as Blu-ray Disc with AVC replaced DVD-Video with MPEG-2, whatever optical disc format replaces Blu-ray Disc is likely to include HEVC as an option for 4K video.

        Blu-ray hasn't exactly replaced DVDs... so it's probably a better analogy than you think. We're still stuck with MPEG-2 all over the place.

        And I don't expect to see anything better than Blu-ray for another generation. Blu-ray only has a market because the FCC forced the HDTV upgrade, creating a substantial market for 1080i TVs, then early adopters wanted disc content to match...

        • Comment removed based on user account deletion
          • The Advanced Television Systems Committee (ATSC) added support for H.264 compression and 1080p resolutions to the A/72 standard in 2008. My understanding is that FCC approval is pending.

            ATSC writes all kinds of crazy standards that will NEVER be used. ATSC for satellite, cable, mobile, and handheld? Nobody uses it. Cable is QAM and satellite is universally DVB-S or DVB-S2. M/H is a joke that everyone just ignores.

            The FCC will never approve any incompatible changes to OTA broadcast TV for decades to come.

      • A nitpick: I think that any optical format that replaces the Blu-ray disc would be a Blu-ray disc, such as a 100GB triple-layer version with a higher supported sustained reading speed. That will piss off consumers, though; maybe you need a scheme (possibly with four or five layers) where the first layer or two can be read by a regular Blu-ray player, containing a usual h264 movie, with the h265 version on the bottom layers. Is that possible at all?

      • by jedidiah ( 1196 )

        The same is still true of MPEG2 hardware.

        Every household in America has at least one expensive MPEG2 decoder (if not many) that likely won't be replaced until the issue is forced.

        Other non-h264 devices continue to linger in the marketplace as they continue to function and are able to generate new content. This tends to create conflict with crippled devices that are limited to a particular subset of h264.

        Even these force-fed h264 devices come into conflict with other h264 content that assumes a more robust h264 decoder.

  • Ogg Opus [wikipedia.org] (open, royalty-free, not patent-encumbered audio) beats the pants off of HE-AAC [wikipedia.org] (which, in turn, is superior to everything else at pretty much every level). Opus also streams better, capable of dealing with extreme low-latency demands associated with real-time uses like VoIP.

    It is so common to see people talking about tweaking x264 to improve quality and compression, but there is a point where you're better off optimizing the other pieces; AC3 passthru is laughable contrasted to 6-channel vorbis [wikipedia.org].

    • by evilviper ( 135110 ) on Tuesday July 23, 2013 @05:41PM (#44366167) Journal

      Ogg Opus (open, royalty-free, not patent-encumbered audio) beats the pants off of HE-AAC (which, in turn, is superior to everything else at pretty much every level).

      Wow! So much wrong in just a single sentence...

      Opus is an IETF developed codec, based on CELT from Xiph.org, and Silk from Skype/Microsoft.

      HE-AAC certainly isn't "superior" at "every level". It excels at very low bitrate encoding that sounds SOMEWHAT like the original. As you start increasing the bitrate (e.g. 96k), low-complexity AAC easily surpasses HE-AAC. And as you go to higher bitrates still (e.g. 160k), temporal-domain codecs can outperform any frequency-domain codecs, so Musepack will beat the pants off AAC, and even Opus.

      Still, low bitrate lossy audio quality is important, so Opus is a good choice for streaming audio and video. That's why Google chose it for their latest revision of WebM, along with their new VP9 codec that they claim outperforms HEVC.

      I seriously doubt the MPEG / MPEG-LA organizations, and their members, will consider using a patent-free audio codec along with their heavily patent-encumbered video codec. Their business model is patents, and they'll choose an expensive and inferior option over a free one, any day. I'd expect HE-AACv2 to be the best you can count on for the foreseeable future.

      • by Khopesh ( 112447 )

        Ogg Opus (open, royalty-free, not patent-encumbered audio) beats the pants off of HE-AAC (which, in turn, is superior to everything else at pretty much every level).

        Wow! So much wrong in just a single sentence...

        Opus is an IETF developed codec, based on CELT from Xiph.org, and Silk from Skype/Microsoft.

        Ah, I thought Opus was under Xiph's Ogg umbrella. Xiph certainly hosts the new CELT+Silk project called Opus. Ogg is (afaict) the only container used by Opus (.opus is an Ogg file container). Noting those things, if you can refer to Vorbis as "Ogg Vorbis" then why can't you refer to Opus as "Ogg Opus?" I knew this was informal (though it is somewhat common) and did it mostly to concisely demonstrate its connections to Xiph (which most people know as "the ogg [vorbis] guys").

        HE-AAC certainly isn't "superior" at "every level". It excels at very low bitrate encoding that sounds SOMEWHAT like the original. As you start increasing the bitrate (e.g. 96k), low-complexity AAC easily surpasses HE-AAC. And as you go to higher bitrates still (e.g. 160k), temporal-domain codecs can outperform any frequency-domain codecs, so Musepack will beat the pants off AAC, and even Opus.

        You again caught me generalizing...

        • Regarding Musepack (MPC), that is an obscure format that was generally on par with Vorbis back in the day (but Vorbis has been improved a few times since then while Musepack has not).

          I'll re-iterate my first comment:

          "temporal domain codecs can outperform any frequency-domain codecs" (at high bitrates)

          Vorbis is a frequency-domain codec. Musepack necessarily beats it at high bitrates, every time. Don't like Musepack? Okay. Pick another temporal-domain codec, and it'll beat Vorbis, Opus, AAC, and any other frequency-domain codec.

  • VP9 produces video of about the same size and quality as H.265 (Google I/O talk on VP9 [youtube.com], though they of course weren't using x265 to compare); VP9 support is already in Chrome [webmproject.org] (with Firefox and Opera likely to follow soon), and the reference VP9 implementation is BSD-licensed. What's the advantage of H.265 over VP9, and what does x265 in particular offer over this new version of WebM (VP9+Opus)?
