
The Wretched State of GPU Transcoding

MrSeb writes "This story began as an investigation into why Cyberlink's Media Espresso software produced video files of wildly varying quality and size depending on which GPU was used for the task. It then expanded into a comparison of several alternate solutions. Our goal was to find a program that would encode at a reasonably high quality level (~1GB per hour was the target) and require a minimal level of expertise from the user. The conclusion, after weeks of work and going blind staring at enlarged images, is that the state of 'consumer' GPU transcoding is still a long, long way from prime time use. In short, it's simply not worth using the GPU to accelerate your video transcodes; it's much better to simply use Handbrake, which uses your CPU."
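
For scale: the ~1GB-per-hour target works out to an average bitrate of roughly 2,200 kbps. A quick sanity check in C, assuming a decimal gigabyte (a binary GiB would put it near 2,386 kbps):

```c
#include <stdio.h>

int main(void) {
    /* ~1 GB of output per hour of video, expressed as an average bitrate.
     * Assumes a decimal gigabyte (1e9 bytes). */
    double bytes_per_hour = 1e9;
    double seconds        = 3600.0;
    double kbps = bytes_per_hour * 8.0 / seconds / 1000.0;
    printf("~1 GB/hour = %.0f kbps average\n", kbps);   /* ~2222 kbps */
    return 0;
}
```
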
  • by Anonymous Coward on Tuesday May 08, 2012 @07:05PM (#39935521)

    Yes, they are cheating. That is exactly how they are getting it to be so fast.

  • by CajunArson ( 465943 ) on Tuesday May 08, 2012 @07:06PM (#39935535) Journal

    The GPU isn't meant to do everything. If it were, there wouldn't be a CPU. Considering the hatred that was poured on Quick Sync here, and that Quick Sync still produces better-quality transcodes than GPUs while being substantially faster, I don't think we'll be seeing the end of CPU transcoding anytime soon.

  • by CajunArson ( 465943 ) on Tuesday May 08, 2012 @07:13PM (#39935593) Journal

    The Quick Sync hardware is part of the IGP block, but it is specialized hardware geared specifically toward transcoding. For example, it does not use the main GPU pipeline and shader hardware to do the transcoding.

  • by Dahamma ( 304068 ) on Tuesday May 08, 2012 @07:19PM (#39935647)

    Quick Sync uses dedicated HW on the die. Intel's solution that uses their GPU is called Clear Video.

  • by Dputiger ( 561114 ) on Tuesday May 08, 2012 @07:19PM (#39935655)

    As the author of the story, that's an error that slipped past in formatting. I'm uploading the proper graph right after I hit "Reply" on this.

  • by Anonymous Coward on Tuesday May 08, 2012 @07:38PM (#39935781)
    Here's a link [extremetech.com] to the article on a single page.
  • by cheesybagel ( 670288 ) on Tuesday May 08, 2012 @08:11PM (#39936119)
    Hint: Not all GPUs have IEEE FP compliant math. Often they break the standard, or do something else altogether just to improve performance.
  • by PCM2 ( 4486 ) on Tuesday May 08, 2012 @08:21PM (#39936203) Homepage

    has anyone tried Badaboom?

    Not much point. It's been discontinued.

  • by Mia'cova ( 691309 ) on Tuesday May 08, 2012 @08:35PM (#39936355)

    Only more modern GPUs support it. And of those, there are still different levels of support. Even if it's supported, you would probably get much better performance on an NVIDIA card by using CUDA, for example. So in today's world, you can't just use an OpenCL-powered encoder; it depends on what hardware you have.
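
    As a minimal sketch of that hardware check, a transcoding app can probe the OpenCL runtime for a usable GPU before committing to a GPU path (plain C against the standard OpenCL 1.x API; most error handling omitted):

```c
/* Minimal sketch: probe the OpenCL runtime for a usable GPU before
 * committing to a GPU encode path. Build (Linux): cc ocl_probe.c -lOpenCL */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

int main(void) {
    cl_platform_id plats[8];
    cl_uint nplat = 0;
    if (clGetPlatformIDs(8, plats, &nplat) != CL_SUCCESS || nplat == 0) {
        puts("no OpenCL runtime found -- fall back to CPU (x264) encoding");
        return 1;
    }
    for (cl_uint i = 0; i < nplat; i++) {
        cl_device_id devs[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(plats[i], CL_DEVICE_TYPE_GPU, 8, devs, &ndev) != CL_SUCCESS)
            continue;                     /* this platform has no GPU devices */
        for (cl_uint j = 0; j < ndev; j++) {
            char name[256];
            cl_uint cus = 0;
            clGetDeviceInfo(devs[j], CL_DEVICE_NAME, sizeof name, name, NULL);
            clGetDeviceInfo(devs[j], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof cus, &cus, NULL);
            /* A real app would gate its GPU path on compute-unit count,
             * supported extensions, driver version, etc. */
            printf("GPU: %s (%u compute units)\n", name, cus);
        }
    }
    return 0;
}
```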

  • by rsmith-mac ( 639075 ) on Tuesday May 08, 2012 @08:41PM (#39936437)
    Let's be clear here: the x264 guys will never be happy. QuickSync, AMD's Video Codec Engine, and NVIDIA's NVENC all use fixed function blocks. They trade flexibility for speed; it's how you get a hardware H.264 encoder down to 2 mm². There are no buttons to press or knobs to tweak and there never will be, because most of the stuff the x264 guys want to adjust is permanently laid down in hardware. The kind of flexibility they demand can only be done in software on a CPU.
  • Re:9 Pages??? (Score:3, Informative)

    by Dputiger ( 561114 ) on Tuesday May 08, 2012 @08:42PM (#39936447)

    As the author:

    Because 3000-word articles with PNGs at ~300K per large image and 100K per preview image aren't fun reading in a single go. There's ~1.5MB of imagery just on the third page. Pages 3-8 have about the same, and that's with the images only loaded as thumbnails.

    If you've got a fast net connection, you won't care. If you don't have a fast net connection, loading 16MB of images at once isn't a lot of fun.

    Visual quality comparisons are one area where you can't use low-quality JPGs. A 9-page article at ET is a real rarity; it's not something we do because we want to spam ads.

  • by rsmith-mac ( 639075 ) on Tuesday May 08, 2012 @08:54PM (#39936569)

    Because they're not using the same encode paths.

    All three hardware encode paths - Intel QuickSync, AMD AVIVO, and NVIDIA's CUDA video encoder - are largely black boxes. Programs such as MediaEspresso are basically passing off a video stream to the device along with some parameters and hoping the encoder doesn't screw things up too badly. Each one is going to be optimized differently, be it for speed, more aggressive deblocking, etc. These are decisions built into the encoder and cannot be completely controlled by the application calling the hardware. And you have further complexities such as the fact that only Intel's QuickSync has a hardware CABAC encoder, while AMD and NV do it in software (and poorly, since it doesn't parallelize well).

    Or to put this another way, everyone has their own idea on what the best way is to encode H.264 video and what kind of speed/quality tradeoff is appropriate, and that's why everything looks different.
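
    To make the black-box point concrete, here is a purely hypothetical sketch of the kind of narrow interface such an application sees. The struct and function names below are invented for illustration; none of the vendors' actual SDKs look exactly like this:

```c
/* Purely hypothetical interface, invented for illustration -- this is
 * NOT Intel's, AMD's, or NVIDIA's actual SDK. It shows how little the
 * calling application typically gets to control. */
#include <stddef.h>

typedef struct {
    int width, height;
    int fps_num, fps_den;
    int bitrate_kbps;        /* a target, not a guarantee      */
    int profile;             /* e.g. baseline / main / high    */
    int vendor_preset;       /* opaque speed/quality dial      */
} hw_encode_params;

/* The app hands off a frame and hopes for the best. Deblocking strength,
 * motion-search effort, CABAC vs. CAVLC, rate-control behavior, and the
 * rest are baked into the silicon/driver and are not exposed here. */
int hw_encode_frame(void *session, const void *nv12_frame,
                    unsigned char *out_bitstream, size_t *out_size);
```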

  • by pla ( 258480 ) on Tuesday May 08, 2012 @08:56PM (#39936587) Journal
    Because behind the scenes your "encoder" program is actually using several different encoders. Generally the encoder has to be custom-written specifically for the specialized GPU hardware it is targeting.

    This has largely ceased to present a problem, thanks to OpenCL.

    GPU code no longer needs to run as custom-written shaders targeting 20 different platforms. One program, written in fairly straightforward C, will run on just about any modern platform. And it will do so at speeds that absolutely dwarf a CPU - the Radeon x9yy cards (for x>=5) easily crush a modern CPU at OpenCL code by a factor of a thousand. The x8yy cards still perform admirably, over three hundred to one. For NVIDIA, the Tesla series do well, while the GTX... Well, ten to fifty times faster doesn't exactly suck...


    The real problem here? Most people have really crappy GPUs. Even compared to the $100 card range, your GPU sucks ass, and hard. And you can't really blame people, because honestly, even modern IGPs will run just about anything fairly well, so why would you pay for more?


    But don't blame the GPUs, or the concept in general. If you target OpenCL and the user has a halfway decent modern GPU, it will give consistent, reliable results, and will blow away your CPU many times over.
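
    As a minimal sketch of that "one program in fairly straightforward C": a toy OpenCL kernel (a per-pixel luma gain standing in for real encode work) plus the host code that runs it on whatever GPU is present. Error checks are omitted for brevity:

```c
/* Toy OpenCL example: apply a luma gain to a 1080p frame on the GPU.
 * Build (Linux): cc ocl_gain.c -lOpenCL */
#include <stdio.h>
#include <stdlib.h>
#ifdef __APPLE__
#include <OpenCL/cl.h>
#else
#include <CL/cl.h>
#endif

static const char *src =
    "__kernel void scale_luma(__global uchar *p, float gain) {\n"
    "    size_t i = get_global_id(0);\n"
    "    p[i] = convert_uchar_sat(p[i] * gain);\n"   /* saturating convert */
    "}\n";

int main(void) {
    enum { N = 1920 * 1080 };                 /* one 1080p luma plane */
    unsigned char *frame = malloc(N);
    for (int i = 0; i < N; i++) frame[i] = (unsigned char)(i & 0xFF);

    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx     = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                N, frame, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale_luma", NULL);

    float gain = 1.1f;                        /* brighten by 10% */
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    clSetKernelArg(k, 1, sizeof(gain), &gain);

    size_t global = N;                        /* one work-item per pixel */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, N, frame, 0, NULL, NULL);

    printf("pixel 100 after gain: %u\n", frame[100]);
    free(frame);                              /* CL object cleanup omitted */
    return 0;
}
```
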
  • by pla ( 258480 ) on Tuesday May 08, 2012 @09:28PM (#39936845) Journal
    who cares how fast it completes a task if it's failing? Nobody gives little jimmy props when he finishes the hour-long test in 5 minutes but scores a 37% on it.

    I agree that presents something of a problem for current implementations; the concept of GPU transcoding doesn't fail, however. What has made it such an abysmal failure to date is that those currently pushing it have tried to show at least modest gains for everybody - meaning those with massively inappropriate hardware.

    To repeat my earlier post, if you target an OpenCL-capable GPU, you will get consistent results; and if you target a card with a reasonable number of compute units (58xx/59xx/68xx/69xx/Tesla), you'll see performance far beyond what a modern CPU can give.

    Does that make GPU transcoding the best choice for the general public at present? No! But for those with the hardware, the comparison counts as literally laughable.
  • by Dputiger ( 561114 ) on Tuesday May 08, 2012 @09:29PM (#39936859)

    I set out to test presets. Specifically, I set out to test the presets of software packages which are sold on the purported *strength* of those presets. I say so in the first paragraph:

    " Our goal was to find a program that would encode at a reasonably high quality level (~1GB per hour was the target) and require a minimal level of expertise from the user."

    That's why MediaCoder results weren't included.

    The entire article came about because Cyberlink's iPhone 4S preset yielded files that were 1.4GB if I used CPU encoding or a GTX 580, and 188MB if I used Quick Sync. That disparity is what I noticed when I went to check encode quality for the initial IVB review.

    Can you build custom profiles in CME and create outputs that avoid these problems? You can -- though some options aren't available. That, however, is not the point. If I'm going to build my own custom profiles, I can download a copy of MediaCoder for free and do it with a more powerful piece of software that offers a huge number of options.

    I did a review of "Software that claims to automate the GPU encode process." I did not do a review of "Can Cyberlink MediaEspresso EVER create a decent image?" Given what I set out to evaluate, my ability to tweak profiles to achieve a satisfactory result is not a valid criterion for my conclusions.

  • by Dputiger ( 561114 ) on Tuesday May 08, 2012 @09:31PM (#39936879)

    No, the article says that GPU encoding software runs the gamut from outright awful to simply broken and limited. Quick Sync video is great in Arcsoft, terrible in Cyberlink, unsupported in Xilisoft, and looks decent in MediaCoder. Check the GTX 580's output in Xilisoft for plenty of proof that no, you don't need insane bitrates to create decent-looking output.

  • by Darinbob ( 1142669 ) on Tuesday May 08, 2012 @09:38PM (#39936915)

    And remember that this is not necessarily lower quality! There are valid reasons for not following the complexities of IEEE floating point if you have no need for portability.

  • by parlancex ( 1322105 ) on Tuesday May 08, 2012 @10:55PM (#39937371)

    Hint: Not all GPUs have IEEE FP compliant math. Often they break the standard, or do something else altogether just to improve performance.

    I can't speak for ATI, but all FP32 math on NVIDIA architectures has actually been IEEE-compliant for many generations now, excluding NaN, -inf/+inf, and exception-handling cases; excepting the hardware sin, cos, and log implementations; and excepting the fused multiply-add instruction (though that last one you can get around by using special compiler intrinsics to avoid the fusing).
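
    The fused multiply-add caveat is easy to demonstrate in plain C on the CPU: fma() rounds once where a*b + c rounds twice, so the two can legitimately disagree while both remain IEEE-correct. Compile with contraction disabled (e.g. -ffp-contract=off in GCC/Clang) so the compiler doesn't fuse the separate version on its own:

```c
/* fma() rounds once; a*b + c rounds twice, so the results can differ
 * while both are IEEE-correct.
 * Build: cc -ffp-contract=off fma_demo.c -lm */
#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void) {
    double a = 1.0 + DBL_EPSILON;   /* 1 + 2^-52 */
    double b = 1.0 - DBL_EPSILON;   /* 1 - 2^-52 */
    double c = -1.0;

    /* The exact product is 1 - 2^-104. Rounded to double it becomes
     * exactly 1.0, so the subsequent add yields 0.0. */
    double separate = a * b + c;

    /* fma keeps the full-width product internally and rounds only once,
     * preserving the -2^-104 term. */
    double fused = fma(a, b, c);

    printf("separate: %g\nfused:    %g\n", separate, fused);
    /* prints: separate: 0, fused: ~-4.93e-32 */
    return 0;
}
```

    The same single- versus double-rounding behavior is what the GPU's fused instruction exhibits, which is why the compiler intrinsics mentioned above exist.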

  • Also let's be clear (Score:5, Informative)

    by Sycraft-fu ( 314770 ) on Tuesday May 08, 2012 @11:31PM (#39937565)

    That while the x264 guys aren't wrong to want to keep working on a software encoder that is tweakable, there is nothing wrong with a fixed function hardware encoder for some tasks. Sometimes, speed is what you want and "good enough" is, well, good enough.

    Like at work, I edit instructional videos for our website (I work at a university) using Vegas. I use its internal H.264 encoders, which can be accelerated using the GPU. They are quite zippy; I can generally get a realtime or better encode, even when there is a decent amount of shit going on in the video that needs to be processed (remember that Vegas isn't for video conversion; I'm doing editing, effects, that kind of thing).

    Now the result is not up to x264 quality, per bit. I could get better quality by mucking around setting up an AviSynth frameserver and having x264 do the encoding with some settings tweaked for high quality. However, it would be much slower.

    Not worth it. I'll just encode a reasonably high-bitrate video. It is getting fed to YouTube anyhow, so there's a limit to how good it is going to look. The faster hardware-assisted encode speeds are worth it.

    If I were mastering a Blu-ray? Yeah, I might do the final encode that goes off to fabrication with x264 (actually, more likely with an expensive commercial solution that can generate mastering-compliant bitstreams). Spend the extra time to get it as high-quality as possible, because of all the other work involved and because it could actually be noticeable.

    There is room for both approaches.
