Mozilla Considers H264 After WebM Fails To Gain Traction 182

HerculesMO writes: "It looks as though Mozilla is considering using H.264, one step closer to unification around a single codec for video encoding. It's a big deal for HTML5 traction, but it still leaves Google holding onto WebM." The article, though a bit harsh on Ogg Theora, offers an interesting look at how standards are chosen (and adopted by the browser makers).
  • Lol editors (Score:5, Informative)

    by Lunix Nutcase ( 1092239 ) on Thursday April 26, 2012 @01:51PM (#39809639)

    Yes, you already posted the story [slashdot.org] about this in March, which is the same month the linked article is from. Good to see timithy is still at the top of his game!

  • Re:Realmedia codec (Score:5, Informative)

    by X0563511 ( 793323 ) on Thursday April 26, 2012 @01:54PM (#39809675) Homepage Journal

    People figured out that Real (and RealPlayer) were utter pieces of garbage and stopped using them?

    Same thing I wish would happen to QuickTime.

  • by Anonymous Coward on Thursday April 26, 2012 @01:54PM (#39809685)

    Dupe:
    http://news.slashdot.org/story/12/03/13/2027215/mozilla-debates-supporting-h264-in-firefox-via-system-codecs
    http://yro.slashdot.org/story/12/03/20/1742209/mozilla-to-support-h264

    Old news:
    March 13th, 2012 -> This particular blog's story is March 16th, 2012 -> Today is April 26th, 2012

    Vanity link:
    It's a link to AppleInsider--why on earth would AppleInsider be a novel or interesting source about internal Mozilla strategy?

    Dear editors: wake the hell up.

  • by diamondmagic ( 877411 ) on Thursday April 26, 2012 @02:04PM (#39809851) Homepage

    There are plenty of open source encoders and decoders available: ffmpeg and x264 (from the VideoLAN project behind VLC), to start.

    The notion that H.264 is not "free" isn't a result of a development methodology; it's because people think that the patents somehow make it that way, despite the fact that the software authors have no choice in the matter.

  • by ctime ( 755868 ) on Thursday April 26, 2012 @02:06PM (#39809881)
    The fact that one company owns the license to this technology and makes no guarantee _not_ to increase licensing costs means that once H.264 support is the be-all, end-all solution for web video, that one company has a monopoly on the sole video technology driving the web. Most people running Windows or a Mac have probably paid H.264 licensing fees indirectly, multiple times over. Nice racket they've got there, and nobody is complaining yet.

    Here's a pretty good article:
    http://www.zdnet.com/blog/bott/a-closer-look-at-the-costs-and-fine-print-of-h264-licenses/2884 [zdnet.com]
    From the article:
    To use and distribute H.264, browser and OS vendors, hardware manufacturers, and publishers who charge for content must pay significant royalties—with no guarantee the fees won't increase in the future. To companies like Google, the license fees may not be material, but to the next great video startup and those in emerging markets these fees stifle innovation.
  • by tomhuxley ( 951364 ) on Thursday April 26, 2012 @02:25PM (#39810209)

    The royalty thresholds for encoders/decoders are based on distribution quantities, not revenue. Below 100,000 units, there is no royalty fee owed. Between 100,000 and 5 million units, the cost is 20 cents per unit. Above 5 million, the cost is 10 cents per unit.

    The maximum royalty fee owed is currently capped at $6.5 million.

    Part of the license agreement specifies that the fees can't increase more than 10% per 5 year period (2011-2016 is the current period). The max cap can go up more than 10% per period, but that only affects the biggest distributors.

    Mozilla can certainly afford it; the Foundation brings in over $300 million in search engine fees alone. Smaller open source projects would probably fall under the 100,000-unit threshold. The ones most affected will be popular projects that lack an income stream like the one the Mozilla Foundation has (see the quick sketch of the tier math below).
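
    To make the tier arithmetic above concrete, here is a minimal sketch of the quoted fee schedule (a rough illustration based only on the figures in this comment; the actual MPEG LA license has more categories and conditions, and the numbers here are not authoritative):

        # Rough sketch of the per-unit royalty tiers quoted above (not the official schedule).
        FREE_UNITS = 100000       # no royalty below 100,000 units/year
        TIER1_LIMIT = 5000000     # $0.20/unit up to 5 million units
        RATE_TIER1 = 0.20
        RATE_TIER2 = 0.10         # $0.10/unit above 5 million units
        ANNUAL_CAP = 6500000      # quoted maximum royalty

        def royalty(units):
            """Estimated annual royalty for a given number of distributed units."""
            if units <= FREE_UNITS:
                return 0.0
            fee = (min(units, TIER1_LIMIT) - FREE_UNITS) * RATE_TIER1
            if units > TIER1_LIMIT:
                fee += (units - TIER1_LIMIT) * RATE_TIER2
            return min(fee, ANNUAL_CAP)

        for n in (50000, 1000000, 10000000):
            print(n, "units ->", royalty(n))

    Under this reading of the tiers (only units above the free allowance are charged), 1 million units comes to $180,000 and 10 million units to about $1.48 million, well under the cap.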

  • by Anonymous Coward on Thursday April 26, 2012 @03:12PM (#39810935)

    Sure. And if it becomes the dominant codec, I guess we can pray they don't alter the deal further after 2016. Enjoy your clown suit [youtube.com].

  • Re:Harsh? (Score:5, Informative)

    by the_other_chewey ( 1119125 ) on Thursday April 26, 2012 @03:35PM (#39811175)

    "Besides, VP8 is actually more-or-less equal to H.264 in quality and compression, you can easily verify that yourself with libvpx and x264."

    It really isn't. VP8's quality is comparable to that of H.264's Main Profile. H.264 High Profile eats VP8 for breakfast in bitrate-limited scenarios, meaning about 800 kilobit/s for SD content. But even at around 1.5 Mbit/s, the difference is really obvious to the trained eye and still visible to the untrained one. Yes, I have actually done double-blind tests.

    vpxenc is still very young, so improvement will happen, in both performance and quality. But the developers themselves have stated that it is unlikely to ever exceed H.264 MP by much.


    I've done extensive tests to try to coax better quality out of VP8, and have pretty much failed. I even had help from one of the guys at Google working on VP8. And yes, it's part of what I do for a living.

    Have a look yourself:
    x264 High Profile, 790 kbit/s, 4.3 MiB [chewey.org]
    VP8, best effort, 770 kbit/s, 4.2 MiB [chewey.org] (the encoder was given the same constraints as x264)
    VP8 falls completely apart on high-frequency picture content, where H.264 holds up quite well.

    As one of the x264 developers said when I showed this around (verbatim quote): "Holycrapbirds".

    Of course, that low a bitrate is a very harsh test. At over twice the bitrate, VP8 still needs more bits for similar quality, but the relative difference is much smaller. At some point around 2 Mbit/s, "quality saturation" sets in.

    But for sites doing lots of streaming to clients behind <1 Mbit/s connections and aiming for noncrapbird quality, this is a real issue (a rough sketch of one way to check this yourself follows below).
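
    For anyone who wants to run a matched-bitrate comparison like the one described above, one rough way to sanity-check the results is to compute SSIM for each encode against the original source clip. A minimal sketch, assuming ffmpeg (with its ssim filter) is installed; the file names are placeholders, and an objective metric like SSIM is only a crude stand-in for the double-blind viewing described in this comment:

        import subprocess

        REFERENCE = "source.y4m"   # placeholder: the original, uncompressed clip
        ENCODES = ["testenc_x264.mp4", "testenc_vp8.webm"]   # placeholder encode names

        for enc in ENCODES:
            # ffmpeg's ssim filter compares the first input (the encode) against
            # the second (the reference) and prints an average SSIM line on stderr.
            # Both inputs must have the same resolution and frame count.
            result = subprocess.run(
                ["ffmpeg", "-i", enc, "-i", REFERENCE,
                 "-lavfi", "ssim", "-f", "null", "-"],
                capture_output=True, text=True)
            for line in result.stderr.splitlines():
                if "SSIM" in line:
                    print(enc, "->", line.strip())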

  • Re:Harsh? (Score:2, Informative)

    by Anonymous Coward on Thursday April 26, 2012 @09:45PM (#39815763)

    Wow, you couldn't be more wrong. There are many more differences between the two videos.

    In the first segment (the one that lasts only 0.62 seconds), look at the details of the mountains. In the second segment, look at the blue sky as the camera passes through the mountains and, to a lesser degree, at the two mountains themselves. In the third segment, look at the details in the foam and, more evidently, in the trees far away behind the cascades.

    Saying that "The videos look absolutely identical until the last part" is just ridiculous.

  • Re:Harsh? (Score:4, Informative)

    by the_other_chewey ( 1119125 ) on Friday April 27, 2012 @01:53AM (#39817395)

    "Not that I am doubting you, but could you provide the encoding settings you used for both VP8 and H.264?"

    I'm on vacation until the middle of next week, so I don't have access to the scripts used. From memory:

    Both are two-pass encodes. x264 recommends single-pass with --crf nowadays, because it's faster and nearly as good, but constant-quality mode and bitrate limits don't mix too well, and for my low-bitrate case there's still a clear advantage to longer-range precognition.

    The x264 settings are a tweaked "--preset slower", with a longer rc-lookahead and added bandwidth limitations (vbv_maxrate and friends). x264 writes the exact settings used – including an expansion of a preset into its individual parameters – at the beginning of its output, so have a look at "strings testenc_x264.mp4 | grep x264", or use mediainfo on the file.

    For vpxenc, it's basically the same, using "--good" and "--tune=ssim", adding bitrate control settings and minor tweaks. No parameter logging in the output here unfortunately, so I can't give you any more details right now (a rough sketch of the invocations described here appears at the end of this comment).

    "The tests I have conducted myself were more-or-less of the same quality so I am curious as to whether or not I did something wrong myself."

    You probably did nothing wrong at all, you just didn't restrict the encoder as heavily as I did. I tried to make it clear that those examples are rather extreme: if you allow higher bitrates, VP8 quality improves quite a bit, and everything above about 1.5 Mbit/s is fine in the vast majority of cases.

    Also, the test clip used is intentionally really, really hard to encode (smooth, slow-moving, weak gradients; rotations; low contrast; high motion; high frequency; all combined; hard cuts; etc.), to highlight encoder capabilities – and shortcomings. It stress-tests everything from motion estimation to several other predictors the encoders may or may not have, to bitrate allocation, so it's an excellent worst case.

    I'm not trashing VP8 or vpxenc, by the way; we're still going to use it to serve video to clients and/or devices without H.264 support. It's just that H.264 (in its x264-encoded manifestation, at least; there are lousy H.264 encoders out there, mostly the really expensive ones) really shines in low-bandwidth use cases, and VP8 is universally outclassed by H.264 High Profile.
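
    As referenced above, here is a rough sketch of what the two-pass invocations described in this comment could look like. These are not the commenter's actual scripts (those aren't available); the source clip name, the lookahead and VBV numbers are placeholders, and flag spellings may vary between x264 and libvpx versions:

        import subprocess

        SRC = "source.y4m"   # placeholder source clip

        # x264: tweaked "--preset slower" with a longer rc-lookahead and VBV limits,
        # run as a classic two-pass encode (pass 1 output is discarded).
        for pass_no, out in ((1, "/dev/null"), (2, "testenc_x264.mp4")):
            subprocess.run(
                ["x264", "--preset", "slower",
                 "--bitrate", "790",
                 "--rc-lookahead", "100",                          # placeholder value
                 "--vbv-maxrate", "900", "--vbv-bufsize", "1800",  # placeholder values
                 "--pass", str(pass_no), "--stats", "x264.stats",
                 "-o", out, SRC],
                check=True)

        # vpxenc: "--good" plus "--tune=ssim" with a bitrate target; --passes=2
        # runs both passes in one invocation, keeping first-pass stats in --fpf.
        subprocess.run(
            ["vpxenc", SRC, "-o", "testenc_vp8.webm",
             "--codec=vp8", "--good", "--tune=ssim",
             "--target-bitrate=770",
             "--passes=2", "--fpf=vp8.fpf"],
            check=True)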
