Dirac 1.0.0 Released 127

dylan_- writes "According to their website, 'Dirac is an advanced royalty-free video compression format designed for a wide range of uses, from delivering low-resolution web content to broadcasting HD and beyond, to near-lossless studio editing.' Now a stable version of the dirac-research codebase, Dirac 1.0.0, has been released. The BBC have already successfully used the new codec during the Beijing Olympics and are looking to push it to more general use throughout the organisation. The latest version of VLC (the recently released 0.9.2) has support for Dirac using the Schroedinger library."
This discussion has been archived. No new comments can be posted.


  • really? (Score:3, Insightful)

    by Anonymous Coward on Saturday September 20, 2008 @02:18PM (#25086375)

    Remember when we all used GIF, until somebody came out of the woodwork with a patent claim? How can we be sure about this one?

    • Re:really? (Score:5, Informative)

      by whathappenedtomonday ( 581634 ) on Saturday September 20, 2008 @02:22PM (#25086417) Journal

      Read their site. From the FAQ [diracvideo.org]:

      Do you infringe any patents?

      The short answer is that we don't know for certain, but we're pretty sure we don't.
      We haven't employed armies of lawyers to trawl through the tens of thousands of video compression techniques. That's not the way to invent a successful algorithm. Instead we've tried to use techniques of long standing in novel ways.

      What will you do if you infringe patents?

      Code round them, first and foremost. There are many alternative techniques to each of the technologies used within Dirac.
      Dirac is relatively modular (which is one reason why it's a conventional hybrid codec rather than, say, 3D wavelets) so removing or adding tools was relatively easy, even though this may mean issuing a new version of the specification.

      • Re: (Score:3, Interesting)

        by Anonymous Coward

        The real question is, how does it fare against good H.264 encoders e.g. x264? And how are the encoding speeds?

        The few comparisons I've seen put H.264 as having the edge when it comes to both, but not by a lot.

        • Re:really? (Score:5, Informative)

          by David Gerard ( 12369 ) <slashdot.davidgerard@co@uk> on Saturday September 20, 2008 @03:22PM (#25086833) Homepage

          Encoding and decoding is presently fat and slow. It's very much in development.

          • Re:really? (Score:5, Interesting)

            by whathappenedtomonday ( 581634 ) on Saturday September 20, 2008 @06:22PM (#25088193) Journal

            Since you claim this, I assume you've already tried 1.0.0. I watched the promo vid, and it says the BBC is using the codec to handle HD content over their standard-def infrastructure at very low latency (a few ms, if I remember correctly).

            Nonetheless, this seems to be an interesting thing to keep an eye on, because the codec spec addresses good compression especially at very high bandwidths, which is going to be an important issue for movie post-production/processing, HD content and the like. The promo vid is well worth watching.

  • by Anonymous Coward on Saturday September 20, 2008 @02:20PM (#25086389)

    I tried using the Schrodinger library but I'm uncertain it works. Plus, I can't find my cat.

  • 0xBBCD (Score:5, Interesting)

    by hey ( 83763 ) on Saturday September 20, 2008 @02:22PM (#25086413) Journal

    I see the first 4 bytes are 0xBBCD.
    British Broadcasting Corporation Dirac.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Did you mean to say the FOURCC (which is usually not the first four bytes) is 'BBCD'? 0xBBCD is usually two bytes...
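      For the curious: Dirac parse-info headers do begin with the four ASCII characters 'BBCD' (bytes 0x42 0x42 0x43 0x44), so as a hex literal that is 0x42424344 rather than 0xBBCD. A minimal sketch of checking the prefix (the sample buffer below is made up, not a real Dirac stream):

      ```python
      # Check whether a byte stream starts with the Dirac parse-info prefix.
      # The prefix is the four ASCII characters 'BBCD' (0x42 0x42 0x43 0x44).
      DIRAC_PREFIX = b"BBCD"

      def looks_like_dirac(stream_bytes: bytes) -> bool:
          """Return True if the first four bytes match the Dirac prefix."""
          return stream_bytes[:4] == DIRAC_PREFIX

      # A fabricated example buffer, not a real Dirac stream:
      sample = b"BBCD" + bytes(16)
      print(looks_like_dirac(sample))        # True
      print(looks_like_dirac(b"RIFFxxxx"))   # False
      ```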

    • Re:0xBBCD (Score:5, Insightful)

      by Anonymous Coward on Saturday September 20, 2008 @02:39PM (#25086549)

      Isn't that just 2 bytes? :)

      *nibbles on parent's geek card*

      • Haha, nice bit of geek punnery.
      • Re: (Score:2, Informative)

        The size of a byte doesn't have to be 8 bits, though it usually is.

        • Re:0xBBCD (Score:4, Funny)

          by Yetihehe ( 971185 ) on Saturday September 20, 2008 @03:54PM (#25087053)
          It doesn't? It's Blasphemy! In the beginning, there was word. And in word there was two bytes. In two bytes there was 16 bits. And the root saw it was good.
      • nYbbles on parent's geek card
  • Open source overkill (Score:5, Interesting)

    by mdmkolbe ( 944892 ) on Saturday September 20, 2008 @02:24PM (#25086435)

    From the FAQ:

    What are the license conditions?

    The Schrodinger software is available under any of the GPLv2, MIT or MPL licences. Libraries may also be used under LGPL.

    Sounds like someone wanted there to be no question about whether it was open source.

    • by damn_registrars ( 1103043 ) <damn.registrars@gmail.com> on Saturday September 20, 2008 @02:32PM (#25086497) Homepage Journal

      The Schrodinger software is available under any of the GPLv2, MIT or MPL licences. Libraries may also be used under LGPL.

      Sounds like someone wanted there to be no question about whether it was open source.

      Sounds to me like the license exists in multiple states at once, which may be exactly the way Schrodinger would have liked it.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      I am not sure, but isn't the MIT one permissive enough to relicense it under the (L)GPL or MPL?

      • by MrWim ( 760798 ) on Saturday September 20, 2008 @02:54PM (#25086647)
        The GPL makes assurances regarding patents that the MIT license doesn't:

        "Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version."

        So if you use it as a GPL licensed library you can't get sued by the BBC or other contributors to the code.
        • Re: (Score:1, Informative)

          by Anonymous Coward

          Yes - the GPL says that explicitly, but MIT (copyright holders) grant you:

          Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so ...

          So they GRANT you the right to do with it more or less an

        • Isn't that clause new to the GPLv3?

          • Re: (Score:2, Informative)

            by dapyx ( 665882 )
            No; the GPLv3 has clauses about using the code in a DRM system, but the anti-patent clause has been there for a long time.
            • I seem to recall something about a Novell-MS dealbreaker being in the GPLv3 that, if adopted by the kernel itself, would mean a lot of fireworks, but I might be thinking of something else.

              I'm definitely not talking about the Tivoization clause.

  • by Enderandrew ( 866215 ) <enderandrew@NOsPAM.gmail.com> on Saturday September 20, 2008 @02:26PM (#25086449) Homepage Journal

    How does it stack up to other codecs?

    Do we need another codec?

    • by fuzzyfuzzyfungus ( 1223518 ) on Saturday September 20, 2008 @02:38PM (#25086533) Journal
      We don't need another codec per se; we need a royalty-free codec that can be legally implemented in FOSS situations, and others, without a lot of legal overhead. Assuming it isn't markedly worse than the others in performance terms, Dirac qualifies. If by some miracle (class II or greater) MPEG-4 were available under such terms, there wouldn't be any point to Dirac; but that isn't exactly likely.
      • Exactly, Matroska is great and all from a freedom standpoint, but technically it's far behind the encumbered ones.

        At least we have ogg for audio, it seems like nothing can beat it in terms of quality/bitrate:-)

        • Matroska is a container, not a codec.
          • Yeah I realized that after I wrote this:-) I meant Theora. Oopsie.

            • Re: (Score:1, Interesting)

              by Anonymous Coward

               Also, try encoding with the newer versions of Theora. It has seen many improvements in the last year - the quality problems were never in the decoder, as some would have you believe; it was the encoder that pretty much sucked.

               Not sure what you expect and I'm no video buff... but it sure looks a LOT better.

               (You may still be right, of course. I've just found that 99% of people who state anything about anything's quality formed their opinion once, maybe years ago, and then keep on repeating it.)

        • Re: (Score:3, Interesting)

          by Tweenk ( 1274968 )

           Matroska is not a codec. It is a container format, and it beats any closed-source competition hands down on features (e.g. as far as I know it is the only format that supports embedding custom TrueType fonts for subtitles).

          The best video encoding combo right now is:
          - Matroska as the container
          - H.264 for video
          - Ogg Vorbis for audio
          - ASS for subtitles

      • This came up in yesterday's discussion of the Canonical codec pack.

        Standardized codecs, like VC-1 and H.264, have full open specifications and typically even reference source code implementations that can be reused in a variety of ways.

        However, they also require patent fees depending on use and jurisdiction.

        The issue of free software has always been asserted to be about "speech, not beer" but it seems like there's an assumption that it has to be free as in "speech AND beer." I'm sure all kind of arguments c

        • by delt0r ( 999393 )
           You are right and wrong. If I want to keep the MPEG-LA happy, I need to pay quite a bit of money if there are more than 100K downloads, IIRC. For an open source project that is *not* selling the code, that is a lot of money. Most OS projects are done not with money but with time.

           How many copies of mplayer do you think are out there? It's a lot more than 100K copies.

           But here is the real rub. Even if you pay the fees, they give *no* guarantee whatsoever that you are not infringing some other patent.

           Also these
          • You are right and wrong. If I want to keep the MPEG-LA happy, I need to pay quite a bit of money if there are more than 100K downloads, IIRC. For an open source project that is *not* selling the code, that is a lot of money. Most OS projects are done not with money but with time.

            But if you're just distributing the source, you wouldn't need to pay the patent fees. If it's a real product, or if users are compiling it into one, it doesn't seem to be an infringement on "freedom" in the classic RMS definition to wind up paying a fee. You still have full control over the code and technology used.

            But here is the real rub. Even if you pay the fees, they give *no* guarantee whatsoever that you are not infringing some other patent.

            Theoretically true, although that hasn't happened much in practice, at least in this space. And Theora and Dirac are in the same legal position, and don't have the market effect of lots of companies

            • Re: (Score:3, Interesting)

              by delt0r ( 999393 )

              But if you're just distributing the source..

              So now I can't also distribute binaries, but my freedom is not affected? I don't think I'd want to test source=ok, binaries=!ok, as far as patent law is concerned, with my own wallet. Economic harm is all that's needed if software patents are valid.

              Theoretically true, although that hasn't happened much in practice, at least in this space.

              So you pay a crapload of money and only don't get sued much? That's a raw deal. There has been at least one case I know of with MPEG-4 / H.264, and that's a lot more than what both Theora and Dirac have had to deal with. Add the fact that Theora is based on VP3 with a act

      • What about Dirac being a wavelet-based codec that has inter-frame motion compensation? Wavelets are superior to DCT-based codecs like MPEG-1, MPEG-2, and H.264. Dirac's inter-frame encoding is also something that Motion JPEG 2000 doesn't have.
    • by Tab is on Slashdot ( 853634 ) on Saturday September 20, 2008 @02:55PM (#25086667)

      How does it stack up to other codecs?

      As I say below, unfortunately the quality is lacking compared to modern codecs like H.264 and even (dare I say) VC-1. Apparently that's just the nature of using wavelets. While they give a very natural style of compression on still images (JPEG-2000, etc), they do not translate well to moving sequences because, unlike all other current codecs, the image is not broken up into blocks that can then be tracked and diff'd in time. Still, it'll be interesting to follow Dirac, if only because they're taking a radical new approach with only Michael Niedermayer's Snow as a peer.
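      For readers unfamiliar with wavelets, a one-level 1D Haar transform (the simplest wavelet) shows the basic idea of splitting a signal into coarse averages and detail coefficients. This is only an illustrative sketch, not Dirac's actual filter bank:

      ```python
      # One level of a 1D Haar wavelet transform: split a signal into a
      # low-pass half (pairwise averages) and a high-pass half (differences).
      def haar_forward(x):
          assert len(x) % 2 == 0
          avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
          diff = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
          return avg, diff

      def haar_inverse(avg, diff):
          # Reconstruct the original pairs exactly from averages and details.
          out = []
          for s, d in zip(avg, diff):
              out += [s + d, s - d]
          return out

      signal = [9, 7, 3, 5, 6, 10, 2, 6]
      lo, hi = haar_forward(signal)
      print(lo)   # [8.0, 4.0, 8.0, 4.0]
      print(hi)   # [1.0, -1.0, -2.0, -2.0]
      print(haar_inverse(lo, hi) == [float(v) for v in signal])  # True
      ```

      The transform is lossless; compression comes from quantizing the detail coefficients, which are mostly small for smooth signals.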

      • by delt0r ( 999393 ) on Saturday September 20, 2008 @03:11PM (#25086773)
        As I state below, most of a codec's performance has to do with the encoder. At 1.0.0 it's too early to tell if the format/codec design is limited.

        However, a great codec without a good encoder is no good at all. But it's early days yet, considering H.264 has been around for 5+ years.
      • Re: (Score:2, Funny)

        by Ant P. ( 974313 )

        The codec is new, give it a few months.

        Early DVDs looked like shitty 90% compressed jpegs too, you know.

      • Re: (Score:3, Interesting)

        Update: I've been told by the devs that Dirac is optimized for HD live action, whereas my tests have thus far involved SD animated content, so YMMV. I'll have to try some live-action sources next.
      • While they give a very natural style of compression on still images (JPEG-2000, etc), they do not translate well to moving sequences because, unlike all other current codecs, the image is not broken up into blocks that can then be tracked and diff'd in time.

        You seem to be confusing motion estimation/compensation with residual coding. Dirac does break the image into blocks, using overlapped block motion compensation [wikipedia.org]. However, the residual image is coded as a whole, thanks to wavelets. This should greatly r
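        To illustrate the weighting idea behind OBMC in 1D (a sketch with a triangular window and made-up block sizes; Dirac's actual windows and block geometry differ): overlapping blocks each contribute a windowed prediction, and at every interior sample the window weights sum to one, so the blended prediction stays normalized.

        ```python
        # 1D sketch of overlapped-block weighting: blocks of length 8 spaced
        # 4 apart (50% overlap), each weighted by a triangular window. At
        # every interior sample the overlapping weights sum to exactly 1.
        BLOCK = 8          # block length (hypothetical)
        STEP = BLOCK // 2  # 50% overlap

        def tri_window(n=BLOCK):
            half = n // 2
            ramp = [(i + 0.5) / half for i in range(half)]
            return ramp + ramp[::-1]  # ramp up, then mirror back down

        w = tri_window()
        # Accumulate window weights from overlapping blocks over a stretch.
        total = [0.0] * (BLOCK + 3 * STEP)
        for start in range(0, len(total) - BLOCK + 1, STEP):
            for i, wi in enumerate(w):
                total[start + i] += wi

        # Interior samples (covered by two blocks) all get weight 1.0:
        print(total[STEP:-STEP])  # twelve 1.0s
        ```

        With weights summing to one, a pixel predicted differently by two neighbouring blocks gets a smooth cross-fade between the two predictions instead of a hard block edge.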

  • Content (Score:5, Interesting)

    by whathappenedtomonday ( 581634 ) on Saturday September 20, 2008 @02:27PM (#25086469) Journal
    I was wondering where I could find some vids to check out quality vs. file sizes and found this [bbc.co.uk] index of demo files. Looks great in VLC, quite impressive even at lower bitrates.
  • While it's very cool what the BBC is doing, and it's good to see wavelet technology being pushed, Dirac 1.0 falls extremely short in my tests (at least on animated material at medium bitrates). In the H.264 era, the quality is unacceptable. Here's hoping they'll be able to keep improving it. On the other hand, I know at least one x264 dev who's convinced that OBMC wavelets will never match the quality of MC block-based approaches without a major breakthrough.
  • by c0l0 ( 826165 ) * on Saturday September 20, 2008 @02:49PM (#25086619) Homepage

    Dirac isn't the only royalty-free, patent-unencumbered video codec there is - Xiph's Ogg Theora has been around a while already, yet failed to impress quality-wise up until recently. There's some really cool development going on, however, and you can see some of the results achieved over there: http://xiphmont.livejournal.com/35363.html [livejournal.com]

    It's noteworthy that the changes made only affect the ENCODER, thus no changes to the DECODER (the part of a codec all applications used to play back files have included) are necessary. This bodes very well for HTML5, which will include some support for Theora on at least Mozilla (and iirc Opera) browsers.

    • by Tab is on Slashdot ( 853634 ) on Saturday September 20, 2008 @03:05PM (#25086733)
      Theora gets a bad rap for being outdated technology, but it does have a few advantages over MPEG-4 ASP: the loop filter, adaptive block sizes, and multiple reference frames, putting it closer to H.264 than MPEG-4 ASP. With these features, it's really a pretty strong showing from Xiph, and things can only get better as the encoder nears 1.0.
      • Except that Theora not only started out behind the then-emerging H.264 and VC-1, but its implementations at launch were quite a bit weaker than even the Xvid of the era, and have essentially stagnated.

        And since then, it's fallen even further behind; the implementations of standardized codecs have been improving a lot more each year than Theora's, as have proprietary codecs like the later entries in On2's VPx series (Theora was forked from VP3; On2 just announced VP8).

        There's been some interesting work in the last

        • by delt0r ( 999393 )
          Dev time. There are more people working on Xvid etc. than on Theora. Folks seem to be more interested in helping out the patent-encumbered formats by providing good encoders and decoders, while the bulk (if not all) of the Theora work is done by only a few people. And that's a lot of work, even if you get to do it full time.

          A patent-free codec is still good as long as it's pretty close to, say, MPEG-4. It would end up in a lot of games etc., and it keeps everyone else playing nice because there are alternatives. It not
      • but it does have a few advantages over MPEG-4 ASP: the loop filter, adaptive block sizes, and multiple reference frames, putting it closer to H.264 than MPEG-4 ASP.

        AFAIK, the deblocker in VP3/Theora is NOT an in-loop filter, but just a simple post-processor, with just a little more smarts due to being codec-specific.

        VP3/Theora also has some significant inherent DISADVANTAGES over MPEG-4 ASP, such as lacking B-frames (which Xvid uses to great effect). And yet, while VP3/Theora isn't competitive with H.264 o

    • by delt0r ( 999393 ) on Saturday September 20, 2008 @03:07PM (#25086749)
      This is a big point. The encoder is far more important than the rest of the codec. Folks talk about Xvid and DivX as if they are codecs, when really they are different encoders for MPEG-4.

      Both Theora and Dirac have plenty of room to move with regard to encoders.

      However, there is no easy way to measure "distortion" of the encoded image that matches the human visual system all that well (unlike audio). But I expect most codecs to get better in the next few years because of encoders (including H.264).

      Ironically, H.264 does so well because of the availability of a free, fast and good-quality encoder done by the community, not the licence owners.
      • Re: (Score:3, Insightful)

        by mdmkolbe ( 944892 )

        However there is no easy way to measure "distortion" of the encoded image that matches the human visual system all that well (unlike audio).

        How do you objectively measure psychoacoustic distortion? Do the same techniques not apply to vision simply due to unknown constants, or is there some more fundamental reason?

        • by delt0r ( 999393 ) on Saturday September 20, 2008 @03:44PM (#25086993)
          In sound, the idea of masking works really well. If there is a loud sound at a particular frequency, we tend not to be able to hear sounds that are close in pitch and a bit quieter (IIRC); they're effectively masked. The other big advantage is the linear nature of sound.

          But the human visual system is a *lot* more complicated. IIRC about 1/3 of our brains is used for visual perception. Currently we use PSNR (peak signal-to-noise ratio) as a measure, but this has been shown many times to be a very poor indication of what we perceive. One example is blocking: blocking causes straight lines to form in the image, and our brains lock on to them far more quickly than other artifacts.

          Next there's colour and the 2D nature of an image. Then add that the eyes do a bunch of preprocessing on motion perception, and it's getting quite difficult. Finally we have the method of comparison, which often involves comparing still images from the video stream. Yet if that's a high-motion scene, the codec might be better off encoding those frames at low quality, because we can't perceive the quality loss combined with fast motion.

          Let's also not forget how many people think YouTube is good quality or, at worst, good enough!
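          For reference, the PSNR metric criticized above is simple to compute, which is much of why encoders use it despite its perceptual shortcomings. A sketch with made-up sample values:

          ```python
          import math

          # Peak signal-to-noise ratio: PSNR = 10 * log10(MAX^2 / MSE),
          # with MAX = 255 for 8-bit samples. Higher is "better", though
          # as noted above it correlates poorly with perceived quality.
          def psnr(original, distorted, peak=255):
              mse = sum((a - b) ** 2 for a, b in zip(original, distorted)) / len(original)
              if mse == 0:
                  return float("inf")  # identical signals
              return 10 * math.log10(peak ** 2 / mse)

          ref = [100, 120, 130, 90]     # hypothetical 8-bit pixel values
          noisy = [101, 119, 132, 88]   # small errors spread around
          print(round(psnr(ref, noisy), 2))  # 44.15
          ```

          Note that PSNR weights every error equally, so a visually glaring artifact (like a block edge) can score the same as imperceptible noise spread across the frame.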
      • by lxt ( 724570 )

        However there is no easy way to measure "distortion" of the encoded image that matches the human visual system all that well (unlike audio).

        I'm not sure I agree with that... and I think the fact that there are people who *can* tell the difference between a 256kbps MP3 and CD audio and those who *can't* perhaps shows that there's no easy way to map quality of audio onto something that matches human perception. There are plenty of technical ways, however, both for audio and visual. I'm not sure where you're getting this from.

        • by delt0r ( 999393 ) on Saturday September 20, 2008 @03:55PM (#25087055)
          All the R&D papers I have read, and folk in the field working on this. It's well recognized that psychoacoustic models are far more developed than psychovisual models.

          I don't doubt that some people can tell the difference between FLAC and MP3/Ogg/AAC. But the true number is far less than the claimed number (do a proper blind test to really find out). Also, you don't design codecs for the 0.5% of the population that can hear the difference, but for the 90% that can't and the other 9.5% that don't care.

          Now, it's a fact that PSNR is used in most encoders. It's also widely recognized that it is not a good measure. I have done my own image compression and got better PSNR per bit than JPEG, and yet it looked far worse.

          So I'm not really sure where you're getting the idea that video is even in the same category as audio.
          • Also you don't design codecs for 0.5% of the population that can hear the difference, but for the 90% that can't and the other 9.5% that don't care.

            In fact you do design for the 0.5%. Testing of codecs specifically uses expert listeners, with an in-depth double-blind testing setup, hidden anchors, and the like. Of course you aren't going to please everyone, all the time, but codec developers certainly do try their best to do so.

        • I think the fact that there are people who *can* tell the difference between a 256kbps MP3 and CD audio and those who *can't* perhaps shows that there's no easy way to map quality of audio onto something that matches human perception.

          In fact you're wrong.

          MP3 was never designed to be indistinguishable from CD-audio. It was designed to sound GOOD at even lower bitrates (64kbps mono). MP3 has several limitations that prevent it from doing so. It's immensely ironic that people are now cranking up the bitrates

    • Yes, let me know when "Thoera" has b-frames. You know, like those things in MPEG-1.

  • been waiting a long time.
  • Hey, I KNEW those Atomic Physics courses I took way back in University would come in handy!

    Who needs to compress time, when all you hafta do is compress the video.
    If I took a video of my cat, and then compressed it with this new codec, would the cat be...
    Umm.. never mind...!
  • Now I can feel just as good about using open-source video compression as I do about buying eggs from free-range chickens!

    The porn-viewing experience just gets better and better.
