Technology

FLAC Joins The Xiph Family 319

Ancipital writes "Xiph.org (of Ogg Vorbis fame) have today announced that the FLAC (Free Lossless Audio Codec) project has joined the Xiph rebel alliance. The full story and press release can be found at the Xiph site. (FLAC is nice, because it gives you pristine lossless audio at roughly 50% size reduction over uncompressed WAVs; you can store them on your hard drive/wherever and then transcode down to a lossy format when you need portability, yum!)"
  • Missing the Point (Score:5, Insightful)

    by dewboy ( 22280 ) on Wednesday January 29, 2003 @02:49PM (#5183231) Homepage Journal
    I think many posters are missing the point of the article. My first reaction was also "hey, this is nothing new -- SHN (Shorten) has been around for a long time and does lossless encoding at 50% the size of WAV"... but then I actually went to the Xiph site and read their mission:

    "Xiph.Org Foundation is a non-profit corporation dedicated to protecting the foundations of Internet multimedia from control by private interests. Our purpose is to support and develop free, open protocols and software to serve the public, developer and business markets."

    So the point isn't that FLAC is new... the point is that FLAC is OSS, and has joined forces with an organization backing such efforts. The SHN codec is not OSS.
    • Re:Missing the Point (Score:2, Informative)

      by Ziviyr ( 95582 )
      So the point isn't that FLAC is new.

      And I'd laugh my guts out if it were.

      FLAC isn't new.
    • So the point isn't that FLAC is new... the point is that FLAC is OSS, and has joined forces with an organization backing such efforts. The SHN codec is not OSS

      Most people don't use OSS because it is open; they use it because it is better. The problem with FLAC is that Shorten is established as the standard, and there are Shorten encoders/decoders available at no cost. The only way FLAC is going to be able to thrive is if it does something better. I was seriously considering moving my 500 GB SHN collection to FLAC, but after doing a few test encodes, I concluded that it wasn't worth the hassle. If FLAC were to give me a 75% compression ratio, that would free up 125 GB for me and would definitely be worth the hassle. But as it stands now, I'm not going to spend a few days converting my SHN collection just for the sake of using an open file format, and this is coming from someone who uses OSS for just about everything (except Photoshop and Sound Forge).

  • Yawn.. (Score:5, Funny)

    by grub ( 11606 ) <slashdot@grub.net> on Wednesday January 29, 2003 @02:49PM (#5183233) Homepage Journal

    If you take your LPs out of the cardboard sleeves you easily save over 50% space.
  • FLAC streaming (Score:5, Informative)

    by Adnans ( 2862 ) on Wednesday January 29, 2003 @02:52PM (#5183270) Homepage Journal
    The upcoming version of AlsaPlayer [alsaplayer.org] will support FLAC streaming over HTTP, and even seeking if you use HTTP 1.1. We should see FLAC streaming support in Icecast soon, at least I hope so.

    -adnans (*plug*!)
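
    Seeking over plain HTTP presumably rides on HTTP/1.1 byte-range requests. A minimal Perl sketch of that mechanism, assuming a Range-capable server (the URL here is made up):

    use strict;
    use warnings;
    use LWP::UserAgent;

    # Ask for a slice from the middle of the stream; an HTTP/1.1 server
    # replies "206 Partial Content", which is all a player needs to seek.
    my $ua  = LWP::UserAgent->new;
    my $res = $ua->get(
        'http://example.com/stream.flac',        # hypothetical URL
        'Range' => 'bytes=1000000-1999999',
    );
    print $res->code, ": got ", length($res->content), " bytes\n";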
  • by Zathrus ( 232140 ) on Wednesday January 29, 2003 @02:52PM (#5183276) Homepage
    This is good news in a nebulous sense, but what about actually getting 3rd party adoption? How many players out there support FLAC? Or even Ogg Vorbis?

    I've been contemplating a digital audio player like the Turtle Beach AudioTron [turtlebeach.com] for a while now, and while the AT has better support for a variety of formats than most, it's missing both FLAC and OGG (and the developers have stated it's not coming due to lack of CPU power).

    I'd love to encode all my CDs onto a central server and have several units around the house playing from that. But I'd rather not rip around 1000 CDs more than once. And it's still not cost effective to just store them as WAVs - using FLAC would double the capacity.

    Yeah, I know... Samba can translate files on the fly now, but that requires a good bit of horsepower. The Celeron 300A in the server just isn't going to be capable of transcoding FLAC->anything in real time, much less do it for 2 or 3 streams at once.

    I guess the question is, what's holding back consumer electronics companies from implementing OGG and FLAC support? Is it technical, financial, or what? And what can Xiph do to help them in this?
    • the AudioTron is supported [sourceforge.net]
    • by joe_bruin ( 266648 ) on Wednesday January 29, 2003 @03:22PM (#5183494) Homepage Journal
      well, the kenwood music keg [kenwoodusa.com] and phatnoise phatbox [phatnoise.com] support both ogg vorbis and flac (in addition to mp3 and wma). flac has turned out to be the best way to keep single session recordings (ie, concert recordings) continuous without gaps on digital music players. i'm guessing we'll be seeing more firmware upgradeable devices start adding support for flac real soon now.
    • According to this comparison [sourceforge.net], it takes about 7:00 - 7:15 to decode 70:12 of audio on a PII-333, so I would guess that your server could - in theory - serve something like 8 streams at once (in practice, with the IO overhead and with a security margin, it should not be a problem to serve those 2-3 streams at once).
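
      The arithmetic behind that, using the parent's own numbers (a trivial sanity check):

      # 70:12 of audio decoded in ~7:15 of CPU time on a PII-333
      my $audio = 70 * 60 + 12;   # 4212 seconds of audio
      my $cpu   =  7 * 60 + 15;   #  435 seconds to decode it
      printf "%.1fx realtime, so ~%d CPU-bound streams\n",
          $audio / $cpu, int($audio / $cpu);   # 9.7x => ~9 streams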

    • This is good news in a nebulous sense, but what about actually getting 3rd party adoption? How many players out there support FLAC?

      There's a list on the sidebar of the FLAC homepage.

      I guess the question is, what's holding back consumer electronics companies from implementing OGG and FLAC support?

      In the case of the AudioTron, they are getting the full court press from Microsoft to do WMA lossless, and knowing Microsoft this will be to the exclusion of all others. I made a good case on their mailing list that with moderate work (and I was willing to help), the AT would be able to decode FLAC natively. Their response has always been "we tried it and it's not fast enough", despite the fact that identical hardware in other devices (Rio Receiver, PhatBox) can decode FLAC fine.

      But no matter, other manufacturers are providing a choice, and the list is growing. Recently the ReQuest guys have added FLAC support to their ARQ boxes. So time will tell what consumers really want.

    • I guess the question is, what's holding back consumer electronics companies from implementing OGG and FLAC support?

      Not much. Xiph offers ported and optimized fixed-point code for Ogg Vorbis to anyone who wants it on royalty-free terms (BSD or similar license IIRC). Moreover Xiph has been actively promoting its standards and codebases directly to the involved businesses.

      Nevertheless, the trick is actually getting the end-product companies to make Vorbis/FLAC support a requirement for their next product/revision. I suspect it'll be Apple or a similar market leader that forces the hand of the competitors, since it seems that most marketing departments can't see much further than their competitors' feature lists.

  • Very very cool (Score:5, Informative)

    by Phexro ( 9814 ) on Wednesday January 29, 2003 @02:55PM (#5183297)
    I just started archiving my CD collection (350+ discs) using FLAC. I tested a number of codecs, including LAME, Ogg Vorbis, and FLAC.

    In the end, I settled on FLAC for four reasons:

    * It's completely lossless.
    * Gapless playback
    * If you save the TOC from the source CD, you can burn an exact copy, pregaps and all, from your FLACs.
    * I can reencode to Ogg, MP3 or whatever lossy format I want at any time. Nice for when I want to make an MP3 disc to play on my MP3 walkman, and I don't lose quality like I would if my source material was in Ogg.

    Hopefully, we'll see wider support for FLAC come from this partnership. Not too many players support FLAC, though the FLAC developers have made plugins for XMMS and WinAmp.

    Oh, and some people have been tossing the '50% compression' thing around already. It really depends on the music. I have managed up to 70% compression on some sparse music (mainly ambient and classical), while my death metal and noise encoded around 30%. It seems that the denser the source is, the less it compresses.
    • There was a Slashdot article a while back mentioning Ogg Vorbis 'peeling'. Like interlaced GIFs (or those weird blocky JPEGs whose correct name I don't know), the first part of the file is a low-quality version and then downloading more bits gives progressively higher quality.

      I wonder if FLAC could be adapted to do this too, so you could 'head --bytes 1000000' to get a lossy version. Okay, maybe not quite such good quality as an Ogg Vorbis file of the same size, but it might be good enough.
      • Like interlaced GIFs (or those weird blocky JPEGs whose correct name I don't know)

        Progressive JPEGs.
      • While this sounds cool, there's really not much point. FLAC decoding is very fast. When I want a MP3 to take with me, I just reencode it:

        flac -c -d flac_file.flac | lame - >mp3_file.mp3

        It takes under a minute to reencode a typical 4 or 5 minute track this way on my 1.7GHz P4. Just decoding the track takes around 7 seconds, and reencoding it with lame takes 50 seconds.

        I suppose it could be faster, but it works for me.
      • FLAC couldn't be adapted to this, because it's intrinsically lossless; converting it to lossy would require a full conversion, so you might as well just re-encode it as Ogg.

        HOWEVER, it's possible that Ogg could be adapted the other way, adding another layer to it to make it lossless (probably by computing the difference between the lossy result and the actual source, and compressing the resulting stream). The result would be larger and MUCH more computationally demanding to play than FLAC, but it would be lossless and peelable.

        I don't have any idea how much larger or more computationally demanding it would be.

        -Billy
  • Algorithms? (Score:4, Interesting)

    by crow ( 16139 ) on Wednesday January 29, 2003 @02:55PM (#5183298) Homepage Journal
    So what sort of compression algorithm does FLAC use?

    One idea that would be really cool is if they could achieve lossless compression by noting the differences between the original and the .OGG, and appending that to the .OGG. Then you can just strip off the added info when you make copies to restricted-space devices. The only question is whether this can be done with a competitive compression ratio.
    • Re:Algorithms? (Score:5, Informative)

      by pclminion ( 145572 ) on Wednesday January 29, 2003 @03:26PM (#5183524)
      So what sort of compression algorithm does FLAC use?

      For the most part, linear prediction. This uses a linear combination of past sample values to predict the next sample value. The difference between the prediction and the actual is Golomb-Rice encoded. Golomb-Rice codes are used when the probability of an integer occurring is geometric (i.e., the value N+1 is 1/R times as likely as the value N, for some R > 1). This is a pretty good assumption for audio, since the predicted values tend to be quite close to the real ones. Some other lossy compression algorithms also use linear prediction, but they quantize the predicted values to reduce the bitrate even further. The quantization is the lossy step.
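
      To make that concrete, here is a toy sketch (not FLAC's actual code) of a fixed second-order predictor with Rice-coded residuals; it just counts output bits instead of writing a real bitstream:

      use strict;
      use warnings;

      # Rice code length for one residual: zigzag-map to unsigned, then
      # a unary quotient (u >> $k ones plus a stop bit) and $k raw bits.
      sub rice_bits {
          my ($r, $k) = @_;
          my $u = $r >= 0 ? 2 * $r : -2 * $r - 1;
          return ($u >> $k) + 1 + $k;
      }

      my @x = map { int(10000 * sin($_ / 20)) } 0 .. 999;  # smooth test tone
      my $k = 4;             # Rice parameter; fixed here, adaptive in FLAC
      my $bits = 2 * 16;     # first two samples stored verbatim
      for my $n (2 .. $#x) {
          my $pred = 2 * $x[$n - 1] - $x[$n - 2];  # linear extrapolation
          $bits += rice_bits($x[$n] - $pred, $k);
      }
      printf "%.1f bits/sample instead of 16 raw\n", $bits / @x;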

      MP3 and OGG, on the other hand, work differently. They first transform a block of audio using the MDCT, and apply a psychoacoustic model to the resulting spectral envelope. This eliminates a lot of subbands that are "inaudible." At that point the remaining subband energies are quantized and entropy-coded. To decode, the encoded energies are decoded and the spectral envelope is reconstructed, then transformed back into the time domain to become "audio" again.

      It would be a serious feat to integrate FLAC and OGG. They are totally different systems.

      • Re:Algorithms? (Score:2, Informative)

        It would be a serious feat to integrate FLAC and OGG. They are totally different systems.

        Not so, they are already integrated, i.e. you can already encode to raw FLAC or Ogg FLAC with the command-line flac encoder. FLAC packets are embeddable in an Ogg container just as easily as Vorbis ones.

        • Yeah, I got Ogg and Vorbis confused. My bad. What I meant is there is no simple way to integrate FLAC data with Vorbis data. Obviously there should be no difficulty embedding FLAC data in an Ogg stream.

          What the original poster was suggesting would be to encode the file lossily, and then FLAC-encode the residual produced by subtracting the encoded waveform from the unencoded one. This is a very cool idea, but it won't work. The residual signal is going to be very noise-like, so it would be resistant to FLAC compression (FLAC uses a "verbatim" mode when it sees noise -- the verbatim mode does no compression at all). It isn't that you couldn't do it, but I very highly doubt you'd gain anything by it.
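
          A quick toy demonstration of why (Compress::Zlib standing in for any entropy coder; the "residual" is simulated here as low-level noise, which is roughly what a lossy codec's error looks like):

          use strict;
          use warnings;
          use Compress::Zlib;

          my @tone  = map { int(10000 * sin($_ / 20)) } 0 .. 9999;
          my @noise = map { int(rand(65)) - 32 }        0 .. 9999;

          # Apply the same order-2 prediction to both, then deflate the
          # residuals: the tone predicts well, the noise doesn't.
          for ([tone => \@tone], ['noise-like residual' => \@noise]) {
              my ($name, $s) = @$_;
              my @r = map { $s->[$_] - 2 * $s->[$_ - 1] + $s->[$_ - 2] }
                      2 .. $#$s;
              my $raw = pack 's*', @r;
              printf "%-20s %6d -> %6d bytes\n",
                  $name, length($raw), length(compress($raw));
          }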

        • Re:Algorithms? (Score:3, Informative)

          by Pathwalker ( 103 )
          You can wrap FLAC in an OGG stream, but why would you want to?

          FLAC already has a very good wrapper. OGG is very small, and adds as little as possible to the size of the raw data making up the media stream, but some decisions were made that make OGG useless to me as a wrapper.

          As an example, how do you seek in a file?

          In FLAC's native format, you read the Metadata Block Seektable [sourceforge.net], which gives you a mapping between points in time and points in the file.
          In QuickTime, you read the Sample Table Atom [apple.com], which does basically the same thing.

          In OGG [xiph.org]? It appears (from vorbisfile.c [xiph.org]) that you have to seek through the whole stream, reading the headers of every page to find the locations of all of the absolute granule position markers and regenerate the same information that other formats spend a few hundred bytes to store in a table.

          Needing to read the whole file before being able to seek might not seem like much, but when you are dealing with files of moderate size (6 hours or so) stored on media where the transfer rate between the file and the player is close to the bitrate of the audio, it becomes extremely annoying.
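
          For contrast, everything a seek table buys you fits in a dozen lines; the table below is hypothetical, but the lookup is just a binary search over (sample number, byte offset) pairs:

          use strict;
          use warnings;

          # Made-up seek points: [first sample in frame, byte offset].
          my @seektable = ([0, 8_192], [441_000, 1_200_000],
                           [882_000, 2_410_000], [1_323_000, 3_580_000]);

          # Binary-search the greatest seek point at or before $target.
          sub seek_point {
              my ($target) = @_;
              my ($lo, $hi) = (0, $#seektable);
              while ($lo < $hi) {
                  my $mid = int(($lo + $hi + 1) / 2);
                  if ($seektable[$mid][0] <= $target) { $lo = $mid }
                  else                                { $hi = $mid - 1 }
              }
              return @{ $seektable[$lo] };
          }

          my ($sample, $offset) = seek_point(900_000);
          print "start decoding at byte $offset (sample $sample)\n";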
      • It would be a serious feat to integrate FLAC and OGG. They are totally different systems.

        FLAC and Ogg are already integrated. Ogg is simply a container file format; it has nothing to do with audio compression. You are thinking of Vorbis.

        See the FAQ [vorbis.com].

    • An interesting idea: Vorbis-encode something, then FLAC the deltas and put that in the Ogg.

      Being lossless, FLAC is going to care about a bunch of little details that OGG won't.

      Actually I'm going to bet that the Vorbis noise floor management may hurt when FLAC has to deal with it. But FLAC is designed to expect a lot of crap to sneak by its main predictors, and into the Rice (coding).

      So it beats me. I'd like to see the difference with various instruments.
    • Re:Algorithms? (Score:5, Interesting)

      by Josh Coalson ( 538042 ) on Wednesday January 29, 2003 @04:17PM (#5183861) Homepage
      So what sort of compression algorithm does FLAC use?

      • interchannel decorrelation: mid-side coding
      • intrachannel decorrelation: FIR linear prediction
      • entropy coding: Rice codes with a simple context mechanism

      For more info see here [sourceforge.net]

      One idea that would be really cool is if they could achieve lossless compression by noting the differences between the original and the .OGG, and appending that to the .OGG. Then you can just strip off the added info when you make copies to restricted-space devices. The only question is whether this can be done with a competitive compression ratio.

      This has been suggested before, but would require all Vorbis decoders to decode to the exact same result, which is not practical (Vorbis decodes to float samples).

      • 1) I don't know anything about Ogg Vorbis
        2) Isn't there some integer decoder everyone was talking about a while back? I think they could use that to deal with this problem.
        3) I think there are other serious barriers to entry. That is, the discrepancies between a vorbis file's output and the original might be more difficult to encode than the audio in the first place, and the resulting filesizes might be bigger than FLAC (or hell, have no advantage over uncompressed audio in the first place.)
        4) I don't know anything about Ogg Vorbis.
      • "This has been suggested before, _but would require all Vorbis decoders to decode to the exact same result_, which is not practical (Vorbis decodes to float samples)." [emphasis mine]

        Not true.

        I don't know if this has been tried before, but the limitation you propose does not exist. This would not require all OGG decoders to produce the same exact result. It would require just that all FLAC decoders produce the same ogg output for the ogg data in the flac file, which is possible.

        Your .ogg personal mp3 player doesn't have to decode the oggs exactly for a completely unrelated FLAC codec to take advantage of this principle. The only requirement is that all FLAC codecs use the same OGG decoding algorithm.

        I believe the actual problem has nothing to do with practicality, but that the residual waveform produced via the diff between the .ogg output and the .wav output is nothing more than noise, which is not significantly compressible by any means currently known (IOW, the noise would take up as much space as the original .wav file, because it can't be compressed very well).

        If someone found the "trick" to compress this noise (because this noise will definitely have a pattern to it, depending on which codec you used to encode the original .ogg file, and depending on what type of music is being encoded) then this principle would be an awesome breakthrough.
  • Is lossless really a good idea?

    Why can't we develop a codec which is "almost lossless" and works well at higher bitrates? Ogg and MP3 do okay at 320kbps, but the quality increase isn't three times that of a 128kbps mp3.

    A good test for encoding quality is to encode new age (Enya, Enigma) or classical music, as they tend to have many subtle, yet distinct instrumental sounds (bells, small cymbals, synthesized effects) in the background. Listen to them using a pair of good quality headphones (Sennheiser or Bose) - you're not listening for artifacts (at high bitrates, you shouldn't find any) - instead listen for the subtle background sounds. THEN, make the decision if lossless really is better. Personally, I prefer 192kbps OGG for my encoding, as it provides reasonably good quality without sucking up my entire drive.
    • ---
      Why can't we develop a codec which is "almost lossless" and works well at higher bitrates? Ogg and MP3 do okay at 320kbps, but the quality increase isn't 3 times a 128kbps mp3.
      ---

      That's fine. But lossless compression is important for people that trade and distribute music. Having an *exact* copy is what we want and lossless compression schemes do that for us. In the trading circles that I'm a part of, mp3 and ogg are the product of the devil. :)
    • by LinuxGeek8 ( 184023 ) on Wednesday January 29, 2003 @03:24PM (#5183516) Homepage
      > Is lossless really a good idea?

      Yes, it is.
      There are many musicians who want portability. Try encoding a WAV to mp3/ogg at home, decode it in the studio, mix it, encode it again to mp3/ogg, and take it back to your home studio.
      Then try that 20 times, and see what remains of the sound quality.
      Sure, you can also carry WAV files if it matters that much to you, but 50% savings can be a lot.
      • I am curious: why would you lose audio quality with successive reencoding? In theory, mp3 has a psychoacoustic model to eliminate frequencies you can't hear. So once that is done on the first pass, the successive passes should find nothing more to eliminate. Where am I wrong?
        • There are 3 reasons you are wrong...

          1) Your suggestion might be technically possible, but impractical. For instance, there is no practical reason to build a codec that has this capability if it will cost more to build than to just use lossless codecs.

          2) You still don't have a way to archive the ORIGINAL SOUND. Would you store your master copy on a record? Surely not. The same should be true for lossless vs. lossy compression/storage.

          3) After you decode an mp3 and edit it, then re-encode it, you are no longer re-encoding the original mp3's output, so you cannot predict its re-encodability, EVEN IF you did build a codec that could do what you propose.

          An example of #3:

          Have you ever tried to run a .jpg image through Photoshop filters? Decode it to .psd (basically a bitmap image), apply the filter, then re-encode to .jpg, and you will surely see that the result won't be anywhere CLOSE to what you'd get applying the filter to the original non-jpeg file. The reason is that the filters do a pixel-by-pixel application, which takes the current pixel, and sometimes surrounding pixels, and alters them depending on what surrounds them. Since you started with a jpg, the surrounding pixels will be somewhat obscured from the original, and these slight obscurities could produce drastic changes in the final output.
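
          The grandparent's question also has a more direct answer: a lossy codec is only (roughly) idempotent if nothing at all changes between passes, and in practice something always does -- an edit, a gain change, or just different frame boundaries. A toy model of that (coarse quantization standing in for the codec, a 2% gain change standing in for the studio mix):

          use strict;
          use warnings;

          my $step = 64;    # "encode" = quantize samples to multiples of 64
          sub encode {
              map { $step * int($_ / $step + ($_ >= 0 ? 0.5 : -0.5)) } @_;
          }

          my @orig = map { 10000 * sin($_ / 20) } 0 .. 999;
          my @gen  = encode(@orig);
          @gen = encode(map { $_ * 1.02 } @gen) for 1 .. 20;   # 20 studio trips

          my $ref = 1.02 ** 20;    # where the samples should have ended up
          my $err = 0;
          $err += abs($gen[$_] - $orig[$_] * $ref) for 0 .. $#orig;
          printf "mean error after 20 generations: %.1f (a single pass: <= %d)\n",
              $err / @orig, $step / 2;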
    • by mindstrm ( 20013 ) on Wednesday January 29, 2003 @03:25PM (#5183518)
      Speaking of a quality increase isn't really the right way to look at it. It's the decrease from the original that is important.

      Most people, experts included, cannot tell the difference between a 320kbps mp3 and an original 44.1kHz PCM sample. I mean, the vast majority of experts simply cannot tell the difference.

      But there IS a difference. We know there is a difference because it's lossy compression. We know that when we take an original CD and use flac on it, we end up with an exact copy of the original. That's why lossless compression exists.

      If you are simply listening to something on the headphone jack of your computer with medium or low quality headphones (like Bose or most of the Sennheiser line (medium), or the normal crap you buy in any store (low)), you don't have a chance of hearing the difference between a high bitrate mp3 and the original... there is too much noise from the computer, and not enough power from the headphone jack.

      On the other hand, if you are using a clock stabilized external output from a good external soundcard with a proper mixer, running through a good class-A headphone amp and into a good pair of headphones (Sennheiser HD580, HD600, Grado RS-1, RS-2, SR325), in a quiet room built for listening, and if you have good ears, and are used to listening for detail, you may hear a difference.

      • by Sycraft-fu ( 314770 ) on Wednesday January 29, 2003 @05:52PM (#5184631)
        Actually, one of the most important applications for lossless audio compression is production. The "stuff you can't hear" often becomes stuff you CAN hear if the sound is processed further (EQ'd, chorused, etc). It's really not a good idea to use lossy compression until you are completely done with your stuff. But saving disk space is often something that would be nice; multitrack audio can get real big real quick. Hence, a lossless compression algorithm is great. Some companies implement one or another in their pro software, but it would be nice if they were to settle on something like FLAC as a standard so the files would be interoperable.
    • by tapin ( 157076 ) on Wednesday January 29, 2003 @03:26PM (#5183530)

      Is lossless really a good idea?
      Yes. It is.

      Say you've got your collection of CDs at home, and you're just about to encode them all for your iPod. "Okay", you figure, "I'm going to pick.. umm.. 192kbps MP3s, since that's pretty good and I'm going to be listening to them over cheap headphones on the train on the way to work."

      So you go ahead and encode your entire 600-album collection to 192 kbps MP3s. And you put them on your iPod, and everything's fine... until you decide you want to listen to them at work as well, and 192kbps just isn't good enough for listening in the quieter environment in your cube.

      Now you've gotta take your 600 CDs and re-encode them at 320 kbps, because if you were to do something silly like extract your 192 kbps MP3s to wave files and re-encode to 320 kbps, you'd just end up with inflated 192 kbps MP3s.

      Better yet, say you want (vbr) ogg files at work; or Apple (heaven forfend) finally comes out with a portable player with ogg support. You still need to go back to your original CDs (are they scratched yet? Did you lend 'em to your friend and forget he had them before he left for Maryland? Did your wife take your favorite disc to work with her, where one of her students used it for an art project?) and re-encode everything.

      Now, say instead you use FLAC (or SHN, or even APE which I've never personally used).

      You take your collection to work; turns out your servers are slightly too small for the FLAC files, so you expand to wave and encode to 320 kbps MP3s using a simple shell script for the entire collection.

      You want ogg files for your new next-generation iPod; great, just run a slightly different shell script to expand to wave and encode to ogg.

      Your apartment is broken into and your entire 600 CD collection is stolen, including that ultra-rare CD you got from that band that was once part of that other band but split off when the original drummer OD'd, but they only burned 300 copies of their indie CD and besides they haven't been together since '94. No problem, you've still got the FLAC files and can at least burn yourself a virgin, bit-for-bit exact copy (depending on how carefully you originally extracted it, of course) of the audio -- your artwork and individually-numbered disc are still gone, sorry.

      And that's not to mention new compression algorithms, media formats, etc. MP3 and other lossy compression algorithms don't handle future-readiness very well.

    • by kekoap ( 37035 ) on Wednesday January 29, 2003 @03:41PM (#5183608)

      Visit etree.org [etree.org]. The big benefit of lossless compression is it makes for better distribution of live recordings. The short of it is that demanding recordings in a losslessly compressed audio format, along with verification using checksum files, guarantees no loss in fidelity.

      There are many alternate live-music trading scenarios which cause a loss in fidelity. Two of the most common: 1) CD Audio->CD Audio copies are not perfect (unless you use a specialized tool like EAC - Exact Audio Copy); 2) trading lossily-compressed audio tends to lead to loss of fidelity through inevitable decompression, writing to CD, reripping, and reencoding.
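
      The checksum half of that workflow is nothing exotic; verifying a show against its .md5 file takes only a few lines, assuming the usual md5sum-style "hash filename" layout:

      use strict;
      use warnings;
      use Digest::MD5;

      # Check every "deadbeef... filename" line in show.md5.
      open my $list, '<', 'show.md5' or die "show.md5: $!";
      while (<$list>) {
          next unless /^([0-9a-fA-F]{32})\s+\*?(.+?)\s*$/;
          my ($want, $file) = (lc $1, $2);
          open my $fh, '<', $file or do { warn "$file: $!\n"; next };
          binmode $fh;
          my $got = Digest::MD5->new->addfile($fh)->hexdigest;
          print "$file: ", ($got eq $want ? 'OK' : 'FAILED'), "\n";
      }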

    • I don't think that "lossless" sounds three times as good as 128kbps mp3.
  • It's a go! (Score:2, Interesting)


    With all the news on Microsoft's "new" TabletPC (old idea), I am quite intrigued that Microsoft doesn't have any innovative technology to bundle with their TabletPC; Xiph.org has it! The open source "revolution" is crumbling many barriers, including the proprietary ones put up just as a "distraction" (yes, inter-operability with Microsoft's proprietary software is a distraction that keeps good programmers from designing and implementing better software and standards).

    Come to think of it, Microsoft has nothing innovative in the audio and video world. Their AVI format, its many subspecies (wsf, wmf, wma, etc.), and the general proliferation thereof are a notable example of how media standards are not as crucial an element in a company's survival as you might think. Bill Gates stated, in essence (and the observation remains impressively accurate), that Microsoft's goal is to extend itself to its competitors by pushing them to use Microsoft software. I just saw a black cat, the same one, walk by twice. Xiph has technology that Microsoft wants: lossless audio. S3's S3TC is a patent-encumbered graphics standard, and it is the only standing technology keeping the DRI project [sourceforge.net] from being able to compete on equal footing as an open source platform. So now, where does Microsoft think it's going today? Microsoft has no software forcing anyone to use it now; the better software is open sourced and freely available.

    In the immortal words of Nelson... "Hah ha!"
  • by briancnorton ( 586947 ) on Wednesday January 29, 2003 @03:12PM (#5183421) Homepage
    High quality music recording is great and all, but what's the point? Unless you are recording it straight from the source, you are still limited by the frequency range and sampling rate of the delivery media (i.e. a CD). I seriously challenge ANYBODY, even those with true HI-FI equipment, to tell me the difference between a CD and a good quality MP3Pro.

    On top of this, you are still limited by the response of the equipment you are playing it on. Maybe this would help a little if you had an optical connection to a good amp, but computer speakers will provide more interference than compression any ol' day.

    • by KjetilK ( 186133 ) <kjetil AT kjernsmo DOT net> on Wednesday January 29, 2003 @03:19PM (#5183475) Homepage Journal
      I'm totally without a clue on this, really, but I can think of one obvious use: You want to back up your music collection but use less space to back it up.

      If the CD is lost or destroyed by scratches (many of mine are already), you still have the original recording that you can compress with the lossy compression of the day for your daily use. Conversion between lossy codecs is meaningless, but compressing from a lossless format to a lossy format is OK.

      So, if Ogg Vorbis 2.0 is better than 1.0, you can make 2.0 files from your lossless compressed files.

  • Can someone give a layman's description of how this compression algorithm manages to compress audio so effectively?

    I pretty much know how lossy compression works and how gzip/zip/etc work, but what does this do (that must be specific to audio) that zip doesn't?

  • Does anyone know if there is any way to edit FLAC files at the command line? Like tell it to output a region from a starting time to an ending time to another file?

    Editing 30-minute audio files on Linux is quite slow going using the GUI programs. The best I've yet found is GLAME, which at least lets me select a region then resize the region to get things right, but it takes an age. It would be much quicker to pinpoint the start and end points by listening in xmms, then use the command line.
  • by Anonymous Coward
    #!/usr/bin/perl
    #
    # Converts FLAC to MP3 preserving tags
    # License: GPLv2
    # Home: http://www.GuruLabs.com/downloads.html
    #

    use MP3::Info;

    foreach $file (@ARGV) {
        if (!($file =~ /\.flac$/)) {
            print "Skipping $file\n";
            next;
        }
        undef $year; undef $artist; undef $comment; undef $album; undef $title; undef $genre; undef $tracknum;
        if ($tag = get_mp3tag($file)) {
            $year = $tag->{YEAR};
            $artist = $tag->{ARTIST};
            $comment = $tag->{COMMENT};
            $album = $tag->{ALBUM};
            $title = $tag->{TITLE};
            $genre = $tag->{GENRE};
            $tracknum = $tag->{TRACKNUM};
            chomp($year, $artist, $comment, $album, $title, $genre, $tracknum);
            $tracknum = sprintf("%2.2d", $tracknum);
        } else {
            print "Couldn't get id3v1 tag for $file.\n";
        }
        if (($artist) && ($title) && ($tracknum)) {
            $outfile = "$tracknum" . "_-_" . "$title.mp3";
            `flac -c -d "$file" | lame --alt-preset standard --ty $year --ta "$artist" --tc "$comment" --tl "$album" --tt "$title" --tg "$genre" --tn $tracknum - "$outfile"`;
        } else {
            $outfile = $file;
            $outfile =~ s/\.flac$/.mp3/;
            `flac -c -d "$file" | lame --alt-preset standard - "$outfile"`;
        }
    }
    • There's a security/correctness problem with that script:

      `flac -c -d "$file" | lame --alt-preset standard --ty $year --ta "$artist" --tc "$comment" --tl "$album" --tt "$title" --tg "$genre" --tn $tracknum - "$outfile"`;

      Don't use this script unless you trust the creator of the .flac file. In other words, definitely not on anything you downloaded off gnutella. You can execute arbitrary commands through carefully crafted ID3 tags. (Not even that carefully; an artist of '"; rm -rf /; echo "' minus the single quotes would do it.)

      Normally I'd say to use the system('program', 'arg1', 'arg2', ...) form instead, since it doesn't go through the shell. In this case, that's impossible because there's a pipe. You could accomplish the same thing with fork()/exec() and some manual pipe manipulation; a sketch of the idea follows below. It's a pain, so maybe someone's written a CPAN module to do it for you.

      Or, you could set the ID3 tags of the .mp3 afterward with the same MP3::Info module that you used to read the ID3 tags from the .flac. That way might even be less code. But you still need to make sure $outfile is safe.

      If all else fails, you could use a regexp to sanitize those variables. But I don't like that approach for a couple of reasons:

      • quoting is shell-dependent. Assumptions are bad.
      • it's unnecessarily complicated.
      • it requires you explicitly remember to sanitize each variable. Solutions which are unsafe unless you have a certain bit of code are really bad. They cause SQL injection attacks, cross-site scripting attacks, shell command injection attacks, etc.
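
      To illustrate that fork()/exec() route: Perl's list-form pipe opens exec the programs directly, so nothing ever touches /bin/sh. A rough sketch of the idea under those assumptions, not a drop-in patch for the script above:

      use strict;
      use warnings;

      sub flac_to_mp3 {
          my ($flacfile, $mp3file, @tag_opts) = @_;
          # List-form open never invokes a shell, so an artist tag like
          # '"; rm -rf /; echo "' reaches lame as a single harmless string.
          open(my $flac, '-|', 'flac', '-c', '-d', $flacfile)
              or die "flac: $!";
          open(my $lame, '|-', 'lame', '--alt-preset', 'standard',
               @tag_opts, '-', $mp3file)
              or die "lame: $!";
          binmode $flac; binmode $lame;
          local $/ = \65536;                 # shovel 64 KB chunks across
          print {$lame} $_ while <$flac>;
          close $flac or warn "flac exited nonzero\n";
          close $lame or warn "lame exited nonzero\n";
      }

      flac_to_mp3('track.flac', 'track.mp3', '--ta', 'Some "Artist"');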
  • etree (Score:2, Informative)

    by dylelf ( 617498 )
    People should check out: http://wiki.etree.org [etree.org], an online network for people interested in live jam band music. They are trying to move towards using all FLAC, or at least mostly. Also check out the etree audio archive [archive.org], they have some stuff in FLAC, although most of it's in SHN.
  • by Jugalator ( 259273 ) on Wednesday January 29, 2003 @06:09PM (#5184791) Journal
    Now tell me what FLAC has that lzip [sourceforge.net] hasn't! I constantly compress my CD rips down to a few MBs. You can too!

    Some impressive stuff from the FAQ that made me leave that Monkey-compression-thingy once and for all:

    "We're talking about a constant-time algorithm that can reduce a file down to 0% of its original size. What's not to like?"
    ---
    "You will most likely experience a feeling of euphoria or lightheadedness as you watch your free disk space cascade upwards to 100%."
    ---
    Are there any drawbacks?

    "Not that we know of. Occasionally, in the pre-1.0 days, someone would compress a file down to 0K and it would be lost for good. But that has been happening less and less frequently, and these days it has been a long time since we received any complaints from the people who reported this originally."

    ---

    I'm especially impressed by their complex PLACeBO and Lessiss-Moore algorithms.

    And don't forget to read their Free-Object Oriented License [sourceforge.net] (or simply "FOO").
  • That seems low. A bit of preprocessing before gzip should do better than that. How about this?

    Start by converting stereo to A+B and A-B form. This is lossless, but the A-B track often has less variation than the A+B track; anything that's on both channels doesn't affect the A-B track much.

    Then convert those two tracks to deltas from the previous sample. This reduces small changes to small numbers, which compress well in later steps. When the source material doesn't have high frequencies, you'll have runs of similar numbers.

    Then reorder the bytes so that samples are sequential, not interleaved. That way, runs of similar deltas are sequential.

    Then run gzip, which is very good at compressing runs of similar bit sequences.

    This is completely reversible. Try it and see how well it compresses. It should do especially well on instrumental classical music.
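
    For what it's worth, the recipe above is cheap to test; a sketch using Compress::Zlib as the gzip stage and a synthetic stereo tone as input (real music will behave differently, of course):

    use strict;
    use warnings;
    use Compress::Zlib;

    # Synthetic stereo: two correlated tones, 16-bit samples.
    my (@l, @r);
    for my $n (0 .. 9999) {
        push @l, int(8000 * sin($n / 17));
        push @r, int(6000 * sin($n / 17 + 0.2));
    }

    sub delta { my @s = @_; map { $s[$_] - ($_ ? $s[$_ - 1] : 0) } 0 .. $#s }

    my @dsum  = delta(map { $l[$_] + $r[$_] } 0 .. $#l);   # A+B, then deltas
    my @ddiff = delta(map { $l[$_] - $r[$_] } 0 .. $#l);   # A-B, then deltas

    # Channels stored sequentially (the "reorder" step), then deflated.
    my $raw  = pack 's*', map { ($l[$_], $r[$_]) } 0 .. $#l;
    my $prep = pack 's*', @dsum, @ddiff;
    printf "raw interleaved: %6d -> %6d bytes\n", length($raw),  length(compress($raw));
    printf "preprocessed:    %6d -> %6d bytes\n", length($prep), length(compress($prep));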

    • by pclminion ( 145572 ) on Wednesday January 29, 2003 @06:55PM (#5185166)
      You freak, you just described EXACTLY what FLAC does.

      Except that they go even further than your naive scheme, and use a predictor to get even smaller deltas than your scheme (e.g., assume waveform is locally quadratic/cubic/quartic then extrapolate the next sample). A signal can be varying rapidly and yet still be highly predictable. Your simplistic scheme wouldn't handle it.

      Then they use Rice-Golomb coding to encode the deltas. This does FAR better than gzip ever could, because it is designed SPECIFICALLY to handle the geometric distribution of the deltas, whereas gzip is a generic dictionary algorithm.

      I really doubt you've even tried what you are suggesting. You're on the right track, but the FLAC team beat you to the punch. Sorry.

    • A bit of preprocessing before gzip should do better than that. How about this? [...] This is completely reversible. Try it and see how well it compresses.

      Umm, you try it. Presumably they've spent a lot of time on designing a format. You should take a little bit of effort before claiming you can do much better, especially since verifying the compression of such a simple format should be easy. Making the claim without even taking that much effort is insulting.

      Besides, FLAC has important features [sourceforge.net] your format does not. In particular, FLAC is seekable. As anyone who has tried to quickly extract a single file from a large .tar.gz knows, gzip is not.

  • by wwwgregcom ( 313240 ) on Wednesday January 29, 2003 @08:52PM (#5186030) Journal
    And by GNU, no less!

    It's called gzip. Try it yourself and see the results!

    gzip -c9 audiofile.wav > audiofile.gz
