Graphics

'Quite OK Image' Format (QOI) Coming To a Graphics Program Near You? (phoboslab.org)

Slashdot reader Tesseractic comes bearing gifts — specifically, news of "a new image format that is lossless, gives much faster encodes, faster decodes and roughly comparable compression compared to what's in use today."

Quite OK Image format (or QOI) is the brainchild of developer Dominic Szablewski, who complains that current image and video formats like PNG, JPEG, MPEG, MOV and MP4 "burst with complexity at the seams," the Register reports: "Every tiny aspect screams 'design by consortium'," he added, going on to lament the fact that most common codecs are old, closed, and "require huge libraries, are compute hungry and difficult to work with." Szablewski thought he could do better and appears to have achieved that objective by cooking up some code, floating it on GitHub, and paying attention to the 500-plus comments it generated.

While Szablewski admits that QOI will not compress images as well as an optimized PNG encoder, he claims it "losslessly compresses images to a similar size of PNG, while offering 20x-50x faster encoding and 3x-4x faster decoding." Most importantly to Szablewski, the reference en-/decoder fits in about 300 lines of C and the file format spec is just one page long.

"In the last few weeks QOI implementations for lot of different languages and libraries popped up," Szablewski wrote on his blog, with Zig, Rust,Go, TypeScript, Haskell, Ä, Python, C#, Elixir, Swift, Java, and Pascal among the options.



  • by sodul ( 833177 ) on Saturday December 25, 2021 @03:49PM (#62114565) Homepage

    I'm not sure that image compression/decompression speed is a big concern these days. Sure, I can see this being used in a niche case where a compromise between storage size and raw processing speed might be useful on a backend platform, something like the image storage systems Pixar uses for its movies, for example. But considering JPEG is still the standard for lossy pictures while newer formats have existed for almost two decades, I don't see how this one will dethrone PNG.

    PNG gained popularity because we badly needed a lossless format with more than 256 colors (GIF's limit) and JPEG was not good for that; the addition of transparency really sealed the deal.

    Obligatory: https://xkcd.com/927/ [xkcd.com]

    • It could be useful for embedded systems where you want to process images quite fast, and don't want to include a large library like libpng that may have vulnerabilities and require updates.

      Compression algorithms for filesystems (lzo, lz4, snappy, zstd) do not produce the best compression ratios, but they are very fast at decompressing and improve the throughput of SSDs, and their implementations are small enough to be included in an OS kernel with low risk of introducing bugs. For images the Linux kernel can read good old
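
      For a rough feel of that speed/ratio tradeoff from pure Python, here is a minimal sketch using only the standard-library zlib as a stand-in (lz4/zstd need third-party bindings, and the test data is made up):

          import time, zlib

          data = bytes(range(256)) * 4096  # ~1 MiB of moderately compressible stand-in data

          for level in (1, 9):             # fast/low-ratio vs slow/high-ratio
              start = time.perf_counter()
              packed = zlib.compress(data, level)
              elapsed = time.perf_counter() - start
              print(f"level {level}: {len(packed)} bytes in {elapsed * 1000:.1f} ms")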

      • by ceoyoyo ( 59147 )

        PNG uses DEFLATE for compression, which is basically an LZ77. LZ4 is as well.

        Sounds like PNG anticipated your post by about thirty years.

      • Embedded systems usually use custom formats since they're working with custom data and processing requirements, so the formats are tuned towards meeting those requirements.

        In fact, without wanting to rain on the developer's parade, I can't actually think of a single genuine reason why we need yet another new image format alongside the, oh, four million existing ones that have been standardised and around for years. Sure, every new whatever is always going to be claimed to be more cromulent than everything

    • Re: (Score:2, Troll)

      by Shinobi ( 19308 )

      If it lives up to the hype, it could be useful in digital cameras, to replace jpeg. But I think a lot of the impetus behind it is revealed in the reasoning for it, with the comments such as "designed by committee" etc: Namely, an attitude of a snotty upturned nose and "well, *I* don't need those features, thus no one else does either!"

      • Where digital cameras are replacing or providing alternatives to JPEG, they're choosing HEIC. Especially in the higher end cameras. This looks a little late to the game as an alternative to PNG.

        • It is what I will be converting my old DNGs into (from the old Lumia 930) in order to avoid wasting space. Just like how H.265 has sorted video and AAC sorted audio, HEIC has sorted images. I do not see the appeal of perfectly lossless storage, honestly.
      • If it lives up to the hype, it could be useful in digital cameras, to replace jpeg.

        No, it won't, or only for very limited use (consumer entry-level point'n'shoot cameras, which don't store raw/lossless data anyway, and whose niche has been completely taken over by smartphones by now). Because, among other things, it doesn't support anything but 8 bits per channel and RGB colors (either linear or sRGB).

        But I think a lot of the impetus behind it is revealed in the reasoning for it, with the comments such as "designed by committee" etc: Namely, an attitude of a snotty upturned nose and "well, *I* don't need those features, thus no one else does either!"

        Yup, exactly that.
        Some programmer who just wanted to store a simple bitmap and wasn't interested in the full specs of PNG.
        Thus they came up with a simple bitmap storage format that can't encode anything fancy.

      • by tlhIngan ( 30335 )

        If it lives up to the hype, it could be useful in digital cameras, to replace jpeg. But I think a lot of the impetus behind it is revealed in the reasoning for it, with the comments such as "designed by committee" etc: Namely, an attitude of a snotty upturned nose and "well, *I* don't need those features, thus no one else does either!"

        No, digital cameras use JPEG and HEIC for their "quick photo" mode where you can take a memory card and send the photo off to your friends instantly.

        Pro photographers shoot exc

      • Fast compression and small code footprint are features that most image compression formats ignore. The former also implies low power, while the latter facilitates security audit. (A formal security proof for QOI would help.) Low energy per compression might be desirable for battery powered devices such as drones, medical devices and trail cameras. Not all the action is in high end imaging.
    • It might not be a problem as such, but if it's a significant improvement over what we have, that's fine. Progress is often incremental rather than revolutionary.
    • Re: (Score:2, Insightful)

      by RyanFenton ( 230700 )

      What if you were making a game, and wanted user-generated content?

      Like, thumbnails for save files, exportable character images, etc.

      Making games faster to load/save would DEFINITELY be a benefit - and not just on handheld consoles.

      More bandwidth means more ability to have more ways of using such features, like making new features using dynamically changing textures and being able to save the most recent state with the game load, instead of resetting it each time.

      Each little bit of improved bandwidth has a w

      • by Dutch Gun ( 899105 ) on Saturday December 25, 2021 @05:02PM (#62114791)

        Being a professional videogame developer, I suppose I can speculate a bit on its usefulness to our industry.

        Images (textures) are typically pre-processed into a native format the graphics card will understand. So, games are typically not spending a lot of time, if any, decoding PNGs or JPEG images.

        So in practice, image decompression is probably not going to be a loading bottleneck for most games. Disk size is actually more of a factor. In many cases, whatever sort of compression reduces the size by any amount tends to be a win, because CPU power long ago outstripped disk IO speed, and gaming devices typically have multiple cores to use. And in these cases, total compression is more of a factor than compression or decompression speed. For example, some mobile games transmit JPEG images across the network, then transform them to a GPU-native format locally. Again, this sort of format would not really be of any benefit here.

        As you indicated, there may be some edge cases where user data is saved or loaded, but it seems much more likely that JPEG or PNG would be used, with few downsides to those.

        One possible future benefit I could see is that, due to its simplicity, this might have a chance at being directly supported by GPU texture units. In that case, it could gain life as a native videogame file format (with some extensions), similar to how Ogg Vorbis is very popular for videogame audio. But in this case, its simplicity and limited feature set may work against it. It doesn't support pre-multiplied alpha or greyscale textures, for example. And it would need to support mipmaps, etc. So it's likely a new image format would be needed, and only the compression algorithm would be used.

        It's a cool new format (check out the specs - very slick), but I don't see many game developers pining for a new image format. It has a reasonable place in libraries like SDL which have built-in image loaders/converters. I mean, the more images supported, the better. But overall... not terribly practical for most use cases I can think of.

           

        • It's not likely to see any use as an in-GPU texture compression format; it's 100% serial. No 2D lookups into the compressed image. You could make a sort of checkpoint by storing the 192-256 byte palette, 3-4 bytes of prior pixel, and offsets into image and compressed stream, but I don't think people would bother. Of course, decoding the format could be done by compute shaders, but wouldn't make much sense with less than some dozens of images to decode simultaneously; CPUs are often better at these tight l
          • That's a good point about the GPU format. I should have thought a bit more about the RLE nature and how that's not very GPU friendly. Well, I guess that's why I'm not a graphics programmer.

            So... yeah, probably not very useful at all to game developers then. It's a clever format, but just doesn't seem to fill any immediate niche.

        • I'm getting a meh feeling like many others here. But on your comment about not supporting pre-multiplied alpha, I'm not sure why that is? I didn't notice any assumptions that would prevent it, other than there being no metadata for it.
          • From the header file, which contains the specs:

            The color channels are assumed to not be premultiplied with the alpha channel
            ("un-premultiplied alpha").

            So as you said, there's apparently just no support for it in the metadata.

            Don't get me wrong - it's a neat file format, and a clever algorithm. It's just that I don't see a use for it when we have PNG, which compresses better, has a wider variety of features, and is plenty fast on modern hardware. This would have been an awesome format to have around the time we needed a true-color replacement for gif, although honestly, given how limited the feature set is,

            • Oh I agree with you on this format's niche use case being limited. Just talking about the aspect of pre-multiplied alpha, the non-support of it in metadata is the same as PNG then. This aspect isn't really a bad mark against it since it's just maintaining the status quo.
    • There's already a format poised to dethrone PNG: WebP. It's complex, yes - but it absolutely wipes the floor with PNG on lossless compression ratio, and in web use transmission time is far greater than decompression time. It's got the backing of a tech industry superpower, Google, to push support, and is already supported by all the web browsers worth mentioning. It also has a lossy mode which is, at worst, comparable to JPEG in performance - and usually quite a lot better. And it can combine them - lossy im

      • by narcc ( 412956 )

        It's complex, yes - but

        You're missing the point of QOI. It's a reaction to what many people see as needless complexity.

        Besides, with more people tinkering with small computers, it's the perfect time for a simple and fast image format like this. I'm a big believer in doing as much as you can yourself, especially when it comes to hobby projects.

      • by Megane ( 129182 )

        I hate webp because it's new enough that basically nothing but web browsers supports it, and too many web servers automatically transcode to it when a web browser claims to support it, so if I want to save an image as a file, it saves as a useless blob. Ask for a .jpg, get a webp. Ask for a .png, get a webp. I have to fall back to using wget to NOT get a webp file.

        And now someone wants to create ANOTHER image format?

      • JPEG-XL is better than WebP, though.
      • Screw WebP. Just getting the image dimensions out of a file is a nightmare that requires you to parse practically the whole file since that information may not be in the first chunk. Even then, Google chose to use this idiotic 14-bit format for values requiring you to do bit shifts, unless you have a version of the format that doesn't do it that way or uses the alternate header format or...

        WebP is a trainwreck. I hate it. There's a reason why it's been around for more than a decade and you can't get any

    • by AmiMoJo ( 196126 )

      I can't really see much use for it either. Less computation saves energy on mobile devices, but that will likely be offset by the need to download more data.

      The only place I can see it having some utility is on microcontrollers with limited RAM. I've written a few custom formats for those over the years, mostly based on the pixel format of the LCD. For some reason there is no standard with graphic LCDs, they all do something different and usually weird. Simple run length encoding, which is what this thing u

    • I'm not sure that image compression/decompression speeds are of a big concern these days.

      For general purpose computing? No. But that's not the only place images are used.

    • PNG is also popular because it was patent-free, whereas Unisys was threatening to charge royalties for GIF.
    • I don't see how this one will dethrone PNG.

      JPEG-XL, though, on the other hand... Is going to wipe the f***ing floor with PNG.

    • by twms2h ( 473383 )

      I'm not sure that image compression/decompression speeds are of a big concern these days.

      I'm currently looking into it (haven't read much yet) because we need a way to efficiently store lots of high resolution images for road condition surveys. Currently we use JPEG for that but compression artifacts have become an issue.

      • by sodul ( 833177 )

        And there are existing formats that are much better than standard JPEG for lossy compression, or much better at lossless compression than QOI can ever be.

        I don't think the compression/decompression speed is a greater concern than size in your case, unless you get something super slow. I suspect that for lossy HEIC could be good enough for your needs, but you could try JPEG-XL which offers strong compression in both lossy and lossless formats but can be slow.

        Here is a relatively recent article that cover

  • Wait what? (Score:5, Funny)

    by korgitser ( 1809018 ) on Saturday December 25, 2021 @03:51PM (#62114573)
    What is this? Is this really actual news for nerds? This can't be right. Since this is Slashdot, I must conclude it's just clickbait. Stop this nonsense and give me my blockchain AI opinion pieces, dammit!
  • What did he skip? (Score:4, Interesting)

    by hackertourist ( 2202674 ) on Saturday December 25, 2021 @03:56PM (#62114599)

    What he calls complexity could be useful features. Color space is RGB or RGB+alpha, so at least transparency is supported. No monochrome option though. 8 bits per channel, no option to change color depth. No CMYK.

    • by Shinobi ( 19308 )

      I think it's the usual snotty upturned nose and "well, *I* don't use those features, thus no one else needs them either!" attitude providing a lot of the idea behind it, though it could be useful for digital cameras as an alternative to jpeg copies of RAW data.

      • What exactly does it offer for people who shoot in RAW? I usually have RAW+JPEG enabled, but that's mostly for a backup and I normally end up deleting the JPEGs as a waste of space. This thing offers PNG-level compression, which will make it bigger than JPEG, but for what benefit? If I've got the RAW then I can create any image format I want without additional loss and have the benefits of being able to fix white balance or exposure, etc.

        • by ceoyoyo ( 59147 )

          Three times the file size probably, since it doesn't support anything but RGB.

        • I'm sure you could already tell, but it offers nothing for that use case. For instance, it doesn't support high bit depth or one channel per pixel (as caused by Bayer filters and the like). If you're looking for something that might compress more than your camera does, but preserves sensor data, DNG is more relevant. If you're looking for easier to parse, FITS.
      • by Anonymous Coward

        QOI on digital cameras? Nope, I haven't seen a digital camera with an 8bpc image sensor since last century. Image sensors today commonly have 12, 14 or 16 bits per channel thus being completely unusable with QOI.

    • Does it use a pixel cache? If so, there might not be much benefit from monochrome.

      • I'm not sure what the right term is... PNG doesn't do it, WebP does. Sort of a fancier move-to-front encoding that assigns shorter codewords to recently seen color values.

        • Move to front strategy works in a significant number of scenarios.

          The easiest to understand use is when applied to an unsorted list. The most frequently accessed items will tend to stay near the front, so the runtime performance of searching that list improves, and for the same reason, if items near the head of the list are more likely than items near the tail, then it's easy to give them shorter code words.

          In compression practice, move to front is often done via splay tree.

          Optimally a code words leng
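
          A minimal list-based sketch of the move-to-front idea (a splay tree is just a faster way to do the same reordering); the alphabet and input here are made up:

              def mtf_encode(symbols, alphabet):
                  # Emit each symbol's current position, then move it to the front,
                  # so recently seen symbols get small indices (short code words).
                  table = list(alphabet)
                  out = []
                  for s in symbols:
                      i = table.index(s)
                      out.append(i)
                      table.insert(0, table.pop(i))
                  return out

              print(mtf_encode([3, 3, 3, 7, 7, 3], alphabet=range(8)))
              # -> [3, 0, 0, 7, 0, 1]  (repeats collapse to small indices)
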
    • Re:What did he skip? (Score:5, Interesting)

      by pz ( 113803 ) on Saturday December 25, 2021 @05:20PM (#62114823) Journal

      In addition to not supporting things like grayscale, more or fewer than 8 bits per channel, color spaces other than sRGB / sRGB-alpha, or a number of color planes other than 3 or 4 (all of which limit the domain for a new encoding), there are some observations to be made ...

      Fun fact: he's using Goedel numbers in the hash function.

      Also fun fact: he uses a running index of previously seen colors to create a localized cache, essentially a sliding-window indexed encoding with each pixel a reference to a recently seen indexed value, a delta from the previous pixel, or a run-length duplication of a previous pixel. Decoding *must* use the same cache size, which is fixed at 64 entries. Encodings like this implicitly assume that images have more low-frequency information in them than high-frequency (which is generally true, but not taking into account actual spectral profiles is tantamount to leaving a lot of compression on the table).

      Another fun fact: there appears to be no redundancy in the encoding, so that if a single pixel is corrupted in transmission or storage, it can affect the entire rest of the image.

      Final fun fact: he ignores opportunities for compression in 2D, by considering images only along scan lines.

      OK, one more fun fact: paying attention to lots of suggestions as part of a development process is quite reminiscent of design by committee.
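
      For reference, the 64-entry index mentioned above is keyed by a tiny multiplicative hash over the channel values; the prime multipliers are what the Goedel-number remark refers to. A minimal sketch using the constants from the published spec:

          def qoi_color_hash(r, g, b, a):
              # slot in the 64-entry running index, per the QOI spec
              return (r * 3 + g * 5 + b * 7 + a * 11) % 64

          # Encoder and decoder keep the same fixed-size table, so both sides
          # agree on which slot a previously seen color occupies.
          index = [(0, 0, 0, 0)] * 64
          pixel = (128, 64, 32, 255)
          index[qoi_color_hash(*pixel)] = pixel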

      • Also fun fact: he uses a running index of previously seen colors to create a localized cache, essentially a sliding-window indexed encoding with each pixel a reference to a recently seen indexed value, a delta from the previous pixel, or a run-length duplication of a previous pixel.

        It sounds like you are describing an encoder from the LZ family, only with the word 'dictionary' replaced by the word 'cache'.

        • by ceoyoyo ( 59147 )

          That would hardly be surprising since PNG also uses an LZ encoder. It sounds suspiciously like someone stripped the features out of PNG that he didn't personally see a use for.

      • by narcc ( 412956 )

        paying attention to lots of suggestions as part of a development process is quite reminiscent of design by committee.


        Let's see if I can explain the difference.

        Design by committee is all about compromise. That's why designs tend to get bloated with questionable features and hampered by confusing design decisions. It's the intersection of a lot of competing interests, with a little extra weight given to the loudest voices. It'll give you something ugly, complex, but generally good enough. (Take a look at OpenType to see how absurd compromise can get.)

        This was designed by a single person with input from a community. T

      • Just had a quick look at the source code [github.com]. Oh dear Ghod, he takes an existing file, e.g. PNG, decompresses it, wraps it up in an exquisitely homebrew set of headers that, as others have pointed out, won't deal with 99% of the requirements of people out there, and then uses some first-year-student scheme... can't be bothered devoting the cycles to figure it out, since it's all uncommented code littered with magic numbers, but it looks like there's some run-length coding and delta/differential encoding going on.

        This i

    • Yes.

      I don't understand the spin being put on this, because there is no way in hell this format can replace much.

      But it's wholly true that image compression in generalized formats is quite complex. A programmer unfamiliar with how information theory is applied in practice, down to the very last pigeonholed encodable value, has no chance at all. Even something like Huffman encoding is not something that can be described in a single paragraph.

      If you don't need to support one of the generalized formats, the
    • by narcc ( 412956 )

      Maybe we don't need every file format to do every possible thing?

      "Simple and fits most use cases" is a good thing. Most people don't need anything outside of RGB24 and RGBA32, after all, so why make them use a large, complex, and slow image format (which usually means including a do-everything library) when something smaller, simpler, and faster will do instead?

      If that doesn't work for you, that's fine. You can just use something else. It's not like other formats go away.

      • by Ichijo ( 607641 )

        Everything's going 10-bit HDR these days so RGB24 is already obsolete.

        • by narcc ( 412956 )

          Obsolete? That's the most ridiculous use of the word I can imagine. I can guarantee that in 10 years' time, sRGB will still be more common than 10-bit or higher color. Think about it. We've had cameras that support 12 and even 14 bit color for a long time and sRGB is still the standard.

          It has value in its niche, sure, but that's where it'll stay. It's the same reason a lot of photographers will pick 12 over 14 bit when they have the option. The quality just isn't worth the extra space.

          • by Ichijo ( 607641 )

            We've had cameras that support 12 and even 14 bit color for a long time and sRGB is still the standard.

            I convert my photos to sRGB out of necessity because JPEG has a lot of inertia. So these days sRGB is only useful for final publishing, not for capture or as an intermediate format.

    • What he calls complexity could be useful features.

      If people need useful features there are other formats you can use. The useful feature here is the speed and lack of complexity which is a feature in itself. Take your examples for ... err example. The overwhelming majority (easily greater than 99%) of images in current use and distribution are 8bits per channel, don't support alpha channels, have no option to change colour depth, and don't have a CMYK colour space.

      But really the use case here isn't for your professional printing company which cares about t

  • by Snarky McButtface ( 1542357 ) on Saturday December 25, 2021 @04:03PM (#62114615)
    Standards... [xkcd.com]
  • Will this speed up porn?
  • Now we need to update the billion-plus systems, including embedded systems, to be able to make use of this new codec.

      It's going to take many years to see any kind of widespread adoption of this, not to mention PNG is good enough for the end user.

  • What? No Lenna?! (Score:2, Insightful)

    by nocoiner ( 7891194 )

    I was surprised their test examples [qoiformat.org] don't have Lenna [wikipedia.org] in there.

    • There has been a bit of a "debate" about it:
      https://github.com/phoboslab/qoi/issues/35

      • Re: What? No Lenna?! (Score:4, Informative)

        by Ecuador ( 740021 ) on Saturday December 25, 2021 @06:05PM (#62114897) Homepage

        Wow, I had actually missed this whole issue. From what I read now, even the model herself doesn't understand what the big deal is, especially since the picture only goes down to the shoulders. And people calling it a "crop of a pornographic image" when we are talking about just Playboy is a bit... I don't know, maybe because I grew up in Europe where people are not irrational pseudopuritans?

        • by holloway ( 46404 )

          From what I read now, even the model herself doesn't understand what the big deal is

          Nah, she doesn't want to be involved in these test images and she's made that quite clear. https://www.losinglena.com/ [losinglena.com] There's nothing particularly special about that photo so we can find equivalent test images.

          • by Ecuador ( 740021 )

            From the Wired [wired.com] article where they met her and had a chat:

            The photo, she said, doesn’t show very much—just down to her shoulders—so it was hard for her to see what the big deal was.

            I’m really proud of that picture

            Her son works in tech, and he has occasionally tried to explain to his mother how her image is used and to what ends. “He works with pixels,” she said. “I don’t understand, but I think I’ve made some good.”

            There's nothing special about that photo EITHER WAY. Our world is too fucked up for people to get worked up about non-issues. If the subject of the picture had a problem with it, I'd understand.

    • Didn't you hear? Anyone using Lenna now is sexist and therefore cancelled.
      Even she (Lenna) says she doesn't want the image used anymore...which I think is a perfectly legitimate request, as long as she pays back the money she received - plus interest - to have the picture taken in the first place.

  • by ugen ( 93902 ) on Saturday December 25, 2021 @05:01PM (#62114789)

    Ok, so first we have: "Every tiny aspect screams 'design by consortium'," he added

    Then: "Szablewski thought he could do better and appears to have achieved that objective by cooking up some code, floating it on GitHub, and paying attention to the 500-plus comments it generated"

    Is that not "design by consortium"?

    • Re: (Score:3, Interesting)

      by Anonymous Coward
      Not much heed was paid to the suggestions. One day he announces the format out of the blue, and a week or two later the "specification is final"; the time window for suggestions was extremely short. The big-endian byte order was kept, he didn't budge. Most architectures are, or boot to, little-endian configuration these days, so it's unfortunate that you can't "peek" a u32 from the stream and do cheap extraction of the values. The LSBs are on the *next* byte, so you either extract the bytes you need in BE order, or by
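
      For what it's worth, here is a minimal sketch of reading the big-endian header fields with explicit unpacking (field layout per the spec; the function name is mine):

          import struct

          def read_qoi_header(data):
              # 14-byte header: 4-byte magic, u32 width, u32 height (big-endian),
              # u8 channels, u8 colorspace
              magic, width, height, channels, colorspace = struct.unpack(">4sIIBB", data[:14])
              assert magic == b"qoif"
              return width, height, channels, colorspace
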
    • No, because he had a complete veto to shut down alterations that would have eliminated pathologically poor cases, simply for not feeling right to him. And now he's got the format 100% specified, with meaningless padding and no version field. But the compression usually outperforms PPM!
    • Not when there's only one person saying yes or no to each bit of feedback. Committee designs have to make compromises to build consensus and get buy-in from multiple approvers. This is just one person trying to get educated and make decisions accordingly. As a result, all the trade-off options are being decided toward the one person's use cases. It's not necessarily bad, not necessarily good; just a different way to encode.

    • by narcc ( 412956 )

      Is that not "design by consortium"?

      It is not.

  • Skimming through the spec, seems like it's a sort of combination of run-length encoding and pixel-to-pixel diffing to make the run last a little longer?
    • ...oh, plus a sort of running palette.
    • The pixel to pixel diffing doesn't do anything to the runs, I think. It's basically 8-bit RGB, running 64-entry palette, RLE, and two variants of HAM. And a completely redundant end marker that only complicates things (by overlapping with the palette format). The main feature of the format is a codec that'll fit in L1 cache and uses simple byte based processing. By the way, who uses 8-bit linear?
      • Re:How does it work? (Score:4, Interesting)

        by clawsoon ( 748629 ) on Saturday December 25, 2021 @06:40PM (#62114949)

        So I guess the basic idea is something like... store each 24 or 32 bit pixel in:

        • 0 extra bits if it's the same colour
        • 8 extra bits if it's in the running palette or has less than 2 bits per channel difference from the previous pixel
        • 16 extra bits if it's RGB and has less than 4 or 5 bits per channel difference from the previous pixel
        • 24 extra bits if it's RGBA and has less than 5 bits per channel difference from the previous pixel
        • 32 or 40 extra bits if it doesn't fit any of these, plus whatever space is needed in the running palette and for the RLE counter.

        I wonder if compression could be improved ever so slightly if the extra bit in the 16-bit format was moved to the green channel from the red channel, since brightness information mostly ends up as green. No idea if that would help or not. (Nevermind, I see that's already been done in the updated spec.)

        I work in a 2D animation environment, and most of the images that we generate are 8-bits-per-channel. (For post-processing they often switch to higher bit depths so that they can push colours around without loss of fidelity, and because they're often working in video/legal range instead of pc/full range which makes 8bpc suck just enough to really notice it in gradients.) That's just starting to change with broadcasters wanting HDR, but presumably this format could be extended to 16bpc pretty easily.

        Having only sRGB and linear colour spaces is a bit limiting, though. Perhaps I could propose an extension to the spec...
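
        A rough sketch of the chooser those byte budgets imply, using the wrap-around diff ranges from the spec and leaving out the run and index cases (function names are mine):

            def wrap(d):
                # channel differences wrap around modulo 256, per the spec
                return ((d + 128) % 256) - 128

            def chunk_bytes(prev, cur):
                (pr, pg, pb, pa), (r, g, b, a) = prev, cur
                dr, dg, db = wrap(r - pr), wrap(g - pg), wrap(b - pb)
                if a != pa:
                    return 5     # QOI_OP_RGBA: full pixel
                if -2 <= dr <= 1 and -2 <= dg <= 1 and -2 <= db <= 1:
                    return 1     # QOI_OP_DIFF: 2 bits per channel
                if -32 <= dg <= 31 and -8 <= dr - dg <= 7 and -8 <= db - dg <= 7:
                    return 2     # QOI_OP_LUMA: green diff, red/blue relative to it
                return 4         # QOI_OP_RGB: full color, previous alpha kept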

    • Skimming through the spec, seems like it's a sort of combination of run-length encoding and pixel-to-pixel diffing to make the run last a little longer?

      Delta encoding doesn't make runs longer. Delta encoding makes more runs.

      0,1,2,3,4,5,6,7 has no runs
      but apply delta encoding to it:
      0,1,1,1,1,1,1,1 has a run of 7 1s

      Any even stepping of values, such as seen in gradients, therefore also has runs.
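
      A tiny sketch of that effect:

          def delta(values):
              # replace each value with its difference from the previous one
              return [values[0]] + [b - a for a, b in zip(values, values[1:])]

          print(delta([0, 1, 2, 3, 4, 5, 6, 7]))
          # -> [0, 1, 1, 1, 1, 1, 1, 1]  (a run of seven 1s, ready for RLE)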

  • Please follow https://twitter.com/richgel999 [twitter.com] to find out everything about QOI. TLDR: it sucks. The long version:
    • It only works for special images, there are tons of images which it inflates instead of compressing.
    • It's single threaded as an image format which makes it slower than multithreaded PNG, and this cannot be fixed.
    • It only works for RGB 24 bit images and it terribly underperforms for 32bit.
    • It's nowhere near WEBP/AVIF lossless/JPEG XL lossless/properly optimized PNG.

    In short let's forget about it.

  • by Gravis Zero ( 934156 ) on Saturday December 25, 2021 @10:12PM (#62115245)

    I actually looked at how the image is encoded, and the basic concept is that most pixels aren't a large change from the pixel before them. As such, it provides four different operations (selected by the top two bits of a byte) and then uses the remaining six bits to compute the new pixel value from the previous pixel value. The various operations are...

    * palette index: six bits select from a running palette of 64 colors
    * channel value diff: two bits per channel that are added to the previous channel values
    * luma value diff (two bytes total): six bits for the green channel difference, then four bits each for red and blue, encoded relative to the green difference
    * repeat previous pixel: six bits give the number of times the pixel should be repeated (1 to 62 times, because 63 and 64 would conflict with the values below)
    ** special values 0xFE and 0xFF indicate a new pixel value, RGB or RGBA, and the next 3 or 4 bytes are the channel values.

    I can see why this is an attractive option for embedded-system programmers (it's elementary math done per pixel, in order), but I do wonder if the resulting encoded images would benefit from DEFLATE or LZMA compression. If so, then it might be worth adding this (or a similar) type of encoding to the PNG format as a new kind of "filter algorithm".
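
    For the curious, the dispatch on those tag bits is about as simple as decoding gets; a minimal sketch (names are mine, ranges per the spec):

        QOI_OP_RGB, QOI_OP_RGBA = 0xFE, 0xFF

        def classify(tag_byte):
            # 0xFE/0xFF are the full-pixel escapes; otherwise the top two bits
            # pick the op and the low six bits carry its payload.
            if tag_byte == QOI_OP_RGB:
                return "rgb"    # followed by 3 channel bytes
            if tag_byte == QOI_OP_RGBA:
                return "rgba"   # followed by 4 channel bytes
            return ("index", "diff", "luma", "run")[tag_byte >> 6]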

  • Lots of metrics on code length reductions, but what if compiled code that is shorter runs slower?

    It's hailed as an achievement in programming, yet there are no measures other than size-only comparisons, as if smaller were always better.

  • I am reminded of Geoffrey James' "The Zen of Programming", a series of comical zen-like koans back in the day. In one of them, the master complains about the multitude of conflicting editors that all have useful features but are all incompatible. The novice takes it upon himself to devise an editor that combines all the best features of the multiple editors the master must deal with...

    Suddenly the master struck the novice on the side of his head. It was not a heavy blow, but the novice was nonetheless surprised. "What did you do that for?" exclaimed the novice. "I have no wish to learn another editing program," said the master. And suddenly the novice was enlightened.

    And so it is with yet another image format.

  • by Tatarize ( 682683 ) on Sunday December 26, 2021 @03:16AM (#62115675) Homepage

    Now, let me start by saying JPEG is a giant pile of crap that couldn't be fully implemented by anybody, but PNG is actually great. It's simple, highly extensible, lets you ignore stuff, has different bit structures, outsources the compression, and is perfectly easy to understand. There is literally nothing in this format that isn't already part of PNG. And even putting JPEG and PNG in the same realm is an error; PNG is superior in literally every way. But saying this QOI format is as good is both wrong and seriously problematic. You can quite literally write a PNG writer in a handful of lines of code.

            return b"".join(
                    [
                            b"\x89PNG\r\n\x1a\n",
                            png_pack(b"IHDR", struct.pack("!2I5B", width, height, 8, 6, 0, 0, 0)),
                            png_pack(b"IDAT", zlib.compress(raw_data, 9)),
                            png_pack(b"IEND", b""),
                    ]
            )

    Also, the biggest deficit here is that it's focusing on the original writer of the library code, not on the user of that library. I don't care what magic Pillow does to process .save("myfile.png") -- I care that it's not lossy, and I care that you could extend it to include other things not originally given in the spec. Somebody is going to write that code one time, and people are going to use the features in that code a million times. Improving the former at the cost of the latter is a serious problem. We would be much better off if we could focus our efforts on killing off non-PNG formats.

    • by ebvwfbw ( 864834 )

      Seems to me png doesn't support images of type movie. So no mp4 or MPEG compatibility.

      • You'd be wrong. It's so extensible that all you'd need to do is add another block type for mPeG that would contain the mpeg data and you're fine. You could easily use a png to store that type of data. This is the kind of joke that only somebody who didn't read the spec would make. It's totally within spec to make different block types that contain other data.

  • by billyswong ( 1858858 ) on Sunday December 26, 2021 @03:27AM (#62115689)
    There are two places I can think of that care about faster decoding than PNG / JPG. One is GPU textures. Another is HDR high-definition monitor output transmission. Anything else that involves mostly static images doesn't need ultra-high-speed decompression; PNG / JPG are fast enough. QOI doesn't suit the two high-speed use cases, so it will be just a hobby format that has no place in the mainstream.
    • Both those places of course have existing codecs optimized for their use cases, such as ASTC and DSC. And deployment in hardware is what matters for them; they get no benefit from the byte orientation of QOI. QOI does not satisfy either case: being entirely serial breaks the texture use, and 33% base expansion breaks the monitor case. Only perfectly flat horizontal runs of 3 or more pixels achieve fewer than 8 bits per pixel.
    • To me this is just history repeating itself again.
      Developer: "[X] is too complex. I have created [Y]”
      Community: "That is great but can you add [feaure 1] and [feature 2]? We really need it."
      Developer: "Ok but no more."
      Years and many features later. . .
      New developer: “[X] and [Y] are too complex. I have developed [Z]."
      And the circle of life begins again. . .
      • Sometimes such a cycle is meaningful, because old use cases may become obsolete, or people find a better structural design so that current use cases are better integrated into the infrastructure and less hacked on.

        But QOI is not one of those cases.

  • I am sorry, but I don't really care if the source code fits on a Tiny85. We don't need a mediocre format.
  • What about encoding with QOI and then running through gzip? How does that perform in file size and speed compared to PNG? Or QOI followed by zstd?
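
    One rough way to answer the size question without touching the codec itself is to treat the already-encoded bytes as opaque data (the file path is a placeholder):

        import gzip

        with open("image.qoi", "rb") as f:   # placeholder path
            qoi_bytes = f.read()

        gz = gzip.compress(qoi_bytes, compresslevel=9)
        print(f"QOI: {len(qoi_bytes)}  QOI+gzip: {len(gz)} ({100 * len(gz) / len(qoi_bytes):.1f}%)")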
