Graphics Software

A New Web Image Format

MrP- writes: "BetaNews is reporting that a company called LizardTech has developed a new image format for the Web called DjVu." Apparently, it differentiates between foreground and background components of an image and compresses each appropriately. Good idea, but I'm skeptical of the claimed improvements (especially because they say it's "20 times faster than GIFs" -- who measures compression in terms of speed? They also say it compresses faster than PDF, but PDF isn't really an image format). No Linux support. And I don't see any source code for the format, so don't expect it to get a lot of support on any major Web sites, regardless of the compression.
  • Nude: girls naked. Naked: guys nude. hehe
  • Which is more important - open standards or better technology?

    Open technology.

    Even if this company ports its format to a variety of OSes, I won't be able to use it in my own products, I won't be able to use it with my favorite browser, etc. See what happened with GIF.

  • New Image Compression Algorithm claims 1000:1 ratio [slashdot.org]

    Talk about posting the same story twice...
  • by GoRK ( 10018 ) on Wednesday November 22, 2000 @10:28AM (#607733) Homepage Journal
    DjVu is almost three years old. It was developed by AT&T not LizardTech. LizardTech just bought it about 8 months ago. It is not, was not, and will never be designed to replace PDF, GIF, JPEG, or PNG. I have been using DjVu as the core of a web based document management system for over a year and a half. It is absolutely bar none the best, fastest, and most cross platform way to go from paper->web out there.

    Look at the ways to get a scanned document on the web:

    1) GIF, PNG, JPEG: Large filesize or bad rendering. If I need to send a 300dpi page to a web browser, the browser isn't going to let a user pan and zoom on it and it certainly won't print it correctly. JPEG is the only one of the three formats that actually has a place to store the document DPI regardless.

    2) PDF: Creating a PDF from a scanned image means either encapsulating a lossy or lossless image in a file, or doing OCR and risking unreliable information.

    DjVu regularly achieves compression ratios of 1200:1 or more at very acceptable quality. There is an IW44 wavelet-compressed background layer and a losslessly compressed foreground layer. The information is progressive, too: as the file downloads, the foreground shows, then the background, then the color information loads. Example documents on the DjVu website have shown entire 300 dpi full-color Sharper Image catalogs compressed to fit on a floppy disk.

    Btw, DjVu plugins are available not only for Windows, Macintosh, Linux, and Solaris - let's not forget HP-UX and IRIX. How's that for covering the bases? If you're not supported, you can write your own for your particular flavor of UNIX.

    Geez get it straight.

    ~GoRK
  • The main post claims no Linux support. Not that I'm endorsing this format, but they do have a Linux plug-in viewer - just no authoring/encoding software for Linux :o)
  • I really can't see much market, and very little application for this compression. On-the-fly compression of images for web download would be redundant, since a png would be smaller than this format, so the speedy on-the-fly compression of uncompressed images is pointless.

    No, it isn't. Think about how handy it would be to let someone dial in their own image quality, or have it done for them automatically, of course. Some poor bastard on a 28.8kbit/sec modem could set it to super-low quality, which would look somewhat lousy but be quickly navigable.

    This would be handy in a system with CPU to spare but limited storage, like a 200MHz+ embedded webserver using flash memory to store images. Then again, you'd have to store them compressed there as well, but at least you wouldn't have to store N different versions of the file for users with different amounts of bandwidth.

  • by Hard_Code ( 49548 ) on Wednesday November 22, 2000 @05:00AM (#607736)
    Did anybody even follow the LizardTech link? Right on the front page is a link to a page describing DjVu. The whole product ("image format") seems geared towards scanning in real-life documents and presenting them online. If you *read* the page, it explains why they claim it is faster (it first downloads high-contrast data, then photographs and graphics, clarifying the image as it goes) and smaller (some wavelet kung foo). I don't see anywhere that they are pitching this as competition for GIF or PNG, so everybody put down the flamethrowers. This is a very small niche product for digitizing and presenting mundane real-life documents.
  • This should do a good job on a lot of the things that JPEG (even at high quality settings) is currently failing on for us: photographs that contain a lot of bluescreen / greenscreen, pencil / pen artwork, etc. With JPEG, I get nasty haloes around foreground elements; these are very visible on a flat background such as bluescreen or paper. JPEG also forces a mottled effect on the bluescreen, due to color quantization I believe. DjVu appears to reduce or eliminate these undesirable side effects. I will be following its progress with interest (as long as it doesn't get the kiss of death from proprietary interests...)
  • Compression speed DOES matter. First of all, many things on the web are compressed on the fly - you just don't see it. gzip is used for most of it, as far as I know, partly due to its modest processor usage. Try serving up a million pages all compressed with something like bzip2 on a rackmount server with only two processors - you'll quickly melt the metal.
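
    For what it's worth, here's a minimal sketch of that kind of on-the-fly compression in C, assuming a zlib recent enough to produce gzip wrappers via deflateInit2 (the helper name is mine, not any server's API):

    #include <string.h>
    #include <zlib.h>

    /* Compress src into dst with a gzip wrapper; returns the compressed
       size, or -1 on error.  level trades CPU for ratio (1 = fastest,
       9 = smallest) -- busy servers live at the fast end. */
    static long gzip_buffer(const unsigned char *src, unsigned long srclen,
                            unsigned char *dst, unsigned long dstlen, int level)
    {
        z_stream zs;
        long out = -1;
        memset(&zs, 0, sizeof zs);
        /* windowBits = 15 + 16 asks zlib for a gzip header and trailer. */
        if (deflateInit2(&zs, level, Z_DEFLATED, 15 + 16, 8,
                         Z_DEFAULT_STRATEGY) != Z_OK)
            return -1;
        zs.next_in = (Bytef *)src;   zs.avail_in = srclen;
        zs.next_out = dst;           zs.avail_out = dstlen;
        if (deflate(&zs, Z_FINISH) == Z_STREAM_END)
            out = (long)zs.total_out;
        deflateEnd(&zs);
        return out;
    }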

    Dave
    'Round the firewall,
    Out the modem,
    Through the router,
    Down the wire,
  • As a Ph.D. applied mathematician whose research specialty is wavelet compression methods, I have some comments.

    The website clearly indicates that this software is designed for images that come from scanning paper documents that contain both text and graphics. The algorithm is supposed to recognize the text and store it as a separate layer (but still as an image) from the rest of the image. Furthermore, the image can be transmitted in such a way that the text and significant features are transmitted first, followed later by the bits representing less important features - a technique known as progressive image transmission (among other names).

    I certainly believe that they use wavelets to do this. In fact, the hot wavelet method of the 90's, EZW (Embedded Zerotree Wavelet) compression, allows for exactly that: compressing an image in such a way that the more significant bits come first, followed by the less significant bits in order of significance. Picking out text that is layered on top of a background image is relatively easy with wavelets: just pick out the really large coefficients in the wavelet expansion, since those most likely correspond to parts of the image where large jump discontinuities occur. This can all be done automatically, of course.
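
    To make that concrete, here is a toy version in C - plain Haar filtering plus a threshold pass, not EZW itself, and the names are mine:

    #include <math.h>
    #include <stdlib.h>

    /* One level of a 1-D Haar transform: smooth averages land in the
       first half of x, detail differences in the second.  n must be even. */
    static void haar_step(double *x, int n)
    {
        double *tmp = malloc(n * sizeof *tmp);
        int i;
        for (i = 0; i < n / 2; i++) {
            tmp[i]       = (x[2*i] + x[2*i+1]) / sqrt(2.0);  /* smooth */
            tmp[n/2 + i] = (x[2*i] - x[2*i+1]) / sqrt(2.0);  /* detail */
        }
        for (i = 0; i < n; i++) x[i] = tmp[i];
        free(tmp);
    }

    /* Crude significance pass: drop detail coefficients below thresh.
       The big survivors mark jump discontinuities (edges, text) and are
       exactly what a progressive scheme would transmit first. */
    static void keep_significant(double *x, int n, double thresh)
    {
        int i;
        for (i = n / 2; i < n; i++)
            if (fabs(x[i]) < thresh) x[i] = 0.0;
    }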

  • DjVu was not developed at the Olivetti and Oracle labs in Cambridge. The people who did it were at AT&T in Red Bank, NJ.

    Some of the members of the team include Leon Bottou, Yann LeCun (one of the inventors of backprop), Yoshua Bengio, Patrick Haffner, Patrice Simard, and Larry Rabiner. I know those guys and they're very good. Don't know why AT&T ultimately sold this product, though.

    As far as the text compression goes - it works by clustering individual characters into subgroups, using those subgroups as a high-level compression scheme, and encoding the differences. So you could even 'edit' the text (they did it but didn't release it for some reason). Anyway, it's pretty cool stuff.
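
    A toy version of that clustering idea in C (my own sketch, not AT&T's code):

    /* A bitonal glyph: w*h pixels, each 0 or 1. */
    typedef struct { int w, h; const unsigned char *px; } Glyph;

    /* Number of mismatched pixels between two same-sized glyphs. */
    static int glyph_dist(const Glyph *a, const Glyph *b)
    {
        int i, n = a->w * a->h, d = 0;
        if (a->w != b->w || a->h != b->h) return 1 << 30; /* incomparable */
        for (i = 0; i < n; i++) d += a->px[i] != b->px[i];
        return d;
    }

    /* Greedy clustering: a glyph joins the first prototype within maxd,
       otherwise the caller appends it as a new prototype.  The encoder
       then stores each prototype once, plus small per-glyph residuals. */
    static int find_cluster(const Glyph *g, const Glyph *protos, int nproto,
                            int maxd)
    {
        int i;
        for (i = 0; i < nproto; i++)
            if (glyph_dist(g, &protos[i]) <= maxd) return i;
        return -1;
    }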

  • On an off note, it's nice to see that PNGs are becoming more widespread; probably 12%-20% of the sites that I've seen recently are using PNGs for all their images.
  • Here's my own image compressor. It's just like jpeg but I had to trim the source a bit to fit inside the teeny-tiny submission box. I hope you can still understand it.

    The codec emits a byte stream considerably larger than the raw image (4x), but this stream is easily compressed by bzip2 or gzip. It reads and writes portable gray map (.pgm) files (both ASCII and binary). If you want color, I suggest you transform the image into YUV colorspace and compress each plane separately; the color planes should be downsampled 4:1 (2x2 blocks) and compressed at very low quality (less than 15). (A rough sketch of that color handling appears after the listing.)

    Build instructions:
    Save the code as 4a.c
    gcc -O2 -ansi -Wall -o en -DMAGICK=1.0 4a.c -lm
    cp en de

    Running the program:
    convert image.gif image.pgm
    cat image.pgm | ./en 50 | bzip2 > image.4a
    cat image.4a | bunzip2 | ./de > image2.pgm
    convert image2.pgm image2.gif

    To make a pgm from a jpeg use:
    djpeg -grayscale -pnm image.jpg > image.pgm

    Hints:
    Try quality settings from 15 to 200.
    Bad input causes core dumps.
    The image is cropped if the dimensions aren't divisible by 8.

    ====================

    /* 4a by Ryan Salsbury */

    #include <string.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define D(a,i) a=(i); while (a--) {
    #define i int
    #define d double
    #define szi sizeof(i)
    #define szd sizeof(d)
    #define gc() getchar()
    #define pc(c) putchar(c)
    #define w while
    #define v void
    #define L goto

    #define _9 2+7
    #define _27 3+24

    #define di(F,A,B,C,G,E) \
    v F(i j,i h,G* M,E* S){ d O = 0.0; i x,y,u,q,s,t; D(t,h/8)D(s,j/8)D(y,8)D(x,8)\
    D(q,8)D(u,8)O+=M[u+q*j]*(A?ca[u]*ca[q]*(lQ[u+q*8]-23.5)*LQ:1.0)*co[B*u+C*x]*\
    co[B*q+C*y]; } } S[x+y*j]=O*(A?1.0:ca[x]*ca[y]/(lQ[x+y*8]-23.5)/LQ) ;O=0.0; } } M+=\
    8; S+=8; } M+=j*7; S+=j*7; } }
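
    /* (My reading of the obfuscation: di() expands to a full-image pass
       of 8x8 block DCTs -- fw (A=0) is forward DCT plus quantization,
       rv (A=1) is dequantization plus inverse DCT.) */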

    i nm()
    {
    i c = gc(), a = 0;
    L0: if (c > 32) L L1;
    c = gc();
    L L0;
    L1: if (c - 35) L L2;
    L3: if (c == 10) L L0;
    c = gc();
    L L3;
    L2: if (c <= _9*_27 || c >= _27*_9) return a;
    a=a*10+c-48;
    c = gc();
    L L2;
    }

    v rp(i* x, i* y, d** X)
    {
    d K, *V;
    i h, m, p, G, H, I = gc()*gc();
    *x = (p=nm())&~7;
    *y = nm()&~7;
    m = nm();
    *X = V = malloc(256 + szd **x **y);
    K = MAGICK;

    D(G,*y) D(H,p)
    if (I==4240) h = gc();
    else h = nm();
    *V++ = h*K - 128.0;
    } V-=p-*x;
    }
    }

    v wp(i x, i y, d* V)
    {
    i n;
    printf("P2\n%i %i\n255\n", x, y);
    D(n,x*y)
    i l = *V+++128.0;
    l = l&~0xff ? l>>31?0:255 : l;
    printf((n%20)?"%i ":"%i\n", l);
    }
    pc(10);
    }

    d co[64], ca[8], LQ;
    char* lQ = "%! %,9CK\"\"#(.IJF##%,9HRG#&*0Ca[L'*7GQtoY,5FN\\pweAN Zao~}mUehkvlok";

    di(fw,0,1,8,d,i)
    di(rv,1,8,1,i,d)
    i main(i argc, char* argv[])
    {
    char T[32];
    i x, y, *pi;
    d* ui;

    d p = 0.1963495408494;
    d pp = 0.3535533905933;
    D(y,8) D(x,8)
    co[x+y*8] = cos((2*x+1)*y*p); }
    ca[y] = y?0.5:pp; }

    if (argv[0][strlen(argv[0])-1] == 'n')
    {
    LQ = 100.0 / ((argc-2) ? 70 : atoi(argv[1]));
    rp(&x, &y, &ui);
    pi = malloc(x*y*szi);
    fw(x, y, ui, pi);
    printf("%i\n%i\n%f\n", x, y, LQ);
    fwrite(pi,szi,x*y,stdout);
    }
    else
    {
    x = nm();
    y = nm();
    LQ = atof(fgets(T,32,stdin));
    ui = malloc(x*y*szd);
    pi = malloc(x*y*szi);
    fread(pi,szi,x*y,stdin);
    rv(x, y, pi, ui);
    wp(x, y, ui);
    }
    return 0;
    }
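
    For the color handling suggested above, a rough sketch (standard JPEG-style YCbCr coefficients; this is my addition, not part of 4a):

    /* RGB in [0,255] to Y/Cb/Cr in [0,255]. */
    static void rgb_to_ycbcr(double r, double g, double b,
                             double *yy, double *cb, double *cr)
    {
        *yy =  0.299 * r + 0.587 * g + 0.114 * b;
        *cb = -0.169 * r - 0.331 * g + 0.500 * b + 128.0;
        *cr =  0.500 * r - 0.419 * g - 0.081 * b + 128.0;
    }

    /* 4:1 chroma downsampling: average each 2x2 block of plane in
       (w x h) into out (w/2 x h/2).  w and h must be even. */
    static void downsample2x2(const double *in, int w, int h, double *out)
    {
        int x, y;
        for (y = 0; y < h / 2; y++)
            for (x = 0; x < w / 2; x++)
                out[y * (w/2) + x] =
                    (in[(2*y)*w + 2*x]   + in[(2*y)*w + 2*x + 1] +
                     in[(2*y+1)*w + 2*x] + in[(2*y+1)*w + 2*x + 1]) / 4.0;
    }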

  • Sorry, I did take a brief look at it, didn't see that part, and did let myself be goaded by our glorious leaders.

    I don't know if "dumbass" is necessary though. I was referring to the fact that the W3C seems to have a preference for PNG, so perhaps we should stick to their standards.
  • I work for a bunch of architects, engineers, and designers who could probably use a good image compression scheme. But I'm not going to tell them about this. Why? Because the proliferation of formats can become a huge pain for little sysadmins like me, and enough is enough. I am so tired of troubleshooting "but I can't convert myfile.bozoimage into myfile.gonzoimage". I realize standardization is a pipe dream, but I will not encourage this kind of thing. (Same for document formats - screw Corel, Microsoft, and everybody who has to make a proprietary document format.)
  • I'm willing to bet those weren't in fact FIF files, but FITS files. FITS (flexible image transport system) is the de facto standard for astronomical images these days.
  • The problem is compressing them. The ideas behind decompressing fractals are easy: use the self-similarity inherent in pictures (a tree looks much like another tree, so build a picture of a forest by layering trees and then correcting the errors).

    But figuring this out amounts to exhaustive search. Apparently IFS had decent heuristics that got decent compression in only a few minutes. I think this is what was licensed, and that the file format was open? This was a while ago, so I'm filling in the gaps in memory by guessing.
  • Perhaps I'm just ignorant, but I thought PNG was supposed to replace GIF when Unisys started seeking royalties? Slashdot advertised the "convert your web page's GIFs to PNG" contest or whatever it was. (Ironically, Slashdot still has MANY GIFs!)

    Secondly, isn't compression becoming less of a concern since more and more people are stepping into the broadband arena?

    It seems to me like the product makes sense, but it would have been much better received 10 years ago or a year ago when Unisys was going insane.

    Yes, we have no spoon.
  • Can it detect the difference between art and pornography?
  • The company was Iterated Systems [iterated.com]. The technology is still being developed, and is highly advanced - they even have a fractal moving-image compressor that can [according to a workmate who did a thesis on compression - take it with a grain of salt if you like, because I can't back it up] fit eight minutes of better-than-VHS-quality film on a floppy disk. He saw it at a trade show, allegedly. But believe what you will.

    The licensing fees for FIF and other Iterated technologies are huge. And the weird thing is, if they were open source, Iterated would be in much better straits than they are now. We'd all be using infinitely zoomable, highly compressed images [GIF, JPG, and PNG are all web standards due to their seeming freeness], and despite the fact that they'd lose the revenue, they'd be known as the number one player in town for fractal image creation tools. And people would buy their software over the other utilities using the same open source image format, because they had the fastest algorithms, extra features, or other competitive advantages.

    Just a thought.
  • I have to address a point in the betanews article:

    The Internet has become a place where images rule, and visually pleasing Web sites reign over the simpler, more text-based sites.

    The man has obviously never been to asciiboner.com [asciiboner.com]. Anne Marie should check this site out too, so as to get an idea of what to expect on our wedding night.

    --Shoeboy
  • by Happosai ( 73708 ) on Wednesday November 22, 2000 @02:30AM (#607751)
    The BetaNews article is actually very misleading - it is confusing two of LizardTech's products.

    DjVu is a document format (like PDF), not an image format, and the techniques mentioned in the article refer to compression of documents.

    LizardTech do have a compressed image format. This is called MrSID, and uses completely different techniques for compression.

    I would be interested to see an independent comparison between MrSID and PNG - unless there are huge advantages to using the proprietary MrSID format over the open-source PNG format, I don't predict much of a future for MrSID on the web (although it would seem that LizardTech are touting it more for internal use than for general distribution anyway).

    [Happosai]
  • by Lemuel ( 2370 ) on Wednesday November 22, 2000 @02:32AM (#607752)
    I don't know why the summary on Slashdot says "No Linux support". LizardTech has both decoders and encoders available for Linux.

    Also the summary picks on LizardTech's use of speed as a feature. While this isn't a standard measurement, it is a way to tell people that you will get your images faster because the files are smaller. That's not a big crime. They also do talk more specifically about their format producing smaller files, so they do understand real measurements. BTW, while it is possible that they say elsewhere that DjVu compresses faster than pdf, what I saw was that the documents download faster, not compress faster.

    The Slashdot write-up complains about LizardTech's comparison of DjVu with pdf, pointing out that pdf isn't an image format. True, but the LizardTech description refers to DjVu as "DjVu for Documents", and their web page describes why DjVu is good for documents. Images seem to be just part of the data they need to handle.

    Finally, I haven't seen any source for Acrobat either, but it is very popular on the Internet, so lack of source won't necessarily keep LizardTech from succeeding with DjVu.

    Is DjVu actually any good? I have no idea. Slamming the product with incorrect and misleading comments doesn't help one decide, though.
  • This sounds remarkably like an image compression algorithm and associated file format that I invented while I was at Medior in San Mateo in late 1994 - I actually implemented it in 1994 and it got used in some shipping multimedia CDROM products such as the Men are From Mars, Women are from Venus CDROM in 1995.

    My algorithm was lossless and what strikes me the most is the speed - my algorithm was notable for the speed of decompression, and we used it in particular just because it was so much faster than GIF, which was a significant advantage when you were scanning through lots of images on CDROM. While I only implemented it for 8-bit indexed images I felt it would likely work fine for any bit depth or color space.

    Medior was later purchased by AOL [aol.com] and renamed to AOL Productions. I think AOL Productions isn't around anymore but lots of old Medior people still work for AOL, for example former Medior President Barry Shuler is a high-level exec at AOL.

    Basically, my invention worked by dividing the image up into lots of little subregions and encoding each pixel in a given subregion in the minimum number of bits required to encode the number of colors that occurred in that region.

    For example, in an image that was black on top and white on the bottom, I'd have two subregions with zero bits each and a single-element color table that was either black or white.

    If it was snow - black and white pixels randomly intermixed - the whole image would be one bit with a two-element color table containing black and white.

    The big trick was to divide the image in such a way that the whole image was reduced the most while the data required to reconstruct the regions didn't get too big. In practice I found it worked OK to start with lots of small fixed-size squares, then merge adjacent squares that had similar color schemes.
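
    A minimal sketch of the per-region accounting in C (the names are mine, not Medior's):

    /* For one tile of an 8-bit indexed image, count distinct colors and
       return the bits needed per pixel: ceil(log2(colors)), which is 0
       for a flat tile whose single color sits in the tile's palette. */
    static int tile_bits(const unsigned char *img, int stride,
                         int x0, int y0, int tw, int th)
    {
        unsigned char seen[256] = {0};
        int x, y, colors = 0, bits = 0;
        for (y = y0; y < y0 + th; y++)
            for (x = x0; x < x0 + tw; x++) {
                unsigned char c = img[y * stride + x];
                if (!seen[c]) { seen[c] = 1; colors++; }
            }
        while ((1 << bits) < colors) bits++;
        return bits;
    }

    Merging would then compare adjacent tiles' palettes and join them whenever the union still fits in the same bit count.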

    I wrote a document for Medior that described what I invented in great detail and what I predicted this could be made to do. What I actually got it to do in practice was not nearly what it was capable of, but this was because of the limited time available to implement it.


    Michael D. Crawford
    GoingWare Inc

  • JPEG is the only one of the three formats that actually has a place to store the document DPI regardless.

    Perhaps you're unaware of the PNG pHYs [w3.org] chunk, then, which lets you specify the physical resolution of the image.

  • PNGs are good, BUT:

    • They *still* aren't supported properly in the most-used web browser (IE prompts me to save them to disk rather than open and display them; perhaps this is fixed in Me, I don't know. I've also had problems with the alpha channels in IE, as well as with the gamma correction stuff - images displayed very dark). Obviously this is typical MS sloth at work, and has nothing to do with PNG itself ..
    • I've never managed to get PNG files to approach JPGs in terms of compactness. Obviously JPG is lossy, so it can always have an advantage - but the fact of the matter is, for the vast majority of image distribution that takes place, a certain amount of information loss is acceptable. (JPGs look perfectly good to 99% of people anyway, so why would those people want to download bigger files in PNG format? Most people probably aren't even aware that JPGs are lossy - hence the large amounts of ultra-crappy-looking porn on newsgroups, from people saving and resaving images just to do STUPID things like adding a huge blue border around an image.)

      Software for creating/reading them (end-user software as well as APIs) still isn't as common and widespread as it *should* be. These things take time, yes, but they are slowed by factors like popular browsers not supporting PNGs properly anyway.

    Don't get me wrong though, I would like to see PNG becoming the dominant format. But PNG seems to forever be stuck in the "early adopter" phase ..

  • http://www.djvu.com/cgi-bin/products/products.pl
    lists DjVu Solo 3.0 (encoder for single pages only) as a free download. Could you use that?

    http://www.djvu.att.com still has source for the reference library for people wanting to write viewers, but an encoder would be harder - I guess there might be patent issues too.
  • Like so many other factors, speed is one that matters if it's extreme, but once it's good enough it has very little importance. Size is arguably more important, because every byte saved is saved every time you use the file, while time saved is only saved during compression, which is often batched.
    So for time, order of magnitude seems to be the differentiator, while size has a linear importance.
  • Obviously, this person is not very well informed.
    Linux has always been the primary development platform for DjVu products. Typically the release cycle is Linux and other unix platforms, then Windows, and finally Mac.

    You'll also find the 2.2 DjVu Reference Library is available with GPL licence. The 3.0 Reference Library is scheduled to be released shortly...

    http://www.lizardtech.com/products/djvu/referencelibrary/DjVuRefLib.html
  • I hope the GIF thing doesn't happen again:
    wait till it's a "standard", then ask for license fees.
    The idea sounds pretty nice, but I'd like to see quality / compression-ratio comparisons with hard numbers ;)
  • http://slashdot.org/article.pl?sid=older/9806282314218&mode=nested

    That was almost 2.5 years ago. The reason they compare against pdf is that the target application seems to be text: compress the black-and-white text with one-bit RLE and the background with JPEG, sort of deal.
  • First time I used PNG, I thought the same thing. Actually, it is just that PNG has a far more complicated interface for selecting options. Once you select the right options, PNG will compress on average about 10-20% smaller than a GIF; I've seen as much as 50% smaller. The only thing I've seen that is better for lossless compression hasn't been released publicly yet. When it is, I'll be happy to tell people about it.
  • Come on people, let's stick to open standards here...
  • by Tony Hoyle ( 11698 ) <tmh@nodomain.org> on Wednesday November 22, 2000 @02:13AM (#607763) Homepage
    A few years ago a company came up with a compression scheme, using fractals, which was actually rather good. It was called FIF. They made the mistake of letting greed overtake common sense and tried to charge for a license to write compressors for it... The result - when is the last time you saw an FIF file?

    If these guys don't have an open format they will simply go the same way.
  • It doesn't do animation, it's closed, and it doesn't even compress down as small as GIF. Have any of you actually tried this thing? It sucks. It's like that JPEG 2000: it's not even as good as the old JPEG, but they still managed to fool enough people to turn it into a standard. It's just a railroad job, and they are winning.
  • If it's been patented commercially already, no thank you. The last thing the web needs is to get wrapped up in another Unisys/GIF problem.

    And how the heck does it figure out what the "foreground" and "background" components are, anyway? If I have to go in and mask out the background before saving, then forget it.

  • by dne ( 10173 ) on Wednesday November 22, 2000 @02:15AM (#607766)

    See DjVu "non-commercial" site [att.com].

    --Daniel

  • by SEWilco ( 27983 ) on Wednesday November 22, 2000 @05:57AM (#607767) Journal
    And we discussed it here on Slashdot:
  • by Anonymous Coward on Wednesday November 22, 2000 @05:57AM (#607768)
    Gary got it mostly right.

    I am one of the four persons who created DjVu in the first place. The events took place at AT&T Labs-Research between 1997 and 1999.

    1. There is Linux support. Just go to the download page [lizardtech.com] and select the Linux platform. Most of DjVu was first coded under Linux.
    2. After the LizardTech deal, we set up a "non-commercial site" named DjVuZone [djvuzone.org]. It contains general information [djvuzone.org], benchmarks [djvuzone.org], links [djvuzone.org], a searchable digital library [djvuzone.org], etc.
    3. There is source code. Lizardtech recently had the good idea to relicense version 2 of the DjVu reference library under the GPL [lizardtech.com]. We have the corresponding online documentation [djvuzone.org] on DjVuZone. We are just waiting for the release of version 3 to redo that part of the site.
    4. DjVu combines several new technologies, including new approaches to arithmetic coding (the Z'-coder), new compression methods for textual images (soft pattern matching, JB2), a new wavelet method (IW44), and new ways of combining them. The current implementation is geared toward compressing scanned document images in 24-bit color around 300 dpi (raw size is 25MB) and typically packs them into 50-60KB. Neither TIFF, nor JPEG, nor JPEG-2000, nor Fax G4 can do that. None of those technologies will let you realistically view such documents over the web. DjVu can [djvuzone.org].

    Hope this helps :-).

    - Leon Bottou, AT&T Labs-Research.

  • ...unlike the original story commentary above stated. It's on AT&T's site Right Here [att.com] and you can download and tinker with it. But read the AT&T Source Code License first; you can only distribute patches to the distribution if you change anything. Also, they don't give you all of the encoding code, omitting the background/foreground image separation and the lossy JB2 encoder that handles bitonal images (the source of some of their best examples of compression!). They actually suggest someone should make a GIMP plug-in for all this. And because they give you the rest of the JB2 back-end, they're practically begging for someone who's read the literature to bridge that gap.

    That was real nice of them.
  • by buttfucker2000 ( 240799 ) on Wednesday November 22, 2000 @02:16AM (#607770) Homepage Journal
    The article has it about right. Png [libpng.org] is vastly superior - excellent lossless compression, sometimes better than lossy methods (plus technical features like a full alpha channel), and, most importantly for its dominance over GIF, it is unencumbered by patents or closed-source algorithms.

    Speed of compression is not a factor - otherwise we would use bmps or xpms, which have zero compression time, because they're uncompressed. Size matters. Speed doesn't.

    I really can't see much market, and very little application for this compression. On-the-fly compression of images for web download would be redundant, since a png would be smaller than this format, so the speedy on-the-fly compression of uncompressed images is pointless.

    And in any case, modern PCs are more than powerful enough to display well-compressed images almost transparently, so a simpler format is about 10 years too late.

    If it were open source, it could perhaps have a market in replacing things like xpms, which are used in games for processing speed. But even then, the benefit would be marginal: hard disk space is, relative to image size, almost infinite, so compressing them slightly wouldn't make much difference - and for download, those images would be gzipped anyway.
  • Not so fast - they have a plugin available for Netscape and IE at:

    http://www.lizardtech.com/cgi-bin/products/desc.pl?tsb=443224

    Cheers,

    Tim
  • You can also grab the current viewer and a FIF plugin for Adobe Photoshop [Win16, Win32, and Macintosh only] from Altamira Group [altamira-group.com]
  • I hope it'll reduce it to zero bytes

    ____________________
  • The result - when is the last time you saw an FIF file?

    Last time? There's never been a first time, for me. This is the first I ever heard about it, and I'm not exactly a newbie.

    Stefan.
    It takes a lot of brains to enjoy satire, humor and wit-

  • And how the heck does it figure out what the "foreground" and "background" components are, anyway?

    As it only works on scans of paper documents, the "background" is bland paper texture, perhaps coloured. You can easily use an edge-detection filter to work out which is which.
  • Flash? Hell, no! Check out my home page and see how many images and plugins I've used (hint: none).
  • DjVu is applicable to any raster image; it just happens to be good at separating high-contrast regions (text) out of the image. And they may be referencing the name AT&T gave to the whole technology, while LizardTech chose to split it up into two distinct groups.
  • I'll second that MrSID is pretty cool. (I found it by way of some old railroad maps [loc.gov] at the Library of Congress. So, if you are looking for some free content, there you go.)
    --
  • by bludwulf ( 39653 )
    I have to question whether CmdrTaco even went to the web site. There are SDKs for Linux/Windows/Solaris, and they measure the speed in terms of PDF because DjVu is a contender to PDF: you can add hyperlinks to documents you scan in.
  • As I understand it, it is possible to set different compression factors (from lossless to very lossy) on different polygonal regions of a JPEG file.

    Since JPEG is already fairly well supported, why not use it? The only thing that needs to be changed is the image-creation software; it needs to know how to detect the difference between background and foreground content. Image-creation software also has a huge advantage here: layers (and their relative z-indexes) give great hints as to what is, and what isn't, "background".

    --
    The BIG idea underlying this is that text requires high spatial resolution (say 300 DPI, 1 bit) with measly colour resolution. In the case of a page of black text on white paper, the colour "pixel" could be the size of the page!

    The photographs need moderate colour AND spatial resolution (say 24 bit, 72 DPI). This is a very old idea - colour print systems in the 70's used it. What's neat is the idea of "gating" the lower-resolution colour data through the higher-resolution "shape mask" to get the properties of both, at least for real-world pages.

    Appropriate (and quite well known) compression techniques are then used on the 2 channels.

    The wavelet technique you describe sounds like it would achieve the frequency-separation concept in a more "implicit" way. DjVu is very explicit.

    BugBear

  • This post looks legit to me despite the AC status. I.e., you can believe it. Cheers, Ed. (Only people from AT&T Labs-Research write "AT&T Labs-Research" :-)
  • I've worked with MrSID, too, and I have to agree that for the intended applications it's much nicer than PNG. The ability to rapidly and progressively zoom into an image is very nice and implemented in such a way that selective zooms can be delivered on the fly over the web without downloading the entire image at once. The demo pieces I looked at were mostly historical maps -- the Library of Congress uses MrSID extensively -- and it was a great way to examine a document that was originally 4 feet across within the confines of a monitor. DjVu was also interesting, and definitely outshines PDF for scanned documents, but MrSID is LizardTech's real marvel.

    I wish people would get off the "if it's not OSS it must suck" bullshit attitude. LizardTech has some really well-designed software, and if it bothers you that they're proprietary (and IMHO, grossly overpriced), then bigod go out and clone it. Having an OSS equivalent of MrSID and DjVu, especially in a form that would be usable to the point-and-drool crowd, would be a very strong selling point for getting Linux into the MS/Adobe-dominated document archiving market.

    --

    Not only are you an Anonymous Coward (if you have something to say, come out and speak up!), but you also had to post this off-topic bullshit 2 times to get it right. Next time don't write anything, and use the Preview button to make sure nothing actually spilled into the Comment box 8)

    Thank you
  • by ozbon ( 99708 )
    If this graphics format is so great, why aren't they using it on their own site? Everything I saw was in .gif or .jpg. DOH!
  • You CAN get a DjVu plugin for Linux:


    Have a look at:


    http://cgi.netscape.com/cgi-bin/pi_moreinfo.cgi?PID=11473 [netscape.com]

    Will

  • It's AT&T's technology.
    http://djvu.research.att.com/

    Even mentioned in slashdot LONG ago (and we know how timely slashdot is ;) ).
    http://slashdot.org/article.pl?sid=99/03/29/0316220&mode=thread

    http://slashdot.org/article.pl?sid=older/9806282314218&mode=thread

    Do a search on slashdot for djvu for more links.

    Link.
  • JPEG is a lossy compression format.
    PNG and GIF are lossless compression formats.

    The reason JPEGs are smaller is that they don't have all the information in there. So on the size front it's no surprise that JPEG "wipes the floor with" PNG. They just look crappier.
  • Um, you can't arbitrarily change a license for software (that you have already licensed.)

    If you have a free open-source encoder, you can continue using it for no cost. No $2000 fee.
  • I didn't read through all the comments here, so this has probably been said already...

    But wasn't DjVu developed a GOOD while back by AT&T or Bell Labs or something? I could have sworn I took a look at this 2 or 3 years ago.

    --
    Dave Brooks (db@amorphous.org)
    http://www.amorphous.org
  • I lost the sources :( and went back to download a new copy, only to find that all of a sudden it cost $2000. That was actually my problem.

  • Actually, I was using the Linux command-line tool as a batch processor. So while, yes, the documents were one page each... the problem is the lack of the command-line tool.

    It was really just a proof of concept anyway... so, having switched away from DjVu, it's not costing me enough bandwidth to matter.

  • The imaging model used by DjVu to get such good compression is to divide the image into (typically) three planes: a background plane (colour, compressed using wavelets, usually at a low resolution), a foreground plane (colour, compressed using wavelets, also at low resolution) and a selector plane. Imagine an ad page in a magazine: there's coloured text printed across a photographic image. The photograph is stored in the background, the text *colours* are stored in the foreground as blobs of colour (e.g., a blue word would have a large blue blob in the foreground), and the text *shapes* are stored in the selector. The decoder decompresses all three, and then draws the foreground colour where the selector is '1' and the background colour where the selector is '0'.
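
    A sketch of that decode rule in C (assuming all three planes are already decompressed and scaled to a common resolution; the names are mine, not DjVu's):

    /* Composite one RGB output pixel per selector bit: foreground colour
       where the selector is 1, background colour where it is 0. */
    static void mrc_composite(const unsigned char *fg,  /* RGB, 3*w*h */
                              const unsigned char *bg,  /* RGB, 3*w*h */
                              const unsigned char *sel, /* 0/1,  w*h  */
                              unsigned char *out, int w, int h)
    {
        int i;
        for (i = 0; i < w * h; i++) {
            const unsigned char *src = sel[i] ? fg : bg;
            out[3*i]     = src[3*i];
            out[3*i + 1] = src[3*i + 1];
            out[3*i + 2] = src[3*i + 2];
        }
    }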

    This scheme has the advantage that it keeps high-frequency content (the edges of the text) out of the background, enabling it to compress better (wavelets and JPEG don't handle high frequencies well: they either smear them, add ringing, or require a lot of bits). Another advantage is that there are specialised algorithms for compressing binary (1 and 0) images containing text. The one used by DjVu is called 'SPM' (Soft Pattern Matching). It works (roughly) by breaking the text image up into isolated characters, then deciding which characters look similar to each other. On most pages, there will be a lot of repeated characters. You can then get improved compression by storing only one instance of each unique character shape. SPM does something fancier than that but that's the basic idea.

    This three-layer model is known as MRC (Mixed Raster Content). It's used outside DjVu - most notably in a file format called TIFF-FX. This is an extension of TIFF intended for Internet Fax (including colour fax). TIFF-FX is, if I remember right, RFC2301. There is an extension to TIFF-FX in the works to add support for JBIG2. What's JBIG2? It's a format standardised by ISO for doing bilevel image compression, using the same concept of improving text compression by identifying repeated shapes. (Actually, there's a lot more in JBIG2 but I'll leave that aside). The concepts used in DjVu's SPM were incorporated into the JBIG2 design (AT&T participated in the design of JBIG2), so JBIG2 can do anything SPM can do. JBIG2 was approved by ISO and ITU this past spring.

    Another file format that includes a lot of the same concepts as DjVu (MRC, repeated shape compression) is ScanSoft's XIFF - it's yet another TIFF extension, and is used in ScanSoft's Pagis line of products. In fact, much of the stuff in TIFF-FX is a standardised version of features that appeared in XIFF. MRC is also going to be in one of the later parts of JPEG-2000.

    I've been involved in the work on XIFF and TIFF-FX and I was the editor of the JBIG2 standard. So what do I think of DjVu? One problem is that AT&T abandoned the standards process - they felt it was moving too slowly. This means that they opted for a proprietary solution over an open standard one. Yes, the standards process is slow, but when it works it produces things like JPEG and PNG (a W3C standard) - formats that everyone can use, and where you have a choice of encoders, a choice of decoders, and few worries about interoperability. Another problem is its reliance on arithmetic coding - while that gets you good compression, it can have speed problems. JBIG2 offers the choice of arithmetic compression (for applications where size is the most important issue) and Huffman-based compression, for applications where speed is the most important issue. I've personally seen a JBIG2 decoder decompress at over 1 gigapixel per second...

  • If you want to learn more about what DjVu actually IS then don't bother reading the marketoid page. Look here [att.com]. Specifically, on the what is DjVu? [att.com] page, they say the following:

    The commercialization of DjVu is handled by Seattle-based LizardTech Inc. in partnership with AT&T Labs. DjVu is an open standard. The file format specification, as well as open source implementations of the decoder (and part of the encoder), are available.
    Among other claims, they say "Black-and-white pages at 300 DPI typically occupy 5 to 30KB when compressed. "

    I have long given up on slashdot reporting the whole truth and nothing but the truth...

    -Chris

  • This was on slashdot like two years ago, when it was AT&T pushing it instead of this LizardTech place that apparently picked it up.
  • -1 redundant? This should be -1 doesn't-know-wtf-he's-talking-about. Comparing DjVu to PNG is like comparing GIF to JPEG -- each format serves a very different purpose.
  • by Hasdi Hashim ( 17383 ) on Wednesday November 22, 2000 @02:49AM (#607798) Homepage
    He posted this two years ago: DjVu plug-in available on Linux/Irix/Solaris/Mac [slashdot.org]. First post by sengan:
    New Image Compression Algorithm claims 1000:1 ratio

    Hasdi :-)
  • Moreover, the first demo compressor / Netscape plugin that AT&T released was for Linux... and that was two years ago... :)
  • Not only is there a Linux x86 binary available, the source code is also online here [att.com].

    (Ironically, Slashdot still has MANY GIFs!)

    Maybe the advertisers demand that their ads run as GIFs, not PNGs.

  • Yea, it knows it when it sees it...
  • MS used to use FIF for Encarta, Art Gallery and a bunch of other multimedia reference titles - I expect they've gone to JPEG since they now use DVD-ROMs and don't care so much about space or quality.

    Hmmmm, nah. Some jokes are too easy....
  • grep the homepage for gif, you'd be surprised...
  • You're missing the advantage that MRC-based image formats like DjVu (and TIFF-FX and eventually JPEG-2000) have: by separating continuous-tone content like photographs from monotone (spot colour) high-frequency content like text, you can achieve really good compression while maintaining image quality. You don't want to do lossless compression of high-resolution photographic images - even with the best lossless compressor, the files are still huge. And if you do lossy compression of mixed content (text over images), you'll get really lousy results unless you separate the different types of image content.
  • by Suydam ( 881 ) on Wednesday November 22, 2000 @03:03AM (#607823) Homepage
    On that note, I've been using DjVu's open-source encoder for several years to encode text documents. Its compression ratios are incredible, and the plugin is also free and easy to install.

    The big problem I have with this article is that DjVu isn't a "new image format". It doesn't even display things inline (like GIF, PNG and JPEG do). It is, however, an excellent alternative to PDF if file size is your main concern.

    The extensive references to "speed" when compared to GIFs and PDFs could be one of two things. They could be talking about download speed (my personal experience shows DjVu files to be about 10 times smaller than GIFs, and even smaller compared to PDFs). Or they might be speaking of encoding speed, at which DjVu seems to excel.

    Here is a problem, however: the command-line encoder used to be free for non-commercial use. I was using DjVu for encoding swim-team documents for a small non-scholarship collegiate swim team; certainly this counts as non-commercial. However, the new version from LizardTech would cost me $2,000 USD to run. That is absurd by comparison, so I'm abandoning DjVu since I can no longer afford the encoder.

    Incidentally, if you want to see how it worked for me, I used it on nearly every swim meet results page for a few years. Here is an example; just click on the links next to the word "Splits" in each event: http://www.k-swimming.org/cgi-bin/swimming/results/meet_view.pl?8 [k-swimming.org]

  • I agree that licensing mania was a deciding factor in the fall of FIF, but as I recall there were other issues at play as well. One was that compression times tended to skyrocket the higher the level of compression you chose. Another was that certain types of images compressed better than others - a picture of a tree worked far better than a picture of a skyline, because the fractal encoding handled the "natural" shapes of the tree better than the straight-edged, blocky skyline.

    Last I used it (on a 486 laptop, mind), I could compress a JPG image in about five seconds to the FIF's 150 seconds or more. The images were comparable in quality - FIF did a visibly better job on some photos, but not so much of a difference that it justified the extra effort of encoding them.

    Again, this is all from pretty old experience; the compressors may be far better now, and an open-source format / non-licensed technology approach would likely have resulted in better compressors anyway. Any more recent info on how FIF fares, performance-wise?

    $ man reality

  • So, we get a new format? Not that easy. What about the spotty support for PNG? What about the various competing vector image formats? What about the more aggressive wavelet based image compression?

    First you have to have a good format, then it has to be accessible and affordable, then it has to be accepted. For the life of me, I can't figure out why PNG hasn't replaced GIF.

    Actually, though, that's part of the final hurdle -- a Catch-22. No one will adopt it until web pages and browsers support it. Web pages won't support it until browsers do, and browsers have no reason to until web page creators demand it.

  • And why exactly is "fractal compression" so much better than wavelet-based?


    ~Tim
    --
    .|` Clouds cross the black moonlight,
  • by snookums ( 48954 ) on Wednesday November 22, 2000 @02:16AM (#607829)

    They have a browser plugin for Linux/x86/glibc2 available for download here [lizardtech.com]

    Yes, I know that link is broken because /. put a space in it -- you'll have to get rid of it yourself
  • by Nailer ( 69468 ) on Wednesday November 22, 2000 @02:17AM (#607830)
    DjVu has been around for 2 years, and isn't anything new. In fact, it wasn't actually designed by LizardTech - it was developed as an Open Source technology in the Olivetti and Oracle Research labs in Cambridge, UK, and was sold when US telco AT&T purchased the labs.

    Hence the Open Source products generally only seem to be there to satisfy existing licensing requirements from prior to LizardTech's purchase. It's doubtful LizardTech intends to encourage that aspect of the technology; they're only promoting the closed-source stuff.

    However, the compression is indeed very real, and the cross-platform nature makes it quite useful for archiving stuff that won't be modified frequently in the future - remember, that text ain't vectorized, it's just another layered image, AFAICT.

  • GIF = "jif"
    JPEG = "JAY-peg"
    PNG = "ping"
    DjVu = "duh-ju-voo"

    So you have the choice between drooling on yourself or saying "deja vu," which has more syllables, tone changes, and stops than its nearest competitors. Plus the danger of the illiterate calling it "day-JAH-voo."

    -----
    Go ahead, blame me... I voted for Nader!
  • There's another competitor to MrSID by Iterated Systems [iteratedsystems.com] called MediaBin. We were helping Iterated out with actual field use (we're a prepress/graphic arts shop), but they were more interested in the science than in implementation. Looks like MediaBin is their offering.

    Lately we've been much more interested in MrSID and have been using it a little bit, but are hoping to include it in our home brewed media management system.

    Jason
  • It appears that they've simply filtered the documents into 2 layers.
    One is for bilevel compression, say using CCITT Group 4 (faxes use Group 3; Group 4 gets a better ratio, but a bigger disaster if you drop a bit), or maybe JBIG (IBM has a patent on the statistical model of the Q-boxes, not on the actual compression algorithm, so all you need are your own Q-boxes). Both of the above compare the current line with the previous line, and barf horribly when given dithered bilevel images, such as newspaper photos or noisy scans.
    The second layer is the full-colour one, and can be implemented as a JPEG.
    TIFF 6 supports all of the above.
    The 3GB TIFF document they talk about on their site was probably 3GB using the "uncompressed" setting. Apples and oranges, as they say.

    The world doesn't need any new de-facto "standards" when there are perfectly good present non-proprietory standards which can do the same.

    FP
  • > IMHO there's very little need for highly compressed images on the web right now.

    If you download a whole book with text + pictures, then it would make a difference.

    Today, scans of archives available on the web are often ugly 1-bit-per-pixel low-res scans embedded in PDFs. And that sucks badly / is almost unreadable.

    So, DjVu _is_ something interesting for reading non-copyrighted papers, research work, FOI publications, or public-domain archives.

    > but considering the format's not open

    As many others have already pointed out, the format is open.

    Cheers,

    --fred
  • True. If you want to see DjVu in action, go get the plugin at djvu.com and visit one of my projects here at CWRU. http://www.cwru.edu/UL/DigiLib/Hours/homepage.html [cwru.edu]

    Picture this: Start with a 15th century Flemish "Book of Hours", hand-illuminated on vellum (goat skin). Scan it at 600dpi, 24-bit, for archival purposes. Reduce your TIFFs to 300dpi and you still have 1.06 GB of image data (not very downloadable). Using the DjVu compressor we achieved 205:1 compression, so the final product totals 5.44MB. By separating the pages so they only download when called for, the initial download is a mere 45.06KB (including all of the HTML and other images on the page), with an average download of only 21.34KB for subsequent pages.

    DjVu was developed by AT&T Research. It was then purchased by LizardTech last year.

    Code commentary is like sex.
    If it's good, it's VERY good.

  • Actually, PDF documents generated directly as PDF, rather than scanned, are fairly small. This is a format for scanned documents, and there it knocks the socks off of PDF. On average a DjVu file from a scanned document will be 1/5 the size of its PDF equivalent; I have actually achieved compression down to 1/15 the size of the equivalent PDF.

    Code commentary is like sex.
    If it's good, it's VERY good.

  • so don't expect it to get a lot of support on any major Web sites
    Images are displayed by browsers, so IE and Netscape would have to implement this image format before Web sites could use it. And if you consider how slowly support for PNG was implemented - and PNG was free - you can expect support for DjVu to come much more slowly.
  • by funkman ( 13736 ) on Wednesday November 22, 2000 @02:21AM (#607859)
    Here is the press release [lizardtech.com]. Their market is not general sites like slash, yahoo, etc. This is a business-oriented product for storing digital assets so they may easily be cataloged and transformed into a format a user can see. From their press release, some example apps:

    Corporate digital asset collections

    Real estate sites

    Online catalogs/retail companies

    Auction sites

    Libraries

    Medical sites

    Geospatial imagery/government agencies

    Corporations are storing everything digitally now - pictures, instructions, etc. - and are searching for a way to manage all of it. This product is an attempt to fill that void.

    In a nutshell, this will be a specialized format used by businesses that need to pass digital assets to the user.

  • They seem to be focusing on print documents (LizardTech mentions using it to present a catalog online, for example). I have a hard time believing their claims of being smaller than PDF for that purpose - their figures for improved compression appear to be derived from PDFs made by scanning a print document in full color at high resolution. Who makes PDFs that way? It might be useful for archival purposes, but anyone who's distributing catalogs and non-archival documents online is going to make PDFs the correct way (i.e., ps2pdf or some other way of making a PDF that isn't just a big bitmap). Anyhow, PDF is a vector format, and when used correctly it will be smaller than anything DjVu can accomplish. Their "comparison" is an astonishing case of apples and oranges.

  • by Nailer ( 69468 ) on Wednesday November 22, 2000 @02:21AM (#607861)
    It's more a document archiving format than a new web image format [although it happens to be viewable over the web] - as the article states, although it's non-vectorized, it uses layered bitmaps to create more efficiently encodable data chunks.

    And, actually, there is Linux support, and source code available. It's just that LizardTech aren't going out of their way to tell anybody about it - see my above post :-)

  • The format they discuss on the LizardTech page covers both an image and a document format. The DjVu format is really just a new document format for viewing, you guessed it, documents (which include images). This isn't a new image format that will replace GIF, PNG or JPG; it's just another way to put together documents (as if we don't have enough formats out there for this already). They also have an image format that is more compact than other ones, but I doubt it will become a standard of any kind.

    Now, the claims on the document format bother me. First, they compare it against PDF, which we all know is large and bloated to begin with. Sure, if I took any document, separated out the text and formatting and images, and compressed it all down, I'd probably have a "new" revolutionary document format. Doesn't this just sound like HTML? I've written server-side scripts and client applets that compress HTML the same way, and I think my results would be about the same as this (and perhaps faster?). You'll still need their plug-in to view their documents. They also say that a 2.5GB TIFF is compressed down to 3MB. Wow. I can do that now if I convert the TIFF to JPEG with little loss of quality. I really don't see what the big deal is.

    I wish the /. reporters would do a little research before posting stories that send the readers into a frenzy of clicking and sending off emails to friends about the next wave sweeping the internet.

    liB

  • by gary.flake ( 7241 ) on Wednesday November 22, 2000 @04:16AM (#607868) Homepage
    As far as I can tell, CT's post and the article have a number of things wrong. I've known some of the people involved with DjVu for a couple of years, so let me list a few facts in no particular order:
    1. DjVu was originally developed at AT&T by a group that has traditionally worked in machine learning. LizardTech purchased the technology from AT&T.
    2. This format is specialized for scanned documents.
    3. The technology is very different from just about everything else because it separates background and foreground planes. The background is compressed with wavelets, and the foreground probably uses a form of clustering on character shapes (in a typeface- and language-independent manner). As a result of the latter, you get a form of OCR almost for free. You can also do text search.
    4. Everything can be viewed at 300dpi directly in your browser and in realtime (you normally only view at 100dpi but you can zoom in).
    5. The linux viewer plugin and compressor has been available for years.

    The main attraction of DjVu is that your scanned documents are tiny (typically less than 50KB), which makes it feasible to put them on the web. Just about every other format results in files too big for easy distribution on the web. Interestingly, you can convert a *.ps.gz file into a DjVu file and see a dramatic improvement in file size while preserving almost all of the detail. I am not talking about simple pages here, but very complex ones with a mixture of real images / artwork and text.

    Apologies for any mistakes, but I think that I got most of it right.

    -- GWF

  • Frankly, who cares what the individual formats are? Really. That is completely beside the point.

    The important thing is that they try to make a semantic interpretation of the input image and apply differing approaches depending on the content. My post above answered the question of why they compare themselves to pdf: they focus on compressing exactly one kind of information, so they shouldn't be compared to standard image compression.

    So I guess I don't agree about that being redundant. Whereas the fact that they use wavelets... well, that's nice, but hardly germane.
  • by dmarney ( 257231 ) on Wednesday November 22, 2000 @09:07AM (#607872)
    I have been using DjVu for more than a year now, and have tested it extensively against other image compression technologies. If you've got scanned images to display on the web, especially if they are in color, then DjVu is far and away the best technology on the market. Here is just a short list of the things that I like about DjVu:

    1. I can encode a 20MB color scan into a 100-200K file, and still get a great-looking image on the screen. I can create astonishingly tiny B&W images (how about a full page of text scanned at 300 DPI rendered in 17K?). Nothing else comes close.

    2. I can create separate image and data layers. DjVu produces a color background layer, color foreground layer, high-contrast B&W layer, and a data layer. This is essential for doing OCR, where we really need that B&W image to get to the text.

    3. I can encode the DjVu image to automatically upsample to match the greater resolution of my printer vs. the screen, in the same file.

    4. I can create a multi-page document either as a single file, or as a linked list of individual pages, with a file for each page. No more horrid PDF byte-serving! (Please pardon us as the Author weeps for joy.)

    5. I can construct a URL that will drop a user into the middle of a document (page 50 of 100, for example), and not lose the context of the other pages.

    6. I can use the EMBED tag to provide automatic installation of the free DjVu viewer, and I get to specify which image comes up once the software is installed. There are no sign-up forms, no harvesting of my customers' email addresses, and no taking the user out of my visual space.

    7. The viewer zooms and pans on the fly. You can zoom in to 100%, and pan through the image simply by clicking and dragging the mouse. This is the only way this should be done.

    8. All of the encoding and decoding tools are completely free for eval purposes. The decoder is free for all purposes, and most of the source is open.

    9. Once I create a DjVu file, I can convert it to other file formats such as JPEG. Try doing that with PDF!

    10. IT LOOKS BETTER -- A LOT BETTER -- THAN ANYTHING ELSE.
  • by harmonica ( 29841 ) on Wednesday November 22, 2000 @09:19AM (#607873)
    DjVu is for scanned documents. There is no major accepted file format in use for this kind of data, and it will be a huge market once bureaucracies around the world start digitizing their tons of documents. OTOH, DjVu has been around for quite a while already, and I don't see that it has succeeded. Plus, when I installed the plugin under IE 5 a year ago, it was in some dubious beta state. Not nice to work with.

    Lossy / lossless are different compression types. You cannot compare PNG to lossy schemes: PNG cannot beat a lossy method because the goals are different. Lossless: compress as small as possible (but the exact original must be restorable); typically, the algorithms that throw more resources (CPU and memory) at it do better. Lossy: for a given file size, reach the best quality. You can easily beat PNG with a lossy scheme by simply choosing very bad quality.

    Open source. There are a few programs out there. Try TIC [waikato.ac.nz]. It's GPL'd and beats JBIG-1 by about 40 percent on scanned images, according to the website.

    Resources: Image Compression Resources [uni-rostock.de], The Data Compression Library [dogma.net].
  • by karzan ( 132637 ) on Wednesday November 22, 2000 @04:49AM (#607877)
    LizardTech had a booth at Seybold this year, and let me tell you, this technology is very, very impressive. They demonstrated extremely high-res files at various zoom points - and explained that the files were very small. The thing also worked lightning fast: except for a just-visible delay, zooming happens almost instantly. Another thing: I'm fairly sure it was doing raster, not vector. In any case, it was obvious it went way beyond PNG.
  • by Black Parrot ( 19622 ) on Wednesday November 22, 2000 @02:24AM (#607880)
    > Apparently, it differentiates between foreground and background components of an image, and compresses each appropriately.

    Yeah, but can it detect the difference between nude and naked?
  • Wasn't that already posted?

    Wait - I get it ;,)
