JPEG 2000 Specs 82
Richard Finney
writes "JPEG 2000 might be the graphics file format that even dazzles the pros," in a ZD article entitled JPEG 2000 to give the Web a new image. JPEG 2K will have multiple channels, metadata, variable decompression levels and progressive decompression.
PNG (Score:1)
256 channels (Score:1)
Medium for Landsat Images. (Score:1)
Re: So how many patents... (Score:1)
This is too idealistic a view. The reality is that MPEG-2, for instance, is patented to death; there is an entire company dedicated to resolving its patent issues, see www.mpegla.com [mpegla.com]; you have to pay even to distribute an MPEG-2 video (not an encoder or a decoder, just the video file itself)! MP3 encoders have also been pursued over patents.
it's not as if there are any companies involved that could harvest money from patent infringements anyway.
They do harvest money from licensees (see the MPEG LA site); I don't think they sue that often, but sending an official letter saying "You're infringing on our patent #USJunkPatent, please stop distributing your product or give us 1%" is very effective.
For an example, see the League for Programming Freedom page [mit.edu] on Unisys/CompuServe GIF Controversy.
I wouldn't be surprised at all if companies instead tried to force standards to use their patented technology; the MPEG LA example shows that there is an insidious incentive to accept the other party's patented technology if they accept yours: in the end, the standard is published, the patent owners happily cross-license (or form a patent pool from which they each get a percentage), and the rest of the world is pissed.
JPEG2K vs. DjVu? (Score:1)
DjVu [att.com] really is 4 compression formats in one.
- IW44: state-of-the-art wavelet-based continuous-tone image compression method. It's
similar to what JPEG2K will be, but it's available now, with source code, and without patent restrictions if it is used in OSS projects.
- JB2 lossless: a compression technique for "bitonal" images (i.e. black & white, no grayscale). Very efficient for scanned pages of text.
- JB2 lossy: lossy version of the above (smaller file sizes).
- DjVu: a "layered" image format where the text and drawings are coded with JB2, and the backgrounds and pictures are coded with IW44.
IW44 is the thing that should be compared with JPEG2K. It's progressive, and cheap to decode (no multiplications required, unlike many other wavelet methods, and unlike JPEG).
As of now it doesn't support anything other than RGB (really YCbCr internally, like JPEG).
Files are about a factor of 2 smaller than JPEG for relatively high compression rates, but they are only marginally smaller than JPEG for high-quality settings.
- One of the creators of DjVu
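As a rough illustration of the "no multiplications" point above, here is a sketch of a one-level LeGall 5/3 lifting transform on a 1-D signal. This is illustrative code, not IW44 itself (the 5/3 filter is the integer wavelet later adopted for JPEG 2000 lossless coding); the point is that both directions use only integer adds and shifts, and the inverse is exact:

```python
def forward_53(x):
    """Split x (even length) into lowpass s and highpass d, integer-only."""
    even, odd = x[0::2], x[1::2]
    # Predict: detail = odd sample minus the average of its even neighbours
    d = [odd[i] - ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
         for i in range(len(odd))]
    # Update: smooth the even samples using neighbouring details
    s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
         for i in range(len(even))]
    return s, d

def inverse_53(s, d):
    """Exact integer inverse: undo the update, then undo the predict."""
    even = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
            for i in range(len(s))]
    odd = [d[i] + ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
           for i in range(len(d))]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

signal = [10, 12, 14, 20, 30, 28, 26, 24]
s, d = forward_53(signal)
assert inverse_53(s, d) == signal  # lossless round trip, no multiplies
```

Because each lifting step is subtracted back out exactly, the transform is reversible in integer arithmetic, which is what makes this family cheap to decode.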
There is NO patent problem with JPEG 2000. (Score:1)
So, while there are patented things in it, such as the SPIHT coder (which they will no doubt use), the patents are merely taken as precautions by academics to ensure nobody runs away with their idea and profits from it.
In other words: there will be absolutely no problem creating free software that writes or reads JPEG 2000 images.
a knowledgeable Anonymous Coward.
Color Management and the Wavelet Tide (Score:2)
In addition, JPEG 2000 will ride to victory on the shoulders of wavelet compression formats, which are taking the web by storm. I have seen the future and it is wavelet. It's everywhere; here is a list of sites that use it:
(nil)
Not only that, JPEG 2000 isn't just a JPEG/GIF/PNG killer, it's also the RIAA-endorsed MP3 killer! Said RIAA spokesman Ana L. Retentive, "by promoting the revolutionary new JPEG 2000 file format for music distribution, we can stop the MP3 plague, ride the wavelet of the future, and increase the American public's sheet music literacy in one fell swoop. Once the filthy, groveling peons out there succumb to the sheer joy of making their own music from lyrics and sheet music (JPEG 2000 encoded, of course) they find on the web, they will quickly realize what poor musicians they are and will head in droves to record stores."
So how many patents... (Score:1)
PNG (Score:1)
Well in this great? (Score:1)
PNG (Score:1)
256 channels has to be one of the most absurd ideas I've heard of. What would you ever use them all for? Also, compression speed on wavelets may be agonizingly slow, but decompression speed should be faster than most other formats.
PNG (Score:1)
I understand this, but the option of 256 channels even feels like overkill to me.
Still waiting for PNG (Score:2)
Wavelet theory is cool.
But then, we've been promised PNG for years, and it too contains many excellent ideas. But for the installed base of current-technology browsers to shrink down to 5% of all browsers in use, we shall have to wait an awfully long time. Especially as proportionally fewer and fewer people care about using the latest versions. Things will only get worse.
So it's a fair bet to say that this new format will never amount to anything much on the Web. I hope I'm proven wrong, but the way things are going, the current Web standards are slowly but surely congealing into a qwertyuiop-like mass of immovable standards. People are happy enough with GIF and JPEG, and as line speeds get upgraded, people don't really care any more about an image that takes .02 secs to transfer rather than .12 secs. It just doesn't matter any more.
And it's all going to be patented to the bits (Score:2)
What do I use for lossy compression? JPEG, due to its popularity. Lossless compression? GIF, PNG, xcf.bz2, ps.gz... Use the right tool for the job.
I'd love to see a new, openly developed, lossy compression scheme. Unfortunately, I wouldn't know where to begin, because most of this work is done by commercial software companies and whatnot...
And it's all going to be patented to the bits (Score:1)
Well in this great? (Score:1)
Well, it is supported, but not very well. I don't remember which browser it is, but either Netscape 4.x or IE 4.x does not support transparency in PNGs. That really cuts into their usefulness for me.
do we get alpha channels? (Score:1)
Isn't PNG the standard? (Score:1)
JPEG2000 just looks like it's adding more features of PNG to JPEG, so you can still have mega-compression of photos but keep transparency, etc.
GIF forever!? (Score:1)
JPEG and GIF are here to stay. Style Sheets have yet to prove useful for most websites.
People worry about backward compatibility, and worry even more about things being poorly supported by browsers.
Copyrights will be a major issue with all formats in the future, which brings up a question: is it ethical to copyright an Internet standard? Standards should be open so that everyone can access them. Nobody has yet tried patenting HTML (then again, Microsoft or Netscape probably have a patent pending), so this is getting ridiculously stupid.
I'll Stick to plain HTML and GIF and JPEGs for the near future.
Thanks,
AArthur
MS & PNGs (Score:1)
Actually, Microsoft has been very good about PNGs. IE has supported PNGs since version 4.0b1, and Office 97 uses PNG as its native compressed image format and also directly in its PowerPoint, Excel, Word and OfficeArt components.
I sure hope... (Score:1)
that disallows its use in open source software. Something as simple as that could end up being a bad turning point -- what if, all of a sudden, we could not display images on Linux desktops, or if something like Mozilla was not allowed to handle JPEGs (just as it's not allowed to do 128-bit SSL or IM)?
There is NO patent problem with JPEG 2000. (Score:1)
Is JPEG2000 the answer? (Score:1)
Technical papers (Score:1)
I've followed the standard, and I think that it will do very well. Even on the Web.
The only problem I see is with licensing, but since HP helped develop the JPEG-LS algorithm and is giving [hp.com] away licenses for it, I can't see this as a problem.
--
Christian Peel
chris.peel@ieee.org
vs. PNG (Score:2)
However, given the perceived "good enoughness" of GIF and current JPEG, any new format faces an uphill battle.
JPEG2K vs. DjVu? (Score:2)
Also, it was unclear in the JPEG2K article whether the new image format maintained the current distinction between compression and file format. Currently most "JPEG" files use a format called JFIF, but the file format and compression are separate -- so the file format can be used to store information other than the image, or could be used to store images compressed with other compression algorithms. Conversely, you can have JPEG-compressed images stored in other file formats. Some digital cameras like the Kodak DC-260 do this -- use JPEG compression but not the JFIF file format.
Anyway, that seemed like a good design, but it seems clear that the new JPEG2K requires both new compression (wavelet) and a new file format (for multiple channels). I hope they manage to keep the two separated as in the original JPEG.
--JT
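The container/codec split described above can be made concrete with a minimal sketch that walks the marker segments of a JPEG stream (hypothetical code, though the marker values are the real ones from the JPEG standard): JFIF is just an APP0 segment near the front of the file, and the entropy-coded data begins after the SOS marker.

```python
import struct

def list_markers(data):
    """Return (marker, segment_length) pairs up to the start-of-scan."""
    assert data[0:2] == b"\xff\xd8"      # SOI: start of image
    segments = []
    pos = 2
    while pos + 4 <= len(data):
        assert data[pos] == 0xFF          # every segment starts with 0xFF
        marker = data[pos + 1]
        (seglen,) = struct.unpack(">H", data[pos + 2:pos + 4])
        segments.append((hex(marker), seglen))
        if marker == 0xDA:               # SOS: entropy-coded scan follows
            break
        pos += 2 + seglen                # seglen includes its own 2 bytes
    return segments

# A tiny synthetic stream: SOI, a 16-byte APP0 ("JFIF") segment, then SOS
fake = (b"\xff\xd8"
        + b"\xff\xe0" + struct.pack(">H", 16) + b"JFIF\x00" + bytes(9)
        + b"\xff\xda" + struct.pack(">H", 2))
print(list_markers(fake))
```

Nothing in this walk cares whether the scan data is DCT- or wavelet-compressed, which is the author's point: the container and the codec are logically separate.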
Advertising Next? (Score:1)
Advertising space in graphics files?
and really, with all that information in the J2K file format, just how small is "small"?
are we talking 500k here?
PNG (Score:1)
JPG2k seems more useful for print than for the web (Score:1)
The advantages for the web could be many, but given the assumed slow uptake of support in the browsers, and the need to be backward compatible with older browsers, I think this format might give TIFF a run for its money, but not JPEG.
Let's put it straight.. (Score:1)
Wrong. Wrong. Wrong.
The DWT and its relatives are computationally very efficient algorithms.
The reason they were not used earlier is that they were not developed earlier.
I have a doubt...too (Score:1)
BS. The human eye works in nice square pieces? Utter bullshit. Do your homework. It is the other way around.
I don't know what WT compression you looked at, but all I have seen (a lot) have MUCH better quality per bit, and most importantly, they are free from the nice square artifacts of JPEG that the human eye is so well equipped to pick up.
And when coded sanely, they are computationally MORE efficient.
Do yourself a favor: pick up a good book (say, Mallat's A Wavelet Tour of Signal Processing) and read it...
extra comment.. (Score:1)
extra comment.. (Score:1)
- Most wavelet-transform-based compression algorithms achieve better quality because they naturally concentrate on changes -- the edges of the image -- the same things the human eye and mind concentrate on.
Look at some papers [mathsoft.com] on image processing...
Notice one about deblocking of JPEG compressed images, for example...
I have a doubt...too - and so you should (Score:1)
Huh? You should brush up your understanding of the subject... Most of the developments happened later -- I already mentioned some good references...
From your description of your testing I am positively sure you fucked it up. E-mail me if you want some more references on the subject...
link to the book I mentioned... (Score:1)
JPEG and GIF (Score:1)
PNG (Score:2)
JPEG 2K might be God's own personal image format, but if a format falls in a forest and no one is around to implement it, does it make for pretty pictures?
why not a new compression scheme in TIFF? (Score:1)
I guess because then they couldn't do a truncated download for low-res.
Oh well. If they release a freely usable source library, I hope the format flourishes. Otherwise, it can go to hell with my bootprint on its ass.
wavelets do better compression (Score:1)
Silly me (sarcasm implied), I forgot, there's only one wavelet algorithm... I do understand the algorithm, if you are talking about Haar and the like. There are many wavelet transforms, related mathematically, but with differing resolutions, etc. Even setting aside the iterative math problem above, determining an algorithm which will work well for the vast majority of images to be encoded/decoded is not an easy task.
Oh, let me just fire up my SGI, my Mac, my OS/2 box, my NetWinder, my 486/DX2, my Alpha, my HP... and do a recompile. I think you get my point. The problem isn't recompiling the code; it's coming up with code that is general enough to be easily moved between platforms without suffering serious performance degradation. Failing that, you have to have coders on those platforms prepared to deal with and optimise for the "truly hairy math".
Is JPEG2000 the answer? (Score:2)
For those not familiar with DCT or wavelet compression methods, what we're talking about are ways of telling the computer how to generate the image mathematically from a much smaller set of data. The "ideal" algorithm would quickly reproduce any image accurately with the smallest possible dataset. The problem is, different techniques work better for different images. Which is why I suggest that until hard numbers are available, I wouldn't add momentum to this particular bandwagon. One of the folks working with me on wavelets said it way better than I ever did -- "the math is truly hairy", which means that even if JPEG2000 is a wonderful algorithm, moving it into code for alternative platforms will still be a daunting task. "Hairy math" takes time for the computer to resolve.
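For the curious, the transform-coding idea the paragraph describes can be sketched in a few lines: project a signal onto cosine basis functions (a 1-D DCT-II, written out naively here for illustration, not an optimized or official implementation), zero all but the largest coefficients, and reconstruct an approximation from the smaller dataset.

```python
import math

def dct(x):
    """Naive 1-D DCT-II: X[k] = sum_n x[n] cos(pi (n + 1/2) k / N)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N)
                for n in range(N))
            for k in range(N)]

def idct(X):
    """Matching inverse (scaled DCT-III)."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                            for k in range(1, N))) * 2 / N
            for n in range(N)]

signal = [8.0, 9.0, 10.0, 30.0, 31.0, 32.0, 12.0, 11.0]
X = dct(signal)
assert all(abs(a - b) < 1e-9 for a, b in zip(idct(X), signal))

# "Compression": keep only the 4 largest-magnitude coefficients
keep = sorted(range(len(X)), key=lambda k: abs(X[k]), reverse=True)[:4]
X_small = [X[k] if k in keep else 0.0 for k in range(len(X))]
approx = idct(X_small)  # an approximation built from half the data
```

Which coefficients survive the cut depends entirely on the signal, which is the post's point: different techniques (and different bases) win on different images.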
Meanwhile PNG... From my limited work and reading, PNG appears to be an excellent format -- but one that hasn't reached the critical mass that Linux has. How to help: plug-ins. M$ probably won't listen [ever tried to run IE for Linux? ;')] but Mozilla's developers will listen. Octave's developers will too. (I haven't checked to see if there is already a PNG code branch in Mozilla or Octave, so apologies if I'm speaking out of turn.)
A last question for the /. world: (later in the article) Speaking about Lizard Tech's MrSID, I noticed a feature I wondered about: "...MrSID supports an exact coordinate system that lets a user zoom in on an area of the picture." Does anybody know if this particular feature is covered under any patents?
I have a doubt...too - and so you should (Score:1)
For wavelets we started with the company that provides the wavelet decompression engine for Microsoft's Encarta (our direct competitor). If you have ever seen Encarta (at least '98 or earlier), the photos look like shit. Cloud-shaped artifacts on a blue sky are (barely) acceptable. Cloud-shaped artifacts on grass, a window, a (tiled) rooftop - that's pretty dire.
Having tried that same wavelet compression engine, we compressed images with it at the highest possible quality. 640x480x8 images came out around 65-69K, with both wavelet and JPEG. The wavelets were of a quality we considered unacceptable. The JPEGs were very difficult to distinguish from the originals (excepting almost flat colour areas and sharp borders).
You may also note that the first thing I said in my post was that this research was done more than a year ago. It was, admittedly, only done on the web, but I figured most companies touting such a technology at a level appropriate for multimedia applications for home-users would also be trying to provide us all with browser plug-ins.
Oh, and comparing artifacts again, I think that most people would have greater difficulty noticing 2x2 square artifacts than ones that look like 200x2-pixel squiggly lines running all over the place. Oh, and that attractive fuzzy "I can't focus anymore, maybe I need glasses" effect.
Finally, though, thanks for the book recommendation. I'll check it out.
~m.
I have a doubt... (Score:2)
So, with that disclaimer:
1). I seem to recall that JPEG already had a facility for storing multiple resolutions in one image, with all general compression info stored at the start of the file (in JFIFs), and the lowest res version next, followed by the 2nd lowest res, and so on, until you reach the highest res at the end.
2). Whilst looking at different compression formats, I also looked at several different wavelet based implementations. Without exception they looked worse than JPEG at the same file size. Yes, you read right - WORSE. Both in a 256 colour display (JPEG at its worst, as dithering looks really quite bad in palettised display), and with millions of colours. We dropped the idea of using wavelets. Oh, and the decompression speed _appeared_ to be slower for wavelets than for JPEGs. Not significantly enough for it to have affected our decision, but still...
3). JPEG, even though it is lossy, makes most photos look better than they started off. This is due to the way the sampling for the DCT works. It takes into account how the human eye works, and leaves out things our brains wouldn't even notice anyway. Kinda like the psycho-acoustic sampling used for MP3. Many people are reported to prefer that sound to the original digital audio as well.
Of course, we would never use JPEGs for line drawings, or maps (with lots of text). For those we use a lossless compression. Sadly the choice here had to be GIF (supported by Java, and our app is being written in Java. *sigh*).
JPEG2K does, however, sound interesting due to the combination of a lossy and a lossless compression scheme into the same "standard". On the other hand, the same thing could be done with JFIF, and already IS done with BMPs and PICTs. And coming from JPEG, it is likely to be fully supported by Sun years before PNG is finally supported. *sigh*
~m.
Is JPEG2000 the answer? (Score:1)
I have implemented a reader for MrSID and there is nothing special about its multi-resolution capability that you can't accomplish with a tiled and pyramided TIFF file (and I have). However, the compression is great, and is taking the world by storm, which is why we need a public and popular wavelet standard.
IJG - OSS Implementation (Score:2)
More information on JPEG 2000 can be found at http://www.jpeg.org [jpeg.org].
I am wondering if the IJG is planning to (and has sufficient resources to) implement JPEG 2000 support quickly as the specification finalizes. Does anyone know? I asked Tom Lane about this indirectly a while ago, and he just pointed me to the www.jpeg.org web page.
The IJG did a great job on the current library, and I hope they can do the same for JPEG 2000. I also think that if they need support (manpower/money), it would behoove the OSS community to provide it.
I for one agree that wavelet-based approaches to compression are the future of lossy continuous-tone compression. The MrSID technology [lizardtech.com], for instance, is great, but they keep a very tight hold on their proprietary technology. I think it is important to establish a popular and public format and technology to fill this void, or proprietary interests will damage OSS efforts.
Tom Lane of IJG says... (Score:3)
Nothing is happening within IJG; we are waiting to see what emerges from the ISO JPEG committee, and in particular whether it is (a) patent-free and (b) enough better than JPEG-1 to be worth a universal upgrade cycle.
On point (a), I have made my views quite clear to the JPEG committee, but I dunno whether they are listening. There will not be an IJG implementation of JPEG-2000 unless it is freely distributable and freely usable under essentially the same restrictions (ie, none to speak of) as our current code. Patent licenses are a show-stopper. But from what I've heard, all the proposals before the committee have some amount of patent encrustation.
On point (b), the poor track record of progressive JPEG has left me unenthused about pushing incompatible standards that offer only marginal or special-purpose improvements. JPEG-1 took the world by storm because it was an order of magnitude better than anything else available. Unless JPEG-2000 is that much better again, it faces at best an agonizing uphill fight; the world might be better off without the ensuing confusion. (I have not heard anything about what performance improvements they actually expect to get ... but I am suspicious that we are going to see percentage points, not integer factors.)
So, I'm waiting and watching.
PNG (Score:1)
Seriously though, I don't see it taking off very fast. Wavelet (de)compression is a slow process. While today's newest computers probably won't have much trouble with it, it will be painful to view the images on a (4|5)86, and you might as well totally forget about saving them. While messing with wavelets on my 486 it wasn't unusual for a 640x480 image to take 16 hours to compress. I could view the compressed image back rather fast, but it definitely didn't flow onto the screen like JPEGs or PNGs.
So how many patents... (Score:2)
They have seemed to be pretty open in the past.
KISS (Score:1)
PNG (Score:1)
Ah, and 640K of RAM was more than one would ever need ;-)
256 channels (Score:1)
Aren't things like this already done?
Still waiting for PNG (Score:1)
Whenever my professors start describing how wavelets work, my eyes glaze over while my brain silently hemorrhages.
At least I was never asked to perform wavelet compression by hand on an exam. I did have to do LZH compression by hand on an exam once. I wonder if I could be sued for not licensing the compression algorithm, since I used it on an exam?
Wavelets cool (Score:1)
1) Start with a picture of a teapot on a carpet.
2) Reduce the image to a wavelet representation (it turns into a really nasty mathematical formula).
3) Eliminate some of the terms.
4) Use the new formula to create an image
5) The teapot is now gone, with the texture of the carpet where the teapot used to be.
Very very cool.
Wavelets explained (easily) (Score:1)
When you use a wavelet to compress an image, you reduce it to a mathematical expression. Each term in the expression represents a level of detail. Optimally expressed, the mathematical expression takes as much room as the original image. However, now we can hack off the terms that represent very high levels of detail without changing the visual quality of the image (to the naked eye). This is what gives us the savings in space.
In order to accomplish the trick with the teapot, you eliminate some of the more significant terms of the expression. Since the carpet texture is highly detailed, it stays. Since there is no information left that the teapot existed, that space gets filled with the carpet texture.
There. I hope your brain did not hemorrhage.
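A toy sketch of that procedure, using a one-level Haar transform (an illustrative stand-in; real codecs use fancier wavelets and multiple levels): transform, drop the low-magnitude detail terms, reconstruct.

```python
def haar(x):
    """One Haar level: pairwise (averages, details)."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return avg, det

def unhaar(avg, det):
    """Rebuild the signal: each pair is (average + detail, average - detail)."""
    out = []
    for a, d in zip(avg, det):
        out.extend([a + d, a - d])
    return out

pixels = [100.0, 102.0, 99.0, 101.0, 20.0, 22.0, 98.0, 100.0]
avg, det = haar(pixels)
assert unhaar(avg, det) == pixels        # lossless with all terms kept

# The "hacking off" step: zero detail terms below a threshold
det = [d if abs(d) >= 2.0 else 0.0 for d in det]
approx = unhaar(avg, det)                # smoothed, but visually close
```

Dropping small detail terms smooths fine texture while leaving large features (like the dark patch in `pixels`) intact, which is the space-versus-quality trade the comment describes.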
do we get alpha channels? (Score:1)
Well in this great? (Score:1)
If this thing ever does get the support it needs, I'll gladly use it, but as it is, absolute vs. relative positioning gives me enough headaches...
Peter
Patents in General (Score:1)