JPEG2000 Coming Soon 489
Sonny writes "In a few months' time, internet users will be able to make use of the JPEG2000 standard which, its developers claim, enables web graphics to be downloaded much faster than is currently possible. This will not only make graphics-heavy web pages easier to download, it will also preserve image quality. The JPEG standard compresses image files, which are then transmitted across the web faster than uncompressed files. Now, researchers at universities around the world have developed JPEG2000, the next-generation image-compression technology, under the auspices of the International Standards Organisation. It is the first major upgrade of the standard since it first appeared in the early '90s. What is also important about the technology is its ability to send files without loss of data, which is not the case with current JPEG files. To take advantage of JPEG2000, web browsers will need a plug-in for either Internet Explorer or Netscape. These free plug-ins are expected to be available later this year. The extension for the new files will be ".jp2"."
Re:It's obvious where this is going. (Score:3, Interesting)
Sure, shorter download time would be nice, but PNG isn't really providing that. PNG however makes the job of the web developer easier.
(*) I'm still annoyed that IE doesn't support alpha-transparency though.
JPEG does have a lossless mode (Score:5, Interesting)
JPEG does support a lossless mode, it's just that no one uses it. To paraphrase, JPEG supports a lossless spatial algorithm that operates in the pixel domain. Some amount of prediction is used, resulting in about 2:1 compression, but the error terms for the predictions are included in the data stream (encoded using either Huffman or arithmetic coding), resulting in no net errors.
What's a lot more exciting is JPEG2000's use of wavelet compression, which isn't mentioned at all.
Artwalker.com to use JPEG 2000 (Score:3, Interesting)
Wow. This couldn't have been timed better (Score:5, Interesting)
A) It's an excellent codec, though computationally heavy
B) The design of the codestream, along with the JP2/JPX file format, has a lot of potential to create a "new" type of image that isn't just a picture. Yes, you've heard this before, but this time it's built in at the codec level: in-stream ROIs, very flexible reconstruction, and compression controllable through great numbers of options - and that's only the codec (at a *very* rudimentary level).
C) It won't succeed without a decent opensource, "IPR free" (as much as is possible) implementation.
D) Read C again. It's important
To this end, I've started (with support from others in the JPEG 2000 community) a JPEG 2000 Group (see http://www.j2g.org [j2g.org] - it's very sparse at the moment, but if you're interested, bookmark it and come back in about a month). Tom Lane and the IJG have expressed no interest in JPEG2000, for various reasons (which I don't entirely disagree with, but I'd rather be proactive and try to correct flaws than walk away totally).
The aims of the JPEG 2000 Group are to create a public, open source (probably BSD license) implementation of "Part 1" (This is the codestream syntax, codec, and file format wrapper). We'll also provide a community JPEG 2000 resource. To facilitate this, we've already attained a Class C liaison with the committee. This grants all members the option of acquiring the standard free of charge. We also get a minimal channel back into the process to give opinions.
The point of this ever-rambling post is this: we need members. The standard is large, and the support around it will be larger. We need volunteers who would be interested in assisting in the creation of the codec. Sadly, "membership" is going to require some form of contribution and commitment to acquire copies of the texts you'll need - I hate this as much as you, but it was accept it, or don't get any copies at all (without $$$). If you're interested in contributing in any way (code, documents, testing, support), please drop by the forum [j2g.org] - even if it's only a passing interest, I'd be happy to go into more detail regarding the project (or just JPEG 2000 itself). I'd do it here, but I'd lose all my (low
So, rather than bitch about the lack of a free implementation and how late it is, and how it'll never get used, come and help out! You know you (might possibly | maybe | someday) want to!
What happened to DjVu? (Score:5, Interesting)
But there are other, more powerful formats.
For lossless compression, the PNG format is fortunately getting more and more popular, although its late inclusion in Internet Explorer has slowed wide adoption.
But when it comes to lossy compression, there's an excellent (and not new) format from AT&T called DjVu. It was one of the first wavelet-based formats.
DjVu is really better than JPEG. Images look better (more contrast, fewer pixels with odd colors), and files are way smaller. Plus, you can smoothly zoom any DjVu image without getting big, ugly blocks.
DjVu has been available for a while as a plugin for common browsers.
There's a 100% free implementation of the format called DjVuLibre [sf.net] .
However, nobody uses it. I don't understand why. Some time ago, it may have been because compression was slow, but nowadays that's no longer a valid objection.
People are enthusiastic about JPEG2000. But why would JPEG2000 be adopted when DjVu never was?
Re:Slow to change ... (Score:5, Interesting)
Coming Soon to a PC near you! - NOT! (Score:2, Interesting)
Or did
As far as I can tell from a quick Google search, nothing has been done with this standard since early 2000 (maybe that's why the name hasn't been updated, eh). I wouldn't hold my breath waiting for widespread adoption any time soon...
Re:JPEG does have a lossless mode (Score:5, Interesting)
The lifting algorithm (one way of computing the wavelet transform; actually, the simplest to understand and one of the fastest) works by splitting the original signal into two (in the case of a 1d signal) subsignals. One of these is the "trend" signal. It's sort of a compact version of the original one. Using only this signal, the original signal can be reconstructed pretty well, but not perfectly. That's where the other signal, the "detail" signal, comes in. It contains the information needed to reconstruct the original signal perfectly. If the trend signal is a good prediction of the original signal, the detail signal will be very small, and can be compressed well.
But, there's no need to stop there. The whole process can be applied to the trend signal again, and even to the detail signal if it's necessary.
I'll give a more concrete example: the Haar wavelet transform. In this case, the trend signal is simply the pairwise averages of the original signal. So, if we start with
1, 3, 4, 5
the trend signal would be
2, 4.5.
If we were to reconstruct the signal with only this information, we'd get
2, 2, 4.5, 4.5,
which is not too bad. The detail signal would contain the information needed to get from this to the original signal; differences between pairs of consecutive terms will work. (Note that these pairs shouldn't overlap; that would just be redundant. Therefore, the detail signal, like the trend signal, is half as long as the original one.) So, the detail signal in this case is
2, 1.
It's easy to see that if the original signal is constant, the detail signal will be all zeros. It's possible to construct different ways of making the trend and detail signals such that the detail signal will be zero if the original signal is linear, for example.
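The worked example above runs directly as code. A minimal sketch (using the same convention as the parent: trend = pairwise average, detail = second-minus-first difference; other sign conventions work too):

```python
# One level of the Haar transform on the example above: trend = pairwise
# averages, detail = pairwise differences. Together they reconstruct the
# signal exactly; the trend alone gives the blocky approximation.

def haar_forward(signal):
    pairs = list(zip(signal[0::2], signal[1::2]))
    trend = [(a + b) / 2 for a, b in pairs]
    detail = [b - a for a, b in pairs]
    return trend, detail

def haar_inverse(trend, detail):
    signal = []
    for t, d in zip(trend, detail):
        signal += [t - d / 2, t + d / 2]  # undo average/difference exactly
    return signal

trend, detail = haar_forward([1, 3, 4, 5])
print(trend)   # [2.0, 4.5]
print(detail)  # [2, 1]
print(haar_inverse(trend, detail))  # [1.0, 3.0, 4.0, 5.0]
```

Note how a constant pair (e.g. 9, 9) would produce a zero detail coefficient, which is exactly what makes smooth regions compress well.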
A good paper about this (that explains it better than I do!) is Building Your Own Wavelets at Home [bell-labs.com]
Smart Extensions (Score:1, Interesting)
Everybody should make it a practice to use 8.3 filenames when possible.
1. Less bandwidth.
2. Easier to type. One time I had to download a new version of Netscape, and the file name was ns_release_browser_en_foober_fiddle_3.0.1x.tar.gz or something like that. Maybe there was a way to regex this, but sheesh! I had no desire to think about that, I just wanted to download the bloody file.
3. Backward compatibility.
I realize that sometimes a longer file name is good, but sometimes people use them when they shouldn't. I always try to come up with something that fits in 8.3 without losing meaning, and I'm almost always successful. I cringe when I see people have named the file some big long gobbledegook like bobbys_8th_book_report.doc when bookrep8.doc would have done just fine.
Re:Slow to change ... (Score:2, Interesting)
Re:Stupid extensions (Score:3, Interesting)
You're right about 8.3, though. It disgusts me. But this is just the
Re:comparisons to other formats (Score:3, Interesting)
This doesn't make JPG2K appear too impressive.
JPEG-2K is really intended for lossy coding, and that is where it shines. The lossless spec is included primarily because you can use the same algorithm for both lossy and lossless coding. The only real difference is in the choice of wavelet transform, which is irreversible (floating-point) in the lossy case but reversible (integer) in the lossless case.
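The reversible-integer idea can be seen in miniature with the integer Haar transform (the "S-transform"). JPEG-2000 Part 1 actually uses the reversible 5/3 wavelet for lossless coding, so this is a simplified stand-in, but the principle is identical: integer arithmetic with floor division is exactly invertible, so no rounding error ever creeps in.

```python
# Integer-to-integer Haar ("S-transform"): a toy version of JPEG-2000's
# reversible wavelet path. The floor in the trend is undone exactly by
# the same floor in the inverse, which is what makes it lossless.

def s_forward(a, b):
    d = b - a          # integer detail
    s = a + (d >> 1)   # integer trend: floor((a + b) / 2)
    return s, d

def s_inverse(s, d):
    a = s - (d >> 1)   # the same floor cancels out exactly
    b = a + d
    return a, b

# Lossless round-trip for every pair, including negative values:
for a in range(-4, 5):
    for b in range(-4, 5):
        assert s_inverse(*s_forward(a, b)) == (a, b)
```

The irreversible (floating-point 9/7) transform trades this exact invertibility for better energy compaction, which is why it is the choice for lossy coding.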
A better comparison pits JPEG-2K against the original (lossy) JPEG. According to a figure given in this paper [jpeg.org], J2K provides roughly a 2dB PSNR gain over JPEG for a wide range of bitrates. At the low rate of 0.25 bits per pixel, this gain takes you from 25.5dB to 27.5dB; perceptually, that is a noticeable difference. At low rate, JPEG is also subject to blocking artifacts, so the perceptual problems can be even worse than the PSNR numbers would indicate.
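For readers unfamiliar with PSNR: it is just a log-scaled mean squared error against the pixel peak (255 for 8-bit images). A quick sketch of the computation, with a made-up three-pixel example; note that a 2dB gain corresponds to roughly 37% lower MSE at the same bitrate:

```python
import math

# PSNR as used in the comparison above: 10 * log10(MAX^2 / MSE), with
# MAX = 255 for 8-bit pixels. Higher is better; identical images give
# infinite PSNR.

def psnr(original, reconstructed, peak=255):
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

orig  = [0, 100, 200]
recon = [2,  98, 202]               # off by 2 everywhere -> MSE = 4
print(round(psnr(orig, recon), 2))  # 42.11
```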
In other words, JPEG-2K is a Good Thing.
Re:PNG *is* a god-send. (Score:3, Interesting)
However, at the same time that endusers are getting faster connections, more and more webmasters are starting to feel the very real pain of bandwidth costs. I think the adoption of smaller image formats will come more from their side.
If jpeg2000 makes enough of a difference, I can see at least a very large percentage of graphics oriented sites switching - almost forcing the users to follow suit.
JPEG 2000 looks like the right thing at last. (Score:5, Interesting)
Good one. Thanks for the link.
Looks like JPEG2000 finally got things right for the human eye:
- Higher compression ratios just gently blur details, rather than creating artifacts. Losing the extra information leaves the part that DID get through intact.
- The text says the compression allows for progressive downloading. This implies that the coding scheme does something like working upward in spatial frequency - encoding the basic stuff first, then sending progressively finer deltas. For a given compression ratio, just stop the downloading (or file saving) when you have enough.
- The compression seems to match eye processing so well that highly compressed (100:1) images actually look BETTER than the basic image. The features important to the eye (facial stuff, especially eyes) get through or are even enhanced, while unnecessary detail - including sampling artifacts - gets selectively blurred out. Something like the soft-focus filter in portrait photography. The only thing in the samples that got noticeably worse at high compression is hair, which just gets a bit blurry. (Meanwhile, JPEG looked like it had pixelated measles.)
Of course the images selected for the demo could have been optimized for the compression scheme. B-)
And what's more... (Score:2, Interesting)
I say we just refine the
Metadata Section (Score:4, Interesting)
I always thought it would be cool if your digital camera could include the settings (f-stop, exposure time, ISO, compression ratio, etc.) along with date, time and author directly in the image file.
The Relevant Entry (Score:5, Interesting)
Here's a summary of the JPEG2000 situation that I wrote up but that never made it into Bugzilla:
Re:What happened to DjVu? (Score:5, Interesting)
In the real world, companies don't want either to GPL their software (required if they use this GPL library) or to reinvent all the code from scratch based on the spec, unless there's huge demand for it (which there won't be, due to the chicken-and-egg scenario)... So don't expect to see any support for DjVu anytime soon.
Re:And what's more... (Score:2, Interesting)
Re:B SD-licensed JPEG-2000 implementation (Score:2, Interesting)
All algorithms used in the JPEG-2000 standard (like IBM's MQ-coder) can be used freely.
About the video codec: I implemented an H.263- and MPEG-4-compliant codec. But I didn't get paid by the opencodex guy, so I aborted the project. Still, if anybody's interested, I have some pretty good MMX- and AltiVec-optimised code lying around.
Re:JPEG does have a lossless mode (Score:2, Interesting)
Enjoy!
Jouster
Smart server format? (Score:2, Interesting)
The server is probably a good place to do this (maybe with some mod_rewrite hack), since it could be responsible for caching heavily-requested conversions.
Anything like this exist? Similarly, I've always wanted to find some nice way to keep my photo-album online in the highest quality, but only send scaled-down images to casual visitors (as well as thumbnails). http://ids.sourceforge.net/ looks like it comes the closest to this kind of thing, but looks a bit too server-heavy (doesn't seem to support caching).
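The caching part of this could look something like the following. Everything here is a hypothetical sketch: the function names are made up, and the expensive resize is a placeholder (a real setup would hook this into the web server, e.g. a mod_rewrite rule pointing cache misses at a converter script, with a real image library doing the scaling):

```python
import hashlib
import os
import tempfile

# Hypothetical sketch of server-side conversion caching: derive a cache
# key from the source path and requested width, and only run the
# (expensive, here stubbed-out) conversion on a cache miss.

CACHE_DIR = tempfile.mkdtemp()
conversions = []  # track how often the expensive work actually runs

def convert(src_path, width):
    conversions.append((src_path, width))  # stand-in for a real resize
    return b"scaled:" + src_path.encode() + b":%d" % width

def cached_convert(src_path, width):
    key = hashlib.sha1(f"{src_path}|{width}".encode()).hexdigest()
    cache_file = os.path.join(CACHE_DIR, key)
    if os.path.exists(cache_file):         # cache hit: skip conversion
        with open(cache_file, "rb") as f:
            return f.read()
    data = convert(src_path, width)        # cache miss: convert once, store
    with open(cache_file, "wb") as f:
        f.write(data)
    return data

a = cached_convert("album/photo1.jpg", 640)
b = cached_convert("album/photo1.jpg", 640)  # served from cache
assert a == b and len(conversions) == 1
```

Keying on (path, size) means thumbnails and full-size scaled views share the same cache, so heavily requested conversions are only ever paid for once.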
Re:Photoshop plugin "only" 79$ (Score:2, Interesting)
I work for an imaging company (Score:1, Interesting)
What's missing from the comparisons (Score:3, Interesting)
Unix System Resources is a backronym (Score:2, Interesting)
These days when disks are enormous and logical volume management ensures that you can just add whatever space is necessary where it is needed, we should get rid of