JPEG2000 Allows 200:1 Wavelet Compression
Polo writes "Here is an EE Times article about the ISO JPEG2000 standard that has been finalized and allows a new wavelet compression scheme that gives good results at as much as 200:1 compression ratios. It looks pretty promising. It is royalty-free, but there is also discussion about a second standard that allows third-party, royalty-based extensions. I wonder if motion-JPEG with wavelets could fit a movie on a CD or something."
OK, NSVD-ROM format proposal (Score:1)
Impact on web design (Score:1)
What I found interesting was the required HTML:
<EMBED SRC="eisbaer.lwf" width="384" height="256" limit="29491.2" type="image/x-wavelet">
It supports the obvious width and height, and allows you to set a limit on the number of bytes transferred - that kicks ass. It's obviously loading the image progressively, and you can see this happen (with the plugin) if you enter the URL of an image directly into your browser. On the downside, the browser won't report its ability to handle .lwf images in the HTTP_ACCEPT variable - now you're stuck using JavaScript to test for support. Bummer.
Barring whatever patent issues there are, it's still quite a nice implementation.
a (pseudo-)expert's view (Score:1)
WAVELET VIDEO COMPRESSION is FAST (NeXT's NeXTTime (Score:1)
Although tens of millions of dollars have been lost looking for patentable, efficient wavelet video compression, and some companies died trying (the Captain Crunch chip of Pro Audio Spectrum), wavelet video has always been way faster, and more pleasing in its artifacts, than CPU-hostile DCT stuff such as MPEG.
Steve Jobs showed a 4.5 megabyte file of Lucas's "Wow" montage demo playing from a 486 PC clone in wide-screen format, being consumed at about 400 KB/sec I think, with stereo 44.1 kHz Dolby sound, and 10,000 observers at Moscone in San Francisco dropped their jaws simultaneously before eventually bursting into a standing ovation.
Since that day I thought NeXTTime would take over the world. It never did. Mysteriously. The NeXT Dimension went with a problem-filled C-Cube DCT chip and an i860 instead of mostly NeXTTime. And the NeXTTime codecs never came to the Mac, though I think a decoder is hidden in the player app in Rhapsody DR1 for PowerPC.
Wavelets are real and they are exciting. They do take a little more time to decode than some techniques because they are very SYMMETRICAL: encoding takes roughly 2:1 the decode time, instead of the typical 63:1 CPU ratio of current compression schemes not based on wavelets.
Wavelets are great for very, very low bandwidth because the result looks like fuzzy videotape, instead of weird JPEG-artifact blocky shimmering near the edges of objects.
Wavelets are the greatest thing ever discovered (or rediscovered) in math since 1988 or so, but sadly it's a patent-filled, litigious mess saddled with snake-oil-selling fraudulent hucksters.
One time a southern university announced a proprietary video compression breakthrough, and distributed object code that was quickly reverse engineered and shown to be existing stock wavelet code!
All those fraudulent "fractal" compression companies that went public on the sleazy Vancouver, Canada stock market did not help bring honor to engineers trying to make wavelets work.
The Hubble telescope stores star fields with wavelets (monochromatic). And I am aware of other uses. But only about 7 or 8 different programmers have ever shared public domain wavelet libraries. Dr. Dobb's had two nice wavelet articles WITH WORKING SOURCE CODE.
Alas, that was many years ago. Wavelets seem to be discussed about as much as neural network code (the backpropagation/Hopfield type and relatives)... i.e., hardly at all. It's like a passing fad that never caught on.
Fractal-labelled compression existed. Berkeley Systems (of After Dark screensaver fame) used a third-party fractal compressor with their static slide-show Marvel Comics screensaver. But fractal compression has been a CNN news scam multiple times, otherwise.
I wish NeXTTime were better known. If it were, I am convinced thousands of firms would have spent more money trying to replicate the dream.
Frequency compression that preserves both time and frequency information when given more CPU is absent with the FFT, and the FFT is our grandpappy's crap (circa 1954 for computer usage). Wavelets are the most exciting thing I have ever seen.
This JPEG committee trick of adding a GPL-style intellectual virus into the reference source by requiring patent licenses is repulsive. I hope someone, somewhere, who is well versed in mother functions, wavelets, colorspace perception, etc., will donate some time to help code a PATENT FREE version of the proprietary scam JPEG 2000 proposal. Maybe the nice guys who work on BSD could bone up on wavelets and help out.
I do not want to have a GPL on my code, and I want a simple reference source example I can honestly build up from and happily optimize, and contribute BSD-style code back to humanity to share. But if it's like the 4 different patents that infest the ridiculous MPEG-4 audio AAC stuff, then ISO and IEEE etc. need to adopt a strict definition of perpetual cheap-or-free patent licensing for ENCODING AND DECODING, and not just decoding.
This kind of cocktease just makes me mad instead of happy, because it means more delays in the world benefiting from wavelet compression. Of course wavelet audio compression is needed for mp3z collecting much more than ho-hum static image wavelet compression implementations such as a bloatware clouded-up JPEG are needed by mankind.
This announcement looks like it's about one year premature, and the crap that's downloadable today looks like a race to make money off a proprietary codec. No thanks. Keep your image-destroying digital-fingerprint fascist crap out of my eye space. Photoshop 5 was slowed down to scan every file opened for copyright infringement hinted at by watermarks... as if that app is not slow enough with its long history of insane file formats and pathetic 32-kilobyte file read requests.
When I get this mad I seem to ramble on a million topics... and get quite tangential. Sorry about that. Let's start a new source tree, and in 3 years I bet we will get further than the JPEG 2000 attempt, and we can make it fast and free! What do you think? Maybe the concept of programming itself is patented, so maybe we can't program anymore in the US. Hell, this ACTUAL patent for playing with my cat I have violated routinely: http://www.patents.ibm.com/details?&pn=US05443036
ARRRGHGHGHGHGH!
ASF exists to show Open Source apathy... (Score:1)
If there were money to be made selling individual software licenses for the Linux platform someone would step in... but the big ones don't even try because of Linux's hostility to commercial software (and the rest of the free OSes don't really count, no user base to speak of). And the common saviour for handy apps, shareware, has never worked in the open source OS world.
Free software (in either sense) doesn't work for everything, and the biggest hurdle for open source platforms is their animosity toward other forms of software... for some kinds of software the only way it gets written is if someone ends up paying for it. Not support, not porting, or any of the other oft-quoted open source ways of making money with software... just good old buying a single copy with full copyright restrictions intact.
Maybe RH&VA can spring in with their hordes of ill-gotten booty to fund what otherwise-normal developers would do on their own... but since there won't be a big tangible payoff from open source development I doubt their investors would appreciate it.
Re:patents + the future of the movie industry (Score:2)
Doh! (Score:1)
:|
Re:A few issues (Score:2)
Re:Extrapolating... (Score:2)
In this case, um... Lucas [mailto] has me especially baffled. I didn't see any mention of intellectual property above, just of a technical standard that would allow packing more data into a given amount of bandwidth or storage capacity.
I would think the biggest market for such a thing, if applied to motion pictures, would be the manufacture of lower-cost video players, not movie piracy. The ability to pack a movie onto a CD would allow movie distributors to bring prices on their product down to the point where piracy simply wouldn't be worthwhile.
If you're going to extrapolate effects of this technology, I'd say it would be better to look at what it could (potentially) do to the video rental business, not how it could be used to steal intellectual property. I mean, if you could *buy* a movie CD for $7 or $8, who'd rent one?
- Robin
Yes, patents can be dropped. (Score:2)
I am not a lawyer, but a patent can be "dropped" quite easily. Once the patent is active, its owner has to pay an annual fee, which is pretty high if you want the patent to apply worldwide ("worldwide" = in those countries where 95% of the global market is). If you don't pay the fee, the patent is no longer valid. This is actually done very often, because usually after 5 - 7 years upholding a patent is no longer worth it. Either the technology is outdated, or some new patent has made the old one obsolete, or the revenues from licences with other companies no longer cover the costs - there are many reasons for dropping a patent.
Re:A few issues (Score:1)
This sounds flat out wrong to me. You can take white noise and apply a Huffman scheme to it.
Fundamentally you are always limited by mathematics: if you have a 1 to 1 matching function, your domain and range must be the same size. A lossless compression scheme is a 1 to 1 matching function, mapping the original image data to a unique "compressed" image. So you're stuck. Now, most images that we're likely to compress have the sorts of patterns that allow us to do some compression, so generally we don't have to worry about that.
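For the counting argument, a throwaway sketch (my own illustration in plain Python, not from the parent post):

# There are 2**n distinct n-bit inputs but only 2**n - 1 strings that are
# strictly shorter, so no lossless scheme can shrink every possible input.
n = 16
inputs = 2 ** n
shorter_outputs = sum(2 ** k for k in range(n))
print(inputs, shorter_outputs)   # 65536 vs 65535: at least one input cannot shrink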
What is often missed regarding lossy compression, however, is that the nature of the loss is very significant in the perceived quality of the compression. For example, one problem you often see in compressed video is changes in parts of the image that shouldn't change: the viewpoint is still, a wall is still, but the compression artifacts on the wall are different from frame to frame. Software that evaluates the effects of losses on an image, taking things like this into account, would be good -- as would compression software that does likewise.
Wavelet transforms super slow (Score:2)
MP3 has limited lifespan (Score:2)
Basically, .mp3 will do fine until something better comes along that's supported... then it'll go the way of the LP, the 8 track, etc. For support, all one needs to do is get the major players onboard (nullsoft, real, the xmms gang and maybe mung$oft) and everyone else will pick it up for fear of being left behind.
--
rickf@transpect.SPAM-B-GONE.net (remove the SPAM-B-GONE bit)
mm... 3d texture compression :) (Score:1)
we need a hardware decompressor for this... so we can put it on a video card and do 200:1 texture compression.
if the decompressor were fast enough, just think, it would be a way around current bandwidth limitations... if you can transfer blah gigs of data to your video card 200 times quicker, that leaves quite a bit of time to do decompression of it and keep up (or be faster with later implementations...)
hrm.. 200 x 32 meg = ~6 gig of textures
smash(err.. i assume he has less than 6 gig... or he needs help
actually..... DVD... (Score:1)
let's say we have an 8 gig DVD, and assume it gets average MPEG compression.
8 gig = 8,000 MB / 20 (200:1 is 20 times better than MPEG's ~10:1) = 400 MB.
still not quite 0-day warez, but the modem download time would be mere days as opposed to weeks.
besides, I am pretty sure you could get way better than 200:1 on motion video... if you compare the quality of MPEG stills to JPEG stills, MPEG is much more lossy... because it moves, you notice less.
interesting...
smash
Re:You cannot compress white noise (Score:1)
hmm... by that definition, I doubt you would get white noise in many PC files.
the reason for this would be quantization (I guess):
for example, say you have an image file greater than 4000x5000 pixels in 24-bit color. that is 20 million pixels. 24-bit color only allows a total of 16.7 million colors, so at least a *couple* of the pixels must be redundant.
in the "real world" you could definitely get white noise... but since computers are digital, there is only a maximum resolution you can work with...
smash (over-tired... hope that made some sense
Re:Oh the irony :) (Score:1)
because I doubt your browser can show JPEG2000 files. They had to put the results into a framework you could see.
Naturally, no, not in my case either, but it can display 24-bit JPEGs and PNGs. There's no leap of logic that can explain why the image had to be converted to an 8-bit GIF (unless, of course, you're afraid that someone will connect to EE Times via an old copy of Mosaic).
Why, oh why.... (Score:1)
Re:A few issues (Score:2)
I'd guess perhaps the fact that it's OK to perform lossy compression on an image, and that people see image compression as 'good' if most of the main image features are there (that is, good enough that a human observer won't immediately notice a difference). You wouldn't be too happy, I suspect, if you decompressed a text file and found that gzip had trimmed it down to something resembling Cliff Notes
Daniel
Re:GIS applications (Score:1)
www.lizardtech.com
Re:A few issues (Score:1)
The LuraTech Software Product are licensed to users, not sold. The license issuer retains the right of use of the enclosed software and any accompanying fonts (collectively referred to as LuraTech-Software-Product) whether on disk, in read only memory, or on any other media. You own the media on which the LuraTech-Software-Product is recorded but LuraTech and/or LuraTech's licensor(s) retain title to the LuraTech-Software-Product. The LuraTech-Software-Product in this package and any copies which this license authorises you to make are subject to this license.
Eyewww.....
Why not just give in to terrorists? (Score:1)
FYI, GPL'ed wavelet code (Score:1)
About time... (Score:5)
I worked with wavelet transforms (Daubechies wavelets) a year and a half ago, and back then it was pretty clear that it would be possible to compress images and sound much harder using the wavelet domain rather than the Fourier domain.
For those who don't know, the trick is:
JPEG/MPEG/MP3 use a Fourier-style transform to turn the image/sound data into its spectral components. But this spectral representation of the data does not say anything about the _locality_ of the frequency components. Therefore representing spikes/discontinuities requires a very large number of frequency components in the Fourier domain, which in turn leads to poor compression. You can see this problem by drawing a few sharp lines in an image and compressing it hard with JPEG.
Wavelets, on the other hand, represent both an equivalent of the frequency component and locality information. Spikes/discontinuities can be approximated well using only a few wavelets, which in turn leads to good compression.
Another nice thing about wavelet compression is that wavelets tend to represent discontinuities well even under hard compression (e.g. a lot of missing or roughly approximated wavelet components). Therefore a very heavily compressed image will still have fairly sharp edges, completely contrary to JPEG compression. This is pretty important if you compress a picture holding text.
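As a rough illustration of the locality point (my own sketch, assuming Python with numpy and the PyWavelets package installed; 'db4' is just one convenient Daubechies wavelet):

import numpy as np
import pywt

signal = np.zeros(1024)
signal[500] = 1.0                    # a single spike / discontinuity

fourier = np.fft.rfft(signal)
wavelet = np.concatenate(pywt.wavedec(signal, 'db4'))

def coeffs_for_99_percent_energy(c):
    mags = np.sort(np.abs(c))[::-1]
    energy = np.cumsum(mags ** 2)
    return int(np.searchsorted(energy, 0.99 * energy[-1]) + 1)

print("Fourier coefficients needed:", coeffs_for_99_percent_energy(fourier))
print("Wavelet coefficients needed:", coeffs_for_99_percent_energy(wavelet))
# The spike's energy spreads across essentially every Fourier bin, but only the
# far fewer wavelet coefficients whose support overlaps the spike matter.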
Anyways, someone is now working on JPEG with wavelets... what about sound and video?? There is no reason why wavelets should not provide equal improvements in both audio and video.
My personal conspiracy theory is that there exists a *LOT* of expensive hardware that can do Fourier transforms forwards and back to allow real-time encoding and decoding of MPEG movies in good quality. The companies producing these devices will lobby any standards organisation to *NOT* consider wavelets and to stick to good old Fourier. If this holds, it will take a few years until we see wavelet-compressed video.
Some actual comparison testing (Score:1)
I found that the comparison on their site was biased rather strongly in favor of JPEG2000, in two ways. First, their JPEG encoding for comparison was notably inferior to libjpeg 6b with Huffman table optimization. Second, the comparison at very high compression ratios is not particularly meaningful. When compressing at 96:1, there is virtually no image detail present above half the original image resolution. Thus, scaling the original image down prior to compression (the usual practice with JPEG images) produces good results with standard JPEG.
When these biases are removed, the quality gap between JPEG2000 and JPEG narrows substantially. JPEG2000 is somewhat better, most noticeably in its relative lack of chroma bleeding, but the margin is quite slim. My recommendation is to make up the difference by using a little more bandwidth and/or storage.
I've prepared a summary of these results, with example images, on a comparison page [levien.com]. The page is on the slow side of a DSL, so please be gentle
Enlightening J2K discussion in gimp-devel (Score:2)
Patent problems with jpeg2000 (Score:3)
Maybe I'm just being too cynical, but I think that one of the major points of the JPEG2000 effort is to fix the "bug" whereby the original JPEG (or at least the universally implemented, arithmetic-coding-free subset) is patent free.
And basically, I agree with Tom's assessment here. While JPEG is far from perfect, it is "good enough" for photographic images, and the massive increases in bandwidth and storage capacity kinda make high compression ratios irrelevant.
Finally, a 200:1 compression ratio is pretty meaningless without some kind of context. A much more meaningful comparison is how many bytes are required to get the same quality image as some other compression standard, such as the original JPEG. The figure itself reminds me of when Triada [triada.com] was hyping their 200:1 lossless compression. Joe Bugajski gave a talk on this at Berkeley, and started waxing rhapsodic on the incredible stuff that the Library of Congress had in their collection. Books and audio materials were fine, but when he got to the Stradivarius violins, my fellow grad students just broke up laughing. It's not hard to imagine 200:1 compression of those, but uncompression is tricky at best.
Comparison is rigged (Score:3)
Basically, what they did is take a high resolution image and compress the shit out of it with both original JPEG and JPEG2000. Yes, JPEG does poorly at such high compression ratios.
What they neglected to point out, however, is that you can get excellent results from just shrinking the image. From what I can tell, the test image is displayed as a GIF shrunk 3x from the original "3 MB" image. A very reasonable thing to do, if you want a 19k target file size, would be to first shrink the image 3x, then compress it 17:1 using plain JPEG. I tried this, and got results entirely comparable to the JPEG2000 example (the problem with my informal test is that the GIF dithering artifacts are noticeably softened, which is basically a problem with the fact that they presented the image as a GIF instead of true color).
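For reference, here is roughly what that test looks like in code (my own sketch, assuming the Pillow library; the file names, the 3x factor, and the 19K target are placeholders taken from the discussion above):

from PIL import Image
import os

img = Image.open("original.png")                                  # placeholder source image
small = img.convert("RGB").resize((img.width // 3, img.height // 3), Image.LANCZOS)

# walk the plain-JPEG quality setting down until the file hits the ~19K target
for quality in range(95, 10, -5):
    small.save("shrunk.jpg", "JPEG", quality=quality, optimize=True)
    if os.path.getsize("shrunk.jpg") <= 19 * 1024:
        break
print("quality used:", quality, "bytes:", os.path.getsize("shrunk.jpg"))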
So the bottom line is that JPEG2000 only looks dramatically better if you're doing something stupid with plain JPEG, so take the claims of dramatically better compression with a grain of salt.
Re:Well. (Score:1)
Re:Patents (Score:1)
information is free.
the only question is:
Any mathematicians in the house? (Score:2)
Anyway, I'm curious: I understand that current JPEG uses Fourier transforms (a full integral transform, or a Fourier series?) to get a spectral representation of the image data, then drops subtle information from that transformed data to get a similar image from the reverse transform.
So I assume the new JPEG (aside from all the quicktime-esque formatting overhead) uses the same technique, just with a different complete set of functions, the wavelets.
So my first question: is there anything about the above that I'm misunderstanding?
And my second question: What are wavelets? Bessel functions? Something I haven't heard of? Is there a simple formula, or a simple ODE generating wavelet solutions, that I could look at or plug into Maple? I gather that whatever they are they approximate discontinuous functions with much better convergence than sines and cosines... but that could describe a lot of things.
Re:Wavelet is very nice (Score:1)
Remember, MJPEG (which is basically only a lot of baseline JPEGs streamed together, optionally with some audio and hooks for hardware cards) usually compresses 3:1 for broadcast video (perhaps slightly more, if your TV station is really big).
/* Steinar */
Re:Wavelet is very nice (Score:1)
/* Steinar */
Re:Wavelet is very nice (Score:2)
/* Steinar */
File interchange format (Score:2)
Eventually people created the JPEG File Interchange Format (JFIF) and this is what modern 'JPEG' files are stored as. I hope the standards body has given some thought to file formats this time round.
DVD CCA's worst nightmare (Score:2)
If a DVD can be decoded with DeCSS and converted into mjpeg2 with mp3 audio, the result could possibly be stored on a single CD-R. This would make content copying considerably more convenient and less expensive than having to use DVD-R.
By no means does this justify any injunction against DeCSS, but it does prove RMS' maxim, "information wants to be free". Ultimately, it will be.
Copyright cannot be enforced by technical means, it has always relied upon good faith, combined with legal remedies against those who are caught in violation. Nothing prevents me from taking apart a copyrighted book, scanning it page by page, and reprinting it or transmitting it electronically, except my ethics and respect for law.
DVD should expect nothing more, nor less, than the same.
Re:They're just not getting it... (Score:1)
JPEG2000 part one will be the plain-vanilla royalty-free version, but part two can include various types of third-party extensions that may or may not involve royalties. "Part one will satisfy 90 percent of the applications developers, but it will be 90 percent more work to engineer that last 10 percent of the job for special purposes in part two," said Barthel.
Most of the spec is open!
--
Not so fast (Score:2)
However, the compression technique has to "understand" your binary; wavelets would not be a good choice for that.
Just so that people understand. A compressed file is a concise description of the original file. A compression technique is an agreed upon way of describing things. And, just as you would use different methods of description for describing a book and a painting, different compression techniques are appropriate to each kind of signal.
And to clear up another piece of confusion. Lossless compression means that you have a complete description, you can recover the original exactly. Lossy compression means that you have an incomplete description. You have a description of an approximation of the original, the distinguishing details got filed in the circular file.
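A tiny illustration of that distinction (my own sketch, plain Python; the coarse quantization just stands in for a lossy codec):

import zlib

data = bytes(range(256)) * 10

# lossless: the description is complete, the original comes back exactly
restored = zlib.decompress(zlib.compress(data))
print(restored == data)                      # True

# "lossy": throw away the low 4 bits of every byte; only an approximation survives
approx = bytes((b // 16) * 16 for b in data)
print(approx == data)                        # False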
Cheers,
Ben
*chuckle* (Score:2)
That said, our visual perception of white noise is itself a highly lossy form of compression, therefore it is easy to fool.
Incidentally wavelets are widely used for "denoising" because they are able to handle both smooth regions and sharp boundaries. Therefore they are much better at concentrating the sharp edges into a relatively small number of big components, so that you can distinguish a sharp edge from white noise. Fourier transforms don't do a good job of telling the two apart, and are therefore frequently unsuitable for denoising.
Cheers,
Ben
Uh, not quite (Score:3)
Understanding it that way could be complex, so let me give the direct definition for a binary signal.
A source of a stream of 1's and 0's is a white noise generator if each digit is randomly (50-50 odds) and independently a 1 or a 0. The resulting signal carries the maximal information/bit possible, and hence there can exist no compression method that is an overall win in compressing white noise.
What does this look like? Start tossing a coin and record what comes up! (OK, that is not perfect but it is close enough.)
As an image it looks like..static.
When played, the noise sounds like..static.
Any signal with a large amount of information/bit looks like white noise and hence looks like..static.
It is a fundamental fact of information theory that white noise (which resembles static) is identical with a perfectly encrypted signal is identical with a perfectly compressed signal.
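You can check the claim yourself in a couple of lines (my own sketch, plain Python; os.urandom stands in for white noise):

import os, zlib

noise = os.urandom(100_000)                       # stand-in for white noise
print(len(noise), len(zlib.compress(noise, 9)))   # the "compressed" copy comes out slightly larger

text = b"the quick brown fox " * 5_000            # structured data, for contrast
print(len(text), len(zlib.compress(text, 9)))     # compresses enormously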
Cheers,
Ben
You cannot compress white noise (Score:4)
Many things that look like white noise are not, but white noise itself?
Incidentally static is what white noise sounds like, and any efficiently compressed signal looks like white noise, which is why a modem sounds like static.
Another interesting fact - a compressed file is a pretty good source of random data, and a file that is compressed and then encrypted is substantially more secure than a file just encrypted with the same algorithm. OTOH, encrypting first and then compressing is a PoS: the encryption messes up the attempted compression.
Cheers,
Ben
DjVu by AT&T (Score:1)
It compresses a high-resolution full-color scan of a magazine page by 200:1. And I am talking about real-life performance here, not ideal cases.
The trick is an algorithm which automatically separates text and line art from continuous-tone images and compresses each one with a different algorithm. The continuous-tone algorithm is wavelet based, of course. This is mentioned in the JPEG2000 article as a possible future extension but DjVu has been doing it for almost two years now.
They have a Netscape plugin for viewing this stuff and the compressor is free for noncommercial use. It supports linux and many other operating systems.
There are many compression schemes better than JPEG being promoted by their inventors. I believe JPEG2000 will probably be the winner for a very simple reason - the name JPEG.
----
need support in gimp soon... (Score:2)
We need to get support for this in gimp, ee, etc...
It'd be cool if these beat commercial packages to the punch.
Re:Oh the irony :) (Score:3)
Yeah, I noticed that too. Kind of like those TV ads that show an HDTV picture... they don't look too impressive on my ordinary TV set.
My guess is that they used GIF so they could control the color palette and make the presented images approximate how the real results look. In other words, it's a demo.
--JT
Re:You cannot compress white noise (Score:1)
You won't be able to compress your compiled binaries with this compression.
Re:Not so fast (Score:1)
None of the techniques used to compress video/audio are useful when compressing data, because they are all just lossy approximations of the data.
I guess my point was that white noise video/audio signals are just as compressible as any other video/audio signal.
AT&T DjVu (Score:2)
The net result? 300 dpi full page scans in under 50K! When you print them, they are true enough to the original to justify the difference between a 5MB TIFF and the 50K DjVu.
I'm very glad to see some of the same technology being adopted into JPEG. I'm kind of disappointed that the spec doesn't seem to allow the mixing of compression algorithms inside of one image, which is what almost all of the very, very good compression methods seem to rely on -- the fact that different information in the same chunk of data can be optimally compressed in different ways. I mean, depending on the image, JPEG in some cases can smear wavelet-based methods into the dirt.
Just some food for thought...
~GoRK
Re:ETA? (Score:2)
When IE-whatever supports JPEG2000, I'll move to them on my website - but only 'cause I don't have to make a living from it.
The article is wrong (Score:1)
Re:Hope Browsers Support it! (Score:1)
Here is why it hasn't caught on. The C code to decode an animated GIF89a is about 600 lines (14kB) including all the file handling and error checking. The memory overhead beyond the resulting pixels is about 1.5KB, 1KB of that being the RGBA colormap. It adds about 10-15kB to your resulting executable code space.
The code for zlib, which is only a part of .png, is 9k lines (309kB), and the libpng code is another 23k lines (798kB). Memory usage is rather large, and the executable swells just a wee bit. Sure it has all kinds of nifty extensions, but it's all just a block of pixels in the end.
Some of us still think about things like memory usage and code size. And don't forget there are at least a few bugs in all that code.
Re:patents + the future of the movie industry (Score:2)
Consciousness is not what it thinks it is
Thought exists only as an abstraction
Re:patents + the future of the movie industry (Score:3)
I'd thought they'd had these techniques rolled into JPEG and MPEG already but it looks like you're right, they've kept the techniques for their own products.
Consciousness is not what it thinks it is
Thought exists only as an abstraction
Re:patents + the future of the movie industry (Score:3)
Because the inventor held onto the patent too closely. Fractal compression works... encoding is horribly slow but decompression isn't too bad at all. Compression ratios are amazing, better than wavelet I think. But if the inventor ever wants to see it in widespread use, he has to let go and make it free.
*sigh*
Re:Comparison is rigged (Score:1)
Of course, shrinking the image after the JPEG2k compression/decompression may have concealed some compression artifacts. Therefore it would have been a good idea to provide a detail at full resolution.
java decompression demo (Score:1)
http://ltswww.epfl.ch/~neximage/decoder/applets/ [ltswww.epfl.ch] They're very slow, though.
Extrapolating... (Score:1)
Yeah, DeCSS needed to be created so that Linux users could watch movies, but now witness the rush to create a method which enables people who don't own DVD players or DVD discs to watch DVDs on their CD drives. And everyone here said that DeCSS was solely to enable the viewing of movies on Linux.
I'm only ranting like this because that seems to be obviously what Roblimo or Polo meant when they mentioned fitting a movie onto a CD. Seems no one here respects IP anymore... yet they don't criticize themselves, or their forum. Just others. For instance, why isn't Slash GPLed? Because IP is real, and there is value to it... Just as the movie industry puts up huge amounts of cash in hopes that they can convince the customer to see a movie, Rob had been donating his time until the IPO to make this site what it was, and he knows that if anyone could use his SW, he'd be at a competitive disadvantage... he'd essentially be doing R&D for potential competitors...
Whoops... Didn't mean to go that overboard.
END OF RANT.
Interesting std name (Score:2)
Re:A few issues (Score:1)
Re:some tech details about JPEG2000 (Score:1)
You still use bitmaps for that? Why not PNG? With PNG you may not even have to keep a separate master copy. Welcome to the present!
cheers,
sklein
Re:Comparison is rigged (Score:1)
http://www.ratloop.com/~gonz/hoax/ [ratloop.com]
Obviously the author has left something out...
Oh the irony :) (Score:3)
Anyway... it sure looks promising, but I'm not really impressed by that 158:1 result on this particular image; most of it consists of a gradient that should compress rather well with their scheme. I'd like to see results with images containing more detail.
-W
A few issues (Score:3)
1. Is the 2:1 5/3 'mother wavelet' truly lossless for any and all inputs?
2. What kind of 'average' compression can we expect? One poster already mentioned that the example had a simple gradient as a background which would certainly compress well.
3. How CPU intensive is it to decode these things? Will MJPEG2000 (or whatever) practically require a hardware decoder for DVD-quality playback?
Anyone care to comment, refute, or otherwise flame? *g*
Faster FTs = better "standard" compression? (Score:2)
Re:Faster FTs = better "standard" compression? (Score:2)
Mostly true (Score:2)
Oh, and best here can be rigorously defined: just plot mean squared error vs number of components. The lower, the better.
Also, there are fast wavelet transforms. In fact, some of them are O(N), rather than O(N ln N) as for the FFT. Again though, depends on the wavelets that you're using.
Re:A few issues (Score:2)
http://www.luratech.com
:)
Re:Oh the irony :) (Score:2)
What disturbs me is that the '19K jpeg' example on the bottom is in no way, shape, or form what would happen if you tried to compress the top file down to 19K in JPEG. It's like what you would get if you reduced the original file to 25%, shoved it down to about 16K with GIF compression, then blew it back up 400%.
Between that, and the fact that converting the original source image to 8-bit GIF degraded it far more than (presumably) the JPEG2000 compression degraded the second image before it too was degraded into 8-bit GIF, this demonstration is useless...
They need to give us two 3 meg files: one source file, and one file that has been JPEG2000 compressed, then saved as a full-size source file (in BMP, PICT or some other lossless mode) so we can do our own comparisons...
Kevin Fox
www.fury.com [fury.com]
Re:A few issues (Score:3)
Re:need support in gimp soon... (Score:2)
What is the full implementation time? (Score:3)
Let's see here: you need updated graphics programs (I'm sure GIMP and Photoshop, et al., can have this fairly quickly). You need updated browsers (should be rather easy for Netscape/Mozilla, and technically not too difficult for IE, depending on whenever MS wants to get around to it). I'm still thinking 2 to 3 years before you see wide usage, which is unfortunate because the big advantage is for those of us still stuck using lousy POTS lines to connect to the Internet. Ironically, those who would get the biggest benefit from this (like me on this on-a-good-day 28.8 line) will probably have broadband access of one sort or another by the time it becomes widely used, for the web at least.
Then again, the whole 28.8 to 56K changeover seemed to happen rather quickly and both required upgrading your hardware, and a dueling set of standards....
waveFORMS, not waveLETS, dumbass (Score:2)
"Suble Mind control? why do html buttons say submit?",
Dvd (Score:2)
The only difference between the two is the laser used. DVD drives cost about the same amount. In any event, DVD movies cost about as much as audio CDs, so I don't see why putting movies on CD would make them any cheaper.
"Suble Mind control? why do html buttons say submit?",
Re:You cannot compress white noise (Score:2)
pixel = random();
for the most part, the human eye can't tell the difference between one random function and another. True, this is very lossy, but it looks the same.
This idea has been used in computer graphics by substituting portions of images with "detail" textures, which are essentially the high-frequency components of a part of the image overlaid on the lower-frequency components. The lower-frequency components compress very well with conventional techniques and the higher-frequency components look identical at all scales.
So, as far as the human eye is concerned, white noise adds very little information to the picture and it can be thrown away and replaced with software-generated white noise.
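A very rough sketch of that idea (mine, not the parent's; numpy only, with a crude block average standing in for a real low-pass filter and variance matching standing in for real detail synthesis):

import numpy as np
import zlib

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256)).astype(np.float64)   # noisy placeholder image

# crude low-pass: average over 8x8 blocks, then expand back up
low = image.reshape(32, 8, 32, 8).mean(axis=(1, 3))
low_up = np.kron(low, np.ones((8, 8)))
detail = image - low_up                                  # the part that looks like noise

stored = zlib.compress(low.astype(np.uint8).tobytes())   # only the low-pass band is kept

# "decode": regenerate the detail as fresh noise with the same spread
fake_detail = rng.normal(0.0, detail.std(), size=image.shape)
reconstructed = low_up + fake_detail
print(len(stored), "bytes kept for a", image.nbytes, "byte image")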
Re:Hope Browsers Support it! (Score:2)
That way, all browser creators have to do is link it into their project... At least that works for an additional image file format decoder.
Re:ASF Format? (Score:2)
GIS applications (Score:2)
We've been using MrSID compression on an environmental information system. We've been using sat photos as base layers on maps, and plotting data on top of the map. MrSID is a wavelet compression system that is now supported in several commercial GIS packages.
There are three really cool aspects of wavelet compression.
First, there is the raw compression: 10 or 20:1. Since we work with satellite photos of entire counties, these can easily be well over 200MB per image. Keeping a bunch of uncompressed images around is pretty awkward, and normally you need a number of them (different seasons or months, leaf-on/leaf-off, black-and-white/infrared, etc.). With this kind of compression you can burn a pretty comprehensive library of images onto a CD.
Second, there is speed. You can open up a map of a good-sized county, and the image loads very, very quickly but looks very sharp, because it throws out detail too small to be seen. As you zoom in, it clips out the region you can't see and brings in another level of detail, so the map repaints and still looks sharp. You can easily start with a map 100 miles across, then zoom down to less than a mile across, with individual buildings easily recognizable. With uncompressed images, you could not use the same image to cover the wider geographic areas because display would be very slow. It makes for terrific demos -- being able to zoom from an astronaut's view to the view from a small plane feels like you've strapped on a pair of seven-league boots.
Finally, when you zoom beyond the level where there is any more data, instead of getting sharp pixel boundaries you get smudgy edges -- like you are a little out of focus. This is less distracting and more intuitive than seeing large, blocky pixels.
DVD threat? (Score:2)
OTOH, it could also be a boon for home or low budget movie makers.
P.S. I could claim "first post", but that would either be wrong or obvious.
Wavelets are our future (Score:2)
You can think of all things as functions. An image, for example, is a function of two variables, x and y, which give a pixel location. The function should return a color, and perhaps an alpha value. Fundamentally, one should be able then, to simply write down the formula for an image on a napkin. The image can then be generated by simply filling in the formula over the domain (width and height).
In practice, however, it is very difficult to derive this function. So we must approximate until it is close enough. As I see it, this is basically what wavelets do. They approximate the function with a series of trigonometric wave functions.
If you think about movies, now, they are really just frame after frame of images. So you simply have a three-variable function: x, y, and frame #. Theoretically, you should be able to reduce a movie to a function also, and just feed in three variables to render an image.
I guess you could also represent images and movies as functions which returned matrices. You could probably do some nifty mathematics which would account for areas which stayed the same and would not have to be changed again...essentially 'masking' the new image over and over again with new image matrices.
Anyway, that is my take. It might not be perfectly accurate, but I've always thought of images, and movies, and data in general (the same procedure can be performed on arbitrary binary streams), as simply functions of position.
Jazilla.org - the Java Mozilla [sourceforge.net]
Re:More information about wavelets (Score:2)
This is dead wrong for non-lossy compression... which is obviously what he is talking about (since he mentioned information theory). He is making two separate statements: (1) you cannot do any better with a non-lossy algorithm (but the compression phase is slow), and (2) wavelets degrade better than anything else if you start leaving out the big parts (lossy compression).
You could start spouting bullshit about how (2) is subjective, but (2) is not the part of his post which you quoted... and (2) is probably still true *enough* in the near future.
Jeff
Re:A few issues (Score:2)
--
Re:Lockout users? Happens all the time. (Score:2)
--
Re:A few issues (Score:2)
It can be; for instance there's the S+P transform, which is lossless. The compression ratios aren't that much better than with conventional lossless image compression methods.
This is totally data dependent. For instance you can't losslessly compress white noise using any known method. For lossless image compression you'd probably see something like 1.5:1 - 3.5:1 on average, depending on the characteristics of the images.
It depends; I suppose the coding, where they are looking for the best mother wavelet, is the most time-consuming part. The decoding is probably done in O(n), since the wavelet transform is O(n). For comparison, the DCT in the original JPEG was O(n log n), so the complexity grew as bigger blocks were processed, which is not the case for the wavelet transform. I can't comment more as I haven't seen the spec either.
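On the lossless question specifically, the reversible 5/3 filter can be written as two integer lifting steps. Here is a toy version (my own sketch in Python, even-length 1-D signals only, simplified edge handling - not the full JPEG2000 codepath):

def fwd_53(x):
    # predict + update lifting steps; // is floor division, matching the spec's rounding
    n = len(x)
    xe = x + [x[n - 2]]                                     # symmetric extension on the right
    d = [xe[2*k + 1] - (xe[2*k] + xe[2*k + 2]) // 2 for k in range(n // 2)]
    de = [d[0]] + d                                         # extension on the left: d[-1] = d[0]
    s = [xe[2*k] + (de[k] + de[k + 1] + 2) // 4 for k in range(n // 2)]
    return s, d                                             # low-pass and high-pass halves

def inv_53(s, d):
    de = [d[0]] + d
    even = [s[k] - (de[k] + de[k + 1] + 2) // 4 for k in range(len(s))]
    ee = even + [even[-1]]
    odd = [d[k] + (ee[k] + ee[k + 1]) // 2 for k in range(len(d))]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

samples = [5, 7, 200, 201, 3, 3, 90, 91]
low, high = fwd_53(samples)
print(inv_53(low, high) == samples)    # True: the integer transform is exactly invertible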
Re:A few issues (Score:2)
A few points:
1) A wavelet basis is a set of orthonormal functions that span L2(R) (the inner product space of square-integrable functions, plus some other conditions) and are generated from a single 'mother wavelet' by translations and dilations. The Fourier basis also spans L2(R).
2) The 1D discrete wavelet transform can be implemented with a pyramidal algorithm, where each iteration halves the number of processed elements. Each iteration consists of filtering the sequence with a pair of QMF filters. The total work is n + n/2 + n/4 + ... < 2n filtering steps => the complexity is O(n).
3) As a side note, maybe you were thinking of the Gabor transform, which is a precursor of the wavelets.
4) Wavelets can be made lossless, basically you just use enough precision so that the errors cancel out during reconstruction and rounding. The transforms used are usually just multiplications with a matrix and the inverse transforms multiplications with its inverse.
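A bare-bones version of the pyramidal scheme in point 2, with Haar averaging/differencing standing in for the QMF filter pair (my own sketch, Python, input length assumed to be a power of two):

def haar_dwt(signal):
    coeffs = []
    approx = list(signal)
    while len(approx) > 1:
        pairs = zip(approx[0::2], approx[1::2])
        results = [((a + b) / 2, (a - b) / 2) for a, b in pairs]   # average, difference
        approx = [avg for avg, _ in results]
        coeffs.append([diff for _, diff in results])
        # each pass halves the data, so total work is n/2 + n/4 + ... + 1 < n pair ops: O(n)
    coeffs.append(approx)           # final DC-like term
    return coeffs

print(haar_dwt([9, 7, 3, 5, 6, 10, 2, 6]))
# [[1.0, -1.0, -2.0, -2.0], [2.0, 2.0], [0.0], [6.0]]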
Re:About time... (Score:4)
There has already been a lot of research into the subject. The biggest discoveries would probably be the EZW coder (patent status?) and the SPIHT coder (patent pending at least).
Actually there is an uncertainty attached to the time-frequency representation: the more accurate the knowledge of the time, the less accurate the frequency, and vice versa. The traditional time-domain representation is at one end, with absolute knowledge of the time and no knowledge of the frequency, and the Fourier representation is at the other, with absolute knowledge of the frequency and no knowledge of the time. This is similar to Heisenberg's uncertainty principle in physics. Wavelet-packet compression tries to minimize this and find the optimum representation for the information.
Another advantage is the easy handling of edge discontinuities with biorthogonal wavelets. If the transform frames have ends at different levels, then even hard quantization won't move them closer together as in DCT/DFT-based compression, and consequently there are fewer blocking artifacts.
Well. (Score:2)
"What a great way to piss of the RIAA, make the DVD consortium superflous, and archive my multiple gigs of porn on one CD in one fell swoop."
:-)
---
Re:Well. (Score:2)
Yes, sir, a wavelet based compression scheme... AFAIK. This is about wavelets.
VQF [vqf.com] is related to MP3 [mp3.com], and both are related to Jpeg2k (wavelet compression)
Further along my train of thought: VQF allows 18:1 over MP3's 12:1. Who says it'll stop there? I wouldn't mind some of the thinking that went into Jpeg2k to go into MPEG layer 4 audio
---
information links (Score:4)
Re:A few issues (Score:2)
2: dunno
3: Wavelets are just cool weight functions on a regular Fourier transform. Weight functions are often used on general Fourier transforms (using Bessel functions or such as the base) to make the base functions orthogonal. The nice thing with wavelets is that the 8x8 grids seen in low-quality JPEG are gone: low-quality compression is just a bit blurry, but never blocky.
Since we're still using Fourier transforms with wavelets, FFT still works, and we get the n * log(n) performance, and hence the performance compared to JPEG will only be lowered by a constant factor. Not even a large one at that.
Sorry if this post is a bit mathematical, but most people here have probably studied differential calculus, so you should know what I'm talking about.
it is definitely real (link) (Score:2)
Wavelet image compression has been around for several years now, mostly in proprietary form. Let me give you one particular link - and no I don't work for them :)
Lightning Strike [infinop.com]
Best regards,
SEAL
Re:Extrapolating... (Score:3)
Fact is, one of them has to lose to the other in the future. Corporations can't grow in power without abusing people, and people can't earn and live out their rights without costing corporations revenue.
It's as simple as that.
In their eyes, the best way for corporations to earn money is to drug us down and make us their drones. Then they can do whatever they want with society. But what is really the point of that? So that a small elite can feel powerful and rich? There should be something more to life than that, and that means power and freedom for individuals to do what their hearts desire.
In the future DVDs WILL be copied and pirated, since we're going to count storage in Tb instead of Gb. This could have a positive effect on the movie business, as they will have to come up with better answers to people's needs, i.e., bigger cinema screens, the social aspects of movie-going, physical effects, etc. The initiative-takers here will be the ones who earn money from their ideas. Upstarts and copycats without one creative idea in their minds will, as before, not.
People who claim piracy is hurting corporations forget which side is the more human. We don't live this life for corporations to grow on us. The more power you give them, the more numerous and bigger they get. So there has to be some resistance. Call it illegal. Call it perverted, or whatever you like, but I doubt it has ever hurt the business as much as they claim it has. When they say that, they're thinking about quite a different reality than this one.
Think about that.
- Steeltoe
PS: This is not FUD, it's just to show the extreme that could happen if we suddenly just "gave up", stopped thinking for ourselves and talking among ourselves. In some countries this has already happened, but with governments, which is much more probable.
Re:DVD CCA's worst nightmare (Score:3)
DeCSS (or dodrip) + a nice little mpeg2 resizer will make you what's known as a "miniDVD". That's 352*480 mpeg2 with AC3 audio. You'll need 2 or maybe 3 discs for a complete DVD movie, with _really_ good quality. So far they're only playable in very few stand-alone DVD players, but on computers they work really well.
Dodrip (or DeCSS) + another little mpeg-manipulator will make you a VCD. Worse picture quality, and "only" Dolby Surround, but most movies fit on 2 discs, and are playable in consumer DVD players.
Dodrip (or any other ripper) + some Microsoft software will make you an ASF movie. Picture quality can be anywhere from terrible to better-than-VCD depending on bitrate, sound is good (Dolby Surround). Can be 1 to 2 cds for a whole movie, again depending on bitrate.
Who in their right mind actually takes the time to copy movies this way?
The ones living in countries where the cinemas lag 6 months behind the US. The solution is of course to release movies world-wide, both cinema-wise and DVD wise. That'll kill a LOT of the piracy.
Intel Wavelet Licensing (Score:3)
Re:A few issues (Score:2)
Sounds to me like they have the potential to be faster -- faster if all other things, like the number of transforms required, are equal.
Anyway, sounds pretty cool. Makes me want to go look up the math.
John
Hope Browsers Support it! (Score:2)
Indeo 4 and 5 use wavelet compression (Score:2)
More information about wavelets (Score:4)
1) It has been proved that wavelets can represent the theoretical minimum information for an image. The proof uses the information-theoretic counterpart of Heisenberg's uncertainty principle. That's right folks - you cannot get any more compressed than that - it simply isn't possible.
2) Unfortunately, finding the coefficients for the minimum possible representation is a bit hard on the computation side - so normally certain constraints are made, like fixing the angles of the wavelet. However, decompression is pretty much standard in any case - easier. But nothing quite as fast as the FFT.
3) JPEG chops the picture into manageable bits - you may get discontinuities at boundaries. Wavelets sweep across the picture, from big to small, gradually improving the quality at smaller areas. So you get a nice smooth degradation of detail if you decrease the size of the file - or in other words, if you are downloading incrementally, your picture kinda shimmers in nicely.
4) Wavelets have a normal-like curve. Normal-like curves occur frequently in nature -- for example, the human face can actually be represented by very few wavelets - the eyes, nose, mouth, and face all fit quite nicely. Probably good for photos.
Download tools here: (Score:2)
(http://www.luratech.com/products/productoverview/pricelist_e.html)
What I can't find is information regarding the patents/etc. regarding the new format - anyone?
some tech details about JPEG2000 (Score:3)
Here are some quotes from an article about JPEG2000 [webreview.com]:
Since August of 1998, a team within the Digital Imaging Group (DIG) has been developing a rich file format for JPEG 2000
It surely took a long time to develop it. I hope it's worth it.
Image authors will also have the option of saving the picture in lossless format for archival storage
This is great! It means I no longer need to keep all of these uncompressed BMP files lying around.
Wavelet technology also provides for a continuous download stream of data that allows the user to control the amount of image resolution desired
This is also great. If I understand it correctly, it will allow you to download 30% of the image and get 30% of the quality, download 50% and get 50% quality, or download it all and get full quality. But I might be mistaken.
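That progressive behaviour is easy to fake at a small scale (my own sketch, assuming Python with numpy and PyWavelets; keeping only the largest coefficients is a crude stand-in for what a real embedded bitstream does):

import numpy as np
import pywt

rng = np.random.default_rng(1)
image = rng.random((64, 64))                     # placeholder image data

coeffs = pywt.wavedec2(image, 'haar', level=4)
arr, slices = pywt.coeffs_to_array(coeffs)       # flatten the subbands into one array

for fraction in (0.05, 0.3, 1.0):                # pretend we only received this much
    keep = int(arr.size * fraction)
    threshold = np.sort(np.abs(arr).ravel())[-keep]
    truncated = np.where(np.abs(arr) >= threshold, arr, 0.0)
    partial = pywt.array_to_coeffs(truncated, slices, output_format='wavedec2')
    approx = pywt.waverec2(partial, 'haar')
    err = np.sqrt(np.mean((approx - image) ** 2))
    print(f"{fraction:.0%} of coefficients -> RMS error {err:.4f}")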
Another innovation is that a new standard, "sRGB" will be the default colorspace for this format. In the current JPEG standard, there is no notion of default colorspace. This lack of precision contributes to inconsistent JPEG color rendering
This is a Good Thing, too. Great for printing.
The JPEG 2000 standard for metadata also provides for extensibility of the metadata properties. In other words, new functionality can be added without having to rewrite the standard. And speaking of adding information, the metadata catalog can be modified without having to rewrite the entire image file. These abilities make for a very nimble, adaptable image file format
Well, we don't seem to need this (using different formats is easier). If the format is too extensible, it can lead to the "get-the-latest-viewer-you-moron!" syndrome, like all the problems with the HTML that we have now.
If all goes as planned, the official schedule for implementation will be released in January 2000
Other good links:
JPEG2000 Requirements and profiles document, V.6.0 [jpeg.org]
SEMINAR ON IMAGING SECURITY AND JPEG2000 [eurostill.epfl.ch] - this is an interesting collection of documents about digital image security and watermarking. These guys take security seriously!
JPEG2000 bitstreams [ltswww.epfl.ch] - actual .j2k files for your viewing pleasure (I wish I had a viewer :-)
JPEG2000 Decoder (Version 2.3.1) [ltswww.epfl.ch] - written in Java; the source is not available yet (it will be)
patents + the future of the movie industry (Score:4)
Barthel said this patented variable-sized window-scanning technique has been incorporated into the JPEG2000 committee draft. Besides LuraTech, Ricoh and several other committee members found bits and pieces of their patented technology in the spec. Barthel said all involved companies have signed agreements that give developers royalty-free rights to part one of JPEG2000.
The open source community should be very concerned about these issues. We don't want the LZW patent screw-up with the GIF format to happen again. There are two solutions: either drop the patent (I don't know if this can be done, any lawyers?), or make sure that the software using it can be GPLed forever. The word "forever" is very important, so that we won't have any problems.
I believe that we should put enough pressure on this standard and make it really free. If the screenshots are real, it is definitely worth it.
By the way, I am not completely sure that this is real. Something like this usually pops up every second year, and usually it's fake. I remember reading about fractal compression, which was supposed to blow JPEG away. It was in the early 90s, and obviously it didn't have much effect on the industry.
Such advances are really great for the IT industry and the community. I can't wait for downloadable high-quality movies to become available. The next big thing after MP3 will be movies. Either pirated or commercial, movies will be available on the Internet. I can see all the big movie companies making movies available for download for a small charge (even if it's $3 or $5 I will gladly pay it, instead of spending hours looking for the same movie on some pirate site). The Internet will drop the marketing and distribution costs for most movies, and it will make it profitable to make the files available, even if the piracy level stays high.
Re:patents + the future of the movie industry (Score:2)
...which is precisely why no one uses the format. Dang shame too, it compresses to an insane level with very little loss of detail, and artifacting takes the form of some blurring at the edges instead of the blocky tiling of overcompressed JPEGs. Fractal images also have no inherent dimension and can be decompressed to any size, like vector formats.
Scientific American ran a short article (I think about a page) in the early '90s about it. I think it was written by the inventor.
Mathematically brilliant, but no sense. He didn't realize that by keeping it proprietary, he was limiting its use to applications where it is necessary, as opposed to places where it would be convenient (like the Internet). As long as he sits on his patent (it's a British patent, IIRC), nobody is going to use his format.
---