William H. Welliver wrote:
> Baseline
> JPEG uses the DCT (discrete cosine transform, which is a linear transform
> akin to the Fourier transform) to convert image data to a more compressible
> format. When using a transform such as DCT, it is entirely possible to reduce
> image size far beyond conventional compression algorithms even while being
> lossless.
Fourier and DCT methods work by first translating into the "frequency
domain", then throwing away the high "frequencies" (otherwise known
as fine detail), and then applying the inverse transformation to
retrieve the picture. Furthermore, since our eyes resolve brightness
(which is dominated by green) better than colour, more fine detail is
thrown away from the colour components - in JPEG, the blue-difference
and red-difference (chroma) channels.
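To make that concrete, here is a rough sketch in Python (assuming
NumPy and SciPy are available; the 8x8 block size is what JPEG really
uses, but the "keep a 4x4 corner" rule below is just a stand-in for
JPEG's actual quantisation step):

    import numpy as np
    from scipy.fft import dctn, idctn

    # An 8x8 block of pixel values - JPEG processes the image in
    # blocks of this size.
    block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)

    # Translate into the "frequency domain" with a 2-D DCT.
    coeffs = dctn(block, norm='ortho')

    # Throw away the high "frequencies" (the fine detail): keep only
    # the low-frequency coefficients in the top-left corner.
    kept = np.zeros_like(coeffs)
    kept[:4, :4] = coeffs[:4, :4]

    # Apply the inverse transformation to retrieve (an approximation
    # of) the picture.
    approx = idctn(kept, norm='ortho')

Real JPEG does not zero coefficients outright; it divides them by a
quantisation table and rounds, which suppresses fine detail to a
similar effect.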
It is true that some compression can be achieved by lossless
compression techniques such as run-length encoding, which records how
many identical adjacent pixels there are rather than storing each one
separately. This works best if there are large expanses of uniform
colour - often not the case in photographs. Some of this goes on
inside DCT methods, but not much, because they only consider smallish
blocks.
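A minimal run-length coder in Python (the names are mine, purely
illustrative) shows how little machinery is involved:

    def rle_encode(pixels):
        # Store (count, value) pairs instead of each pixel separately.
        runs = []
        for p in pixels:
            if runs and runs[-1][1] == p:
                runs[-1][0] += 1
            else:
                runs.append([1, p])
        return runs

    def rle_decode(runs):
        # Expand each (count, value) pair back into a run of pixels.
        return [p for count, p in runs for _ in range(count)]

    # Perfectly reversible - no information is lost.
    assert rle_decode(rle_encode([5, 5, 5, 5, 9])) == [5, 5, 5, 5, 9]

On a flat expanse of sky this wins handsomely; on a noisy photograph
nearly every "run" has length one and the encoding can even grow.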
Most of the compression comes from losing information. If there is a
lot of information there to start with, it can't be otherwise -
information theory says so.
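(Specifically, Shannon's source-coding theorem: no lossless code can
average fewer bits per symbol than the source entropy,

    H = -\sum_i p_i \log_2 p_i   (bits per symbol)

so a detailed photograph, which has high entropy, simply cannot be
squeezed far without losing something.)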
The best you can ever do is, for a given compression ratio, minimize
the information lost, and then subjectively decide which result has
the least objectionable artefacts (traditionally using a slightly
risque image of a Swedish model named Lena).
Chip Stratton wrote:
> I'm pretty sure that JPEG is never 'lossless' and so
> is never 'the same quality as the original'
What is commonly termed JPEG is lossy. There is a "lossless JPEG",
which is a completely different, prediction-based algorithm. I think
it might be encumbered by an IBM patent on arithmetic compression,
which is indeed an excellent form of lossless compression.
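Roughly, the lossless variant predicts each pixel from neighbours the
decoder has already seen, then entropy-codes only the prediction
errors. A Python sketch of one of the seven predictors the standard
defines, (left + above) / 2 (the function name is my own, and the
entropy-coding stage, Huffman or arithmetic, is omitted):

    import numpy as np

    def prediction_residuals(img):
        # Predict each pixel as the average of its left (A) and upper
        # (B) neighbours, then keep only the prediction errors. The
        # errors cluster tightly around zero, so an entropy coder
        # compresses them well - and since prediction is exactly
        # repeatable, the decoder can rebuild the image bit for bit.
        img = img.astype(np.int64)
        pred = np.zeros_like(img)
        pred[1:, 1:] = (img[1:, :-1] + img[:-1, 1:]) // 2
        pred[0, 1:] = img[0, :-1]   # top row: only a left neighbour
        pred[1:, 0] = img[:-1, 0]   # left column: only an upper neighbour
        return img - pred           # this is what gets entropy-coded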