The bit depth of your computer is independent of the bit depth of your
images. If you're running Windows 95 (1995) or later you're running at
least a 32-bit operating system. The bit depth of the processor controls the
maximum amount of memory that can be directly addressed. The maximum
value of a 32 bit binary number is 2 raised to the 32nd power or, in
decimal, 4,294,967,296 or 4 gigabytes. A 16-bit computer is limited to
directly addressing 2 raised to the 16th power or, in decimal, 65,536 or
64 KB. Early DOS computers were 16-bit computers, but the Intel
processors they ran on could support up to 1 MB of RAM by dividing the
memory into 64 KB segments which could be operated on one segment at a
time. A 64-bit computer can theoretically address an enormous amount of
memory, but in practice it is limited by implementation restrictions.
When we speak of image bit depth we're not talking about the size of the
image but rather the number of bits in a pixel for a gray image or the
number of bits in each of the RGB components of a color pixel. The
number of bits controls the maximum number of different brightness
values the pixel can have. An 8-bit color pixel actually has 3x8 = 24
bits and a 16-bit color pixel actually has 3x16 = 48 bits per pixel.
But you need to use care when reading these descriptions since 8 and 24
or 16 and 48 are often used interchangeably to describe the same thing.
Since 8 bits can contain decimal values ranging from 0-255, an 8/24-bit
image can only have 256 different brightness values for each of its gray
or color components. A 16/48 bit image can have up to 65,536 brightness
values for each gray or color component.
It's important to note that pure black is the total absence of light and
pure black is represented by 0 whether it's an 8 or 16-bit image. Pure
white, on the other hand, is represented by the maximum value of the
pixel, so it's 255 in an 8-bit image and 65,535 in a 16-bit image. When
8-bit images are converted to 16-bit the 8-bit values are multiplied by
256 (some converters use 257 so that 255 maps exactly to 65,535). Thus
a 1 in 8-bit becomes 256 in 16-bit. Note that there is essentially no
difference between the maximum or minimum brightness levels that can
be realized in each. But the vast, mostly empty range between 1 and 256
represents a huge range of subtle brightness differences that can be
represented in 16-bit pixels but not in 8-bit pixels.
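As a sketch of that conversion (assuming the straight multiply-by-256
scaling described above):

```python
# Converting 8-bit levels to 16-bit by multiplying by 256 (the scaling
# described above) leaves 255 unused values between each pair of
# neighboring levels -- the "vast empty range".
levels_8bit = [0, 1, 2, 255]
levels_16bit = [v * 256 for v in levels_8bit]
print(levels_16bit)  # [0, 256, 512, 65280]
```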
The difference in representation is critical when editing an image's
brightness, contrast, color, saturation, etc. The only data held by a
pixel is the brightness of its gray or RGB components. All changes of
contrast, saturation, sharpening, etc., work by changing the only data
available, which is the brightness level of the pixels.
Now consider how brightness is changed... by multiplication or division.
If a pixel has brightness 32 and we want to raise its brightness by
one stop we multiply by 2 and get 64. If we want to darken 32 by one
stop we divide by 2 and get 16. All well and good so far. But what if
we want to raise the brightness of the pixel with value 32 by a factor
of 1.3? 1.3 x 32 = 41.6. But in integer arithmetic (which is all an
8-bit pixel has) there is no such thing as 41.6. We either have to
truncate to 41 or
round to 42. On the other hand, if we were working on 16-bit pixels the
equivalent of 32 in 8 bits would be 32x256 = 8192. Raising its
brightness by 1.3 would be 8192x1.3 = 10,649.6. Once again we're still
working in integer arithmetic so the number has to be truncated to
10,649 or else rounded to 10,650. If that's all we did when we
converted back to 8 bits by dividing all by 256 we'd be right back to
the same 41 or 42 that we got with 8-bits. But the difference comes
from repeated editing. In 16-bit the relative effect of truncation or
rounding is far smaller, and those empty spaces in the 16-bit number
range start to get filled in.
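The single-edit arithmetic above can be sketched in a few lines of
Python (using round() for the rounding case; these are illustrative
values, not any particular editor's algorithm):

```python
# Brighten a pixel of value 32 by a factor of 1.3, once in 8-bit and
# once in 16-bit, keeping integer results throughout.
factor = 1.3
p8 = 32
edited_8 = round(p8 * factor)        # 41.6 rounds to 42

p16 = p8 * 256                       # 8192, the 16-bit equivalent
edited_16 = round(p16 * factor)      # 10649.6 rounds to 10650
back_to_8 = round(edited_16 / 256)   # back to the same 42
print(edited_8, edited_16, back_to_8)
```

After a single edit the two paths agree, just as the text says; the
16-bit advantage only shows up over repeated edits.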
When those values get converted to 8 bits the results will be different
than had the editing been done only in 8-bits. If you edit an 8-bit
image long enough you may start to see posterization due to holes and
spikes appearing in the brightness number range. You see it in the
histogram as spiky lines and empty spots.
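A rough simulation of the repeated-editing effect (a made-up
darken/brighten cycle, not any real editor's processing):

```python
# Apply the same darken-then-brighten cycle five times to every
# possible starting level, in 8-bit and in 16-bit, and count how many
# distinct levels survive.  Fewer distinct levels means holes and
# spikes in the histogram -- posterization.
def cycle(v):
    v = round(v * 0.7)     # darken, rounding to an integer
    return round(v / 0.7)  # brighten back, rounding again

levels_8 = list(range(256))
levels_16 = [v * 256 for v in range(256)]
for _ in range(5):
    levels_8 = [cycle(v) for v in levels_8]
    levels_16 = [cycle(v) for v in levels_16]

distinct_8 = len(set(levels_8))
distinct_16 = len({round(v / 256) for v in levels_16})
print(distinct_8, distinct_16)  # the 8-bit count is noticeably smaller
```

The 16-bit rounding errors are tiny compared to the 256-unit spacing
of the converted levels, so all 256 original levels survive the round
trip; the 8-bit levels collapse onto each other and never come back.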
Now consider RAW images for a moment. These images typically start life
as either 12 or 14 bit images and get converted to 16 bits on the way to
editing. But these images don't start with a bunch of holes in their
value ranges as does an 8-bit image converted to 16-bits. The full
range of brightness values is real and the image will survive much more
severe editing changes without succumbing to posterization. Compared to
a JPEG image there is also much more leeway in recovering dark shadows
and blown highlights... typically up to a stop on both ends. For this
advantage you only need to use the RAW converter in the first stage of
editing. Use the RAW converter to do all of your brightness, contrast,
color balance, saturation, etc. changes up front. Then you can convert
to 8-bit for cropping and other editing changes with little or no effect
on color and brightness. Resizing and sharpening still have some effect
on pixel brightness but the effect is minor compared to other edits.
My last comment is that FastStone can call external editors. If you
open a RAW file in FastStone I'm sure you can pass it to the Oly RAW
converter before doing further work in FastStone. Or just do all of
your work in the RAW editor first and then move to FastStone after
conversion to JPEGs.
Chuck Norcutt
On 12/12/2013 7:29 AM, Brian Swale wrote:
> I have no practical knowledge of the differences between 64, 32, 16, and
> 8-bit images.
>
> I really don't know what my machine works in. I wouldn't know where to look
> to find out ...
--
_________________________________________________________________
Options: http://lists.thomasclausen.net/mailman/listinfo/olympus
Archives: http://lists.thomasclausen.net/mailman/private/olympus/
Themed Olympus Photo Exhibition: http://www.tope.nl/