Chuck Norcutt wrote:
> Thanks. That's a great read. The two points I picked up that I hadn't
> previously considered are the effect of diffraction as a limiting factor on
> DOF and the effect of diffraction on color accuracy at high resolution.
>
> Also, the fact that the G7 is diffraction limited at f/2 means that the Airy
> disk is spilling across pixels at any aperture.
In a sense, I suppose it always is spilling over. The question is
whether the amplitude is sufficient to cause a noticeable effect. It's a
wave pattern, so what if it skipped the first adjacent pixels, only to
affect the next round? :-)
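For a rough sense of scale (my own back-of-envelope, not from the article):
the first-minimum diameter of the Airy disk is about 2.44 * wavelength *
f-number, and the G7's 1/1.8" 10 MP sensor has a pixel pitch of roughly 2 um,
so even at f/2 the central disk is already wider than a pixel. Something like
this, in Python:

    # Airy disk diameter vs. pixel pitch. Assumptions (mine): 550 nm green
    # light and ~2.0 um pixel pitch for a 1/1.8" 10 MP sensor like the G7's.
    wavelength_um = 0.55
    pixel_pitch_um = 2.0
    for n in (2.0, 2.8, 4.0, 8.0):
        airy_um = 2.44 * wavelength_um * n          # first-minimum diameter
        print(f"f/{n:g}: Airy disk ~{airy_um:.1f} um, "
              f"~{airy_um / pixel_pitch_um:.1f} pixels wide")
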
> I wonder if downsizing the image to combine pixels at some level actually
> produces a better image. Moose's 12 MP Canon is probably a good test case.
>
How does one test such a question? I find it very difficult to compare
images with different numbers of pixels. Even assuming one works with some
sort of test image with objective resolution information, there is no f/2 or
f/1.4 aperture to compare against; f/2.8 is the actual wide end of the camera.
I suppose that if detail resolution increased as the lens was stopped down,
that might tend to indicate it wasn't diffraction limited. But if lens
limitations are improving while diffraction limitations are increasing, how
does one separate them?
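One crude way to separate them on paper, at least (my numbers, and no
substitute for actual testing): lens aberrations shrink as you stop down,
while the hard diffraction cutoff frequency, roughly 1/(wavelength * N),
only falls. Comparing that cutoff to the sensor's Nyquist frequency at each
stop shows where diffraction alone would start to dominate:

    # Diffraction cutoff vs. sensor Nyquist frequency, in cycles/mm.
    # Assumptions (mine): 550 nm light, ~2.0 um pixel pitch. Real contrast
    # falls off well before the cutoff, so this flatters the lens.
    wavelength_mm = 550e-6
    pitch_mm = 2.0e-3
    nyquist = 1.0 / (2.0 * pitch_mm)            # ~250 cycles/mm
    for n in (2.8, 4.0, 5.6, 8.0):
        cutoff = 1.0 / (wavelength_mm * n)      # incoherent diffraction cutoff
        print(f"f/{n:g}: cutoff ~{cutoff:.0f} c/mm vs Nyquist {nyquist:.0f} c/mm")
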
The increase from 10 to 12 MP is only about a 10% increase in linear
resolution. The dpreview test of the G9 (same sensor system as the A650)
does show a slight increase in resolution over the G7, 10% vertical and
3% horizontal. If the G7 were already significantly diffraction limited,
would one see such an increase?
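That 10% figure is just the square root of the pixel-count ratio (taking the
G9 as 12.1 MP and the G7 as 10.0 MP):

    # Linear resolution scales with the square root of the pixel count.
    print((12.1e6 / 10.0e6) ** 0.5)   # ~1.10, i.e. about a 10% linear gain
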
About all I can say in general from my own experience, and without testing
different f-stops, is that A650 images, when downsized to the same pixel
dimensions, resolve more detail than those from the 6 MP sensor of the F30.
As the sensors are the same size, I suppose that says the gain from the
higher pixel count still outweighs the losses from diffraction limiting?
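If I wanted to make that comparison a bit less seat-of-the-pants, something
like this sketch might do: resample shots of the same scene to identical
pixel dimensions and compare a simple acutance proxy. The filenames, target
size, and metric are all placeholders of mine, and the metric is sensitive to
noise, so the numbers would only be indicative.

    # Crude same-size comparison: resample both images to one common size
    # and compare mean gradient magnitude as a rough "detail" score.
    import numpy as np
    from PIL import Image

    def acutance(path, size=(1800, 1350)):        # placeholder common size
        img = Image.open(path).convert("L").resize(size, Image.LANCZOS)
        a = np.asarray(img, dtype=float)
        gy, gx = np.gradient(a)
        return np.mean(np.hypot(gx, gy))

    print("A650:", acutance("a650_scene.jpg"))    # placeholder filenames
    print("F30: ", acutance("f30_scene.jpg"))
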
The other thing that may not be included in the theoretical calculation is
that the loss is calculated from criteria that predate sharpening and LCE
(local contrast enhancement). To the extent that diffraction reduces visible
resolution mainly by lowering contrast at edges, so that detail is harder to
see but still present in the data, it may be possible to recover detail that
wasn't visible in the unaltered image file. I find that to be true of many
cameras, not just the A650, though. Is it more pronounced with the A650?
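What I have in mind by LCE is essentially a large-radius, low-amount unsharp
mask, which lifts edge contrast that the diffraction blur has flattened
without inventing detail that isn't in the data. A minimal sketch, with
illustrative settings of my own choosing:

    # Local contrast enhancement as a wide, gentle unsharp mask.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def local_contrast(a, radius=30.0, amount=0.2):
        blurred = gaussian_filter(a.astype(float), sigma=radius)
        return np.clip(a + amount * (a - blurred), 0, 255)
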
Many A650 images developed from RAW do seem to thrive on NeatImage with some
resharpening applied. It has an almost magical effect of eliminating much of
the noise while increasing visible detail. It depends on the ISO and subject,
of course, but generally it's a real winner. Now I'm wondering if I only see
that effect, or mostly see it, on A650 images. I'm not sure.
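NeatImage itself is a proprietary black box, but the effect I'm describing is
noise reduction followed by resharpening. A rough stand-in of my own, with
settings that are purely illustrative:

    # Noise reduction followed by a small-radius unsharp mask (resharpening).
    import numpy as np
    from scipy.ndimage import gaussian_filter, median_filter

    def denoise_then_sharpen(a, nr_size=3, radius=1.0, amount=0.8):
        quiet = median_filter(a.astype(float), size=nr_size)
        sharp = quiet + amount * (quiet - gaussian_filter(quiet, sigma=radius))
        return np.clip(sharp, 0, 255)
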
Moose