<<Which makes me wonder if certain lenses plus camera do better or worse with
<<the AI? Besides just the subject matter. Does that depend on what the AI
<<filter was trained on?
All speculation, but I would bet the subject matter trumps almost everything
else, as the results are critically dependent on the training set. Unlike
deconvolution with a known PSF (where camera and lens are critical
parameters), the new pixels are totally fabricated from the surrounding
pixels and the training set. As in all neural networks, the exact procedure
is a black box. Eliminating halos and white lines at high-contrast edges is
low-hanging fruit, as these artifacts would be absent in the training set.
GP AI eliminates the tendency towards pixelation and keeps bird wing edges
as they should be. The same elimination of artifacts should happen with the
"sharpening" products, though anything can be overdone; even deconvolution
(especially blind Lucy-Richardson deconvolution assuming a Gaussian PSF)
will easily lead to ringing artifacts.
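For anyone who wants to see that ringing for themselves, here is a minimal
Python sketch (assuming numpy, scipy, and scikit-image are installed). The
synthetic edge, the sigma values, and the iteration count are made-up
illustration values, not anyone's recommended workflow; it just shows
Richardson-Lucy with a slightly wrong Gaussian PSF overshooting at a
high-contrast edge.

# Sketch: Richardson-Lucy deconvolution with an assumed Gaussian PSF,
# demonstrating overshoot (halos) at a hard edge when the PSF guess is off.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy

# Synthetic high-contrast step edge (think bird wing against bright sky).
scene = np.zeros((128, 128))
scene[:, 64:] = 1.0

# Blur with the "true" PSF and add a little noise; keep values positive.
sigma_true = 2.0
blurred = gaussian_filter(scene, sigma_true)
blurred += np.random.default_rng(0).normal(0, 0.01, blurred.shape)
blurred = np.clip(blurred, 1e-6, None)

# Build the Gaussian PSF we *assume* -- deliberately a bit too wide.
sigma_guess = 2.5
ax = np.arange(-8, 9)
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / (2 * sigma_guess**2))
psf /= psf.sum()

# Many iterations with a mismatched PSF over-sharpen the edge: look for
# bright/dark bands (values above 1 or below 0) on either side of the step.
restored = richardson_lucy(blurred, psf, 50, clip=False)
print("max overshoot above 1.0:", restored.max() - 1.0)
print("max undershoot below 0.0:", -restored.min())

The bigger the mismatch between the assumed and true PSF (or the more
iterations), the stronger the ringing, which is the artifact the training-set
approach presumably never sees and so never produces.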
Back in the day I had an ISA board optimized for back-propagation neural
networks used in an attempt to determine fractal dimension of vessels in
retinal photographs. The hardware really wasn't ready for that. The poor 386
almost had a meltdown.
Almost 8PM at work and my neural networks are fried for the day and out of
fuel, Mike