On 2/16/2020 5:04 AM, C.H.Ling wrote:
There is no contradiction between what Moose and the Wiki say.
I agree. I was commenting on why and how anamorphic movie shooting and projection came about, not about how they might
be used in digital video.
I'll try to make my point more clear:
To accommodate different display aspect ratios (4:3, 2.35:1, etc.) with the same
film system (35mm, 70mm, etc.):
1. You can make a widescreen movie on an old 4:3 system by simply masking off the top and bottom of the frame. In that
case you use only part of the frame, so the quality suffers.
2. Use anamorphic lenses when shooting - the horizontally squeezed image covers the whole frame and is then
corrected with another anamorphic lens during projection. In this case the output quality is preserved, and the only
extra cost is the shooting and projection lenses.
So the point of using an anamorphic lens for widescreen movies is better image quality while using the same
equipment.
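To put rough numbers on that film-era trade-off, here is a quick Python sketch. The frame dimensions and squeeze factor are purely illustrative assumptions, not actual film gauge measurements.

# Rough arithmetic: letterboxing vs. anamorphic capture on a 4:3 frame.
# All dimensions are illustrative, in arbitrary units.

FRAME_W, FRAME_H = 4.0, 3.0     # 4:3 camera frame
TARGET_AR = 2.35                # desired widescreen aspect ratio
SQUEEZE = 2.0                   # horizontal squeeze of the anamorphic lens

# Option 1: letterbox - keep the full width, use only part of the height.
letterbox_h = FRAME_W / TARGET_AR
letterbox_used = (FRAME_W * letterbox_h) / (FRAME_W * FRAME_H)

# Option 2: anamorphic - the squeezed image fills the whole frame, and
# projection stretches it back out by the same factor.
unsqueezed_ar = (FRAME_W * SQUEEZE) / FRAME_H

print(f"Letterbox uses {letterbox_used:.0%} of the frame area")
print(f"Anamorphic uses 100% of the frame, projected at {unsqueezed_ar:.2f}:1")

With those assumed numbers, the letterbox option throws away roughly 40% of the frame, while the 2x squeeze uses all of it and plays back at about 2.67:1.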
I'm not sure where the advantage lies. Although I shoot short videos of things where a still shot doesn't show the whole
aspect of the subject, I know very little about digital video, so what I say is largely theoretical.
If I shoot Panny 4K video, I have 3840 horizontal pixels. My current TV is FHD, i.e. 1920 pixels wide. If I get a new,
4K TV, the horizontal pixels match. I don't understand how compressing the width, then uncompressing it for display, at
the same resolution, can increase detail.
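Here is the back-of-the-envelope arithmetic I have in mind, again as a Python sketch. The field-of-view figure is a number I made up purely for illustration, not a measurement of any lens.

# Fixed 3840-pixel recording width: does a 2x horizontal squeeze add detail?
# The field-of-view value below is an assumed example, not a measurement.

RECORDED_WIDTH_PX = 3840        # horizontal pixels actually recorded
NORMAL_HFOV_DEG = 40.0          # assumed horizontal field of view, normal lens
SQUEEZE = 2.0                   # a 2x anamorphic doubles the captured view

normal_px_per_deg = RECORDED_WIDTH_PX / NORMAL_HFOV_DEG
anamorphic_px_per_deg = RECORDED_WIDTH_PX / (NORMAL_HFOV_DEG * SQUEEZE)

print(f"Normal lens:   {normal_px_per_deg:.0f} recorded pixels per degree")
print(f"2x anamorphic: {anamorphic_px_per_deg:.0f} recorded pixels per degree")
# The squeezed capture covers a wider view but spreads the same 3840 samples
# across it, so per-degree detail drops; de-squeezing for display can't put it back.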
In addition to the various problems/artifacts discussed in the Wikipedia entry, there is an inevitable loss of detail
resolution. Assume a detail that would be one pixel wide on the sensor with a standard lens. With a 2x optical
compression it is squeezed to half a pixel, so depending on where it falls it either washes out and disappears, or is
recorded in a full pixel that becomes 2 pixels wide when expanded.
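To make that simplification concrete, here is a toy one-dimensional simulation, with made-up sampling numbers of my own; it is not a model of any real camera or optics.

import numpy as np

# Toy 1-D sketch of the "detail one pixel wide" argument. The scene is
# sampled on a fine grid; each sensor pixel integrates OVERSAMPLE fine
# samples. A 2x squeeze halves the detail's width on the sensor, so whether
# it lands inside one pixel or straddles two depends on alignment.

OVERSAMPLE = 8                  # fine samples per sensor pixel
N_PIXELS = 12

def record(detail_width, offset):
    """Integrate a bright detail of given width (in fine samples) onto pixels."""
    fine = np.zeros(N_PIXELS * OVERSAMPLE)
    fine[offset:offset + detail_width] = 1.0
    return fine.reshape(N_PIXELS, OVERSAMPLE).mean(axis=1)

# Normal lens: the detail covers one full sensor pixel.
normal = record(OVERSAMPLE, offset=4 * OVERSAMPLE)

# 2x squeeze: the same detail now covers half a pixel. After recording,
# de-squeeze by repeating each pixel twice, as playback does.
aligned   = np.repeat(record(OVERSAMPLE // 2, offset=4 * OVERSAMPLE), 2)
straddled = np.repeat(record(OVERSAMPLE // 2, offset=5 * OVERSAMPLE - 2), 2)

for name, img in [("normal", normal), ("squeezed, aligned", aligned),
                  ("squeezed, straddling", straddled)]:
    print(f"{name:22s} peak {img.max():.2f}, lit pixels {np.count_nonzero(img)}")

In this crude model the normal lens records the detail as one pixel at full contrast; the squeezed capture comes back either as 2 pixels at half contrast or, when it straddles a pixel boundary, as 4 faint pixels that a low-contrast subject could easily lose entirely.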
Yes, that's a gross simplification, but the effect is real. With fisheye images, an app like Fisheye-Hemi can do a
wonderful job of "un-fishing", but details, especially as you move to the corners, are poorer than a shot with a linear
UWA lens. With a 20 MP sensor, for display on the web or in a printed book, that's not a practical problem, but it is
very obvious when pixel peeping.
Likewise, with video, those effects are simply not visible to the human eye viewing a moving picture. I remember when I
was a projectionist being amazed at the poor detail in individual frames, and the amount of motion blur, in an
apparently sharp movie.
In fact, cameras with horizontal pixel counts higher than 3840 need to deal with this issue. Panny sticks with the 1 to 1
pixel method. That looks good on the 16 MP sensors, where the video frame is only slightly narrower than a still shot. On the
20 MP GX9, the frame coverage gets noticeably smaller in video. If I were interested in shooting a lot of 4K video, I might be
using the 16 MP GX85.
I assume cameras with even larger sensors must do some re-sampling to record 4K.
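Some ballpark numbers behind that crop difference, as a last little Python sketch; the sensor widths are approximate pixel counts for typical 16 MP and 20 MP Micro Four Thirds sensors, so treat them as rough figures.

# Rough arithmetic behind the 1:1-pixel 4K crop. Sensor widths are
# approximate and only meant to show the scale of the difference.

UHD_WIDTH = 3840                       # pixels read out for 4K (UHD) video

sensors = {
    "16 MP (GX85-class)": 4592,        # approx. horizontal pixels
    "20 MP (GX9-class)":  5184,        # approx. horizontal pixels
}

for name, width in sensors.items():
    used = UHD_WIDTH / width
    extra_crop = width / UHD_WIDTH
    print(f"{name}: uses {used:.0%} of the sensor width, "
          f"an extra {extra_crop:.2f}x crop on the still-photo view")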
Am I missing something?
Which Kay Moose
--
What if the Hokey Pokey *IS* what it's all about?