At 12:08 AM +0000 9/7/03, olympus-digest wrote:
>Date: Sat, 6 Sep 2003 04:32:44 -0700
>From: Jan Steinman <Jan@xxxxxxxxxxxxxx>
>Subject: [OM] Re: Macs vs PCs; 16-bit apps (was Re: an Albert intervention)
>
>WARNING: long boring stuff for people who don't like off-topic postings --
>please delete instead of complaining!
Actually, this slides back on-topic in the latter third. Sorry; I tried.
> >From: Joe Gwinn <joegwinn@xxxxxxxxxxx>
> >>From: Jan Steinman <Jan@xxxxxxxxxxxxxx>
> >>
> >>...on the 680x0, 16/32 bit ops were completely regular -- they just had to
> >>be on an even 16 bit boundary.
> >
> >True. Actually, any 16-bit boundary would work, even or odd. Or by "even"
> >did you mean "exact"?
>
>Ah, you caught me mis-using words. Yes, I meant "exact."
>
>But as I recall, when the 32-bit busses came out (68020?), there was a slight
>performance penalty for being on odd 16-bit boundaries when fetching 32-bit
>quantities. (Assembly code programmers are so anal... :-)
True. (Guilty.)
> >> >Actually, with modern computers, the memory system speed is more
> >> >important than the CPU speed.
> >>
> >>Exactly! And the new Mac G5 has a 128 bit memory bus that can hit 6.4
> >>gigabytes per second! That's nearly four times as fast as the nearest
> >>desktop competitor.
> >
> >So, the new machines will be four times faster than anything else on the
> >market?
>
>Certainly it will approach that level on some things. Photoshop use (memory
>intensive) benches at 2.2 times faster than a dual 3GHz Wintel box. (Thus, my
>particular interest.) Some floating point ops are over 10 times faster!
I'll wait for the benchmarks on actual computers. While it's true that some FP
operations are very fast, even with FP-intensive applications, most of the
computer instructions executed are integer, so the effect of speedups in the FP
hardware is diluted. The most common exception is signal processing, where a
very simple program is executed at very high speed. Signal processing is the
intent of the AltiVec enhancements in the PowerPC, and the MMX/SSE enhancements
in the Pentia family of Intel processors.
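A back-of-the-envelope way to see the dilution -- the 20% floating-point
fraction and the 10x FP speedup below are made-up illustrative numbers, not
benchmarks:

    # Rough Amdahl's-law sketch: only the FP portion of the runtime gets faster.
    def overall_speedup(fp_fraction, fp_speedup):
        return 1.0 / ((1.0 - fp_fraction) + fp_fraction / fp_speedup)

    # If 20% of the runtime is floating point and the FP unit is 10x faster:
    print(overall_speedup(0.20, 10.0))   # ~1.22x overall, nowhere near 10x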
> >I would... doubt that such an advantage would long endure. Computer design
> >is a leapfrog game.
>
>That's certainly true. However, I'm hopeful the PowerPC architecture has a lot
>of "oomph" left in it.
So long as there is a market, any architecture will prove to have oomph. If
that weren't true, the Intel architecture would have died years ago.
[stuff about clock-speed marketing and thermal issues elided]
Most people don't understand how computers work, don't want to know, and don't
have to know. But one consequence is that people are always seeking a simple
one-number metric of goodness. Sometimes one has two numbers, but the basic
principle is the same. This is universally true: automobiles (horsepower),
film cameras (fastest shutter speed), photo lenses (f-stop, focal length),
digital cameras (megapixels), computers (megahertz, megabytes), etc. I'm sure
people on this list can think of far more examples. Important but hard to
quantify things like build quality, ease of use, suitability of the design for
a specified purpose, et al, tend to get lost in the barrage of numerical noise.
One mark of experience in a field is the use of such non-numerical qualities
as a major consideration when choosing what to buy.
> >I would not buy a computer that came out a week ago. Too much risk of
> >design or manufacturing flaws.
>
>Yea, for those who value safety over excitement, that may be a better goal.
Yes. I don't hope to get my excitement from a computer. I can design and
build a computer from small parts, and made my living as an embedded system
programmer for many years, but I don't do such things for fun. I expect my
computers to be competent and efficient, but boring.
>Certainly in the chaotic Windows world, that mind-set has been holding back
>innovation, since "new" ideas take longer to catch on. (Case in point: Intel
>invented USB, but until Apple daringly put it into every computer as a
>standard port, there was essentially no support or devices for it. Now every
>Wintel box has it, thanks to Apple.)
I think there is a simpler reason for a Wintel user to be so reluctant to
change: fear of the consequences of failed change. We've all heard the stories
of the minor change that destroyed the world, leading to the classic
reformat-disk-and-reinstall-everything approach to repair. (It happened to my
boss two years ago.) The risk incurred when making a change to a Wintel box
far exceeds that incurred when making the same change in a Mac (or a UNIX box
for that matter), so those fearful users are being perfectly rational.
>However, Apple has done a wonderful job of keeping their "early adopters"
>happy. I bought their first 17" monitor, which had problems. They voluntarily
>extended the warranty from one to three years, and completely replaced the
>monitor with a brand new one after nearly three years! I only recently retired
>it (high voltage power supply out) nearly ten years after initial purchase.
>(Most monitors in full-time use have a useful lifetime of only about three
>years, so I tripled my value on that deal, even though it cost 1.5x as much as
>a "no-name" monitor at the time.)
Apple always comes out on top of the surveys of computer reliability. I've
never had one fail, either. That said, Dell is usually right up there too.
> >...digital cameras seem to need replacement every two or three years, just
> >like computers.
>
>According to an independent survey (IDC) Apple computers have a useful
>lifetime nearly twice that of Wintel machines. Average time-to-replace for
>Wintel was 28 months, vs 47 for Apple. I'm beating the average with my
>five-year-old 400 MHz G4 (2GB RAM) that I'd continue to use for a few more
>years if it wasn't for being able to stuff more RAM in the G5.
The computer I am typing this on, a 266-MHz PowerMac G3, was purchased in
February 1999. It's getting a bit slow nowadays, so I may replace it in early
2004. That would be five years (60 months). What allowed me to get five years
was upgrading the main disk to a 10000-rpm server disk. This cost far less
than a new computer, and had an amazing effect on practical speed.
My wife's iMac was purchased in August 2002. It replaced a Power Computing
clone purchased in August 1996, so the clone lasted six years (until its disk
died), although it had been eclipsed by the G3 after 3.5 years.
>Finally, I predict that as Moore's Law progresses, we'll begin to see floating
>point used more in digital imagery. Current imagery is hamstrung by limited
>dynamic range -- 8 bits per color, or essentially 8 stops, similar to slide
>film, but considerably less than negative film. There's no coherent standard
>among digicams for 16 bit color -- they each have their own proprietary "raw"
>format.
I don't think images will go floating-point anytime soon, because FP is too
space-inefficient a format, and FP arguably solves the wrong problem (as
discussed later). It's simpler to just make the integer words longer, and
analog-to-digital converters are all integer. What has prevented the emergence
of standard formats is that each manufacturer believes that their format should
and will set the standard. True standard formats will emerge only after the
market settles, after the losers have been buried.
Ansel Adams, in his book on available-light photography, commented about the
2500:1 illumination range he saw in a scene near his house, from white paint in
direct sunlight to black dirt in shadow, and he used 13 zones (~stops) in that
book, so we can conclude that a 2^13= 8,192 to one range of brightnesses is
sufficient for all photographic purposes. Not that Ansel reported ever seeing
the whole range in any one scene, but it's certainly possible. The minimum
discernible difference in brightness is about 1% at the finest, varying with
brightness. So, if one can express (2^13)(100)= 819,200 different brightness
levels, one can in theory achieve practical perfection. This requires
log2(819200)= 19.6 bits, call it 20 bits (per color). If we use the 2500:1, we
get log2(2500*100)= 17.9 bits. In practice, 16 bits per color would be very
difficult to distinguish from perfection.
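(For anyone who wants to check the arithmetic, here it is mechanized in a few
lines of Python -- same figures as above, nothing new:)

    import math

    # levels = (brightness range) x (the ~100 steps implied by the ~1%
    # minimum discernible difference); bits = log2(levels).
    def bits_needed(brightness_range, levels_per_unit=100):
        levels = brightness_range * levels_per_unit
        return levels, math.log2(levels)

    print(bits_needed(2 ** 13))   # (819200, ~19.6 bits) -- the 13-zone case
    print(bits_needed(2500))      # (250000, ~17.9 bits) -- the 2500:1 scene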
Now, the 1% number comes from the experiments made by sensory psychologists, and
they were not measuring the effects of quantization of brightness levels on
modelling (where smooth variations in brightness yield the perception of three
dimensions), and achieving this modelling is why people use big negatives.
People are sensitive to noise in images, but even more sensitive to banding
effects. The classic mark of success is when people comment that the print
looks like one is looking through a window, not at a photo. Anyway, a 0.1%
difference is almost guaranteed to exceed necessity.
>Now imagine an image format that would allow essentially any combination of
>aperture and shutter speed.
Integer formats can do this as well, given sufficient bits, as discussed above.
>It would put an end to pseudo-film-speed ratings on digicams.
Not likely, as the purpose of those speed ratings is to control the exposure
time. In dim light, too short an exposure time leads to noisy pictures. It's a
physics issue in the design of photosensors. The number of different
brightness levels a pixel can have has little effect, except that too few
levels makes the noise look much worse.
>Even 16-bit floating point could provide 128 stops of dynamic range!
It may look terrible, because the instantaneous dynamic range would be
insufficient to prevent the appearance of noise and banding in more-or-less
uniformly illuminated areas, like the sky.
The practical (instantaneous) range observed by Ansel is more like 13 stops, so
128 stops would be overkill, and would consume bits better spent on
instantaneous dynamic range.
I guess I need to explain total and instantaneous dynamic range. Floating
point is just a binary form of traditional scientific notation for numbers, so
we can do our examples using base ten numbers.
Assume that the brightness of each pixel is expressed as <mantissa> times ten
to the <exponent> power: "value=<mantissa>*10**<exponent>", so one could
express the brightness of some surface as 1.00*10^2 units (that is, one times
ten to the second power, or 100 units of brightness).
Now I want to code these numbers, to save storage space, so I allot a fixed
number of decimal digits to the mantissa and also to the exponent, and assume
that the decimal point is just after the first mantissa digit.
If I allot one digit to mantissa and two digits to exponent, the example
becomes "102", and this format can express anything from 9*10^99= ~10^100 units
of brightness to 1*10^0= 1 unit. This is an extraordinary total dynamic range,
but extended image areas with smoothly varying illumination will be severely
banded, because there are only ten possible levels near the average brightness.
The instantaneous dynamic range is only 10:1, far from sufficient.
A better allocation of digits would be two digits of mantissa and one digit of
exponent, but even this will show banding at edges between regions of different
exponent. As this argument is carried to completion, one ends up spending all
the digits on the mantissa and none on the exponent. In other words, one ends
up with an integer format.
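A tiny Python sketch of the same trade-off, using the decimal toy format above
(the brightness value of 500 is an arbitrary choice for illustration):

    import math

    # Toy decimal floating point as above: a fixed number of mantissa digits,
    # with the remaining digits spent on the exponent.  What drives banding is
    # the step size (smallest representable increment) near a given brightness.
    def step_near(value, mantissa_digits):
        exponent = math.floor(math.log10(value))         # which decade the value is in
        return 10 ** (exponent - (mantissa_digits - 1))  # one unit in the last mantissa place

    print(step_near(500, 1))   # 100 -> only ten levels per decade: severe banding
    print(step_near(500, 2))   # 10  -> finer, but still coarse where the exponent changes
    # A three-digit integer spends every digit on the "mantissa": step of 1,
    # but its total range collapses to 0..999.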
>If floating point imagery catches on soon, PowerPC will gain a HUGE advantage
>over Wintel, much like the advantage they currently enjoy in scientific
>simulation and visualization -- a market Apple currently owns, due to the
>floating point performance of PowerPC.
Hmm. If floating point does catch on, the next generation of Wintel will have
better floating point. Intel makes many billions of dollars on their
processors. They will not let something like this just slip by.
>All in all, it's an interesting time to be around. Whether one prefers Windows
>or not, I think most people are glad Apple is out there, nipping at Wintel's
>heels... :-)
Yes.
Joe Gwinn
< This message was delivered via the Olympus Mailing List >
< For questions, mailto:owner-olympus@xxxxxxxxxxxxxxx >
< Web Page: http://Zuiko.sls.bc.ca/swright/olympuslist.html >