

On September 1, 1983, the Soviet Union shot down Korean Air Flight 007, which had violated its airspace.
The USA was quick to insist that the Soviets had fired on the plane with full knowledge that it was a civilian airliner. But we didn't know that. From the introduction to Seymour Hersh's book on KE007, *The Target Is Destroyed*:

> Those [intelligence agents] who chose to talk to me did so out of a conviction that political abuse of communications intelligence has become a reality in the Reagan administration, and a belief that to protest to their superiors about it would be futile and damaging to their careers. Some of those interviewed did retire from intelligence service shortly after the events described in this book. In a few cases, the mishandling of Flight 007 played a role in their decision to get out.

The USSR in turn claimed the flight had been deliberately sent into Soviet airspace at the request of the USA. That assertion was also ahead of any available evidence.

Five years later, the USS *Vincennes* shot down Iran Air Flight 655.
Statements by US officials featured numerous untruths, e.g.:

- the *Vincennes* was in international waters at the time (it was in Iranian waters)
- Flight 655 was outside of a commercial air corridor (it wasn't)
- Flight 655 was descending (it was climbing)
When I first did photography, scaling an image was done by moving an enlarger head up and down, and mirroring an image was a matter of turning the negative over. In computer graphics, scaling and mirroring are mathematical transformations: the (x, y) coördinates of image elements are replaced by linear combinations of the original x and y. Matrix multiplication is a concise way to express linear combinations.

Sample transformation matrices and their effects on text:
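As an illustration (my own sketch, not code from the post, using NumPy for the arithmetic), here are a few such 2×2 matrices applied to a point, plus a check of the trick discussed next: three suitably chosen shears compose to a rotation.

```python
import numpy as np

# A few 2x2 transformation matrices and their effect on an (x, y) point.
scale  = np.array([[2.0, 0.0],
                   [0.0, 2.0]])   # enlarge by 2 in both directions
mirror = np.array([[-1.0, 0.0],
                   [ 0.0, 1.0]])  # flip left-to-right
shear  = np.array([[1.0, 0.5],
                   [0.0, 1.0]])   # slant: x picks up half of y

p = np.array([3.0, 4.0])
print(scale @ p)    # [6. 8.]
print(mirror @ p)   # [-3.  4.]
print(shear @ p)    # [5. 4.]

# A rotation assembled from three shears:
#   R(t) = shear_x(-tan(t/2)) @ shear_y(sin t) @ shear_x(-tan(t/2))
def shear_x(k): return np.array([[1.0, k], [0.0, 1.0]])
def shear_y(k): return np.array([[1.0, 0.0], [k, 1.0]])

t = np.radians(30)
three_shears = shear_x(-np.tan(t/2)) @ shear_y(np.sin(t)) @ shear_x(-np.tan(t/2))
rotation = np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])
print(np.allclose(three_shears, rotation))  # True
```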
A key feature of using matrices is that multiple transformations in sequence can be folded together by matrix multiplication. Sometimes a sequence of transformations turns out to be equivalent to a single familiar transformation. A carefully-chosen sequence of three shears is equivalent to rotation, good to know if for some reason you must do arbitrary rotations with a feeble program like Micros‑‑t Paint (which only rotates by multiples of 90° but which does know how to shear).

But matrices aren't just for graphics. The Fibonacci sequence (0, 1, 1, 2, 3, 5, 8, ...) has the recurrence relation Fₙ = Fₙ₋₁ + Fₙ₋₂,
concisely expressible by powers of a 2×2 matrix:

    ( 0  1 )ⁿ     ( Fₙ₋₁  Fₙ   )
    ( 1  1 )   =  ( Fₙ    Fₙ₊₁ )
(That this is correct for whole values of n is easily shown by induction.) Raising a matrix to a power by repeated matrix multiplication is a slow process, but in this case there's a trick loosely akin to the rotation-by-three-shears maneuver. There's an advantageous way to decompose this ( 0 1 ; 1 1 ) matrix into three factors. In the language of graphical transformations, the three components are a rotation (by about 211.7°), then a scale-with-reflection, then the inverse of the original rotation. Denoting the appropriate rotation matrix by R:

Pairs of inverses R⁻¹ and R cancel out when raising the ( 0 1 ; 1 1 ) matrix to powers, and raising a diagonal matrix to a power is as simple as raising each element to the desired power. (See here if that wasn't clear.) That is,
which gives the basis for a closed‑form expression for the nth Fibonacci number.
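Decomposition aside, the matrix-power route itself is easy to sketch in Python (my code, not the post's); square-and-multiply keeps the number of matrix multiplications logarithmic in n, and plain integers keep the results exact:

```python
def mat_mult(a, b):
    # 2x2 integer matrix product
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_pow(m, n):
    # square-and-multiply: O(log n) matrix multiplications
    result = [[1, 0], [0, 1]]  # identity
    while n:
        if n & 1:
            result = mat_mult(result, m)
        m = mat_mult(m, m)
        n >>= 1
    return result

def fib(n):
    # (0 1; 1 1)**n has F(n) in its off-diagonal corners
    return mat_pow([[0, 1], [1, 1]], n)[0][1]

print([fib(n) for n in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
print(fib(100))                    # 354224848179261915075
```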
And because raising a matrix to non‑integral powers is even more fun than doing arbitrary rotations in Paint, why not calculate Fₙ for continuous real values of n? The negative number −0.618... in the diagonal matrix means the results are complex, but that just adds to the flavor.
Here's Fₙ plotted on the complex plane as n varies from 0 to 7. The curve intersects the real axis at Fibonacci numbers 0, 1, 1, 2, 3, 5, 8, 13.
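The diagonal entries in the decomposition are φ = 1.618... and ψ = −0.618..., which makes the closed form Binet's formula. A Python sketch (my own, using the principal branch that Python's complex power takes) evaluates it at integer and non-integer n alike:

```python
SQRT5 = 5 ** 0.5
PHI = (1 + SQRT5) / 2   # 1.618...
PSI = (1 - SQRT5) / 2   # -0.618..., the troublemaker on the diagonal

def fib(n):
    # Binet's formula.  For non-integer n, PSI**n follows the principal
    # complex branch, so the result wanders off the real axis.
    return (complex(PHI) ** n - complex(PSI) ** n) / SQRT5

print([round(fib(k).real) for k in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
print(fib(2.5))  # a genuinely complex value between F(2) and F(3)
```

At whole n the imaginary part vanishes and the familiar integers come back; in between, the curve loops through the complex plane as in the plot above.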
Thanks to everyone whose pages are linked to above and to Ron Knott, on whose page I first saw Fₙ evaluated for continuous values of n.
Philosophy is to science as pornography is to sex: it is cheaper, easier, and some people prefer it.

Since I was a teenager, lightbulbs sold in the USA have listed wattage (power consumed) and lumens (light produced) on the package. Even though most consumers don't know what a lumen is, the numbers can be compared and you can buy the more efficient bulb if you're so inclined.

A lightbulb's efficiency can also be expressed as a percentage: a unitless number, immediately comprehensible—but unheard of on lightbulb packaging or in most discussions of the relative merits of, say, incandescents and fluorescents. A cynic might say that bulb packages don't quote efficiency percentages because they're embarrassingly low (wanna guess how efficient an incandescent bulb is?). But to be fair, lumens take the color-dependent sensitivity of the eye into account and thus tell you something that an efficiency percentage doesn't.

In a trivial sense, every electrical device is 100% efficient. Any energy consumed comes out in one form or another. The question is, how much of the energy consumed goes toward the desired purpose and how much comes out in an undesired form (usually heat). To gauge efficiency as a percentage, the desired output must be
expressible in terms of energy. That's straightforward for a lightbulb
(photons embody energy) or a motor (mechanical energy is easily calculated).
But what about, say, a computer? Computers throw off a
lot of heat—some more than others, making it tempting to deem
one computer more efficient than another. But the desired work
of a computer is calculation; how to measure that in terms of energy?
Is there a theoretical minimum amount of energy needed to add two numbers?
The answer is deliciously strange. Calculation inherently takes energy to the extent that the result throws away some of the information embodied in the input. If a circuit adds two numbers and outputs only the sum, the output contains less information than the input. If the sum is 19, you have lost the distinction between having been asked to add 17+2 or 0+19. Writing a bit to a computer memory discards whatever value the memory previously contained and thus can't be done without expending energy. The theoretical minimum cost of changing one bit is k T ln 2,
where k is the Boltzmann constant (about
1.38×10⁻²³
joules per degree Kelvin), T is the temperature of the circuit
(Kelvin, natch) and ln 2 is the natural log of 2.
Real-world computers use much much more than that.
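Plugged into Python (my own back-of-envelope, using the exact 2019 SI value of k and an assumed circuit temperature of 300 K):

```python
import math

# Landauer limit: erasing one bit costs at least k*T*ln(2) joules.
k = 1.380649e-23   # Boltzmann constant, J/K (exact since the 2019 SI)
T = 300.0          # assumed circuit temperature, kelvin

limit = k * T * math.log(2)
print(limit)       # about 2.87e-21 joules per bit
```

Real hardware spends many orders of magnitude more than this on every bit it writes.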
Although the minimum energy inherent in calculation has some theoretical interest, there isn't much sense in expressing computer efficiency as a percentage.

But back to lighting. Y'all took a guess as to how efficient a lightbulb is, yes? A typical 100W incandescent generates about two watts of light, for an efficiency of about 2%.

Frogs across the street from my house this evening.