LETTERS

The History of Computers

Dear DDJ,

Michael Swaine's amusing "Programming Paradigms" column (DDJ, March 1996) made a major omission: John Von Neumann's contribution was not even mentioned. This year also marks the 50th anniversary of the three-volume "Preliminary Report" that delineated in detail the architecture, algorithms, and programming methods prior to the building of the computer at the Institute for Advanced Study (IAS). This was the first stored-program computer using random-access memory and a word-parallel ALU. One of the coauthors was Herman Goldstine, who was also the lieutenant who represented the Army in the development of the ENIAC, which was idled after a copy of the IAS machine was built at the Aberdeen Proving Ground. There was even a copy built at the Weizmann Institute. The machine was last seen during the '80s in the Smithsonian Institution's American History section. The electrical and physical design methods were the forerunners of today's design disciplines. Most people ignore Von Neumann but can remember other inhabitants of the IAS at that time, such as Robert the bomb man and Albert with the mad hair. Software persons ought to know that volume 2, on programming, contained the flow-diagram approach to describing software design. NCC, in 1975, did honor the IAS Computer Project. ACM chose to ignore this important contribution and missed the opportunity of a dual celebration with ACM's 50th anniversary in Philadelphia, so close to Princeton.

Sy Wong

sywong@markv.com

Dear DDJ,

I'm no expert in the history of computing, but Michael Swaine's remarks in "Programming Paradigms" (DDJ, March 1996) about the "Real Inventor" of the automatic-digital computer struck me as fatuous. Since when does "official computer history" (whatever that is) rely on the results of judicial proceedings rather than the best historical scholarship? Given the mixed record of the judiciary in demonstrating a grasp of basic software/hardware concepts, I shudder at the thought.

Michael simplistically confers the worldwide "real inventor" title on Konrad Zuse. As far as I can tell (my sources are various History of Computing web pages), Zuse created one of the first punched-tape, electromechanical calculators; unlike the ENIAC, it did not even have conditional branching. Of course, neither was a general-purpose, stored-program computer in the modern sense. (I, for one, wouldn't go out of my way to honor Zuse, who did his best to interest the Nazi war machine in his efforts.) I suspect that respectable scholarship would tell a more-nuanced story, given the many smart people working toward the modern concept of the computer during and after WW II. Having just read Andrew Hodges' biography of Alan Turing, in particular, I am astounded at Turing's clarity of vision in conceiving of a stored-program, general-purpose electronic computer at a time when most others could only imagine the usefulness of a super calculator.

As long as I'm raving, let me (humbly) suggest the reason Al Stevens' prediction in the same issue about 3-D programming tools is wrong: We already program in so many more dimensions than three that the move from two to three is irrelevant. A program of any complexity "projects" its multiple dimensions onto the "plane" of some programming language; it's not at all obvious why projecting onto a 3-plane instead of a 2-plane is going to make programming generally more intuitive.

Phil Mitchell

Jamaica Plain, Massachusetts

neocor@world.std.com

Compact Logarithm Algorithms

Dear DDJ,

In his article, "A Compact Logarithm Algorithm" (DDJ, March 1996), John DeVos describes an approximation to log2(x) using interpolation.

Here's another useful solution to the same problem, one that is more general and easily extended to many other functions. It is based on using a Padé approximation. This has the magical property of converting a poorly converging power series into a (usually) much better behaved rational polynomial.

The general solution is a little more complex, but an approximation using three terms will do for this case.

Given the first three terms of a truncated power series, f(x) = a*x + b*x^2 + c*x^3 + ..., the equivalent Padé approximation can be derived as P{f(x)} = x*(a*b + (b^2 - a*c)*x)/(b - c*x). This rational expression exactly matches the known coefficients for x, x^2, and x^3, but generally has much better convergence. The problem of interpolating the logarithm function is equivalent to requiring the value of ln(1+x) in the range 0 <= x <= 1. The common series for this has very poor convergence; see Example 1(a). Hence the Padé approximation is as shown in Example 1(b). This is already a good and fast approximation to ln(1+x), and in many applications, like real-time displays, it can be used as a basis for logarithmic scaling with minor modifications for integer arithmetic. To keep this letter a reasonable length, I won't detail the changes that allow for proper truncation in a scaled-integer version.
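As a concrete illustration, the rational form of Example 1(b) is trivial to code; the function name here is my own, not from the article:

```c
#include <math.h>

/* Pade approximation to ln(1+x) for 0 <= x <= 1, built from the
 * first three series terms a = 1, b = -1/2, c = 1/3 via the general
 * form x*(a*b + (b*b - a*c)*x)/(b - c*x), which simplifies to the
 * expression below (Example 1(b)). Worst case is at x = 1, where it
 * returns 0.7 versus ln 2 = 0.693147...                            */
double pade_ln1p(double x)
{
    return x * (6.0 + x) / (6.0 + 4.0 * x);
}
```

Two multiplies-adds and one divide per evaluation, versus many terms of the raw series for comparable accuracy.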

Where greater accuracy is required, the expression can be optimized by least-squares fitting of the coefficients over the range 0-1. This yields Example 1(c). This is about as good as you can get with this simple formula, and these coefficients may be scaled up to suitable integers for use.

Much higher accuracy is possible by starting from a better series: Define Example 1(d), then Example 1(e), from which the Padé approximation yields Example 1(f).
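Coded directly (again, the naming is mine), the substitution of Example 1(d) followed by the rational form of Example 1(f) looks like this:

```c
#include <math.h>

/* Higher-accuracy ln(1+x) for 0 <= x <= 1: substitute y = x/(2+x)
 * (Example 1(d)), then apply the Pade form 2y*(15 - 4y^2)/(15 - 9y^2)
 * (Example 1(f)). At x = 1 this gives 0.693122 versus
 * ln 2 = 0.693147, an error of about -2.5e-5.                       */
double pade_ln1p_hi(double x)
{
    double y  = x / (2.0 + x);
    double y2 = y * y;
    return 2.0 * y * (15.0 - 4.0 * y2) / (15.0 - 9.0 * y2);
}
```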

Again, this expression also can have its coefficients tweaked to improve accuracy still further. A similar approach, combined with bit shifting, yields a method for quickly calculating approximate square roots, based on Example 1(g).
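A sketch of the square-root case; the bit-shifting range reduction (sqrt(4z) = 2*sqrt(z)) is assumed rather than shown, and the function name is my own:

```c
#include <math.h>

/* Pade approximation to sqrt(1+x) from Example 1(g). The argument
 * is assumed already range-reduced to 0 <= x <= 1 by shifting out
 * pairs of bits (each factor of 4 in the argument is a factor of 2
 * in the result). At x = 1 this gives 1.4 versus sqrt(2) = 1.41421. */
double pade_sqrt1p(double x)
{
    return (4.0 + 3.0 * x) / (4.0 + x);
}
```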

Fast rational expressions also can be derived for sin, cos, tan, and their inverses, with rms accuracies typically around 0.03 percent. Restricting arguments to 0 < x < π/4 gives an rms error < 0.003 percent. The series for exp(x) converges too well to gain from Padé approximation.

Longer versions of rational polynomials are frequently used for the computation of transcendental functions on larger machines. It is worth considering using them on microcontrollers, too, because they are a good compromise in accuracy, size, and speed.

Martin Brown

East Rounton, England

martin@nezumi.demon.co.uk

Dear DDJ,

John K. DeVos' article, "A Compact Logarithm Algorithm" (DDJ, March 1996) reminded me of a method I worked out long ago to compute base 10 logarithms on my very first calculator. Sure enough, checking the method showed that it works just as well, if not better, in base 2. Within the limits of your floating-point representation, this method lets you control exactly how many bits of accuracy you want to get out of it, and requires just one floating-point multiplication per desired result bit; see Example 2.

Normalizing the initial x to between 1 and 2 is easy enough; depending on the way the number is represented, it will involve either shifting bits or manipulating an exponent. Similarly, division by 2 is quick and easy. This algorithm should prove a winner in embedded systems where code space is limited.
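The method of Example 2, plus the normalization step just described, can be packaged as one complete C function; here the normalization uses the standard frexp() for illustration, and the function name is an invention of mine:

```c
#include <math.h>

/* Bit-at-a-time log2(x) for any x > 0. frexp() splits x into
 * m * 2^e with 0.5 <= m < 1; after renormalizing m to [1, 2), the
 * loop extracts one fractional result bit per squaring, exactly as
 * in Example 2.                                                    */
double log2_squaring(double x, int result_bits)
{
    int e;
    double m = frexp(x, &e);     /* x = m * 2^e, 0.5 <= m < 1 */
    m *= 2.0;                    /* renormalize to 1 <= m < 2  */
    double y = e - 1;            /* integer part of the log    */
    double s = 1.0;              /* weight of the next bit     */
    for (int n = result_bits; n--; ) {
        m *= m;                  /* squaring m doubles its log */
        s /= 2.0;
        if (m >= 2.0) {          /* log passed 1: emit a 1 bit */
            y += s;
            m /= 2.0;            /* renormalize m to [1, 2)    */
        }
    }
    return y;
}
```

In a real embedded system the frexp() call would be replaced by direct exponent-field or bit-shift manipulation, as the letter suggests.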

I don't expect to become rich or famous, but if any readers use and like this algorithm, I would like to hear from them!

Carl Smotricz

100015.2411@compuserve.com

Knuth Kudos

Dear DDJ,

Thanks for the wonderful interview with Donald Knuth in your April 1996 issue! He, as much as anyone, has made computing what it is today and what it will be in the future. I, too, have his books over my shoulder.

He believes that programmers should treat software as if it is literature, primarily to be read by other humans. I must respectfully go beyond this. All programs that I have seen are living documents, constantly being modified to meet new requirements. I think the modifiability of software is the more-important property. Readability does not automatically enhance modifiability; in fact, it can make it more difficult. I think the keys to making software modifiable are: 1. For foreseeable types of modifications, minimize the number of editing steps required to perform them; and 2. Structure the comments and code so as to teach the reader how to perform these changes. Modifications that are not fully carried out are bugs. This may sound obvious, but I think the implications are far reaching.

Michael R. Dunlavey

Needham, Massachusetts

miked@mga.com

Got Milk?

Dear DDJ,

Thanks to Murray Lesser for his letter (DDJ, March 1996) confirming the accuracy of my calendar for January 4713 BC. He does, however, open up another door by referring to the equivalent Gregorian calendar as "invalid."

If the criterion is that the Gregorian calendar was not used in 4713 BC, then the Julian also is invalid even if we tack "proleptic" onto it. In fact, even the notation "4713 BC" is invalid, for one can be quite sure there was no calendar, of any persuasion, at that time that proclaimed "4713 BC" to be the year.

On the other hand, if we think of a calendar as an algorithm (like a recipe for brownies) instead of strictly a way to mark the passage of the days in real time, then use of the Gregorian algorithm for that date, by us today, should hardly be considered invalid. Indeed, Pope Gregory XIII insisted that his new calendar match up with the Julian in the third century, an attitude that is consistent with the brownie-recipe idea.

To complete the cycle of letters, then, one would like to know what the extrapolated Gregorian date would be for 1 January 4713 BC Julian Proleptic. I'm working on that and will get back to you. I suspect Murray already has the answer.

Homer B. Tilton

Tucson, Arizona

Dear DDJ,

In response to Homer B. Tilton's letter (DDJ, January 1996), the Gregorian calendar algorithm was published by the German mathematician Zeller in Acta Mathematica in the second half of the 19th century. It is possible to demonstrate it with this simple Basic program: N=31: M=12: Y=1997: FOR J=1 TO N: T=(2*M+INT(3*(M+1)/5)+Y+INT(Y/4)-INT(Y/100)+INT(Y/400))+2: D=(T MOD 7)-1+J: LOCATE INT(D/7)+2,(5.3*(D MOD 7)+3): ?J: NEXT

Every term in the expression T has its precise meaning:

Y adds a day every year (365 MOD 7=1).

INT(Y/4) takes into account the leap years, and adds a day every four years; INT(Y/100)-INT(Y/400) is subtracted according to the Gregorian reform of the calendar: xy00 is not a leap year unless xy is divisible by 4. So this term subtracts a day every 100 years, adding another one every 400.

2*M+INT(3*(M+1)/5) makes sense only if we consider January and February to be the 13th and 14th months of the previous year. It is introduced because the months have different lengths. For M running from 3 to 14 (from March to February of the following year), it produces (MOD 7) the sequence we could call S(M): 1 4 6 2 4 0 3 5 1 3 6 2. This is correct: in fact, (S(M+1)-S(M)+7) MOD 7 is always equal to the number of days in the Mth month, MOD 7. This means that if March 1 is, for example, a Sunday, then April 1 falls (4-1+7) MOD 7 = 3 days later in the week; that is to say, on a Wednesday. Notice that this is the only empirically derived term in the formula.

2 is added only to make October 15, 1582 (the day of the beginning of the Gregorian reform) a Friday. Zeller's original formula gave (T+J) MOD 7=0 as the result for Saturdays. The final subtraction of 1 was inserted by me only to make the calendar begin not with Saturday, but with Sunday, as I suppose Tilton's does.

Now it is clear that the only adaptation needed for the Julian calendar is the elimination of the last three terms in the expression for T (more precisely, -INT(Y/100)+INT(Y/400)+2). A simple calculation with the new formula reveals that October 4, 1582, the last day before the famous omission of 10 days, was a Thursday, as expected.
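For readers who prefer C to Basic, here is a sketch of both variants (the function names and packaging are mine); with the calendar's Sunday-first convention, 0 means Sunday and 6 means Saturday:

```c
/* Day of week from Zeller's congruence, Sunday = 0 .. Saturday = 6.
 * January and February are treated as months 13 and 14 of the
 * previous year, as required by the 2*M + (3*(M+1))/5 term.        */
static void shift_month(int *y, int *m)
{
    if (*m < 3) { *m += 12; *y -= 1; }
}

int gregorian_dow(int y, int m, int day)
{
    shift_month(&y, &m);
    int t = 2*m + (3*(m + 1))/5 + y + y/4 - y/100 + y/400 + 2;
    return (t + day - 1) % 7;
}

/* Julian variant: drop the -INT(Y/100)+INT(Y/400)+2 terms. */
int julian_dow(int y, int m, int day)
{
    shift_month(&y, &m);
    int t = 2*m + (3*(m + 1))/5 + y + y/4;
    return (t + day - 1) % 7;
}
```

As claimed above, gregorian_dow(1582, 10, 15) yields 5 (Friday) and julian_dow(1582, 10, 4) yields 4 (Thursday).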

Tilton's proposed algorithms are both erroneous, for they do not use the integer parts of the divisions. The fractional parts of the terms have no meaning and must be discarded: (a+b) MOD x = (a MOD x + b MOD x) MOD x holds in general only if a and b are integers. If they are not, the sum of the fractional parts can exceed 1 and affect the result. Sometimes the wrong terms give a correct result, as happens from March 1996 to February 1997. Nevertheless, it is sufficient to test the algorithm with M=13 and Y=1995, or with M=7 and Y=1997, to see that the output calendar is wrong. In the end, I want to point out that the Julian calendar repeats itself every 28 years, not only every 700 years.

Lanfranco Salinari

Via R. Valentino, 6/D

Castellaneta (TA), Italy

Intel v. Schwartz

Dear DDJ,

I would like to take this opportunity to commend you on your editorial in the March 1996 issue of DDJ. All of us who have been following Randal's plight realize the substantial risk and courage it took to take such a stand. I just want you to know that it is that kind of advocacy for the front-line programmers that endears your magazine to us. On a personal note, I often buy your magazine on the newsstand. Your stand, however, makes me want to say thank you. Therefore, you may count me as a new subscriber. I know that my one subscription will do very little to offset any ad revenue loss you may incur from Intel, but every little bit helps.

Dave Rensin

drensin@noblestar.com

Example 1: Compact logarithm algorithms.

(a)

ln( 1+x ) =  x - x^2/2 + x^3/3 - ...
     ln(1) = 0,     ln(2) = 0.833333     max error = 0.14 (20%)

(b)

P{ ln( 1+x ) }  =  x*(6+x)/(6+4x)
     ln(1) = 0,     ln(2) = 0.7     max error = 0.00685 (<1%)
               rms error = 0.00258

(c)

P'{ ln( 1+x ) } =  x*(6 + 0.7662x)/(5.9897 + 3.7658x)
     ln(1) = 0,     ln(2) = 0.69358     max error = 4.3E-4 (<0.1%)
               rms error = 1.5E-4

(d) 

y = x/(2+x) 

(e) 

ln( (1+y)/(1-y) ) = 2y + 2y^3/3 + 2y^5/5 + ...
     ln(1) = 0,     ln(2) = 0.69300     max error = -0.00014

(f)

P{ ln( (1+y)/(1-y) ) } =  2y*(15 - 4y^2)/(15 - 9y^2)
     ln(1) = 0,     ln(2) = 0.693122     max error = -0.000025

(g)

P{ sqrt(1+x) } = (4 + 3x)/(4 + x)

Example 2: Compact logarithm algorithm

/*** y = log2(x) for 1 <= x < 2 ***/
#define RESULT_BITS 40         // desired accuracy
double s = 1.0;                // weight of the next result bit
double y = 0.0;                // accumulates the result
int n;
for (n = RESULT_BITS; n--; ) {
    x *= x;                    // squaring x doubles its log
    s /= 2.0;
    if (x >= 2.0) {            // the log has passed 1: emit a 1 bit
        y += s;
        x /= 2.0;              // renormalize x to [1, 2)
    }
}
