Letters

Dr. Dobb's Journal March 2000

Larry Ellison's Jet

Dear DDJ,

I am probably the least likely guy to defend Larry Ellison, but the issue is not as simple as Jonathan Erickson portrays it in his January 2000 "Editorial." Ellison's state-of-the-art jet is far quieter than smaller jets that are allowed to land late at night. The San Jose ordinance uses aircraft weight to determine which jets are too noisy. The city is unwilling to consider switching the ordinance to a noise measurement because it may not have the authority to regulate aircraft noise at all, and it doesn't want to risk losing control of the matter entirely. As commercial and private airplanes get quieter and quieter, the pressure will build for a sensible policy. The city now faces a dilemma: if it prosecutes Ellison, a court may find that the municipality lacks the authority; but if it doesn't prosecute him, the ordinance is useless.

Michael Patten

MPatten@torrex.com

Jonathan Responds: In early January, Larry Ellison filed a lawsuit against the City of San Jose for the right to land and take off at night.

The True Name of the Singularity

Dear DDJ,

In his December 1999 "Programming Paradigms" column, Michael Swaine defines "Singularity" as referring to the rushing pace of change, the asymptotic graphs of processing power and such, and the resulting "massive failure in the art of predicting the future." That's not the historical meaning of the singularity, although nowadays it's often used that way. There's a fascinating story about how "singularity" started by describing a mathematical function going to infinity, then the center of a black hole, then a breakdown in our ability to understand the future, and then infinity again.

The first usage of the term "singularity" in the domain of futurism was by Vernor Vinge, referring to the difficulty of understanding a future [that] contains beings more intelligent than the author. "Here I had tried a straightforward extrapolation of technology, and found myself precipitated over an abyss. It's a problem we face every time we consider the creation of intelligences greater than our own. When this happens, human history will have reached a kind of singularity--a place where extrapolation breaks down and new models must be applied--and the world will pass beyond our understanding" (True Names and Other Dangers). The analogy was to the singularity at the center of a black hole, where our models of the laws of physics break down; in turn, the center of a black hole is called a "singularity" because of the asymptotically infinite gravitational forces, producing a discontinuity in the curvature of space.

Once Vernor Vinge invented the term, others tried to calculate the advent of the Singularity. The most famous projection matched Moore's Law to an estimate of the raw processing power required by human intelligence, coming up with an estimate of 2035 (now obsolete, on both counts). Other famous graphs included the time when materials science would reach the level of individual atoms, alleged to be 2040 (yeah, right! Have you read the news lately?), and the time when human lifespan would begin increasing at a rate of more than one year per year.

Meanwhile, others were asking what would happen after the Singularity. As Vernor Vinge's Hugo-winner A Fire Upon the Deep shows in the opening pages, intelligence-enhanced minds are more effective at enhancing intelligence. Some, including myself, half-seriously speculated that the function goes to infinity. Consider: If computing speeds double every two years, what happens when computer-based AIs are doing the research? Two years...one year...six months...three months...
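The arithmetic behind that shrinking sequence can be sketched in a few lines. This is a toy model of the letter's argument, not anyone's actual projection; the function name and the two-year starting interval are illustrative assumptions:

```python
# Toy model of the accelerating-doubling argument: if each doubling of
# computing speed also doubles the speed of the research producing the
# next doubling, the interval between doublings halves each time.
# The intervals form a geometric series (2 + 1 + 0.5 + ...) whose sum
# is finite, so infinitely many doublings fit into a finite span.

def years_until_singularity(first_interval=2.0, doublings=50):
    """Sum the shrinking intervals between successive doublings."""
    total = 0.0
    interval = first_interval
    for _ in range(doublings):
        total += interval
        interval /= 2.0  # the research itself now runs twice as fast
    return total

print(years_until_singularity())  # converges toward 2 / (1 - 1/2) = 4 years
```

The point of the sketch is only that the series converges: under this (admittedly cartoonish) assumption, the whole infinite cascade of doublings completes in a finite number of years.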

The graph-projection and post-Singularity-infinity concepts then sort of heterodyned to form the concept of those futuristic graphs, which happen to go to infinity on any grounds whatsoever. World population fits a hyperbolic curve much better than an exponential one, going to infinity in 2029. This has nothing to do with intelligence enhancement, but, being a mathematical singularity, a futuristic graph projection, a discontinuity, and an infinity, it was easily confused with the Vingean Singularity. And thus it all came full circle. But what Singularity really means is the rise of greater-than-human intelligence, and the breakdown of our predictions as a result. For more information, see: http://www.aleph.se/Trans/Global/Singularity/, http://www.student.nada.kth.se/~nv89-nun/offloading/vinge, or http://pobox.com/~sentience/singularity.html.

Eliezer S. Yudkowsky,

sentience@pobox.com

The Sixth Sense

Dear DDJ,

When I read Al Stevens' column "Teaching C++ for the Sixth Time" (DDJ, November 1999), I rapidly lost interest because of the now-so-common cute-story-before-the-good-stuff approach to writing articles. However, because I am a grandpa, too, I thought I would continue reading. And I am glad I did, because besides a good article, he revealed one of the reasons why technical books can be so mediocre: today's "new and improved" publishers' perception of what a book should be to entice a potential reader to buy. As a rampant buyer of C/C++ books that have ranged from good to terrible, I thought someone might benefit from my buying experiences.

I do not believe most readers (me included) are very adept at evaluating the quality of a book's technical content. Rather, it is the perception that is important, as the publisher rightfully understands. Is that bad? It is if the technical content is bad. When I buy a book, it is because I don't know something, and I want to learn about it. Therefore, if I am ignorant of C++, how well can I evaluate a 1000+ page book on the subject? Not very well. The point at which I know I have made a good purchase is well into reading the book and programming its examples. Yes, by then it is too late.

Let me discuss the concept of perception further, using IDG Books as an example. IDG Books has the approach a lot of publishers seem to have -- have the mostest firstest. Windows 98 came out. I needed to program it in a hurry (ha!) and thus bought the first book I found: Windows 98 Programming Bible published by IDG. It had a nice cover, the table of contents covered everything I thought I would ever need to know (some of the subjects I didn't have a clue about), 1000+ pages, MFC-based, and a CD-ROM. How could this be a wrong purchase? It turned out the purchase wasn't wrong because I learned a lot from the book. But there were two major problems. First, there was no way this book could cover the subjects it purported to cover (my ignorant and unrealistic expectation). Second, it had many mistakes, which I attribute to lack of quality control. I quit counting at 100. The great savior was the accompanying CD-ROM. Most of the code worked flawlessly.

I can understand why IDG likes the For Dummies... concept. For one thing, it is a cute marketing ploy. Who doesn't think he or she is a dummy when first learning a subject? However, after you tire of the gimmick, the approach has another attraction--but it requires the book to be a good one. For example, I once bought a book entitled Learn REXX In 21 Days. It was among the best books I have read describing an interpretive language. I then concluded that every "...In 21 Days" book was probably good because the publisher was good. When I was interested in buying a book, I would first look for that line of books. Well, that did not work too well. Same publisher, but a different author.

So I have some recommendations for buying programming books:

These recommendations are limited to my experience, but I hope they will be helpful. I would be interested in others sharing their technical-book-buying experiences.

Larry Sollman

lcsollman@earthlink.net

DDJ