Dr. Dobb's Journal June 1997
Michael, DDJ's editor-at-large, can be contacted at mswaine@cruzio.com.

Almost everything I discuss this month -- and I touch upon a bunch of issues, from artificial intelligence to Apple's future -- is left unresolved, with major questions yet to be answered. I apologize for that, but the theme of the column this month is a certain kind of transitional moment, and transitions are often short on final answers.
At a certain point in a game against the chess program Deep Blue, Grandmaster Gary Kasparov "got a huge chill." He believed that at that moment he had, for the first time, caught a glimpse of alien thought processes.
You scoff? I can't say that I blame you. The current thinking in artificial-intelligence circles seems to be that the narrow, specialized processing exhibited by modern chess-playing programs is exactly the wrong way to go if you want to simulate -- or understand, or create -- intelligence in a machine. So even if you think that there will one day be some sense in associating the term "intelligence" with a machine, you probably wouldn't expect that machine to be one like Deep Blue. I do happen to believe that machine intelligence is possible, and I suspect that it will strike us as pretty alien when we do see it, but no, I wouldn't look for it in a chess program.
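To see what that "narrow, specialized processing" amounts to, here is a minimal sketch -- emphatically not Deep Blue's code, which ran custom search hardware -- of the brute-force game-tree search at the heart of every chess program: negamax with alpha-beta pruning, applied to a toy take-away game so the whole thing fits on the page. All the names here are my own illustrative choices.

```python
def moves(pile):
    """Legal moves in the toy game: take 1, 2, or 3 stones."""
    return [t for t in (1, 2, 3) if t <= pile]

def negamax(pile, alpha=-1, beta=1):
    """Return +1 if the player to move can force a win, -1 otherwise.
    Whoever takes the last stone wins."""
    if pile == 0:
        return -1  # no stones left: the previous player just won
    for take in moves(pile):
        score = -negamax(pile - take, -beta, -alpha)
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # alpha-beta cutoff: remaining moves can't matter
    return alpha

# Piles that are multiples of 4 are losses for the player to move.
print(negamax(21))  # 1: the first player can force a win
```

Swap in chess positions for stone piles, add an evaluation function to cut the search off at some depth, and run it on special-purpose hardware, and you have the general shape of the thing. It is exhaustive and utterly specialized, which is exactly the AI critics' point.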
Not only that, how exactly would you recognize an alien thought process? How would you distinguish it from, say, an unfamiliar heuristic? Or from a nonalien thought process that is simply looking at the problem in a novel way? For that matter, what does it mean to recognize a thought process at all, alien or domestic? This seems like an area where there's a lot of opportunity to kid yourself.
So I guess I'd characterize my attitude regarding Kasparov's reported alien encounter as Deep Skepticism. On the other hand, if artificial intelligence is realizable, one of these days someone is going to report such an "alien" encounter and be telling the truth. So I guess I'd also have to say that I consider it just remotely possible that Kasparov did get an early glimpse of something that will only later be recognized as the moment the paradigm shifted.
A paradigm shift, as I'm using the term this month, is a watershed transformation, in which the rules of the game, and maybe the whole point of the game, change. Like the Copernican revolution, the invention of the personal computer, or having someone give you three cats.
The exact moment of that shift is a sort of jiu-jitsu instant, when someone or something uses the momentum of events to tip a delicate balance. It's often brought about by a shift in perception, as when John Dalton described color blindness in 1794. Until then, color blindness was an unrecognized phenomenon. It's at least interesting, and potentially useful, to try to identify these critical moments.
The moment the paradigm shifts is not necessarily the same as the occurrence of the event that causes the shift. The Cold War may have had many causes, but it can be argued that the paradigm actually shifted from a state of assorted international tensions to something new at the precise moment that Winston Churchill made his famous remark about an iron curtain descending across Europe. The social changes collectively known as the '60s may have had their roots in a decades-long American involvement in Southeast Asia, rising expectations in the African-American community, cyclic patterns of generational politics, and advances in pharmacology, but the moment the paradigm shifted was arguably at the death of John Kennedy.
Okay, I'm no historian. Let's consider programming languages.
What constitutes a paradigm shift, or a paradigm for that matter, is open to interpretation, and I've certainly been loose with the definition in this column before. But paradigms at least have something to do with the unchallenged assumptions and basic tools of a profession or other domain. So, while the move to object-oriented programming is definitely a paradigm shift, I don't think it's too great a stretch to say that a shift in the dominant programming language is also a paradigm shift in programming. I think we can clearly identify two such "linguistic preference" paradigm shifts in commercial application programming languages in the past 20 years.
(Contrary to expectations I may have set up, I am not about to identify the critical moment the paradigm shifted for each of these shifts. I will identify the shifts and then invite you to speculate about the critical moment. I'll also use this idea of a critical moment to talk about several other topics later in the column.)
In the late '70s, it seemed quite possible that Pascal might become the language of choice for most professional programmers. Structured programming was hot, and the only language in sight more ideally suited to the structured programming discipline than Pascal was Modula-2, which was really just Pascal, take two.
But it didn't happen. C became the dominant programming language, becoming so firmly entrenched that when the vogue shifted from structured programming to object-oriented programming, C was able to swallow up the new model, at least for a while, by spinning off an object-oriented variant, C++.
C probably owes its success to AT&T's efforts to get it into students' hands, and to the demonstrated superiority of its implementations over those of competitor languages for those students' needs when they actually got down to work. But what was the critical moment when C became the dominant programming paradigm? Was there one?
(As to C++, according to DDJ contributing editor Al Stevens, we probably owe its success to the abstruse Windows API and its event-driven, message-based programming model. In his April 1997 "C Programming" column, Al admits to being captivated by another dialect of C. The dialect? JavaScript. I think that this says something about how far a popular paradigm can be stretched.)
The booming popularity of Java (not JavaScript) looks like it may be a second major shift in programming language dominance in recent memory. But what was the critical moment when that shift took place? If you have an idea, drop me a line.
Why am I asking you to identify the critical moments for the shift to C and the shift to Java? It sounds like I'm trying to get you to do my job -- all right, I am trying to get you to do my job, but I think in this case it's appropriate. When a programming paradigm shift takes place -- whether it's something as mundane as a change in the dominant language or as monumental as the realization of true machine intelligence -- it seems likely that it will be someone deeply engaged with the software -- on some level -- who first perceives the shift. An Al Stevens, say, or a Gary Kasparov, or you.
Then again, maybe it'll be someone in an altered state of consciousness.
Francis Jeffrey had come down with a bad case of the flu. Bed-ridden, feverish, he began to hallucinate. In his delirious state, he saw pretty patterns of flashing neurons. The neurons were firing in patterns that reflected their relationships. Rather than simply passing signals passively, they were collecting and disseminating information about their patterns of connection: building collaborative network maps of their local territory and communicating these maps to remote parts of the brain. In effect, the neurons were a mechanism of implementation for some kind of introspection, and the whole fever-induced vision suggested that the main preoccupation of the brain at the lowest level is its own pattern of activity. Nerves, like people, are mainly concerned with conversing about their own relationships, is how his friend Jason Keehn puts it.
When he recovered, Jeffrey wondered if his vision could be an accurate glimpse of how the mind does work, not just at the lowest level, but at successively higher levels. He imagined encoded maps being themselves encoded in higher-level meta-maps. It all made some kind of sense, given a body of research suggesting that the brain is organized holographically: Much of its contents don't seem to be stored in any single place, but seem rather to be somehow distributed throughout the cerebral cortex.
Wow. Was he onto some important insight into the way the brain works? Well, at least he figured he was onto a new software paradigm. I should clarify that Jeffrey is not just some guy who had a bad case of the flu. He's written some interesting code before. In the early '70s he did impressive work in trying to model cognitive processing as a student at the University of California, Berkeley. Later he wrote programs for ILLIAC IV, the world's first parallel supercomputer. Then he worked at John Lilly's famous Human-Dolphin Foundation, where he designed a distributed system of small computers, and wrote the authorized biography of Lilly. For the past six years he has been working on something he calls "eLPHIN." It was developed on a NeXT machine and, as near as I can tell, it's a piece of software.
Okay, that's a little harsh, but Jeffrey's description of eLPHIN at http://elfnet1a.elfi.com/elfnet.html is a little hard to follow.
"Virtual neuronal networks" is Jeffrey's term, as opposed to neural networks. One chunk of it is eLPHIN, "a new highly interactive, concurrent, distributed, relational network technology that can rapidly and flexibly structure (and dynamically re-structure) interactive presentation for Internet-web and all other multi-media channels."
Is neuronal programming yet another wizzy paradigm (YAWP)? At this point, that's another of those open questions.
The future of free speech on the Internet has been an open question since two branches of the United States government got behind a brain-damaged proposal called the "Communications Decency Act." On March 19 of this year, the third branch got a chance to consider the CDA, and is expected to weigh in with its official opinion this summer, if not sooner.
But those who were present when the Supreme Court heard arguments regarding the CDA sent back encouraging reports. If, as seems to be the case, the Justices were swayed by the case presented against the CDA, this day may have been another of those moments when the paradigm shifts.
Bruce Ennis, the attorney who presented the case against the CDA, said in an online news conference immediately after the event, "more...tough questions [were asked] of the Department of Justice...lawyers than were asked of me," but acknowledged that, in general, "you can't tell too much from the questions." (DOJ was presenting the case for the CDA.) Justice Scalia as usual asked a lot of questions, hammering away at why it should be possible to display in cyberspace materials that could not be displayed to a minor in a bookstore or on radio or television. Ennis said that he took heart from a question about CGI scripts from Justice O'Connor, which he felt demonstrated that the Justices had done their homework. He was pretty sure that the Justices had little experience with or understanding of the Internet a month before, so a meaningful question about CGI scripts was evidence of some serious research.
If anything can be read from the questioning, the CDA is in trouble. Six of the Justices had tough questions for the DOJ lawyers, while only two (Chief Justice Rehnquist and Scalia) had many challenging questions for Ennis. Justice Clarence Thomas mainly looked bored.
An audio recording of the press conference was posted at http://www.talk.com/talk/wiredside/97/11/stuff/debriefing_1.ram, and you can find many documents related to the case at http://www.ciec.org/.
A critical moment has arrived for some Apple supporters, too. Die-hard Apple supporters (if you'll pardon the redundancy) have been ragging on me for my recent opinions re the Company for the Stressed of Us.
Shocking, I call it. Me, an Apple booster if there ever was one, a Mac user since the days when it took 20 minutes to copy a floppy, getting blasted for joining the anti-Apple chorus. I thought I was just being objective.
I must admit, though, that I may be at my own critical moment, where I must choose between being an Apple supporter and being a supporter of the Macintosh platform. It's no longer clear that the two loyalties are consistent.
Certainly the fortunes of Apple Computer and the rest of the Mac market are not in lockstep. In the last quarter of 1996, the total MacOS market grew by 9 percent, at the same time that Apple's lackluster holiday Performa sales led even Apple execs to say that Santa didn't visit Apple last year. (Apparently Apple is going to drop the Performa name.)
One developer who is forthright about the Apple versus Mac issue is Dave Winer, who says he's "in favor of anyone but Apple owning the Mac OS." It's not that he has anything against Apple Computer. (Well, actually Dave does have reason to resent Apple. Take the fact that he had staked out the Macintosh scripting territory with his product UserLand Frontier before Apple announced AppleScript. When AppleScript arrived, he was left trying to compete with a free product. Frontier is now given away free. It's an excellent product, better now than when it had a price tag, but not a revenue stream for UserLand, which in fact doesn't exist any more.) Winer has a plan. He imagines "a new privately held company" that would own the Mac OS and license it to clone vendors and Apple itself. "Engineers and webmasters, keeping the system current, fixing bugs, improving performance. Distribution of add-ons thru the net. Connections with the developer community, Netscape, Be, Apple and Microsoft." All that has to happen is for the interested people, Mac users and developers and clone makers, to get together and make a commitment, and convince Apple to sell them the MacOS outright.
After that, it would be smooth sailing to a high market cap and a quick, solid IPO, Winer figures. The company "could do $250 million in its first year and grow from there." Coooool.
Winer doesn't trust Apple with the MacOS because he doesn't believe that Apple management is really committed to the Mac. "At the top of Apple they talk of digging out by making Windows/Intel boxes," Winer says. "It's serious talk. People are resigning over this issue. No Mac software will run on these boxes. The wrong people own the Mac. They're going to kill it to try to save the company."
Publicly, Apple claims otherwise. At the February shareholders meeting, Amelio said,
At the same time we're building Rhapsody, we're continuing to develop System 7....Our next scheduled operating system release, this summer, is planned to deliver the largest single advancement in the OS since 1984....And updates are scheduled for early next year and beyond.
But is that what Steve Jobs has in mind? His influence continues to grow, he is a man of discontinuities and private agendas, and the top system software and hardware positions at Apple are currently occupied by his lieutenants, veterans of NeXT. Is he also facing the Apple-versus-Macintosh loyalty question, and if so, which side does he come down on?
Incidentally, Winer has another, considerably simpler plan for the Mac. No new company, no IPO, no negotiating with Apple for its OS. Just take over the menu bar.
Winer compares the Mac Finder's menu bar to the command-line user interface of the Good Old Days. "Remember how powerful those UNIX and DOS command-lines were?" Winer asks. They were powerful because they were open. You could add your own commands to those command lines. Winer has made use of his scripting technology to open the Finder's menu bar to power users. You can add your own commands to the Finder's menu bar.
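What "open" buys you is just extensible dispatch: a table of verbs that anyone can add to at run time. Here is a toy sketch of the idea -- not Frontier, and every name in it is my own invention -- showing how a user-extensible command line (or menu bar) works underneath.

```python
commands = {}

def command(name):
    """Decorator that registers a function under a command name,
    the way a user might add a verb to a shell or a menu bar."""
    def register(fn):
        commands[name] = fn
        return fn
    return register

@command("greet")
def greet(who):
    return f"hello, {who}"

# A "power user" adds a command of their own -- no rebuild required.
commands["shout"] = lambda text: text.upper()

def run(line):
    """Dispatch 'verb argument' the way a command line would."""
    verb, _, arg = line.partition(" ")
    return commands[verb](arg)

print(run("greet world"))       # hello, world
print(run("shout make it so"))  # MAKE IT SO
```

The platform question is simply who controls the table. A closed menu bar is a dispatch table only the vendor can write to; Winer's move is handing out the keys.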
Big deal? Winer thinks so. He thinks it's a platform. You see, he's been putting menu-bar-like navigation bars at the top of his web pages, and if he can get one little hook from the browser makers, he intends to "totally liquefy" the distinction between the Finder's menu bar and these web-based bars.
Dave Winer's amusing rants come in a mailing list. Details can be found at Dave's web site at http://www.scripting.com/.
I came across the Kasparov bit and a discussion of it on the Community Memory list.
Community Memory is a mailing list (archived at http://memex.org/community-memory.html) created by Computer Professionals for Social Responsibility as a place to capture anecdotes and more serious bits of the history of cyberspace. There's a wealth of good stuff here, although like most mailing lists, and this month's column, it has more questions than answers.
You'll find no end of shifting paradigms in the wonderful publication the Annals of Improbable Research. But that assumes you're willing to subscribe to and pay for a magazine that arrives in your mailbox when the publishers and the US Snail decide to put it there, which is, of course, the old paradigm. The new paradigm is free information, immediately accessible on the Internet.
So instead, visit Mini-AIR, the online snippet publication spun off from the Annals of Improbable Research (http://www.improb.com/). Mini-AIR contains all those bits "too small to fit" in the print publication. Mini-AIR recently conducted a survey of its readers (or some such carefully selected stratified sample of the target population) on the issue "Is it possible to travel faster than light?" A solid majority of respondents said that it was, indeed, possible to travel faster than light, although it's necessary to book your trip well in advance.
Mini-AIR shared many of the respondents' reasons for concluding that FTL travel was possible, and I thought I'd share one of them, by reader G. Borochoff, with you:
E-mail uses delivery through electrical circuits, therefore traveling at the speed of light (one of the reasons for its popularity over the historically traditional US Postal "Service"). America OnLine uses these same electrical circuits. It is well known that almost anything travels faster than AOL these days.
'Nuff said.
DDJ