Dr. Dobb's Sourcebook September/October 1997
In June 1973, Novo Airfreight delivered four boxes to my home from Wang Laboratories Inc. of Tewksbury, Massachusetts. These four boxes contained the monitor and mass storage, keyboard, CPU, and power supply of a Wang 2200 computer workstation -- $6700 in 1973 dollars. This, like my older $5000 Wang 520 programmable calculator, was purchased by me for my own personal use.
The monitor featured 16 lines of 64 uppercase characters. The mass storage was a digital (not analog) tape drive. The computer used Basic as its only language; Basic was microcoded and stored in about 32 KB of ROM. Each Electronic Arrays ROM chip contained 5120 bits organized as 512 words of 10 bits each. The microcode was 20 bits wide. Wang had rigorously defined the business-oriented Basic (BCD-encoded floating point, lots of string functions) and told the microcoders: Make the machine do this!
Dr. An Wang invented the ferrite core memory and sold the patent to IBM. With the resulting funds, he started Wang Labs; its initial product was the first desktop calculator that could perform logarithmic calculations (and was programmable). Soon every insurance actuary had a Wang calculator on his desk. The company grew steadily and rapidly from that point.
By 1975 Wang Labs had shipped 20,000 of the 2200s, and 50,000 by 1976-77. Wang owned the departmental workstation computer market with this model; no other computer was even close. And then Wang deliberately walked away from this market and focused on dedicated wordprocessing workstations instead. Which was mistake #1.
When Dr. Wang decided to retire, Wang Labs was turned over to youthful, inexperienced management. That was mistake #2. Wang Labs slumped into an unrecoverable downturn. In February 1994, Wang's $80 million office complex in Lowell, Massachusetts, was sold for $525K. Sic transit gloria mundi. I think a tiny vestige of Wang survives; it holds the patent on SIMMs and collects royalties on every SIMM sold.
Anyhow, the Wang 2200 was a fine computer workstation for its day. Mine was the second unit to be delivered on the West Coast. So I had a personal computer at home a tad earlier than most folks.
Naturally, I had to learn the Basic language's ins and outs. I could already program, having given my Wang 520 and its 1848 programming steps a good workout using machine language. But Basic was new to me. After a few exercises, my local Wang salesman showed me a game called Star Trek that needed a 20-KB CPU to run. I got a printout and made it run in 4 KB (4-KB memory boards were $1500; 8-KB memory boards $2500). I did this by storing parameters in bits rather than 8-byte FP numbers (using the AND, OR, and similar string instructions). It was a good way to learn that particular dialect of Basic.
And I had to run the game to check my programming. It was the first and last time I ever ran a game on a computer.
In 1988, for the first time in over two decades, I took a job where I wasn't my own boss. (This was after a spectacularly unsuccessful attempt to sell 68000-based incrementally compiled Basic for the Atari ST market.) To my astonishment, I discovered that many (most?) of the programmers in my engineering department played computer games on the job. A lot. In fact, my boss spent most of his time behind a closed door playing Empire -- a fact that could easily be confirmed by a "who" from any of the Sun 3 terminals. (The company is now out of business and that boss has been out of electronics for a long time.)
I was astonished. I couldn't believe that people would actually play games -- a lot -- on their employers' time, or that their employers would tolerate it. I still have problems with this idea. Which brings us to the network computer (NC) in its several guises...
The NC promoted by Oracle and Sun does not have a local disk. The NetPC promoted by Intel and Microsoft does have a local disk, but the user doesn't control what's stored on that disk. I think this is to prevent employees from installing games on their company computers. Nobody will admit this.
By now, a number of you readers have undoubtedly decided that I'm a jackbooted corporate Nazi. You have to remember that I was an employer for 22 years -- in small companies, where the boss and his employees worked side-by-side. It does give one a different perspective.
With id Software and its Doom and Quake leading the way, today's games use 3D projections to make the dungeons and monsters appear more realistic. Like my occasional math-modeling and ANN back-projection experiments, these games are "infinite sinks" of computer power.
If you want to follow hardware trends in the PC industry, you have to read Electronic Engineering Times (EET) and Microprocessor Report (MPR). EET is the best source of info on systems issues, MPR is great for info on specific chips. The June 16th issue of EET was a special issue on 3D graphics; any subsequent EET reference in this column will be to that issue.
Most PCs are not used to play games and thus have no requirement for 3D video projections. But the game market is a very large niche; consumers throw more than enough money at it to keep id's John Carmack driving whatever sports car he currently favors. How much money? IDC/Link and Robertson Stephens and Company agree that the 1997 PC game software market is about $2 billion. RS&Co. compares that to $8 billion for Sony PlayStation and Nintendo 64 software (EET, p. 102).
Naturally, essentially all PC games are written to run on baseline PCs: Pentiums with 2D video cards. If the gentle reader will permit a small excursion into his or her area of expertise, this is where APIs -- and the 3D API war -- come in. Some game developers considered Microsoft's Direct3D interface, in its immediate mode, to be "...slow, complex, and poorly documented" (EET, p. 98).
John Carmack ported Quake to Silicon Graphics' OpenGL, which was developed for technical workstations. Carmack liked the results a whole lot more than Direct3D and posted the details to the comp.graphics.api.opengl newsgroup. By selecting the "right" subset of OpenGL, he got performance well beyond what Direct3D could attain. This has prompted Microsoft to retaliate by developing DirectX version 5.0, which is asserted to have a much faster immediate mode (with functionality said to be -- ahem! -- remarkably similar to OpenGL's). And so the API wars continue (EET, p. 98).
The June 23, 1997 issue of MPR lists 33 companies that have announced 3D chips or plans to make 3D chips. (The previous issue also listed 33 companies, but in the three intervening weeks, two dropped out and two new entrants were added. MPR asserts, "The 3D graphics market is in a constant state of flux..." No kidding.)
What most PC system vendors want from their graphics chip suppliers is "3D for free." And they're getting it. A typical example is S3's ViRGE series of "3D" chips, which combine excellent 2D video performance with some 3D acceleration, but not enough 3D acceleration to raise the price of the chip significantly. So, the PC system vendors get the 2D performance their mainstream customers need, and they also get to advertise "3D performance!" without increasing their systems' price tags.
MPR projects that the cost of developing a new 3D chip is $10 to $30 million -- reasonable considering that FP is migrating to the new 3D chips. One hundred million PC system sales in 1998 with an average video chip price of $20 and a margin of 20 percent leaves $400 million for 3D chip R&D. "The majority of these 33 companies are developing conventional 3D-rendering chips with few if any differentiating features." (Peter N. Glaskowsky, MPR, June 23, 1997.)
Not all entrants are either cheap or conventional. 3Dlabs' Glint MX has a conventionally architected 3D pipeline, but throws $900 (parts cost) of silicon at the problem, performing the 3D geometry, lighting, and setup calculations independently of the host CPU. Some upcoming 3D chips will use Microsoft's Talisman architecture with on-chip rendering (also eliminating some space-and-time redundancy) to reduce memory-bandwidth demands.
It's not clear to me how such different hardware approaches can be subsumed in a common API, but (dear reader) that's your area of expertise, not mine.
George Lucas, in his Star Wars trilogy, popularized the concept of a "used future." Spaceships and gravity sleds had dented fenders and badly needed to be run through a car wash. This fashionable grunge has been imported to PC-based 3D via texture maps.
Problem: The better the texture mapping, the larger the stored texture maps and the more bandwidth needed to feed the texture map data into the 3D video chip. There's no upper limit to the theoretical size of the texture map or the required memory bandwidth; one must compromise.
UMA burst upon the PC scene in 95Q2, when DRAM sold for $30/MB. The idea was to save $30 to $60 by placing the frame buffer in the main system DRAM. It was a really hot issue at the time: "We project that, 18 months from now, the UMA approach will be dominant for all PCs but high-end desktops." (Yong Yao, "UMA Cuts PC Cost," MPR, June 19, 1995.)
UMA failed because most PCs back then only had 8 MB of DRAM, and stealing 1 MB for the frame buffer would only leave 7 MB to run Windows -- not enough. That was only two years ago! Tempus doth fugit, don't it?
Intel probably began studying the texture-map problem around the start of '96, when DRAM cost $25/MB. Like UMA, the idea was to save money by placing the 3D texture map, and perhaps the z-buffer and other stuff, in the main system DRAM.
By the time Intel released the AGP spec (June 1996), DRAM was $8/MB and that price was dropping fast. "The primary goal of the AGP initiative is to contain the cost of implementing 3D in PCs...To take advantage of an AGP (or UMA) design, either the device drivers or the operating system must support dynamic memory allocation. Windows 95 does not have this capability today...Intel says it will not develop AGP chip sets for Pentium (including P55C) systems...Intel projects AGP penetration will reach about 90% by the year 2000, a ramp rate similar to that of PCI." (Yong Yao, "AGP Speeds 3D Graphics," MPR, June 17, 1996.)
AGP will eat some of the system bus's bandwidth, so it makes sense that Intel will restrict AGP to Slot 1 (Pentium II) systems with more system bus bandwidth because the L2 cache has been moved off the system bus. Other chip set vendors are planning AGP-compliant chip sets for Socket 7 (Pentium) motherboards (mobos).
But we're a year away from significant AGP system availability and DRAM now costs only $3.50/MB versus $25/MB when AGP began -- and the sole purpose of AGP is to save money on DRAM! Further, AGP is a performance compromise. The AGP bus goes to heroic lengths, but is obviously inferior in performance to simply placing 16 MB of texture memory on the 3D video card.
As of now, it's better performance-wise and cheaper to simply buy dedicated texture map DRAM and place it on the video card where essentially infinite bandwidth is available (via a 256-bit on-card data bus). And the resulting 3D video card will run on existing mobos using existing mobo chip sets.
Events (DRAM price drops) have overtaken the AGP initiative. Sensible companies will already be terminating their AGP efforts. Intel won't, yet, because it's hoping that AGP will spur sales of the Pentium II CPU.
With the texture map on the video card, only the 3D video card and sound differentiate a high-end game PC from a productivity PC. Please understand this clearly -- productivity PCs have no use whatever for 3D texture maps or AGP. It makes no sense to burden productivity PCs with the substantial -- but useless -- costs of AGP. The economics of the mass PC marketplace, not technical issues, doom AGP.
DDJ