Into the Future

Dr. Dobb's Journal February 1997

By Phil Mitchell

Phil, president of NeoCortek, can be contacted at phil_mitchell@neocortek.com.

The Trouble with Computers:
Usefulness, Usability, and Productivity
Thomas K. Landauer
MIT Press, 1996
440 pp., $15.00
ISBN 0-262-62108-8

The Future of Software
Edited by Derek Leebaert
MIT Press, 1996
320 pp., $13.50
ISBN 0-262-62109-6

The prolific French novelist Balzac was once asked how he was able to write so many books. He replied, "I never use labor-saving devices." It's unclear what 19th-century gadgets Balzac was avoiding, but apparently the sentiment is timeless: The biggest "labor-saving" device that should never have been used, according to Thomas Landauer, is about $3 trillion worth of computers applied in business over the past two decades. This appalling conclusion sets the stage for his engaging and lucid book, The Trouble with Computers.

The Trouble with Computers

Fiscally speaking, the trouble with computers is that economists can't seem to show that they help the bottom line. Of course, there were the early, easy gains from number crunching; and there are certain specialized applications, notably CAD, where computers have made the unthinkable possible. But in terms of the service industries that increasingly dominate our economy, where information technology (IT) was supposed to trigger vast efficiencies for ordinary workers, there's no sign of a productivity gain. To the contrary, productivity growth in the era of minicomputer and desktop applications has notably and unsettlingly slowed. Landauer goes so far as to argue that investment in IT was a major cause of this downturn, because, in many cases, it was wasted money. But while causality is hard to prove (and Landauer is no economist), what's hard to argue with is that the huge productivity gains forecast for computerization, gains on par with mechanization, simply haven't materialized.

Landauer's sedulous examination of the "productivity paradox" prompts the question: What went wrong? He's glad you asked. For despite the plodding econometrics of the early chapters, Landauer is a man with a thrilling story to tell. When he examines the reasons for IT's failure -- reasons that include the hidden costs of training and maintenance, the absence of standards and interoperability, and mismanagement and misapplication of technology -- he finds the overriding, fundamental flaw to be the failure of designers to create useful and usable software. But the thrilling part is that he's sure this is a problem that can be fixed, and he's convinced that if it were fixed, the ensuing productivity gains would truly revolutionize society.

This is not, however, a book of wishful thinking. Landauer headed up one of the human factors/user-interface research groups at Bell Labs in the '80s. He brings to the topic of usability the rigor of empirical psychology and the extensive experience of an insider from one of the few industries with a major IT success story to tell. It is a powerful combination. His critique of usability, across a broad spectrum of applications, is relentless and insightful. The quirkiest example is his application of the concept of random reinforcement schedules to computer users. (It is well known in academic psychology that a pigeon -- or a person -- who is rewarded for some behavior according to a regular and predictable scheme will behave logically: If the rewards are frequent enough, the behavior is maintained; if they become too infrequent, the behavior stops. But if the reward schedule becomes random and infrequent, the result is obsessive behavior that does not extinguish. See online Help.)

Landauer's real mission, though, is to show us how to do better -- how to create that useful and usable software to produce the gains we've been looking for. In the final third of the book, he discusses a number of examples where usability design was done right. In particular, his detailed discussion of an electronic book project at Bellcore is fascinating. From these examples, and from numerous empirical studies, Landauer draws some remarkable conclusions.

For instance, studies repeatedly show that computer users exhibit wide variability in efficiency. Among expert manual typists, the worst performer might be about 30 percent slower than the best; among expert word processor users, the difference jumps to 400 percent! (This effect is notoriously present among programmers.)

There are a number of ways that computers seem to magnify variability; but the startling conclusion that Landauer presents is that it is possible to design systems that eliminate much of this variability. Whereas a poor interface tends to separate the good users from the bad, a well-designed interface can bring most people up to a high level of performance. Contrary to current practice, Landauer proposes that systems should be designed not to maximize users' choice and flexibility (users' intuitions being as bad as programmers'), but to offer them the clear-cut best way.

There's nothing mysterious about design for usability. It centers around (surprise!) testing applications with real users in an iterative design process. But there are some useful facts to know. For example, it doesn't take a large and expensive study to do rigorous usability testing. The average interface has around 40 defects in need of repair; having two naïve users evaluate it will find about half the flaws, and six evaluations will typically find about 90 percent. Additionally, software that doesn't go through careful usability testing is pretty much guaranteed to lower user productivity. Designers who hope to do otherwise would do well to read this book.
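Those figures are consistent with the simple probabilistic model of defect discovery that Landauer published with Jakob Nielsen, in which each evaluator independently uncovers a fixed fraction of an interface's usability problems. A minimal sketch, assuming a per-evaluator discovery rate of about 31 percent (the figure usually quoted from their study) and the 40-defect average cited above:

```python
# Sketch of the Nielsen/Landauer defect-discovery model: each evaluator
# independently finds a fixed fraction (discovery_rate) of the defects,
# so n evaluators find 1 - (1 - rate)^n of them in expectation.

def problems_found(n_evaluators, total_defects=40, discovery_rate=0.31):
    """Expected number of defects found by n independent evaluators."""
    fraction_found = 1 - (1 - discovery_rate) ** n_evaluators
    return total_defects * fraction_found

if __name__ == "__main__":
    for n in (1, 2, 6):
        print(f"{n} evaluator(s): ~{problems_found(n):.0f} of 40 defects")
```

With these assumed parameters, two evaluators find about 21 of the 40 defects (roughly half) and six find about 36 (roughly 90 percent), matching the numbers in the review; the point of the model is that returns diminish quickly, so small, cheap studies catch most of the problems.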

The Future of Software

Landauer's book left me wondering what the big software manufacturers would have to say about these matters. Conveniently, in The Future of Software, Derek Leebaert invited representatives of IBM, Microsoft, Lotus, Novell, Intel, and DEC, as well as assorted industry insiders, to give their views on the usefulness of software, present and future.

The official agenda of the book is the "software problem," the fact that our remarkable progress in hardware seems to outstrip our ability to design remarkable (read: intelligent) software. But beneath this problem statement is the productivity paradox, as several contributors make explicit. Microsoft's director of enterprise computing feels that our vast investment in personal computers has not delivered value to individuals and organizations. (But fear not, it will soon.)

Asking these companies to describe the future of software is a bit like asking the National Cheese Council to describe the future of hors d'oeuvres. The results are pretty unsurprising, with convergent views on the ascendance of networking, open standards, and business reengineering on the basis of enhanced information flow. Still, it's intriguing to listen to what the big guys want you to think they're thinking.

For instance, Microsoft lumps mainframes and PCs together as relics of hierarchical, assembly-line organizational thinking. It's the client/server model, supporting distributed databases and seamlessly integrated application suites, that will enable businesses to restructure around processes: There will be no more isolated order-entry clerks -- a salesperson will shepherd the entire process from order to fulfillment.

Lotus goes a step further, suggesting that the current state of computing is positively medieval, and that workgroup computing will usher in a humanistic renaissance. The writers take some hard shots at the current state of usability, averring that "...most of the problems with computers [are] that computer people [talk] too much to their computers and not to other people." Rather than productivity enhancements for the individual worker, "intelligent communications" will empower workers and revolutionize decision-making processes at all levels: Relationships with customers, suppliers, and business partners will become efficient, mutual, and collaborative.

The contributions from Digital and Novell have a different emphasis: Both envision a future in which the end user (and who better?) is able to create his/her own software. Open standards and reusable components will put an end to the need for expert programmers to create novel applications. The interesting part of DEC's piece describes two current efforts at standardization: a user-driven effort in which the Japanese telecom giant NTT demanded that its suppliers transition from proprietary to open standards, and an industry-driven effort (the SPIRIT project), in which American, European, and Japanese telecom and IT companies are collaborating on such standards. Novell, on the other hand, emphasizes its Visual AppBuilder, in which software components will enable vendors or end users to design complex custom business applications in a single day.

Conclusion

The problem with these futurist manifestos is that each ends without ever addressing the software problem; these marvelous futures assume a new level of software intelligence without indicating how it will be achieved. Somehow, networked computers are going to lead to natural-language translators, and open standards are going to make object components trivial to use -- but no one explains how. The one article that tries to fill this gap is Gustave Essig's essay on natural-language processing and artificial intelligence. Essig, a philosopher and computer telephony entrepreneur, believes that new knowledge representations, based on functional insights from the neural and cognitive sciences, are about to make fully intelligent "naturalware" possible. We'll see. In the meantime, it's hard to take another round of big promises seriously. I'd settle for some decent usability testing.

DDJ


Copyright © 1997, Dr. Dobb's Journal