SOFTWARE AND THE SINGLE PROGRAMMER

The evolution of programming from telling to showing

T.G. Lewis

Ted Lewis is professor of computer science at Oregon State University, editor-in-chief of IEEE Software magazine, and a governor of the Computer Society of the IEEE. His research interests are in CASE tools for both serial and parallel programs, and for software engineering in general.


I have been in computing long enough to remember when Fortran, Algol, and Cobol were claimed to be the greatest breakthroughs known to programmers. Even in those days, there were the believers and the nonbelievers: the programmers who scoffed at Fortran because it was too slow and inefficient and the managers who quickly accepted certain inefficiencies because of reduced time to market, lowered development costs, and improvements in reliability.

Since those glory days, little has changed in the way software is handcrafted by artisan programmers. It seems we have reached a productivity plateau -- a leveling-off of programmer productivity that should embarrass any programmer aware of the gains made by other knowledge workers. To make things worse, today's software systems are characterized as large and complex, thus requiring time and effort far beyond the applications developed ten years ago. (Recent PC applications require megabytes of RAM and hard disks to load and execute.) What is the single programmer to do?

CASE (computer-aided software engineering) tools offer individual programmers a powerful lever for producing large applications quickly. CASE represents a quantum leap in power through software development techniques such as computer-assisted programming, automatic user-interface generation, fourth-generation-language (4GL) application development, and automatic code generation. CASE has desirable side effects: It reduces the drudgery of documentation, improves communication between programmers and customers, and increases software quality. In short, it is a way for programmers to do their job faster and better.

The Long Winding Road

Researchers have long been looking for alternatives to programming. These approaches can be roughly classified as follows:

The linguistic approach has a rich history culminating in several contemporary camps: Ada in the military camp; object-oriented languages such as C++ in the maverick camp; and Prolog et al. dug in at the logic/AI/declarative-languages camp.

Small but significant productivity gains have been reported for Ada vs. Fortran applications, but there is still controversy over the productivity value of Ada. I will sidestep the controversy here because this is not my main point; suffice to say that the reported gains from programming in Ada are on the order of twofold. I am looking for techniques that reward programmers with a 1,000-fold increase because this is the quantum leap I believe will overcome the inertia of contemporary practice.

Object-oriented programming in languages such as C++ requires a shift in paradigm that will take a while to occur, but it is clear that this approach has advantages. Object-oriented programming can, for example, be combined with reusable-component technology to realize a fivefold increase in productivity after a suitable collection of reusable objects has been constructed.

The field of AI and logic programming has not yet produced measurable productivity gains, but the idea of a declarative programming style remains promising. In this style, a program is expressed as a declaration of facts and constraints rather than as a detailed prescription for how to solve the problem at hand. This is a powerful idea, and it, as well as the other approaches mentioned previously, has ramifications for prototyping.

All these languages are based on the idea that character strings contain enough expressive power to represent all that we want computers to do. In fact, the linguistic approach has been highly successful, leading to contemporary CASE tools based on data-flow diagrams and data-dictionary storage that are designed to work with these character-string languages. (Examples of these CASE tools are described elsewhere in this issue.)

Regardless, it would seem that the linguistic approach has run out of steam and something more powerful is needed. Instead of twofold or fivefold improvements, we need to invest in technologies that promise 1,000- to 1 million-fold improvements. To get such incredible leverage, we need to study methods of software production that transcend the written word.

The systematic approach is only about ten years old and is best represented by application generators and 4GL technology currently enjoying rapid acceptance in the mainframe data-processing world. These systems combine many basic functions into a whole: screen forms generator, report generator, database query language, and processing function generation tools. Typical application generators reduce the time and effort to implement an application by a factor of 10 to 100, but they are restricted to narrow domains such as business data processing and database retrieval and reporting functions.

Prototyping has been discussed for more than ten years, but little actual progress was made until recently. The idea of a prototype occurred to software engineers as a method for capturing user requirements early in the life cycle so that the requirements could be examined, tested, and verified before actual coding began. Then programmers realized they could automatically convert the prototype into running code for even greater productivity. Although a few prototyping systems exist, their advantages and limitations are not well understood. Prototyping, however, offers one of the most fascinating possibilities for vast increases in productivity. (See Figure 1, below.)

Before going on, it is only fair to mention an alternate approach to prototyping based on the linguistic paradigm. It is entirely possible to produce prototypes from high-level specifications, which are normally written in a formal specification language. A few experimental systems exist for prototyping real-time control systems; they permit direct execution of the specifications to realize the prototype. Although interesting to the research community, executable specification languages are far too mathematically arcane to attract much practical use. I believe they will remain in the laboratory because they do not offer the possibility of giant gains in productivity.

I will illustrate some rudimentary ideas of prototyping using an experimental system, called Oregon Speedcode Universe (OSU), currently under development by my research team. OSU is a software-development system employing on-screen editing of standard graphical user-interface objects, prototype sequencing, program generation, and a novel CASE tool for understanding source code. Programmers use OSU to design and implement all user-interface objects such as menus, windows, dialogs, and icons. These objects are then incorporated into an application-specific sequence that mimics the application during program development and performs the desired operations of the application during program execution.

OSU assumes a fundamental shift in paradigm: the user interface as language. This is a comfortable notion in the world of WYSIWYG and direct manipulation but requires a different perspective for the literate mind. In the world of WYSIWYG, programmers show what the computer is supposed to do instead of telling the computer what to do.

Showing Instead of Telling

Interest in visual programming, object-oriented design, and automated software-design methods has been heightened by the rise of graphical workstations that support windows, icons, menus, and pointing devices such as the mouse. The power of pictures over words has not been lost on the users of these workstations. Graphics-based computing has been combined with idealized models of the world -- paradigms -- to reduce learning curves, increase productivity, and generally remove the burden of program operation from the user. At the same time, software developers have slowly come to realize that the user-interface paradigm is itself a kind of programming language -- a language that expresses the user's desires in pictures instead of words. This evolution can be characterized as a shift from "telling" to "showing."

Telling is performed by manuals, programming languages, and other written documents that attempt to convey instructions to user and machine alike. Telling involves two cognitive translations: from the idea to its textual representation and then back again from text to idea. To a software developer, these translations take place when a user's requirements are converted into code and then again when the code is executed. Unfortunately, the major problem of software engineering -- getting correct specifications, design, and implementation -- remains difficult and costly largely because of the imperfection of telling.

Showing is the process of doing in the form of direct manipulation of "objects." Showing involves one level of cognitive translation: from the idea to its implementation. There is no linguistic ambiguity in showing because it is direct. Of course, humans train other humans by showing every day. Most people learn to drive a car from practicing rather than reading a book (often to the dismay of pedestrians, passengers, and innocent bystanders). Showing and doing are perhaps the most common forms of learning in the animal world.

Showing a computer what to do is difficult, and at present it is a less successful technology than the traditional method of giving instructions by telling via a programming language. But even an imperfect software tool for programming by showing can have a dramatic impact on programming effort. Suppose, for example, that a certain application consists of 80 percent user-interface code and 20 percent calculation code. Further suppose that the user-interface code is automatically generated by a visual programming tool that captures what the user wants by showing. The effort to produce 80 percent of the code can then be ignored, leaving only 20 percent to be handcrafted by telling -- a fivefold increase in productivity. By contemporary standards, any software engineering technology that delivers fivefold leverage is considered revolutionary.
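The arithmetic behind that claim is easy to verify: if the generated fraction of the code costs essentially nothing, the effective speedup is the reciprocal of the fraction still handcrafted. A minimal sketch (illustrative Python, not part of any tool described here):

```python
def speedup(generated_fraction):
    """Effective productivity multiplier when a fraction of the code is
    generated automatically (at essentially zero cost) and the rest is
    handcrafted by telling."""
    handcrafted = 1.0 - generated_fraction
    return 1.0 / handcrafted

print(speedup(0.80))  # 80 percent generated leaves 20 percent: roughly 5x
```

The same formula shows why the article aims so high: a 1,000-fold gain requires automating all but one part in a thousand of the work.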

Great, but how do we program a computer by showing? Standard graphical user-interface management systems based on a paradigm such as the metaphorical desktop provide a "platform" for showing vs. telling. The desktop metaphor of the Apple Macintosh is used here as a platform, or prototyping language, for expressing sequences of user-machine interactions. The particular user-interface platform is not as important as the concept of "interface as language." In the remainder of this article, I'll show how a standard, consistent user-interface paradigm can be used to advantage in showing rather than telling.

The Theory of Prototyping

A prototype Q is a collection of user-interface objects U, a set of actions A, and a mapping function F, as follows:

Q = {U, A, F}

The user-interface objects U are the alphabet of symbols defined by a metaphor. The Macintosh desktop is one such metaphor, with an alphabet of icons for the trashcan and files, pull-down menus, scrollable windows, and user-interaction dialogs. When used in a consistent manner, these objects form a language in much the same way that English characters form meaningful words when placed in strings according to the rules of English.

Construction rules for forming "words" in the desktop language can be expressed in English text, but there is a better way. Instead, user-interface objects can be constructed by direct manipulation of graphical "letters" such as menus, icons, and windows. An example of construction by direct manipulation is shown in Figure 2, below. In this example, an input form is constructed as a dialog containing standard letters from the alphabet. The letters are listed in the palette displayed below the dialog while it is being constructed. To insert an OK button in the dialog, simply drag one from the palette; to insert an editable text field into the dialog, drag a blank field from the palette and stretch it to any desired size. Similarly, radio items, check boxes, and icons can be placed wherever desired by doing, rather than by telling.

An immediate criticism of this approach is that the range of possibilities is limited by having a relatively small number of items to choose from. This is true, but it is also true that the expressiveness of a high-level language is limited by its alphabet and rules of program construction. Careful selection of such constraints is what software design is all about. For maximum flexibility, program in machine language. But if we desire reliability, quick development, and maintainability, we must carefully discard some powerful options in favor of more productive ones. When user interfaces are standardized, we lose some flexibility but gain in other areas. One of the areas in which we win is the ability to rapidly produce new applications that are easy to use. (As John Sculley says, "The programmer must give up control of the machine to the user.")

The actions A are the behaviors defined on the objects of the application program. We say the application is implemented according to principles of object-oriented design when two conditions are met:

    1. Objects are encapsulated in clusters containing state and function -- the state represents data, in general, and the functions define how to manipulate the objects.

    2. The objects are manipulated exclusively by invocation of their functions -- no state transitions are allowed by side effects of functions defined in any other encapsulation. (In a pure object-oriented system, the objects inherit their functions from a class, but I won't quibble about such details here.)
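These two conditions can be illustrated with a small sketch. The example below is Python rather than the Pascal units OSU generates, and the names are hypothetical; it shows state and function bundled in one cluster, with state transitions possible only through the object's own functions:

```python
class Window:
    """State and function encapsulated together: the state (_is_open)
    is private data, and the functions below are the only way to
    change it -- no outside code mutates it by side effect."""

    def __init__(self):
        self._is_open = False      # state: data

    def get_new_window(self):      # function: the only way to open
        self._is_open = True

    def close_window(self):        # function: the only way to close
        self._is_open = False

    @property
    def is_open(self):
        return self._is_open

w = Window()
w.get_new_window()
print(w.is_open)  # True: the state changed only via the object's function
```

In a pure object-oriented system these functions would be inherited from a class hierarchy, but the encapsulation discipline is the essential point.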

The objects in U that are seen by a user of the application are of immediate concern. These objects are activated by calling their functions. A menu is created and manipulated by function GetNewMenu, for example, and a window is displayed by its GetNewWindow function. In a standard user-interface management system, the behaviors for all user-interface objects are defined and fixed. They constitute the "verbs" in the "language" of prototyping; the state variables of each object constitute the "nouns."

Manipulating a member of U changes the internal state of an object. An open window is closed by calling its CloseWindow function, and a menu is disabled by calling its DisableMenu function. At any time, an interface is in a certain configuration -- for example, one window is open, another is closed, a menu is disabled, and another is enabled. The sum total of the states of U constitute the configuration of the user interface.

The mapping function F is a graph describing state transitions from one user-interface configuration to another. State transitions in F are driven by the behaviors of the objects in the application program. We might, for example, want to show the computer how to display the dialog in Figure 2 by first selecting a menu item OPEN, followed by display of the dialog. This "simulation of the actual program" constitutes a change in the configuration of the user interface. The total collection of such changes in the user interface make up what we call F. Defining F is a challenging problem in practice. We look at the simple solution first and then explain the more difficult method of showing F to the computer.
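Viewed this way, F is simply a graph whose nodes are configurations (the combined states of the objects in U) and whose edges are user actions. A minimal sketch in Python, with hypothetical configuration and action names standing in for the Figure 2 scenario:

```python
# F as a state-transition graph over interface configurations:
# (current configuration, action) -> next configuration.
F = {
    ("dialog closed", "select OPEN"): "dialog open",
    ("dialog open", "click OK"): "dialog closed",
}

def step(config, action):
    # One transition in F: the behavior (action) drives the interface
    # from its current configuration to the next one.
    return F[(config, action)]

config = "dialog closed"
config = step(config, "select OPEN")   # the input dialog appears
config = step(config, "click OK")      # and is dismissed again
print(config)  # back to "dialog closed"
```

Showing F to the computer amounts to demonstrating these transitions one by one instead of writing the table by hand.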

F as in Finesse

In a mock-up, or vacuous prototype, the application interface is completely simulated but the application does nothing useful. None of the functionality of the application is carried out except the user interface. In terms of the theory, U, A, and F are defined but only for the portion of the application that interacts with the user. The dialog of Figure 2, for example, appears on the screen after the user selects the OPEN menu item and enters both NAME and AGE into the dialog, followed by clicking on the OK button. The application behaves as specified in the vacuous prototype, but the values of NAME and AGE are ignored!

For a standard interface, a vacuous prototype is constructed as follows:

    1. Define all user-interface objects in U; inherit the standard user-interface object's behaviors as functions defined on each object.

    2. Sequence the members of U by invoking the functions defined for objects in U. This gives rise to a set of configurations in F.

    3. Generate code that implements U, A, and F as in steps 1 and 2.

    4. Compile, link, and run the code produced in step 3.
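The four steps can be mimicked in miniature. The sketch below (Python with hypothetical names, not generated Pascal) records a sequence of interface configurations and discards whatever the user enters -- which is exactly what makes the prototype vacuous:

```python
# A toy vacuous prototype: the interface sequence is fully defined,
# but the values entered (NAME, AGE) are collected and then ignored.
sequence = []   # the recorded configurations (step 2 of the recipe)

def select_menu(item):
    sequence.append(("menu", item))

def show_dialog(fields):
    sequence.append(("dialog", tuple(fields)))
    return {field: "" for field in fields}   # entered values, never used

# "Play out" the interface exactly as the finished application would:
select_menu("OPEN")
show_dialog(["NAME", "AGE"])   # NAME and AGE are entered and ignored
sequence.append(("button", "OK"))
print(sequence)
```

Steps 3 and 4 would then turn this recorded sequence into compilable source code, as described below for OSU.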

Commercially available tools exist for creating vacuous prototypes -- for example, Bricklin's Demo program for the IBM PC and SmethersBarnes Prototyper for the Macintosh. In addition, application-development systems incorporated into database-management systems such as dBase III and 4th Dimension employ similar tools for creating custom user interfaces to match the application.

Creating vacuous prototypes is relatively easy, but creating full-fledged prototypes for realistic applications is within current technology only when the domain of application is severely restricted. Domain-specific prototyping systems exist for certain real-time control problems and certain classes of business applications, for example. (I might claim that a spreadsheet is a domain-specific prototyping tool, but it might evoke too much negative correspondence.)

Prototyping systems for general applications -- applications that can currently be implemented only by telling rather than showing -- are beyond current technology. We claim that wide-spectrum prototyping systems will be achieved in the near term by combining several domain-specific tools with vacuous prototyping. Such systems are still experimental, but when they are perfected, programming will be more like flying an F-14 than writing a term paper for English Lit.

Prototyping at the Edge

OSU is a program-development system based on the notion of a wide-spectrum prototyper. It incorporates several domain-specific tools for automatically creating, manipulating, and "playing back" prototypes. In addition, OSU incorporates some CASE-like features for doing traditional coding, thus retaining the power and flexibility of traditional high-level-language programming. Complete applications are generated from OSU prototypes -- currently in the form of compiled and linked LightSpeed Pascal programs.

The core of OSU consists of four tools for graphically constructing Q = {U, A, F}: ResDez, a resource designer; a graphical sequencer; a program generator; and Vigram, a detailed-design and program-comprehension tool.

ResDez (Resource Designer) is used to create and edit all user-interface objects graphically -- menus, icons, dialogs, windows, alerts, error messages, prompts, and associated information. These objects are "painted" on the screen exactly as they initially appear in the finished application. Therefore, ResDez not only creates each object but also defines its initial internal state. (See Figure 3, page 23.)

A second tool, called a graphical sequencer, is used to create A and F -- all configurations and the actions for transforming elements of U from one state to another. The graphical sequencer is used by a programmer to "play out" the application by doing rather than writing instructions in the form of a script or textual language. The actions of A are the behaviors of the desktop objects defined by the standard user interface.

A screen dump of the sequencer in action is shown in Figure 4, page 23. The application's menus are in the menu bar, and miniatures of the user-interface objects are shown along the bottom of the screen. Actions are shown to the application by pointing and clicking, just as you would in the actual application. The sequencer is itself operated from the buttons on the right side of the screen in Figure 4.

A prototype can be created, sequenced, and played back like a movie. Each action is initiated by actually doing it, but often additional information is needed to clarify the action. When needed, additional information is obtained through OSU dialogs that ask for details such as the state of a menu item after it is selected (checked, disabled, and so forth) or the disposal of a dialog. Figure 5, page 24, shows a sequence created within OSU.

The third tool is a program generator that automatically writes compilable Pascal source code equivalent to the prototype defined by U and F. Several alternative methods of code generation might have been employed in OSU: direct compilation, translation into intermediate code, and direct interpretation. We chose source-code generation because it takes advantage of compiler code optimization, gives the programmer access to a maintainable version of the program, and lets the resulting prototypes be easily combined with other program components taken from libraries and other languages.

Given a graphical sequence, the code generator writes a program that carries out the steps described by the sequence. The code generator works behind the scenes and is not part of the user interface of OSU.
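The idea of expanding a recorded sequence into source text can be sketched as simple template expansion. The Python below emits Pascal-flavored lines; the template table, names, and resource IDs are hypothetical stand-ins, not OSU's actual generator:

```python
# Toy source-code generator: each recorded (kind, resource-id) action
# is expanded into one line of Pascal-flavored source text.
TEMPLATES = {
    "menu":   "theMenu := GetNewMenu({0});",
    "window": "theWindow := GetNewWindow({0});",
}

def generate(sequence):
    # One source line per action, in the order they were shown.
    return "\n".join(TEMPLATES[kind].format(arg) for kind, arg in sequence)

print(generate([("menu", 128), ("window", 129)]))
```

A real generator must also emit the surrounding event loop and declarations, but the principle -- sequence in, compilable text out -- is the same.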

Finally, Vigram (VIsual proGRAMming) is a tool for constructing graphical views of the source-code text generated by the code generator and/or by hand-coding the old-fashioned way. Vigram serves two purposes: detailed design of individual procedures and automatic construction of a detailed design schematic of an existing procedure so that it can be understood at a glance. Figure 6, page 24, shows a Vigram "picture" of a Pascal procedure.

As you can see, OSU does not try to do everything automatically. A large part of the coding task is automated only if a program can be constructed from standard user-interface parts and standard program functions. But if a particular algorithm or technique must be hand-coded, OSU permits an escape by way of "Vigramming." Yet productivity gains are minor when you compare Vigramming with traditional text editing. The real advantage of Vigram is in comprehending existing programs for the purpose of reusing them. An existing program is entered into Vigram, turned into a picture, studied, and adapted to its new purpose. Thus, Vigram is a tool for adapting reusable components.

The core of OSU permits the construction of standard data-processing applications. These programs are controlled by a system of menus, data-entry dialogs, and limited amounts of graphics. They do not contain novel algorithms for word processing, interactive graphics, sound, telecommunications, and so on. Of course, these are exactly the kinds of things we want to do with computers! To make prototyping interesting to a wider audience of programmers, OSU must cover a wider array of applications.

A wide-spectrum prototyping system must be flexible enough to generate applications in many problem domains: document processing, interactive graphics, sound, telecommunications, and a diversity of data-acquisition and control applications. To do this, we need a large family of domain-dependent tools called software accelerators.

Software Accelerators

Software accelerators accept direct manipulation of various objects as input and produce object-oriented code modules as output. Here, object-oriented means that data and the operations that can be performed on the data are encapsulated in the form of a Pascal unit. Every Pascal unit contains an interface part that defines the constants, types, and procedures for operating on the encapsulated data structure. These units are automatically generated by showing rather than telling and are then plugged into the prototype using their interface parts as the "plugs." The tools themselves are shown in Figure 3; I do not have space to explain each of them in detail.
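The "plug" idea -- a unit's interface part as its connection point -- can be sketched with a minimal conformance check. The example is Python, and the unit names and run() method are hypothetical stand-ins for a Pascal unit's interface part:

```python
class SoundUnit:
    """A hypothetical accelerator-generated unit for one domain."""
    def run(self, request):
        return "sound: " + request

class TelecomUnit:
    """Another hypothetical unit conforming to the same interface."""
    def run(self, request):
        return "telecom: " + request

def plug_in(prototype, unit):
    # Accept any unit whose "interface part" exposes run(); reject
    # anything that does not conform to the code interface.
    if not callable(getattr(unit, "run", None)):
        raise TypeError("unit does not conform to the code interface")
    prototype.append(unit)

prototype = []
plug_in(prototype, SoundUnit())
plug_in(prototype, TelecomUnit())
print([unit.run("init") for unit in prototype])
```

Because the prototyper checks only the interface, units from any domain-specific accelerator plug in the same way -- which is the point of the next paragraph.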

The uniformity of code interfaces across all objects in the system enables OSU to incorporate functionality from any domain-specific tool. Additional domain-specific tools can be incorporated into OSU as long as they conform to the code-interface restrictions placed on the object-oriented units. In this way, we can grow from a limited, vacuous prototyping system to a robust, wide-spectrum prototyping environment in which programmers realize orders-of-magnitude improvements in productivity. But that is the subject of another article!

Conclusion

The goal of CASE is to improve productivity in all phases of the software life cycle -- planning, design, coding, testing, maintenance, and management. Prototyping is a radically different approach that tries to eliminate rather than support the phases of the life cycle. Prototyping compresses design, coding, testing, and maintenance into a single step. Modifications to an existing system are made simply by regenerating the complete application because generation is totally automated.

These notions are difficult to accept because they come from a different perspective -- a paradigm shift is prerequisite to appreciation of prototyping. Caution is in order, however, because prototyping is an immature technology. Much more fundamental research is needed to make it a practical reality.

Acknowledgments

OSU is a long-term research project designed and implemented by a large number of students, including the following outstanding members of the research team: Fred Handloser III, Sharada Bose, Sherry Yang, Chi-Chia Hsieh, Kritawan Kruatrachue, Jagannath Raghu, and Jim Armstrong. Details on these projects are available as technical reports from the Computer Science Department, Oregon State University, Corvallis, OR 97331; 503-754-3273 (ask for Pat).