LETTERS

Alive and Apparently Well in Louisville

Dear DDJ,

In his April 1990 "Structured Programming" column, Jeff Duntemann asked readers to drop him a postcard if they saw his book Assembly Language From Square One. Elvis was sighted browsing through a copy of Assembly Language From Square One at Hawley-Cooke Booksellers in Louisville, Kentucky.

David Rush

Louisville, Kentucky

A Plus for Patents

Dear DDJ,

I work in the research division of a major pharmaceutical company, where my colleagues and I strive to invent new drugs. The research and development process for a new pharmaceutical typically takes seven years and costs an average of $125 million. Patents are therefore a necessary early part of the commercialization process. Without the exclusivity a patent grants, commercialization would be a far riskier venture.

My experience with patents is as an inventor. Although I am no legal expert, I have a working knowledge of the patenting process, as well as a perspective on the patent issue shaped both by my vocation and by an involvement with software development. I would like to clarify some issues raised in your March 1990 editorial.

The enormous hue and cry generated by the issue of software patents has left me puzzled as to the reasons behind the concerns. I believe the reason is a fear that access to ideas will be unfairly blocked, stifling innovation. The real cause, however, is a fundamental misunderstanding of the purpose of patents and of how they should fit into software development. Part of the problem is the apparent suddenness with which patents are issued. The solution is to expose the details of software patents and educate the professional programming community accordingly. Programmers will have to understand that the long delay between filing and issuance is part of the patent process, and that it builds in risk for those who use patentable ideas without having patent protection of their own. Programmers may not like software patents, but they should know that software patents will not go away.

Algorithms, as such, are not patentable. Algorithms that have an application can be considered inventions, and those are patentable. In short, any idea that can be shown to be novel and useful can be patented. An issued patent gives the inventor a period of exclusive control over the invention (17 years, if memory serves) in exchange for full disclosure of the invention's details. After the exclusivity period is over, the inventor has no control, and anyone is free to use the invention.

A valid application is an improvement over existing inventions, referred to as "prior art." Thus the first electronic spreadsheet may have been patentable as an invention, although it seems "obvious" in hindsight, because of its improved performance. A general patent that covers a more specific patent is said to "roof" it, restricting the specific inventor's use of his own application. An example of roofing would be a general patent on the basic concepts of a spreadsheet: it would restrict the application of a patent on automatic recalculation of spreadsheet cells. Finally, patents make general and specific claims as to what constitutes the invention, the so-called scope of the invention. For a software patent, any new idea not previously claimed in a patent is itself patentable. If the LZW data compression algorithm were claimed to be useful only for telecommunications, the same algorithm could be patented for, say, archival database data compression, and Unisys could do nothing about it.

As for litigation, patent law is stacked entirely on the side of the inventor. There is no need to outwait those with fewer resources: if the infringement is clear (an important point), the infringer loses. The problem here is not so much willful infringement, which is really intellectual property theft, but confusion caused by the delay in patent issuance. Programmers are quick to adopt good ideas, some of which constitute patentable inventions, with the result that several products may reach the marketplace before the patent is issued to one company -- and the others lose. This gives the impression of restricting the market and can be dissatisfying to consumers. One company, whose name escapes me, has even gone as far as sending letters to the owners of a competing product, informing them that it was about to be awarded the patent covering the product and that all those who owned the other product would be liable for damages. That is disgusting and should be discouraged. The solution is for companies and programmers to be more patent aware. This could be accomplished if those filing for patents publicly stated their intent, thus forewarning others. Could it happen? Probably not, because lawyers control the dissemination of product development information, which is typically confidential.

Does patenting stifle innovation? No, not at all. Only those working on comparable projects are affected, and, naturally, the use of the algorithm for the claimed applications is protected in all its forms. This is far better protection than one gets by copyrighting code, which protects only the expression of the algorithm, not the algorithm itself. Therefore, if someone invents a novel algorithm with commercial potential, patent protection is the proper course. The use of patented inventions is usually regulated by lawyers in the form of licensing and royalty arrangements. If a patented algorithm is essential to the success of another software project, the appropriate legal arrangements can usually be made to use the invention and still make money.

Does the exclusivity period of 17 years stifle innovation? No. Patents are subject to technology changes, just as everything is. If the Basic interpreter that was DDJ's charter project 15 years ago had had critical elements patented, the exclusivity would still have two years to run. Yet looking through the pages of DDJ over the last seven years, one would hardly know Basic existed. The fact is that times change and patents can become obsolete. However, if some lucky inventor hit on the right idea, one that consumers actually wanted, he would have the luxury of no competition, could enjoy the fruits of his labors, and could perhaps even improve the invention. Does that mean he can charge whatever he wants and consumers must pay? No. Patent rights do not translate into automatic sales. The natural forces of the marketplace are always at work: if an inventor asks too high a price, he quickly finds that most consumers can live without his invention. Patents do not free the inventor from normal marketing considerations, but they do remove the competition for a time.

An issue that will affect software patents, both in review time and in quality, is the apparent lack of patent examiners with expertise in software. With few examiners, the review time stretches from two years to many years, and if the examiners' expertise isn't up to snuff, poor-quality patents can slip through. Poor quality can mean many things, from flaws in the actual invention to an overly broad scope that covers more general applications than the invention warrants.

Personally, I would like to see those who are complaining so loudly go off and invent something new. The world could do with one less spreadsheet program or database, and with more radical, innovative new programs. But we will never know what those programs are until someone invents the fundamental algorithms to power them. I would also like those who do manage to patent new software inventions to take the next step, which is to bring the invention to consumers in the form of running programs. That is the intent of the patent law (which is as old as this country): to encourage inventors to invent and to bring those inventions to market. It was good then; it is just as good now.

My bottom line is that programmers should consider an issued software patent an opportunity to work in another area and be inventive in their own right. Patents should provide protection for the inventor, and patented inventions should be brought to market. And some form of information exchange needs to be developed so that those applying for and receiving patents can make their intentions known.

Barr Bauer

Bloomfield, New Jersey

Here We Go Again

Dear DDJ,

In your assembly language issue (March 1990) you do your readership a disservice. You allow Michael Abrash to propagate the myth that code produced by compilers cannot match code produced by good assembler language programmers. The truth is that the compilers used in his comparison are not worthy of being called "optimizing."

Consider his example of CopyUppercase. A C programmer might write the body as shown in Example 1. I would expect any compiler to construct the intermediate form in Figure 1. I would then expect an optimizing compiler to identify common subexpressions and perform dataflow analysis, resulting in the form shown in Figure 2. During memory allocation I would expect an optimizing compiler to recognize that the idioms in Figure 3 can be implemented using string instructions if the DS:SI and ES:DI register pairs are associated with the appropriate subexpressions. Without such recognition, string instructions will never be generated. Recognition of these idioms leads to the allocation of subexpressions to machine locations shown in Example 2, which in turn leads to the code shown in Example 3. Since these ideas are at least 10 to 15 years old, any optimizing compiler should incorporate them. Michael Abrash's mistake was in regarding the Microsoft and Borland compilers as optimizing compilers. By the way, I am The OPG Co., a firm that performs research into compiler-writing tools.

George H. Roberts

Broken Arrow, Oklahoma

Example 1

  do {
    x = *a++;
    x += (('a' <= x) && (x <= 'z')) ? ('A' - 'a') : 0;
    *b++ = x;
  } while (x);
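
For readers who want to compile it, here is a self-contained rendering of Example 1. The function name, parameter names, and test harness are assumed for illustration only; the letter gives just the loop body.

  #include <stdio.h>

  /* Hypothetical wrapper around the loop body of Example 1; the
     names copy_uppercase, b, and a are illustrative, not from the
     letter.  Copies the zero-terminated string at a to b,
     converting lowercase letters to uppercase, and copies the
     terminating zero itself before the loop exits. */
  static void copy_uppercase(char *b, const char *a)
  {
      char x;
      do {
          x = *a++;
          x += (('a' <= x) && (x <= 'z')) ? ('A' - 'a') : 0;
          *b++ = x;
      } while (x);
  }

  int main(void)
  {
      char out[32];
      copy_uppercase(out, "Dr. Dobb's Journal");
      printf("%s\n", out);    /* prints DR. DOBB'S JOURNAL */
      return 0;
  }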

Example 2

  subexpression name  register name  register contents
  @a, offset          DS, SI         address for source byte
  @b, offset          ES, DI         address for destination byte
  x, x+0, x+32        AL             source/destination byte value

Example 3

       ...       optimized prologue
       lds       si, [bp+a_pointer]   ; DS:SI -> source (lodsb reads DS:SI)
       les       di, [bp+b_pointer]   ; ES:DI -> destination (stosb writes ES:DI)
  Convert_and_Copy_Loop:
       lodsb                          ; al = *a++
       cmp       al, 'a'
       jb        Save_Upper           ; below 'a': not lowercase
       cmp       al, 'z'
       ja        Save_Upper           ; above 'z': not lowercase
       add       al, 'A'-'a'          ; convert to uppercase
  Save_Upper:
       stosb                          ; *b++ = al
       and       al, al               ; test for the terminating zero
       jnz       Convert_and_Copy_Loop
       ...       optimized epilogue

Michael responds: And so once again we come to the difference between "what is" and "what should be." Mr. Roberts' letter reminds me of the debate between the RISC and CISC people; the RISC people keep saying that RISC has the potential to be two, four, even ten times faster than CISC, and the CISC people keep sighing and pointing out that today's CISC software, running on today's CISC computers, is just about as fast per dollar -- and there's a heck of a lot more of it available. In the high-level versus assembly language arena, it's much the same. I remember a lively debate three or four years ago about the relatively low quality of code generated from C source, with the C proponents insisting that the critics were mistaking poor compiler implementations for a poorly optimizable language. "Just wait until real optimizing C compilers arrive!" they protested.

Well, here we are, years later, and 95 percent of the world uses two compilers that Mr. Roberts claims aren't optimizing compilers at all. I note that Mr. Roberts doesn't actually name a compiler that generates the code he lists; even if such a compiler exists, I suggest that since almost no one is using it, the point is moot. Mr. Roberts may be entirely correct that a good optimizing compiler would generate the code he lists; I suggest that today, given currently used PC tools, that's pretty much irrelevant.

There's another point to be addressed here. After claiming that I "propagate the myth that code produced by compilers cannot match code produced by good assembler [sic] language programmers," Mr. Roberts follows with the non sequitur that the compilers I used aren't worthy of being called optimizing compilers, as if the latter were evidence for the former. Even if Turbo C and Microsoft C aren't optimizing compilers, that doesn't mean that "worthy" compilers can generate code as good as assembly language programmers can. Consider Mr. Roberts' own example: his hypothetical "optimized" code is indeed much better than the code Microsoft C produced -- but it's a good 50 percent slower than the hand-optimized code in my article! Anyone who believes that a compiler can match top-notch assembly language for small, well-defined tasks is kidding himself. (There just isn't enough information bandwidth from the programmer to the compiler in a high-level language for this not to be true.) Assembly language isn't appropriate for most tasks, but when you need maximum performance, it is the only choice.

Errata

Please note the following changes to the source code listing on page 86 in last month's "Building a Hypertext System" by Rick Gessner (DDJ, June 1990).

	44:
	45:
	
	112: If Line[j] <> Null then
	113:      Inc(i) else Inc(j,2);
	114:    Inc(j);
	115:  end;
	116: Determine_Actual_Line_Pos := J;
	117: end; {Determine actual line pos}
	
	140: If Ord(Line[LinePos]) > 127 then
	141: Begin
On page 167 of the same issue, change the line of source code accompanying "LZW Revisited" by Shawn Regan from:

  if ( num_bits == MAX_BITS > max_code ) {

to

  if ( num_bits == MAX_BITS ) {
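
The uncorrected line compiles, which is what makes the bug easy to miss: C's relational operators bind more tightly than ==, so the original parses as num_bits == (MAX_BITS > max_code). A minimal sketch of the pitfall, with illustrative values assumed (the actual constants are those in the June listing):

  #include <stdio.h>

  #define MAX_BITS 12    /* illustrative value only */

  int main(void)
  {
      int num_bits = 12, max_code = 4095;   /* illustrative values */

      /* Parses as num_bits == (MAX_BITS > max_code), that is,
         12 == 0, which is false even though num_bits really
         does equal MAX_BITS. */
      printf("%d\n", num_bits == MAX_BITS > max_code);  /* prints 0 */
      printf("%d\n", num_bits == MAX_BITS);             /* prints 1 */
      return 0;
  }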

DDJ apologizes for the confusion.