The Economics of the Internet

Technology is only part of the challenge

Jeffrey K. MacKie-Mason and Hal Varian

The authors are faculty members in the University of Michigan's department of economics. This article is adapted from "Economic FAQs about the Internet," which was first published in the Journal of Economic Perspectives (Summer 1994). The authors can be contacted at Hal.Varian@umich.edu and jmm@umich.edu.


The Internet is a world-wide network of computer networks that use a common communications protocol, TCP/IP (Transmission Control Protocol/Internet Protocol). TCP/IP provides a common language for interoperation between networks that use a variety of local protocols (NetWare, AppleTalk, DECnet, and others).

In the late 1960s, the Advanced Research Projects Agency (ARPA), a division of the U.S. Defense Department, developed the ARPAnet to link universities and high-tech defense contractors. TCP/IP technology was developed to provide a standard protocol for ARPAnet communications. In the mid-1980s, the National Science Foundation (NSF) created the NSFNET to provide connectivity to its supercomputer centers and other general services. The NSFNET adopted the TCP/IP protocol and provided a high-speed backbone for the developing Internet.

From 1985 to January 1994, the Internet grew from about 200 networks to well over 21,000, and from 1,000 hosts (end-user computers) to over 2 million. In the U.S., about 640,000 of these hosts are at educational sites, 520,000 at commercial sites, and 220,000 at government/military sites; most of the other 700,000 hosts are elsewhere in the world. NSFNET traffic has grown from 85 million packets in January 1988 to 46 billion packets in December 1993. (A packet is about 200 bytes.) This is more than a 500-fold increase in only six years. Traffic on the network is currently increasing at a rate of 6 percent a month. (Current NSFNET statistics are available by anonymous ftp from nic.merit.edu.)
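The growth figures above can be checked with a little arithmetic (all input values come from the text; the per-month rate is our derived average, not a figure from the article):

```python
# Rough check of the NSFNET growth figures cited above.
packets_jan_1988 = 85e6    # packets carried in January 1988
packets_dec_1993 = 46e9    # packets carried in December 1993
months = 71                # January 1988 through December 1993

fold_increase = packets_dec_1993 / packets_jan_1988
print(f"fold increase: {fold_increase:.0f}x")   # ~541x -- "more than 500-fold"

# Implied average compound growth rate per month over the whole period:
monthly_rate = fold_increase ** (1 / months) - 1
print(f"implied average monthly growth: {monthly_rate:.1%}")   # ~9%
```

Note that the implied average rate over the six years is somewhat higher than the 6 percent "current" figure quoted above, consistent with growth having been even faster in the network's earlier years.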

Probably the most frequent use of the Internet is e-mail, followed by file transfer and remote login. In terms of traffic, about 42 percent of total traffic is file transfer, 17 percent e-mail, and 24 percent other services--including information-retrieval programs such as gopher, Mosaic, and World Wide Web. People search databases (including the catalogs of the Library of Congress and scores of university research libraries), download data and software, and ask (or answer) questions in discussion groups on numerous topics.

In terms of organization, the Internet is a loose amalgamation of computer networks run by many different organizations in over 70 countries. Most of the technological decisions are made by small committees of volunteers who set standards for interoperability.

Internet Structure

The Internet is usually described as a three-level hierarchy. At the bottom are local area networks (LANs); for example, campus networks. Usually the local networks are connected to a regional, or mid-level network. The mid-levels connect to one or more backbones. The U.S. backbones connect to other backbone networks around the world. There are, however, numerous exceptions to this structure.

Regional networks provide connectivity between end users and the NSFNET backbone. Most universities and large organizations are connected by leased line to a regional provider. There are currently about a dozen regional networks, some of which receive subsidies from the NSF; many receive subsidies from state governments. A large share of their funds is collected through connection fees charged to organizations that attach their local networks to the mid-levels. A large university, for example, will typically pay $60,000--$100,000 per year to connect to a regional.

The regionals are generally run by a state agency, or by a coalition of state agencies in a given geographic region. They are operated as nonprofit organizations.

As of January 1994, there are four public fiber-optic backbones in the U.S.: NSFNET, Alternet, PSInet, and SprintLink. The NSFNET is funded by the NSF, and is the oldest, having evolved directly out of ARPAnet, the original TCP/IP network. The other backbones are private, for-profit enterprises.

Due to its public funding, the NSFNET has operated under an Acceptable Use Policy that limits use to traffic in support of research and education. When the Internet began to grow rapidly in the late 1980s, there was increasing demand for commercial use. Since Internet services are unregulated, entry by new providers is easy, and the market for backbone services is becoming quite competitive. (Transport of TCP/IP packets is considered a value-added service, and as such is not regulated by the FCC or state public-utility commissions.)

Nowadays the commercial backbones and the NSFNET backbone interconnect so that traffic can flow from one to the other. Given that both research and commercial traffic is now flowing on the same fiber, the NSF's Acceptable Use Policy has become pretty much a dead letter. The charges for these interconnections are currently relatively small lump-sum payments, but there has been considerable debate about whether usage-based settlement charges will have to be put in place in the future.

Currently the NSF pays Merit Inc. (Michigan Educational Research Information Triad) to run the NSFNET. Merit, in turn, subcontracts the day-to-day operation of the network to Advanced Network Services (ANS), a nonprofit firm founded in 1990 to provide network-backbone services. The initial funding for ANS was provided by IBM and MCI.

It is difficult to say how much the Internet as a whole costs, since it consists of thousands of different networks, many of which are privately owned. However, it is possible to estimate how much the NSFNET backbone costs, since it is publicly supported. As of 1993, NSF paid Merit about $11.5 million per year to run the backbone. Approximately 80 percent of this is spent on lease payments for the fiber-optic lines and routers (computer-based switches). About 7 percent of the budget is spent on the Network Operations Center, which monitors traffic flow and troubleshoots problems.

To give some sense of the scale of this subsidy, add to it the approximately $7 million per year that NSF pays to subsidize various regional networks, for a total of about $20 million. With current estimates at 20 million Internet users (most of whom are connected to the NSFNET in one way or another), the NSF subsidy amounts to about $1 per person per year. Of course, this is significantly less than the total cost of the Internet; indeed, it does not even include all of the public funds, which come from state governments, state-supported universities, and other national governments. No one really knows how much all this adds up to, although research projects are underway to try to estimate the total U.S. expenditures on the Internet. It has been estimated (read "guessed") that the NSF subsidy of $20 million per year is less than 10 percent of the total U.S. expenditure on the Internet.

The NSFNET backbone will likely be gone by the time you read this, or soon thereafter. With the proliferation of commercial backbones and regional-network interconnections, a general-purpose, federally subsidized backbone is no longer needed. In contracts awarded earlier this year, the NSF will only fund a set of Network Access Points (NAPs), which will be hubs to connect the many private backbones and regional networks. The NSF will also fund a service that will provide fair and efficient routing among the various backbones and regionals. Finally, the NSF will fund a very-high-speed backbone-network service (vBNS) connecting its six supercomputer sites, with restrictions on the users and traffic that it can carry. Its emphasis will be on developing capabilities for high-definition remote visualization and video transmission. The new U.S. network structure will be less hierarchical and more interconnected. The separation between the backbone and regional network layers of the current structure will become blurred, as more regionals are connected directly to each other through NAPs, and traffic passes through a chain of regionals without any backbone transport.

Most users access the Internet through their employer's organizational network, which is connected to a regional. However, in the past few years a number of for-profit independent providers of Internet access have emerged. These typically provide connections between small organizations or individuals and a regional, using either leased lines or dial-up access. Starting in 1993 some of the private computer networks (such as Delphi and World) have begun to offer full Internet access to their customers. (CompuServe and the other private networks have offered e-mail exchange to the Internet for several years.)

Other countries also have many backbone and mid-level networks. For example, most western European countries have national networks attached to EBone, the European backbone. The infrastructure is still immature and quite inefficient in some places. For example, the connections between other countries are often slow or of low quality, so it is common to see traffic between two foreign countries routed through the U.S. via NSFNET.

Internet Technology

Since most backbone and regional network traffic moves over leased phone lines, there's little difference at a low level between the Internet and telephone networks. However, there is a fundamental distinction in how the lines are used by the Internet and phone companies. The Internet provides connectionless packet-switched service, whereas telephone service is circuit-switched. The difference may sound arcane, but it has vastly important implications for pricing and the efficient use of network resources.

Circuit switching requires that an end-to-end circuit be set up before the call can begin. A fixed share of network resources is reserved for the call, and no other call can use those resources until the original connection is closed. This means that a long silence between two teenagers uses the same resources as an active negotiation between two fast-talking lawyers. One advantage of circuit-switching is that it enables performance guarantees such as guaranteed maximum delay, essential for real-time applications like voice conversations. It is also much easier to do detailed accounting for circuit-switched network usage.

In packet switching a data stream is divided into packets of about 200 bytes (on average), which are then sent out onto the network. Each packet contains a header with information necessary for routing the packet from origin to destination. Thus, each packet in a data stream is independent.

The main advantage of packet switching is that it permits statistical multiplexing on the communications lines. That is, the packets from many different sources can share a line, allowing for very efficient use of the fixed capacity. With current technology, packets are generally accepted onto the network on a first-come, first-served basis. If the network becomes overloaded, packets are delayed or dropped.

Internet technology is connectionless, meaning there is no end-to-end setup for a session; each packet is independently routed to its destination. When a packet is ready, the host computer sends it on to another computer, known as a "router," which examines the destination address in the header and passes the packet along to another router, chosen by a route-finding algorithm. A packet may go through 30 or more routers in its travels from one host computer to another. Because routes are dynamically updated, it is possible for different packets from a single session to take different routes to the destination.
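The hop-by-hop forwarding just described can be sketched with a toy routing table. This is an illustrative simplification (the topology and tables are invented), not the actual Internet routing machinery:

```python
# Toy illustration of connectionless, hop-by-hop forwarding. Each router
# consults only its own table to pick the next hop; no end-to-end circuit
# is ever set up. (Hypothetical four-router topology.)

routing_tables = {
    "A": {"D": "B"},   # at router A, packets destined for D go next to B
    "B": {"D": "C"},
    "C": {"D": "D"},   # router C delivers directly to D
}

def forward(packet_dest, start):
    """Trace a packet's path, one independent routing decision per hop."""
    path = [start]
    node = start
    while node != packet_dest:
        node = routing_tables[node][packet_dest]  # per-hop table lookup
        path.append(node)
    return path

print(forward("D", "A"))   # ['A', 'B', 'C', 'D']
```

Because each hop's table can be updated at any time, two packets from the same session may legitimately take different paths, as the text notes.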

Along the way, packets may be broken up into smaller packets, or reassembled into bigger ones. When the packets reach their final destination, they are reassembled at the host computer. The instructions for doing this reassembly are part of the TCP/IP protocol.

Some packet-switching networks are connection oriented (notably, X.25 networks, such as Tymnet and frame-relay networks). In such a network, a connection is set up before transmission begins, just as in a circuit-switched network. A fixed route is defined, and information necessary to match packets to their session and defined route is stored in memory tables in the routers. Thus, connectionless networks economize on router memory and connection set-up time, while connection-oriented networks economize on routing calculations (which have to be redone for every packet in a connectionless network).

Most of the network hardware in the Internet consists of communications lines and switches or routers. In the regional and backbone networks, the lines are mostly leased telephone trunk lines, which are increasingly fiber optic. Routers are computers; indeed, the routers used on the NSFNET are modified commercial IBM RS/6000 workstations, although routers custom-designed by companies such as Cisco, Wellfleet, 3Com, and DEC probably have the majority of market share.

Modem users are familiar with recent speed increases from 300 bps (bits per second) to 2400, 9600, and now 19,200 bps. Leased-line network speeds have advanced from 56 Kbps (kilo, or 10^3 bps) to 1.5 Mbps (mega, or 10^6 bps, known as T-1 lines) in the late '80s, and then to 45 Mbps (T-3) in the early '90s. Lines of 155 Mbps are now available, though not yet widely used. Congress has called for a 1-Gbps (giga, or 10^9 bps) backbone by 1995.

The current T-3 45-Mbps lines can move data at a speed of 1400 pages of text per second; a 20-volume encyclopedia can be sent coast to coast on the NSFNET backbone in half a minute. However, it is important to remember that this is the speed on the superhighway--the access roads via the regional networks usually use the much slower T-1 connections.
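The back-of-the-envelope figures above can be roughly reproduced, if we assume about 4,000 bytes per page of text and about 1,000 pages per encyclopedia volume (both assumptions of ours; the article gives neither):

```python
# Reproducing the rough T-3 throughput figures.
t3_bps = 45e6            # T-3 line rate, bits per second
bytes_per_page = 4000    # assumed size of a page of plain text

pages_per_second = t3_bps / 8 / bytes_per_page
print(f"{pages_per_second:.0f} pages/second")   # ~1400, matching the text

# A 20-volume encyclopedia at an assumed 1,000 pages per volume:
encyclopedia_pages = 20 * 1000
seconds = encyclopedia_pages / pages_per_second
print(f"{seconds:.0f} seconds")
```

With a larger assumed page count or allowance for protocol overhead, the figure rises toward the half minute quoted above; the point is the order of magnitude, not the exact number.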

Economics can explain the preference for packet switching over circuit switching in the Internet and other public networks. Circuit networks use many lines to economize on switching and routing--once a call is set up, a line is dedicated to it, regardless of its rate of data flow, and no further routing calculations are needed. This network design makes sense when lines are cheap relative to switches.

The cost of both communications lines and computers has been declining exponentially for decades. However, around 1970, switches (computers) became relatively cheaper than lines. At that point, packet switching became economical: Lines are shared by multiple connections at the cost of many more routing calculations by the switches. This preference for using many relatively cheap routers to manage few expensive lines is evident in the topology of the backbone networks. In the NSFNET, for example, any packet coming on to the backbone has to pass through two routers at its entry point and again at its exit point. A packet entering at Cleveland and exiting at New York traverses four NSFNET routers but only one leased T-3 communications line.

At present there are many overlapping information networks (telephone, telegraph, data, cable TV, and the like), and new networks are emerging rapidly (such as paging or personal-communications services). Each of the current information networks is engineered to provide a particular type of service, and the added value provided by each of the different types was sufficient to overcome the fixed costs of building overlapping physical networks.

However, given the high fixed costs of providing a network, the economic incentive to develop an integrated-services network is strong. Further, now that all information can be easily digitized, separate networks for separate types of traffic are no longer necessary. Convergence toward a unified, integrated-services network is a basic feature in most visions of the much-publicized information superhighway. The migration to integrated-services networks will have important implications for market structure and competition.

The international telephone community has committed to a future network design that combines elements of both circuit and packet switching to enable the provision of integrated services. The CCITT (an international standards body for telecommunications) has adopted a cell-switching technology called "ATM" (asynchronous transfer mode) for future high-speed networks. Cell switching closely resembles packet switching in that it breaks a data stream into packets, which are then placed on lines shared by several streams. One major difference is that cells have a fixed size, while packets can have different sizes. This makes it possible in principle to offer bounded delay guarantees (since a cell will not get stuck for a surprisingly long time behind an unusually large packet).

An ATM network also resembles a circuit-switched network in that it provides connection-oriented service. Each connection has a set-up phase, during which a virtual circuit is created. The fact that the circuit is virtual, not physical, provides two major advantages. First, it is not necessary to reserve network resources for a given connection; the economic efficiencies of statistical multiplexing can be realized. Second, once a virtual-circuit path is established, switching time is minimized, allowing much-higher network throughput. Initial ATM networks are already being operated at 155 Mbps, while the non-ATM Internet backbones operate at no more than 45 Mbps. The path to 1000-Mbps (gigabit) networks seems much clearer for ATM than for traditional packet switching.

The federal High-Performance Computing Act of 1991 targeted a gigabit per second (Gbps) national backbone by 1995. Six federally funded testbed networks are currently demonstrating various gigabit approaches. To get a feel for how fast a gigabit is, note that most small colleges or universities today have 56-Kbps Internet connections. At 56 Kbps, it takes about five hours to transmit one gigabit!
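The "five hours" figure is easy to verify, and the same arithmetic shows how dramatic the targeted speedup is across the line rates mentioned in this article:

```python
# How long does one gigabit take at various line rates from the text?
gigabit = 1e9   # bits

for name, bps in [("56 Kbps", 56e3), ("T-1 (1.5 Mbps)", 1.5e6),
                  ("T-3 (45 Mbps)", 45e6), ("1 Gbps", 1e9)]:
    seconds = gigabit / bps
    print(f"{name:>15}: {seconds:10.1f} s ({seconds / 3600:.2f} hours)")
    # At 56 Kbps this comes to roughly 17,857 s, i.e. about five hours.
```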

Efforts to develop integrated-services networks are also on the rise. Several cable companies have already started offering Internet connections to their customers. (Because most cable networks are one-way, these connections usually use an asymmetric network connector that brings the input in through the TV cable at 10 Mbps, but sends the output out through a regular phone line at about 14.4 Kbps. This scheme may be popular since most users tend to download more information than they upload.) AT&T, MCI, and all of the Regional Bell Operating Companies (RBOCs) are involved in mergers and joint ventures with cable TV and other specialized network providers to deliver new integrated services such as video-on-demand. ATM-based networks, although initially developed for phone systems, ironically have been first implemented for data networks within corporations and by some regional and backbone providers.

Internet Pricing Schemes

Until recently, nearly all users faced the same pricing structure for Internet usage. A fixed-bandwidth connection was charged an annual fee, which allowed for unlimited usage up to the physical maximum-flow rate (bandwidth). We call this "connection pricing." Most connection fees were paid by organizations (universities, government agencies, and so on), with users paying nothing directly themselves.

Simple connection pricing still dominates the market, but a number of variants have emerged. The most notable is "committed information-rate" pricing, whereby an organization is charged a two-part fee: one based on the bandwidth of the connection, which is the maximum feasible flow rate, the other based on the maximum guaranteed flow to the customer. The network provider installs both sufficient capacity to simultaneously transport the committed rate for all of its customers and flow regulators on each connection. When some customers operate below the committed rate, the excess network capacity is available on a first-come, first-served basis for the other customers. This type of pricing is more common in private networks than in the Internet because a TCP/IP flow rate can be guaranteed only network by network, greatly limiting its value unless many of the 20,000 Internet networks coordinate on offering this type of guarantee.
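The flow regulator described above can be sketched as a token bucket, one common way such regulators are built (this is our illustration, not necessarily the mechanism any particular provider used): tokens accrue at the committed rate, and a packet is admitted only if enough tokens are on hand.

```python
# Sketch of a token-bucket flow regulator for committed-rate pricing.
# All parameter values below are illustrative assumptions.

class TokenBucket:
    def __init__(self, committed_rate_bps, bucket_bits):
        self.rate = committed_rate_bps   # guaranteed (committed) flow rate
        self.capacity = bucket_bits      # maximum burst size
        self.tokens = bucket_bits        # bucket starts full
        self.last = 0.0

    def allow(self, packet_bits, now):
        """Return True if the packet conforms to the committed rate."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False   # non-conforming: delay, drop, or carry as best-effort

bucket = TokenBucket(committed_rate_bps=64_000, bucket_bits=16_000)
print(bucket.allow(12_000, now=0.0))   # True: within the initial burst
print(bucket.allow(12_000, now=0.0))   # False: burst exhausted, must wait
print(bucket.allow(12_000, now=1.0))   # True: a second's worth of tokens accrued
```

Traffic beyond the committed rate is not rejected outright; as the text says, it competes for leftover capacity on a first-come, first-served basis.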

Networks that offer committed information-rate pricing generally have enough capacity to meet the entire guaranteed bandwidth. This is a bit like a bank holding 100 percent reserves, but is necessary with existing technology since there is no commonly used way to prioritize packets.

For most usage, the typical packet placed on the Internet is priced at zero. There are a few exceptions at the outer fringes. For example, some private networks (such as CompuServe) provide e-mail connections to the Internet. Several of these charge per message above a low threshold. The public networks in Chile and New Zealand charge customers by the packet for all international traffic.

Coping with Congestion

However, most of the Internet does not price by the packet. Organizations pay a fixed fee in exchange for unlimited access up to the maximum throughput of their particular connection. This is a classic problem of the commons--the externality exists because a packet-switched network is a shared-media technology. When the network is congested, each packet I send imposes a cost on all other users, because the resources I'm using aren't available to them. This cost can come in the form of delay or lost (dropped) packets.

Without an incentive to economize on usage, congestion can become quite serious. Indeed, the problem is more serious for data networks than for many other congestible resources because of the tremendously wide range of usage rates. On a highway, for example, at a given moment, a single user is more or less limited to either putting zero or one cars on the road. In a data network, however, a single user at a modern workstation can send a few bytes of e-mail or put a load of hundreds of Mbps on the network. Within a year, any undergraduate with a new Macintosh will be able to plug in a video camera and transmit live videos home to mom, demanding as much as 1 Mbps. Since the maximum throughput on current backbones is only 45 Mbps, it is clear that even a few users with relatively inexpensive equipment could bring the network to its knees.

Congestion problems are not just hypothetical. For example, congestion was quite severe in 1987 when the NSFNET backbone was running at much slower transmission speeds (1.5 Mbps). Users running interactive, remote-terminal sessions experienced excessive delays. As a temporary fix, the NSFNET programmed the routers to give terminal sessions (using the telnet program) higher priority than file transfers (using the ftp program). More recently, large ftp archives, Web servers at the National Center for Supercomputer Applications, the original Archie site at McGill University, and many other services have had serious problems with overuse.

If everyone just stuck to ASCII e-mail, congestion would not likely become a problem for many years--if ever. However, new multimedia services such as Mosaic and Internet Talk Radio are consuming ever-larger amounts of bandwidth, and although the supply of bandwidth is increasing, so is the demand. If usage remains unpriced, it is likely that in the foreseeable future, the demand for bandwidth will sometimes exceed the supply.

Administratively, assigning different priorities to different types of traffic is appealing. As a long-term solution to congestion costs, however, it is impractical due to the usual inefficiencies of rationing. More importantly, it is technologically impossible to enforce. From the network's perspective, bits are bits, and there is no certain way to distinguish between different types of uses. By convention, most standard programs use a unique identifier included in the TCP header (the port number); this is what NSFNET used for its priority scheme in 1987. However, it is a trivial matter to put a different port number into the packet headers; for example, to assign the telnet number to ftp packets to defeat the 1987 priority scheme. To avoid this problem, NSFNET kept its prioritization mechanism secret, but that is hardly a long-term solution.

What other mechanisms can be used to control congestion? The most obvious approach is to charge some sort of usage price. To date, however, usage pricing for backbone services has not been considered seriously, and even tentative proposals have met with strong opposition.

Many proposals rely on voluntary efforts to control congestion. Numerous participants in congestion discussions suggest that peer pressure and user ethics will be sufficient to control congestion costs. For example, recently a single user started broadcasting a 350--450-Kbps audio-video test pattern to hosts around the world, blocking the network's ability to handle a scheduled audio broadcast from a Finnish university. When a network engineer sent a strongly worded message to the user's site administrator, the offending workstation was taken off the network. This illustrates one problem with relying on peer pressure: The signal was not terminated until after it had caused serious disruption. Also, it apparently was caused by a novice user who did not understand the impact of what he had done; as network access becomes ubiquitous, an ever-increasing number of unsophisticated users will have access to applications that can cause severe congestion if not used properly. And of course, peer pressure may be quite ineffective against malicious users who want to intentionally cause network congestion.

One recent proposal for voluntary control is closely related to the 1987 method used by the NSFNET. This proposal would require users to indicate a priority level for each of their sessions. Routers would be programmed to maintain multiple queues, one for each priority class. Obviously, the success of this scheme would depend on users' willingness to assign lower priorities to some of their traffic. However, as long as one or a few abusive users can create crippling congestion, voluntary priority schemes may be largely ineffective.
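The multiple-queue router in this proposal can be sketched simply. Note that the priority field is user-declared, which is exactly what makes the scheme gameable:

```python
import heapq

# Sketch of a router with strict priority classes (0 = highest). Because
# the priority is declared by the user, nothing stops a user from marking
# all traffic as class 0 -- the weakness discussed above.

class PriorityRouter:
    def __init__(self):
        self.queue = []
        self.counter = 0   # tie-breaker preserves FIFO order within a class

    def enqueue(self, packet, priority):
        heapq.heappush(self.queue, (priority, self.counter, packet))
        self.counter += 1

    def dequeue(self):
        return heapq.heappop(self.queue)[2]

r = PriorityRouter()
r.enqueue("bulk ftp transfer", priority=2)
r.enqueue("telnet keystroke", priority=0)
r.enqueue("e-mail message", priority=1)
print(r.dequeue())   # 'telnet keystroke' -- interactive traffic goes first
```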

In fact, a number of voluntary mechanisms are in place today. They help somewhat, partly because most users are unaware of them and partly because defeating them requires some programming expertise. For example, most implementations of the TCP protocols use a "slow start" algorithm, which controls the rate of transmission based on the current state of delay in the network. But nothing prevents users from modifying their TCP implementation to send at full throttle.
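The slow-start idea can be illustrated with a simplified sketch: the sender's congestion window grows exponentially each round trip until loss signals congestion, then backs off. The real TCP algorithm has more states (congestion avoidance, among others); this shows only the basic idea.

```python
# Simplified sketch of TCP slow start -- not the full algorithm.

def slow_start(rounds, loss_at=None):
    """Return the congestion window (in packets) for each round trip."""
    cwnd, history = 1, []
    for r in range(rounds):
        history.append(cwnd)
        if loss_at is not None and r == loss_at:
            cwnd = max(1, cwnd // 2)   # back off when loss is detected
        else:
            cwnd *= 2                  # exponential growth while all is well
    return history

print(slow_start(6))              # [1, 2, 4, 8, 16, 32]
print(slow_start(6, loss_at=3))   # [1, 2, 4, 8, 4, 8]
```

The voluntary nature of the mechanism is apparent: the backoff happens only because the sender's own software chooses to honor the loss signal.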

A completely different approach to reducing congestion is purely technological: overprovisioning, or maintaining sufficient network capacity to support the peak demands without noticeable service degradation. (The effects of network congestion are usually negligible until usage is very close to capacity.) This has been the most important mechanism used to date in the Internet. However, overprovisioning is costly, and with both very-high-bandwidth applications and near-universal access fast approaching, it may become too costly. In short, will the cost of capacity decline faster than the growth in capacity demand?

Given the explosive growth in demand and the long lead time needed to introduce new network protocols, the Internet may face serious problems very soon if productivity increases do not keep up. Therefore, we believe it is time to seriously examine incentive-compatible allocation mechanisms, such as various forms of usage pricing.

Choosing the Right Level of Service

The current Internet offers a single service quality: best-efforts packet service. Packets are transported first-come, first-served with no guarantee of success. Some packets may experience severe delays, while others may be dropped and never arrive.

However, different kinds of data place different demands on network services. E-mail and file transfers require 100 percent accuracy, but can easily tolerate delay. Real-time voice broadcasts require much higher bandwidth than file transfers and can only tolerate minor delays, but they can tolerate significant distortion. Real-time video broadcasts have very low tolerance for delay or distortion.

Because of these different requirements, network-routing algorithms should treat different types of traffic differently--giving higher priority to, say, real-time video than to e-mail or file transfer. But the user must truthfully indicate what type of traffic is being sent. If real-time-video bit streams get the highest quality service, why not claim that all of your bit streams are real-time video?

The trick is to design a pricing mechanism that gives the users the right incentive to ask for the kind of services they really need. If a user wants to send high-priority traffic, then he will have to pay a first-class fare; low-priority traffic, like e-mail, can travel tourist class. Economists have come up with pricing mechanisms that, in theory, give users the right incentives to reveal their true priorities. However, some of these pricing mechanisms are very complicated; ordinary users--or even computer hackers--would probably not want to spend a lot of time and effort figuring out the cheapest way to transfer a file. But just as we use travel agents to figure out the cheapest airfare, we could use "artificial agents"--intelligent computer programs--to figure out the cheapest way to send information across the network. If the network pricing mechanism presents the right incentives to these "artificial agents," then every computer on the network could be working together to optimize network use.

One of the first necessary steps for implementing usage-based pricing (either for congestion control or multiple service-class allocation) is to measure and account for usage. Accounting poses some serious problems. For one thing, packet service is inherently ill-suited to detailed usage accounting because every packet is independent. As an example, a one-minute phone call in a circuit-switched network requires one accounting entry in the usage database. But in a packet network, that one-minute phone call would require around 2,500 average-sized packets; complete accounting for every packet would then require about 2,500 database entries. On the NSFNET alone, over 40 billion packets are being delivered each month. Maintaining detailed accounting by the packet in a way similar to phone-company accounting may be too expensive.
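The packets-per-call figure follows directly from the standard 64-Kbps telephone voice-coding rate and the average packet size given earlier in this article:

```python
# Rough check of the packet-accounting arithmetic above.
call_bits = 64_000 * 60    # one minute of voice at the standard 64 Kbps
bytes_per_packet = 200     # average packet size, per the text

packets = call_bits / 8 / bytes_per_packet
print(f"{packets:.0f} packets")   # 2400 -- about the 2,500 quoted above
```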

Another accounting problem concerns the granularity of the records. Presumably, accounting detail is most useful when it traces traffic to the user. Certainly, if the purpose of accounting is to establish prices as incentives, those incentives will be most effective if they affect the person actually making the usage decisions. But the network is, at best, capable of reliably identifying only the originating host computer (just as phone networks only identify the phone number that placed a call, not the caller). The host computer will need another layer of expensive, complex authorization and accounting software in order to track packets to specific user accounts. Imagine, for instance, trying to account for student e-mail usage at a large, public computer cluster.

The higher the level of aggregation, the more practical and less costly accounting becomes. For example, the NSFNET already collects some usage information about each of the subnetworks that connect to its backbone (although this data is based on a sample, not an exhaustive accounting for every packet). Whether accounting at lower levels of aggregation is worthwhile depends on cost-saving innovations in internetwork accounting methods.

Network Usage and Public Funding

Excess capacity (or overprovisioning) has been subsidized heavily--directly or indirectly--through public funding. Providing network services at a zero usage price probably made sense during the research, development, and deployment phases of the Internet. However, as the network matures and becomes widely used by commercial interests, it is harder to rationalize. Why should data-network usage be free even to universities, when telephone and postal usage are not? (Many university employees routinely use e-mail rather than the phone to communicate with friends and family at other Internet-connected sites. Likewise, a service is now being offered that transmits faxes between cities over the Internet for free; the sender pays only the local phone-call charges to deliver them to the intended fax machine.)

Indeed, Congress has required that the federally developed, gigabit-network technology must accommodate usage accounting and pricing. Furthermore, because the NSF will no longer provide backbone services, the general-purpose public network will be left to commercial and state-agency providers. As the net becomes increasingly privatized, competitive forces may necessitate the use of more-efficient allocation mechanisms. So there are both public and private pressures for serious consideration of pricing. The trick is to design a pricing system that minimizes transactions costs.

Pros and Cons of Pricing

Standard economic theory suggests that prices should be matched to costs. There are three main elements of network costs: connecting to the net, providing additional network capacity, and congestion. Once capacity is in place, direct-usage cost is negligible, and is almost surely not worth charging for by itself, given the accounting and billing costs.

Charging for connections is conceptually straightforward: A connection requires a line, a router, and some labor effort. The line and the router are reversible investments and can reasonably be charged for on an annual lease basis (though many organizations buy their own routers). This, essentially, is the current scheme for Internet connection fees.

Charging for incremental capacity requires usage information. Ideally, we need a measure of the organization's demand during the expected peak period of usage over some period of time, to determine its share of the incremental-capacity requirement. A reasonable approximation might be to charge a premium price for usage during predetermined peak periods, as is routinely done for electricity. However, casual evidence suggests that peak-demand periods are much less predictable for the Internet than for other utility services. One reason is that it is very easy to schedule some activities for off-peak hours, leading to a shifting-peaks problem. (The single largest current use of network capacity is file transfer, much of which is distribution of files from central to local archives. Just as some fax machines allow faxes to be transmitted at off-peak times, large data files could easily be transferred at off-peak times--if users had appropriate incentives to adopt such practices.) In addition, so much traffic traverses long distances around the globe that time-zone differences are important.
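As a rough illustration of such a peak-premium tariff, consider the following sketch. The peak window and per-packet rate here are hypothetical numbers of our own, chosen only to show the incentive at work:

```python
# Hypothetical peak-load tariff: packets sent during a predetermined
# peak window pay a premium; off-peak packets travel free.
PEAK_START, PEAK_END = 9, 17          # assumed peak window (hours, local time)
PEAK_RATE = 0.0024                    # assumed charge in cents per peak packet

def monthly_bill(packet_log):
    """packet_log: list of (hour_sent, packet_count) tuples."""
    peak_packets = sum(count for hour, count in packet_log
                       if PEAK_START <= hour < PEAK_END)
    return peak_packets * PEAK_RATE   # off-peak usage is not charged

# A user who shifts a bulk file transfer to 2 a.m. pays nothing for it:
log = [(10, 50_000), (14, 20_000), (2, 500_000)]
print(monthly_bill(log))  # only the 70,000 peak packets are billed
```

The shifting-peaks problem shows up directly in this sketch: if enough users reschedule their bulk transfers to just outside the window, the "off-peak" hours become the new peak and the predetermined window no longer tracks actual congestion.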

Pricing Congestion

When the network is near capacity, a user's incremental packet imposes costs on other users in the form of delay or dropped packets. Our scheme for internalizing this cost is to impose a congestion price on usage that is determined by a real-time auction, or "smart market."

The basic idea is simple. Much of the time the network is uncongested, and the price for usage should be zero. When the network is congested, packets are queued and delayed. The current queuing scheme is first-in, first-out (FIFO). We propose instead that packets be prioritized based on the value that the user puts on getting the packet through quickly. Each user would assign his or her packets a bid measuring willingness-to-pay for immediate servicing. At congested routers, packets would be prioritized by bid: The packets with the highest bids would be admitted first. If the router can handle all the packets arriving in a given time slice, then there is no congestion and no reason to charge any packets for access. However, if the router reaches capacity, only packets with bids higher than some cutoff value would be admitted to the network. Each admitted packet is then charged a price of admission equal to the cutoff value--which is guaranteed to be no higher than the admitted packet's own bid. It can be shown that this pricing system gives users the right incentives to reveal their true priorities.
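One market-clearing cycle of this smart market can be sketched in a few lines. This is our own minimal illustration of the uniform-price ("cutoff") auction, not code from any deployed router:

```python
def clear_slice(bids, capacity):
    """Uniform-price auction for one time slice of arriving packets.

    bids: willingness-to-pay attached to each arriving packet.
    capacity: number of packets the router can forward this slice.
    Returns (admitted_bids, price); every admitted packet pays the
    same cutoff price, never more than its own bid.
    """
    ranked = sorted(bids, reverse=True)
    if len(ranked) <= capacity:
        return ranked, 0.0           # no congestion: usage is free
    admitted = ranked[:capacity]
    price = ranked[capacity]         # highest rejected bid sets the cutoff
    return admitted, price

# Five packets compete for three slots during a congested slice:
admitted, price = clear_slice([5.0, 2.0, 1.0, 0.5, 0.25], capacity=3)
print(admitted, price)   # [5.0, 2.0, 1.0] 0.5
```

Because each admitted packet pays the cutoff rather than its own bid, overstating willingness-to-pay never lowers a user's price; it only risks paying for service the user did not really value. That is the incentive-compatibility property claimed above.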

This scheme has a number of nice features. In particular, not only do those with the highest cost of delay get served first, but the prices also send the right signals for capacity expansion in a competitive market for network services. If all of the congestion revenues are reinvested in new capacity, then capacity will be expanded to the point where its marginal value is equal to its marginal cost.

Prices in a real-world smart market cannot be updated continuously. The efficient price is determined by comparing a list of user bids to the available capacity and determining the cutoff price. In fact, packets arrive not all at once, but over time. It would be necessary to clear the market periodically based on a time-slice of bids. The efficiency of this scheme, then, depends on how costly it is to frequently clear the market and on how persistent the periods of congestion are. If congestion is exceedingly transient, the state of congestion may have changed by the time the market price is updated.

Some network specialists have suggested that many customers--particularly not-for-profit agencies and schools--will object because they will not know in advance how much network utilization will cost them. We believe that this argument is partially a red herring, since the user's bid always controls the maximum network-usage costs. Indeed, since we expect a zero congestion price for most traffic, it should be possible for most users to avoid ever paying a usage charge by simply setting all packet bids to zero. (Since most users are willing to tolerate some delay for e-mail, file transfer, and so forth, most traffic should be able to go through with acceptable delays at a zero congestion price. Time-critical traffic will typically pay a price.) When the network is congested enough to have a positive congestion price, these users will pay the cost in units of delay rather than cash, as they do today.

We also expect that in a competitive market for network services, fluctuating congestion prices would usually be a wholesale phenomenon, and that intermediaries would repackage the services and offer them at a guaranteed price to end users. Essentially, this would create a futures market for network services.

Problems with auctions must also be solved. Our proposal specifies a single network entry point with auctioned access. In practice, networks have multiple gateways, each subject to differing states of congestion. Should a smart market be located in a single, central hub, with current prices continuously transmitted to the many gateways? Or should a set of simultaneous auctions operate at each gateway? How much coordination should there be between the separate auctions? These problems need not only theoretical models, but also empirical work to determine the optimal rate of market-clearing and interauction information sharing, given the costs and delays of real-time communication.

Another serious problem for almost any usage-pricing scheme is how to correctly determine whether the sender or receiver should be billed. With telephone calls, it is clear that, in most cases, the caller should pay. However, in a packet network, both sides originate their own packets, and in a connectionless network there is no mechanism for identifying which of party B's packets were solicited as responses to a session initiated by party A. Consider a simple example: A major use of the Internet is file retrieval from public archives. If the originator of each packet were charged for that packet's congestion cost, then the providers of free public goods (the file archives) would pay nearly all of the congestion charges induced by a user's file request. (Public file servers in Chile and New Zealand already face this problem: Any packets they send in response to requests from foreign hosts are charged by the network. Network administrators in New Zealand are concerned that this blind charging scheme is stifling the production of public-information goods. For now, those public archives that do exist have a sign-on notice pleading with international users to be considerate of the costs they are imposing on the archive providers.) Either the public-archive provider would need a billing mechanism to charge requesters for the (ex post) congestion charges, or the network would need to be engineered to bill the correct party. In principle, this problem can be solved by schemes like 800 or 900 numbers and collect phone calls, but the added complexity in a packetized network may be too costly.

Consider the average cost of the current NSFNET: about $10^6 (one million dollars) per month, for about 42,000 x 10^6 packets per month. This implies a cost per packet (around 200 bytes) of about 1/420 of a cent. If there are 20 million users of the NSFNET backbone (10 per host computer), then full cost recovery of the NSFNET subsidy would imply an average monthly bill of about $0.05 per person. If we accept the estimate that the total cost of the U.S. portion of the Internet is about ten times the NSFNET subsidy, we come up with 50 cents per person per month for full cost recovery. The revenue from congestion fees would presumably be significantly less than this amount. (If revenue from congestion fees exceeded the cost of the network, it would be profitable to expand the size of the network.)
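These back-of-the-envelope numbers can be checked directly:

```python
# Reproduce the cost-recovery arithmetic for the NSFNET backbone.
monthly_cost = 1e6            # NSFNET subsidy: ~$10^6 per month
packets = 42_000e6            # ~42,000 x 10^6 packets per month
users = 20e6                  # ~20 million backbone users (10 per host)

cents_per_packet = monthly_cost * 100 / packets
print(1 / cents_per_packet)   # ~420, i.e. about 1/420 of a cent per packet

bill_per_user = monthly_cost / users
print(bill_per_user)          # 0.05, i.e. about five cents per user per month

# Scaling to the whole U.S. Internet (~10x the NSFNET subsidy):
print(10 * bill_per_user)     # about 50 cents per user per month
```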

The average cost of the Internet is so small today because the technology is so efficient: Packet switching allows very cost-effective use of existing lines and switches. A video e-mail message could easily use 10^4 times as many bits as a plaintext ASCII e-mail with the "same" information content, and providing this amount of incremental bandwidth could be quite expensive. Well-designed congestion prices would not charge everyone the average cost of this incremental bandwidth, but would instead charge those users whose demands create the congestion and the need for additional capacity.

Pricing Information

Our focus thus far has been on the technology, costs, and pricing of network transport. However, most of the network value lies not in the transport itself, but in the value of the information being transported. For the full potential of the Internet to be realized, it will be necessary to develop methods to charge for the value of the information services available on the network.

Vast troves of free, high-quality information (and probably equally large troves of dreck) are currently available on the Internet. Historically, there has been a strong base of volunteerism to collect and maintain data, software, and other information archives. However, as usage explodes, volunteer providers are learning that they need revenues to cover their costs. And of course, careful researchers may be skeptical about the quality of any information provided for free.

Charging for information resources is quite a difficult problem. A service like CompuServe charges customers by establishing a billing account. This requires that users obtain a password, and that the information provider implement a sophisticated accounting-and-billing infrastructure. However, one of the advantages of the Internet is that it is so decentralized: Information sources are located on thousands of different computers. It would simply be too costly for every information provider to set up an independent billing system and give out separate passwords to each of its registered users. Users could end up with dozens of different authentication mechanisms for different services.

A deeper problem for pricing information services is that traditional pricing schemes are not appropriate. Most pricing is based on the measurement of replications: We pay for each copy of a book, each piece of furniture, and so forth. This usually works because the high cost of replication generally prevents us from avoiding payment. If you buy a table we like, we generally have to go to the manufacturer to buy one for ourselves; we can't simply copy yours. With information goods, the pricing-by-replication scheme breaks down. This has been a major problem for the software industry: Once the sunk costs of software development are invested, replication costs are essentially zero. The same is especially true for any form of information transmitted over the network. Imagine, for example, that copy shops begin to make course packs available electronically. What is to stop a young entrepreneur from buying one electronic copy and selling it at a lower price to everyone else in the class? This is an even greater problem than the one publishers face from unauthorized photocopying, since the cost of electronic replication is essentially zero.

A small body of literature on the economics of copying examines some of these issues. However, the same network connections that exacerbate the problems of pricing information goods may also help to solve some of these problems. For example, Brad Cox describes the idea of superdistribution of information objects, in which accessing a piece of information automatically sends a payment to the provider via the network. (See "Superdistribution and Electronic Objects," DDJ, October 1992.)

Electronic Commerce and the Internet

Some companies have already begun to advertise and sell products and services over the Internet. Home shopping is expected to be a major application for future integrated-services networks that transport sound and video. Electronic commerce could substantially increase productivity by reducing the time and other transaction costs inherent in commerce, much as mail-order shopping has already begun to do. One important requirement for a complete electronic-commerce economy is an acceptable form of electronic payment. (In our work on pricing for network transport, we have found that some form of secure electronic currency is necessary for the transaction costs of accounting and billing to be low enough to justify usage pricing.)

Bank debit cards and automatic-teller cards work because they have reliable authentication procedures based on both a physical device and knowledge of a private code. Digital currency over the network is more difficult because it is not possible to install physical devices and protect them from tampering on every workstation. (Traditional credit cards are unlikely to receive wide use over a data network, though there is some use currently. It is very easy to set up an untraceable computer account to fraudulently collect credit-card numbers; fraudulent telephone mail-order operations are more difficult to arrange.) Therefore, authentication and authorization will most likely be based solely on the use of private codes. Another objective is anonymity, so individual buying histories cannot be collected and sold to marketing agencies (or Senate confirmation committees).

A number of recent computer-science papers have proposed protocols for digital cash, checks, and credit. Each of these has some desirable features, yet none has been widely implemented thus far. The seminal paper "Security Without Identification: Transaction Systems to Make Big Brother Obsolete," by D. Chaum (Communications of the ACM 28[10], 1985) proposed an anonymous form of digital cash that requires a single, central bank to electronically verify the authenticity of each "coin." In their paper, "Netcash: A Design for Practical Electronic Currency on the Internet" (Proceedings of the First ACM Conference on Computer and Communications Security, ACM Press, 1993), Medvinsky and Neuman propose a form of digital check that is not completely anonymous, but is much more workable for widespread commerce with multiple banks. Similarly, Low, Maxemchuk, and Paul suggest a protocol for anonymous credit cards in their paper "Anonymous Credit Cards" (AT&T Bell Laboratories Technical Report, 1994, available at ftp://research.att.com/dist/anoncc/anoncc.ps.Z).
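To make Chaum's construction concrete, here is a toy RSA blind signature, the core mechanism of his anonymous digital cash: the bank signs a coin without ever seeing it. The parameters are textbook-sized and wholly insecure, and all variable names are ours, but the algebra is the real thing:

```python
# Toy RSA blind signature (the heart of Chaum-style anonymous cash).
# Textbook parameters -- far too small for any real security.
p, q = 61, 53
n = p * q                           # bank's public modulus
e = 17                              # bank's public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # bank's private signing exponent

coin = 42   # the user's coin (in practice, a hash of a serial number)
r = 19      # user's secret blinding factor, coprime to n

# 1. User blinds the coin and sends it to the bank.
blinded = (coin * pow(r, e, n)) % n
# 2. Bank signs blindly -- it learns nothing about `coin`.
blind_sig = pow(blinded, d, n)
# 3. User unblinds, recovering an ordinary RSA signature on `coin`.
sig = (blind_sig * pow(r, -1, n)) % n
# 4. Any merchant can verify the coin against the bank's public key.
assert pow(sig, e, n) == coin
print(sig == pow(coin, d, n))   # True: identical to a direct signature
```

When the coin is later deposited, the bank can verify its own signature but cannot link the coin back to the withdrawal, which is what makes the cash anonymous. (The three-argument `pow` with a negative exponent, used here for modular inverses, requires Python 3.8 or later.)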

Regulatory Issues

The growth of data networks like the Internet is an increasingly important motivation for regulatory reform of telecommunications. A primary principle of the current regulatory structure, for example, is that local phone service is a natural monopoly, and thus must be regulated. However, local phone companies face ever-increasing competition from data-network services. For example, the fastest-growing component of telephone demand has been for fax transmission, but fax technology is better suited to packet-switching networks than to voice networks, and faxes are increasingly transmitted over the Internet. As integrated-services networks emerge, they will provide an alternative for voice calls and video conferencing, as well. This "bypass" is already occurring in the advanced, private networks that many corporations, such as General Electric, are building.

As a result, the trend seems to be toward removing the barriers against cross-ownership of local phone and cable-TV companies. The RBOCs have filed a motion to remove the remaining restrictions of the Modified Final Judgment that created them (with the 1984 breakup of AT&T). The White House, Congress, and the FCC are all developing new models of regulation, with a strong bias towards deregulation.

Internet transport itself is currently unregulated. This is consistent with the principle that common carriers are natural monopolies, and must be regulated, but services provided over those common carriers need not be. However, this principle has never been consistently applied to phone companies: Services provided over phone lines are regulated. Many public-interest groups are now arguing for similar regulatory requirements for the Internet.

One issue is universal access--the assurance of basic service for all citizens at a very low price. But what is "basic service"? Is it merely a data line, or a multimedia, integrated-services connection? And in an increasingly competitive market for communications services, where should the money to subsidize universal access be raised? High-value uses that traditionally could be charged premium prices by monopoly providers are increasingly subject to competition and bypass.

A related question is whether the government should provide some data-network services as public goods. Some initiatives are already underway. For instance, the Clinton administration has required that all published government documents be available in electronic form. Another current debate concerns the appropriate access subsidy for primary and secondary teachers and students.

The Market Structure of the Information Highway

If different components of local phone and cable-TV networks are deregulated, what degree of competition is likely? Similar questions arise for data networks. For example, a number of observers believe that by ceding backbone transport to commercial providers, the federal government has endorsed above-cost pricing by a small oligopoly of providers. Looking ahead, equilibrium market structures may be quite different for the emerging integrated-services networks than they are for the current specialized networks.

One interesting question is the interaction between pricing schemes and market structure. If competing backbones continue to offer only connection pricing, would an entrepreneur be able to skim off high-value users by charging usage prices, but offering more-efficient congestion control? Alternatively, would a flat-rate connection-price provider be able to undercut usage-price providers by capturing a large share of low-value, base-load customers, who prefer to pay for congestion with delay rather than cash? The interaction between pricing and market structure may have important policy implications, because certain types of pricing may rely on compatibilities between competing networks that will enable efficient accounting and billing. Thus, compatibility regulation may be needed, similar to the interconnect rules imposed on RBOCs.

References

Bohn, R., H.W. Braun, K. Claffy, and S. Wolff. "Mitigating the Coming Internet Crunch: Multiple Service Levels via Precedence." Technical Report, San Diego Supercomputer Center and NSF, 1993.

Braun, H.W., and K. Claffy. "Network Analysis in Support of Internet Policy Requirements." Technical Report, San Diego Supercomputer Center, 1993.

Chaum, D. "Security Without Identification: Transaction Systems to Make Big Brother Obsolete." Communications of the ACM, 28(10), 1985.

Cocchi, R., D. Estrin, S. Shenker, and L. Zhang. "A Study of Priority Pricing in Multiple Service Class Networks." Proceedings of Sigcomm '91 (available at ftp://ftp.parc.xerox.com/pub/net-research/pricing-sc.ps).

------. "Pricing in Computer Networks: Motivation, Formulation, and Example." Technical Report, University of Southern California, 1992.

de Prycker, M. Asynchronous Transfer Mode: Solution for ISDN. New York: Ellis Horwood, 1991.

Goffe, W. "Internet Resources for Economists." Technical Report, University of Southern Mississippi. Journal of Economic Perspectives Symposium, Fall 1994 (available at gopher://niord.shsu.edu).

Huberman, B. The Ecology of Computation. New York: North-Holland, 1988.

Low, S., N.F. Maxemchuk, and S. Paul. "Anonymous Credit Cards." Technical Report, AT&T Bell Laboratories, Murray Hill, NJ, 1994 (available at ftp://research.att.com/dist/anoncc/anoncc.ps.Z).

MacKie-Mason, J.K., and H. Varian. "Some Economics of the Internet." Technical Report, University of Michigan, 1993.

------. "Pricing the Internet." in Brian Kahin and James Keller, Public Access to the Internet. Englewood Cliffs, NJ: Prentice Hall, 1994.

Markoff, J. "Traffic Jams Already on the Information Highway." New York Times (November 3, 1993).

Medvinsky, G., and B.C. Neuman. "Netcash: A Design for Practical Electronic Currency on the Internet." Proceedings of the First ACM Conference on Computer and Communications Security. New York: ACM Press, 1993 (available at ftp://gopher.econ.lsa.umich.edu/pub/Archive/netcash.ps.Z).

Partridge, C. Gigabit Networking. Reading, MA: Addison-Wesley, 1993.

Shenker, S. "Service Models and Pricing Policies for an Integrated Services Internet." Technical Report, Palo Alto Research Center, Xerox Corp., 1993.

Tanenbaum, A.S. Computer Networks. Englewood Cliffs, NJ: Prentice Hall, 1989.


Copyright © 1994, Dr. Dobb's Journal