VoxEU Column: Productivity and Innovation

Crowding out in a neutral Internet

Most of the Internet’s infrastructure is neutral with regard to data packets’ contents, but some groups are pushing for legislation to protect “network neutrality.” This column says that strictly neutral data transmission would cause inefficiencies and reduce the availability and quality of Internet services. It defends prioritising quality-sensitive data packets and argues that network neutrality legislation would be a field day for lawyers and lobbyists.

After months of controversial debate on the new telecoms package, the Commission, the Council, and the Parliament of the EU now seem to be close to a compromise. "About 95% of the package has been agreed", said Viviane Reding, Commissioner for Information Society and Media, after the latest negotiating round between the three EU bodies. A sound amendment of Internet and telecoms regulation in Europe could pave the way to more investment in desperately needed broadband and mobile infrastructure, more innovation, greater availability of sophisticated applications and services, growth in the ICT sector and beyond, and – not least – enhanced competition.

However, the debate on “net neutrality” is ongoing, and it seems to be dominated by competing lobby groups. Although the European debate has been somewhat more balanced and less harsh than in the US, it has gained new momentum during the last couple of weeks, stirred up by some left-wing and green MEPs and by big content providers. To bring the discussion back to a well-founded basis, it seems necessary to disentangle some issues from the spaghetti bowl of arguments surrounding net neutrality.

First, as Hahn and Litan (2007) conclusively show, strict net neutrality is a myth and has never existed in the narrower sense. Neither the treatment of data packets nor Internet usage and access pricing has been strictly neutral in the past, nor will it be in the future. To give just one example, downloading has been privileged over uploading by being allocated greater bandwidth. Given the preferences and usage habits of the vast majority of users, this regime of “discrimination” makes sense – even in Web 2.0, most users download far more content than they upload.

Second, and most importantly, a strictly neutral treatment of every single data packet might cause inefficiencies, reduce the availability and quality of Internet services, and reduce overall welfare. There can be no doubt that the various applications on the Internet transmit data packets with varying characteristics and delivery requirements. They differ in data rate and bandwidth consumption, priority, quality-of-service sensitivity, and economic value. That is why it could be perfectly reasonable, from an economic and technical perspective, to treat these different data packets differently. Since the massive increase in data volume is already causing congestion in the form of delays, jitter, and even data loss, the different characteristics of data packets become relevant.

Basically, one can distinguish between data packets that are very sensitive to the quality and speed of transmission and data packets that are far more elastic. For instance, normal web browsing, email, and peer-to-peer (P2P) file sharing do not suffer any loss of quality of service from a few milliseconds of delay. Even packet loss does not interfere much with the end product, since lost packets are retransmitted by the original source, so the user may not even notice the loss. At the other end of the spectrum are applications – live broadcasting, interactive lectures, real-time voice conversations, and online games – whose quality of service suffers significantly in the event of delay or packet loss. Table 1 shows the different characteristics of some typical Internet applications.

Table 1. Different characteristics of Internet services

Service            | Quality sensitivity | Bandwidth consumption           | Value/willingness to pay
P2P file sharing   | low                 | very high (no limit)            | low
YouTube            | low (buffered)      | medium (320 or 600 kbps)        | low
Email              | low                 | very low                        | low
VoIP               | medium-high         | low (30 to 80 kbps)             | medium
Online gaming      | high                | low to medium                   | medium
E-lectures         | high                | medium                          | high
Telemedicine       | high                | medium-high (up to 8,000 kbps)  | high
In a strictly neutral Internet, low-value, elastic applications such as P2P file sharing or YouTube videos are likely to crowd out quality-sensitive services, because demand for high-value, quality-sensitive applications will decline if the quality of service cannot be maintained under congestion. Figure 1 shows the crowding out of a quality-sensitive service. Suppose there are two different services, with S1 a highly quality-sensitive application (e.g. e-learning, telemedicine) and S2 an application that is rather elastic with respect to delays and packet loss (e.g. file sharing). The Y-axis shows the individual demand for, or willingness to pay for, the respective service given total bandwidth consumption X. With bandwidth consumption below X1, where no congestion occurs, the demand index for both applications is 100. If the data volume exceeds X1, congestion in the form of delay and jitter may occur. This causes demand for the highly quality-sensitive service S1 to decrease, while demand for S2 is unaffected by the slight reduction in quality of service. If total bandwidth consumption exceeds X2, the high-value, quality-sensitive service S1 is totally crowded out, because the marginal (potential) consumer of this service will no longer accept the low quality.

Figure 1. Crowding out of quality-sensitive services
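
Read as a stylised model, Figure 1 amounts to a pair of demand functions of total bandwidth consumption X. The functional form below – in particular the linear decline between X1 and X2 – is an illustrative assumption of this exposition rather than something specified in the figure; only the two thresholds matter for the argument:

    D_{S_1}(X) =
    \begin{cases}
    100 & \text{if } X \le X_1 \text{ (no congestion)} \\
    100 \cdot \dfrac{X_2 - X}{X_2 - X_1} & \text{if } X_1 < X < X_2 \text{ (partial crowding out)} \\
    0 & \text{if } X \ge X_2 \text{ (full crowding out)}
    \end{cases}
    \qquad
    D_{S_2}(X) \approx 100 \text{ for all } X.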

Certainly, excess capacity is needed to deal with the peak-load problem of data traffic, but it is very expensive and, beyond a certain level, economically inefficient. More importantly, even huge overcapacity cannot solve the congestion problem entirely, because 99% quality is simply not enough for some applications. Peak-load pricing or volume-based pricing models could partially remedy the problem for services with medium quality sensitivity, but they cannot solve it for highly quality-sensitive services. Such pricing models could also be extremely complex and – not least – inconvenient for consumers. A different treatment of different data packets according to their quality sensitivity and economic value could therefore be an efficient solution. Such regimes are often referred to as quality-of-service models or traffic management.
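
To put an illustrative number on the "99% is not enough" point (the figures are assumptions made for the sake of the example, not taken from the column or its sources): a VoIP call that sends one packet every 20 milliseconds transmits 50 packets per second, so a 99% on-time delivery rate still implies

    50 packets/s × 1% = 0.5 late or lost packets per second,

that is, roughly one audible glitch every two seconds of conversation – whereas the same loss rate goes unnoticed in email or file transfers, which simply retransmit.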

Figure 2 shows how such a model could work. Data packets of the highly quality-sensitive service (red) are labelled and given priority at interconnection points between providers and at routers and other network nodes. These data packets are transmitted immediately, whereas data packets with lower quality sensitivity (blue) might wait a couple of milliseconds if the router is very busy at that moment. As a consequence, quality loss for the quality-sensitive service can be avoided; the application is not crowded out and remains in the market. The partial quality loss that might occur for the blue application will not affect the perceived value of this service to the customer, because this service is very elastic with respect to such a small reduction in transmission quality. In most cases, the end user of this application won’t even notice the partial quality loss – which, by the way, occurs in a strictly neutral first-in-first-out regime as well.

Figure 2. Prioritising quality-sensitive data packets
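
To make the mechanism of Figure 2 concrete, the sketch below simulates a single congested router link twice – once with plain first-in-first-out scheduling and once with strict two-class priority. All names and parameters (link rate, packet size, traffic mix, offered load) are assumptions chosen for illustration, not figures from the column, and real traffic-management architectures such as DiffServ are considerably more elaborate; this is a minimal sketch of the scheduling idea only.

```python
# Minimal sketch: one congested output link, two traffic classes.
# "red"  = quality-sensitive packets (e.g. VoIP, telemedicine)
# "blue" = elastic packets (e.g. P2P file sharing, email)
# All parameters are illustrative assumptions.
import heapq
import random

LINK_RATE_BPS = 10_000_000                    # assumed 10 Mbit/s link
PACKET_BITS = 12_000                          # assumed 1,500-byte packets
SERVICE_TIME = PACKET_BITS / LINK_RATE_BPS    # transmission time per packet

def simulate(priority: bool, duration: float = 5.0, load: float = 1.1) -> dict:
    """Return the mean queueing delay (ms) per class for one scheduling mode."""
    random.seed(0)                            # same arrival pattern in both runs
    arrival_rate = load * LINK_RATE_BPS / PACKET_BITS   # packets per second
    t, arrivals = 0.0, []
    while t < duration:
        t += random.expovariate(arrival_rate)
        # assume 20% of the traffic belongs to the quality-sensitive service
        arrivals.append((t, "red" if random.random() < 0.2 else "blue"))

    queue, delays = [], {"red": [], "blue": []}
    clock, i = 0.0, 0
    while i < len(arrivals) or queue:
        # admit every packet that has arrived by the current time
        while i < len(arrivals) and arrivals[i][0] <= clock:
            arrived_at, cls = arrivals[i]
            rank = 0 if (priority and cls == "red") else 1   # 0 is served first
            heapq.heappush(queue, (rank, arrived_at, cls))
            i += 1
        if not queue:
            clock = arrivals[i][0]            # link idle: jump to next arrival
            continue
        _, arrived_at, cls = heapq.heappop(queue)
        delays[cls].append(clock - arrived_at)   # waiting time before service
        clock += SERVICE_TIME                    # transmit the packet

    return {c: round(1000 * sum(d) / len(d), 2) for c, d in delays.items() if d}

print("FIFO    :", simulate(priority=False))
print("Priority:", simulate(priority=True))
```

Run as written, the FIFO case should report roughly the same average queueing delay for both classes, while the priority case shifts almost all of the waiting onto the elastic "blue" class and leaves the "red" packets waiting no more than a few milliseconds; the exact numbers depend entirely on the assumed load and traffic mix.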

In a strictly neutral and “dumb” first-in-first-out regime, quality-sensitive applications and services are likely to be crowded out whenever congestion in the form of delay, jitter, or data loss occurs. In times of massive growth in data traffic due to P2P file sharing and video sites, the probability and scope of congestion are likely to increase. Intelligent business models that take into account the specific needs of different applications, content providers, and user groups by offering differentiated quality of service could be very efficient in overcoming the congestion problem and, at the same time, very convenient for the vast majority of users (for a more comprehensive analysis, see Pehnelt 2008).

Any ex ante regulation in the name of net neutrality, or of certain “standards” regarding quality of service, would add another chapter of ridiculous over-regulation and open a Pandora’s box of endless and expensive disputes and lawsuits. Who is going to define the minimum standards, and on what basis? Who is going to monitor them? How would such standards be adapted to the tremendous technological and economic change in a sector as dynamic as information and communication technology (ICT)? An ex ante net neutrality regulation would be nothing other than a huge job-creation measure for hundreds of bureaucrats and lawyers. The costs of this kind of regulation would be huge, while the economic and social benefits would be at best zero in the short run and, from a dynamic perspective, most definitely negative.

That is why we do not need net neutrality regulation at all, but rather an effective competition policy that guarantees a sufficient level of competition along the whole ICT value chain. The proposed amendment, together with the existing competition and antitrust rules within the EU, could be a good foundation for investment, innovation, and growth in the ICT sector and the economy as a whole. Negotiations over the telecoms package between the Parliament, Commission, and Council should come to an end soon in order to give investors and consumers certainty about future Internet regulation in Europe and to encourage investment in infrastructure and sophisticated applications that are desperately needed, especially in these times of financial and economic crisis.

References

Hahn, R.W. and Litan, R.E. (2007): ‘The Myth of Network Neutrality and What We Should Do About It’, International Journal of Communication 1, pp. 595-606.
Kruse, J. (2007): ‘Crowding-Out bei Überlast im Internet’ [‘Crowding out under overload on the Internet’], Diskussionspapier Nr. 72, University of the Federal Armed Forces Hamburg, November 2007.
Pehnelt, G. (2008): ‘The Economics of Net Neutrality Revisited’, Jena Economic Research Papers No. 2008-080, Friedrich Schiller University Jena and Max Planck Institute of Economics, October 2008.
