
Analysing market responses

A key task for economists is predicting how markets will respond to complex changes in their environment. This column discusses recent empirical developments that allow for a deeper understanding of such market dynamics. Game theory has informed models of pricing conditional on the products marketed and their production costs. Likewise, dynamic models of productive efficiency allow for analyses of the role of market structure in inducing competitive efficiencies.

Recently, empirical tools have been developed that enable a deeper understanding of market responses to complex environmental changes, like mergers in which input prices are a result of a bargaining process (e.g. the proposed merger between Aetna and Humana in the US health insurance markets), or the market restructuring that results from deregulation or tariff reform. The empirical tools followed developments in game theory and its application to the study of markets. The game theory literature illustrated how different phenomena could occur. The empirical work uses theoretical concepts, data, and institutional knowledge to move from a description of what could occur to what is likely to occur in different environments.

This column outlines what those tools are, and when and why they are likely to do better than the tools that were available before. I begin with the analysis of prices conditional on the products marketed and their costs of production, and then move to production efficiency and product development.

Prices and product repositioning

Pricing analysis requires demand and cost systems, and an ‘equilibrium assumption’. Prior analysis of demand largely relied on estimating the relationship between the demand for each good marketed and the prices of all competing goods. Many markets have more than 50 competing goods, so even the simplest model would require on the order of 50 price coefficients for each of the 50 goods; far too many to estimate precisely from our typical datasets. The new research assumes that the product a household chooses is a function of the relationship between the product’s characteristics and those of the household (e.g. household income and product price, household size and car size). Each household chooses the product that best suits it, and aggregate demand is obtained by summing over the choices of the different consumers. The parameters estimated are those that determine the importance of the interactions between consumer and product characteristics, and their number no longer depends on the number of competing products. This ‘characteristic space’ analysis also enables us to analyse the demand for new products before they are introduced, and to seamlessly integrate McFadden’s (1974) seminal analysis of individual choices with more readily available aggregate data on market shares and product characteristics (Berry et al. 1995, 2004).

A Nash equilibrium assumption is used to predict prices for a given demand. In retail markets the Nash condition states that each firm’s price maximises its profits given the prices of competing products. Few believe that a market’s response to a change is to immediately ‘jump’ to a new equilibrium. However, most believe that firms will keep changing their prices until the Nash condition is satisfied. So the Nash condition is viewed as a rest point, and without a generally acceptable notion of how agents learn and adapt to changes, it is natural to analyse changes by computing this rest point.
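
To make these objects concrete, here is a minimal sketch in Python of the two building blocks just described. It assumes plain multinomial logit demand and single-product firms (the papers cited estimate much richer random-coefficients demand); all parameter values are invented for illustration.

```python
import numpy as np

# Illustrative primitives (all numbers are assumptions, not estimates).
alpha = 0.8                         # price sensitivity
xbeta = np.array([2.0, 1.5, 1.0])   # non-price utility of three products
mc = np.array([1.0, 0.8, 0.6])      # marginal costs

def shares(p):
    """Logit market shares; the outside good has utility zero."""
    u = np.exp(xbeta - alpha * p)
    return u / (1.0 + u.sum())

def nash_prices(p0, tol=1e-10, max_iter=10_000):
    """Iterate the single-product Nash first-order condition
    p_j = mc_j + 1/(alpha*(1 - s_j)) until prices stop moving."""
    p = p0.copy()
    for _ in range(max_iter):
        s = shares(p)
        p_new = mc + 1.0 / (alpha * (1.0 - s))
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

p_star = nash_prices(mc + 1.0)
print("equilibrium prices:", p_star.round(3))
print("markups:", (p_star - mc).round(3))
```

The loop simply re-applies the first-order condition until it is satisfied, which is exactly the ‘rest point’ logic described above.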

The resulting price for a single-product firm equals marginal cost plus a markup which depends inversely on the sensitivity of demand to price. In a recent paper, I asked Tom Wollmann to predict the prices for commercial trucks using only the markup predicted by his demand estimates and some cost covariates (Pakes 2016). Predicted prices accounted for between 85% and 95% of the variance in truck prices, and the estimated markup coefficient was within one standard deviation of one – the coefficient predicted by the Nash condition. This is an exceptional fit for a behavioural model in the social sciences.
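
For a single-product firm facing logit demand with price coefficient α (a simplification of the demand system actually estimated), the Nash first-order condition and the implied test regression are:

```latex
% Nash pricing condition: s_j is product j's market share.
p_j \;=\; mc_j \;+\; \frac{1}{\alpha\,(1 - s_j)}
% The reported fit regresses price on the estimated markup \widehat{m}_j
% and cost covariates x_j; Nash pricing predicts \lambda = 1:
p_j \;=\; \lambda\,\widehat{m}_j \;+\; x_j'\gamma \;+\; \varepsilon_j
```

An estimated λ within one standard deviation of one is what makes the fit noted above so striking.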

To see how useful this type of analysis is, consider analysing a retail merger. The Nash condition implies that a single-product firm will keep increasing its price until the extra profits earned from those who continue to buy the product just offset the loss in profits generated by the consumers who leave the product due to the price increase. Now assume our single-product firm merges with another single-product firm. When the firm increases the price of the original good, some of the consumers who leave switch to the merger partner’s product, and the merged firm internalises the profits from that second product. So the new Nash condition predicts an increase in price. The extent of the increase depends on the fraction of those who leave the first product that switch to the partner’s product; a fraction calculated from the demand system.
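
Continuing the illustrative logit sketch above, a merger amounts to relabelling the ownership vector in the multi-product Nash first-order conditions, and the fraction of switchers (the diversion ratio) falls straight out of the demand system. Everything below is invented for illustration.

```python
import numpy as np

alpha = 0.8
xbeta = np.array([2.0, 1.5, 1.0])
mc = np.array([1.0, 0.8, 0.6])

def shares(p):
    u = np.exp(xbeta - alpha * p)
    return u / (1.0 + u.sum())

def nash_prices(owner, p0, tol=1e-10, max_iter=50_000):
    """Multi-product Nash prices; owner[j] labels the firm owning product j."""
    p = p0.copy()
    for _ in range(max_iter):
        s = shares(p)
        # logit share derivatives: ds[j, k] = ds_j/dp_k
        ds = alpha * np.outer(s, s) - alpha * np.diag(s)
        own = owner[:, None] == owner[None, :]   # joint-ownership mask
        # stacked first-order conditions: s + (own * ds') (p - mc) = 0
        margins = np.linalg.solve(own * ds.T, -s)
        p_new = mc + margins
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

pre = nash_prices(np.array([0, 1, 2]), mc + 1.0)   # three independent firms
post = nash_prices(np.array([0, 0, 2]), pre)       # firms 0 and 1 merge
s = shares(pre)
diversion_01 = s[1] / (1.0 - s[0])   # share of product 0's lost buyers moving to 1
print("pre-merger prices: ", pre.round(3))
print("post-merger prices:", post.round(3))
print("diversion ratio 0 -> 1:", round(diversion_01, 3))
```

The merged firm’s prices rise, and the rise is larger the larger the diversion ratio between its products.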

Above I focused on retail markets and did not allow for product repositioning in response to the environmental change. Many markets for ‘inputs’ are markets with a small number of buyers and sellers. In a retail market a consumer either purchases or does not purchase at a given price. In a market with a small number of buyers and sellers (say, hospitals selling services to insurers) a buyer (the insurer) can decline to buy at the given price but offer to buy at a lower price. Once the input price is set, the buyer remarkets its output to consumers (the health insurer markets a package of providers). There is less general agreement on the equilibrium notion in these markets, though there is agreement that each agent’s ‘outside option’ (the profits it would earn were it not to enter the agreement) is a determinant of both which agents contract with whom, and the split of the total profits between the two firms (total profits are the revenue of the buyer minus the costs of the seller). Horn and Wolinsky (1988) propose an equilibrium condition based on Nash bargaining in the ‘input’ market which can be combined with our Nash condition in the consumer market to produce a full analysis of responses to changes in these cases. In its initial empirical application, Crawford and Yurukoglu (2012) found that if the cable networks were forced to offer programs on an a la carte basis rather than in tiers, the new equilibrium would require price increases by the content providers that the networks would pass on to consumers, nullifying any consumer gains resulting from increased choice. Recent work on hospital (Gowrisankaran et al. 2015) and health insurance (Ho and Lee 2016) mergers uses this framework to analyse issues that have arisen during the current reorganisation of the US health care industry.
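
The bargaining logic can be made concrete with a deliberately stripped-down sketch: one buyer and one seller split the surplus from an agreement via Nash bargaining over a lump-sum transfer. The function name and all numbers are invented; the papers cited embed this split in a full network of simultaneous negotiations.

```python
def nash_bargain_transfer(R, C, d_buyer, d_seller, b):
    """Transfer t maximising the Nash product
    (R - t - d_buyer)**b * (t - C - d_seller)**(1 - b).
    The first-order condition yields the closed form below;
    the buyer keeps share b of the gains from trade."""
    return (1 - b) * (R - d_buyer) + b * (C + d_seller)

# Illustrative numbers: the insurer earns revenue R from marketing the
# hospital's services, the hospital bears cost C; the outside options
# (d_buyer, d_seller) shrink the pie that gets split.
t = nash_bargain_transfer(R=10.0, C=4.0, d_buyer=1.0, d_seller=2.0, b=0.5)
print("negotiated payment:", t)   # 7.5: each side gets half the surplus of 3
```

Notice how an improvement in either side’s outside option moves the negotiated payment in its favour; that is the mechanism at work in the merger analyses cited above.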

In some industries, incumbents can ‘reposition’ their products about as fast as they can change prices. Nosko (2015) explains that it is easy for firms to lower the performance of computer chips below that of the best-performing chips of their generation, Eizenberg (2014) notes that computers with older chips can be withdrawn from the market at will, and Wollmann (2015) points out that the modularity of truck production makes it easy to connect different cab types to different trailers and market the resulting combinations. The implication is that in these industries even very short-run responses to environmental changes require taking account of product repositioning. To do so requires an additional primitive: the fixed cost of adding or dropping a product. It is estimated by a ‘revealed preference’ argument: the incremental profits from marketing a product are expected to be greater than the fixed costs when the product is marketed, and less when a marketable product is not marketed. These assumptions, and estimates of counterfactual profits using the techniques described above, provide upper and lower bounds on the needed fixed costs (Pakes 2010, Pakes et al. 2015). Using these estimates, Nosko (2015) shows that the returns to the research that went into Intel’s Core 2 Duo chip came primarily from its ability to empty out the space of middle-priced chips and charge large markups for the high-end Duo (and a similar process would likely occur were Intel to merge with AMD). Wollmann (2015) shows that the impact of allowing GM and Chrysler’s truck divisions to exit (rather than bailing them out) is not large if one allows the remaining competitors to reconfigure their truck offerings. Recall that the demand analysis generates different implications of a given environmental change for consumers with different characteristics. Eizenberg (2014) uses this fact to analyse the distributional impacts of withdrawing older chips from the PC market when faster chips become available. Poorer households are hurt by this process, even though the fixed costs of marketing the older products are greater than the benefits derived from them.
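
The revealed-preference bounds are easy to state in code. The sketch below assumes a single fixed cost common to all products and treats the incremental profits as known; the papers cited allow richer cost structures and use moment inequalities to deal with estimation error.

```python
import numpy as np

def fixed_cost_bounds(incr_profit, marketed):
    """Marketed products imply  F <= incremental profit  (upper bounds);
    feasible but unmarketed products imply  F >= incremental profit."""
    incr_profit = np.asarray(incr_profit, dtype=float)
    marketed = np.asarray(marketed, dtype=bool)
    upper = incr_profit[marketed].min()
    lower = incr_profit[~marketed].max()
    return lower, upper

# Invented incremental profits for two marketed and two unmarketed products:
lo, hi = fixed_cost_bounds([5.0, 3.2, 1.4, 0.9], [True, True, False, False])
print(f"fixed cost bounded in [{lo}, {hi}]")   # [1.4, 3.2]
```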

Perhaps the biggest weakness of the above framework is that no consideration is given to the impact of the environmental changes on longer-run investment and product development decisions. For example, a hospital merger that increases the share of the total profits that accrues to hospitals might also generate socially desirable investment incentives, as it would enable the hospitals to:

  1. Internalise more of the cost savings from cost reducing investment; and
  2. Discount the business-stealing effects that might lead to a ‘technological arms race’ without the merger.

I turn now to the analysis of productivity and dynamics.

Productivity changes and dynamic analysis

Productivity research has seen major advances recently. This is partly due to public agencies providing secure access to data on production units (firms or plants). As a result we can now separate the sources of productivity change into cost decreases within the plant and gains from a more efficient allocation of output among plants. This enables an analysis of the role of market structure in inducing competitive efficiencies.

Productivity is defined as a measure of output divided by an index of inputs. The input index is typically associated with a production function which needs to be estimated. Since firms with higher productivity will typically both employ more inputs and be less likely to exit, the estimation of the production function coefficients must account for the relationships between productivity and input demands, and between productivity and the likelihood of exit. Olley and Pakes (1996) show how to use the equilibrium assumptions from a dynamic model to accomplish this, and then provide a decomposition of industry productivity (a code sketch follows the list below) into:

  1. An unweighted average of the productivities of the firms in the industry; and
  2. A measure of allocative efficiency (the covariance between productivity and the share of output).
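
The decomposition itself is a one-line computation once productivities and output shares are in hand. A minimal sketch (with invented data; the identity requires the shares to sum to one):

```python
import numpy as np

def op_decomposition(productivity, output_share):
    """Olley-Pakes decomposition: share-weighted industry productivity
    equals the unweighted mean plus a covariance-style allocative term,
    sum_i s_i p_i = p_bar + sum_i (s_i - s_bar)(p_i - p_bar)."""
    p = np.asarray(productivity, dtype=float)
    s = np.asarray(output_share, dtype=float)
    p_bar = p.mean()
    allocative = ((s - s.mean()) * (p - p_bar)).sum()
    return p_bar, allocative, p_bar + allocative

p_bar, alloc, weighted = op_decomposition([1.0, 1.2, 0.8], [0.5, 0.4, 0.1])
print("unweighted mean:", round(p_bar, 3))            # 1.0
print("allocative term:", round(alloc, 3))            # 0.06
print("industry productivity:", round(weighted, 3))   # 1.06
```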

Olley and Pakes (1996) study the evolution of productivity in the telecommunications equipment industry over the period of the breakup of AT&T and the divestment of its wholly-owned subsidiary, Western Electric. They find large increases in productivity associated with the reallocation of output to more productive firms, caused by the exit of large, less-productive firms and the entry and growth of more productive establishments. Similar results have been found in a series of studies of the impacts of lowering tariffs in developing countries; a literature that starts with Pavcnik’s (2002) paper on Chile.

The ‘output’ measure available in most of these studies is revenue from sales. Though it is of interest to know how environmental changes affect sales per unit of input, it would be of more interest to understand how those changes affect markups and costs separately. To do this we would need separate quantity and price measures for the firms’ outputs. De Loecker and Warzynski (2012) use the input equilibrium conditions for a perfectly variable factor of production to add a step to the estimation procedure that allows them to separate changes in revenue into changes in markups and changes in costs. Using this, De Loecker et al. (2016) are able to allow for firm-specific markups in their study of how costs and markups varied after trade liberalisation in India. The tariff declines lowered both input and output prices, but the input price declines resulted in disproportionate increases in markups rather than reductions in consumer prices. That is, there was limited pass-through of the fall in producer costs to consumer prices during the period they analyse. An analysis of why this occurred would require more detail on the equilibrium generating the pass-through that they find. However, their study is clearly an important step in understanding how a major change in the economic environment affected a large share of the world’s population.
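
The logic of the De Loecker-Warzynski step can be stated compactly: cost minimisation with respect to a perfectly variable input V implies that the markup equals that input’s output elasticity divided by its expenditure share in revenue.

```latex
% mu = price over marginal cost for firm i at time t;
% theta^V = output elasticity of the variable input V;
% alpha^V = expenditure on V as a share of revenue.
\mu_{it} \;=\; \frac{P_{it}}{MC_{it}}
       \;=\; \frac{\theta^{V}_{it}}{\alpha^{V}_{it}},
\qquad
\theta^{V}_{it} \;=\; \frac{\partial \ln Q_{it}}{\partial \ln V_{it}},
\qquad
\alpha^{V}_{it} \;=\; \frac{P^{V}_{it}\,V_{it}}{P_{it}\,Q_{it}}
```

The added estimation step is what pins down the output elasticity; the revenue share is observed, so the markup follows.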

A deeper understanding of the costs and benefits of product development and productivity improvements requires a dynamic analysis of the incentives and impacts of longer-term investments. The dynamic model underlying the estimation algorithm for productivity is based on a Nash condition for investments: each firm chooses investments to maximise its expected discounted value of future net cash flow conditional on the investments of other firms (standard references are Maskin and Tirole 1988 for the equilibrium notion, and Ericson and Pakes 1995 for the framework used in applied work). Though this framework has been helpful in guiding estimation algorithms and in developing computer simulations that explore the logic of various arguments, it often becomes unwieldy when confronted with incorporating the institutional background that seems relevant for actual empirical work. Notable exceptions are Benkard’s (2004) analysis of competition for wide-bodied aircraft (an industry with a small number of competitors), and Kalouptsidi’s (2014) analysis of fluctuations in demand in bulk shipping, where an active second-hand market generated good approximations to the expected discounted values that determine investment policies.
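
In outline, each firm’s Markov-perfect investment policy solves a Bellman equation of roughly the following form (the notation here is a stylised rendering of the framework, not the papers’ exact formulation):

```latex
% s = industry state; x_i = firm i's investment; c(.) = investment cost;
% beta = discount factor; x_{-i}(s) = rivals' equilibrium policies.
V_i(s) \;=\; \max_{x_i \ge 0}\Big\{ \pi_i(s) \;-\; c(x_i)
       \;+\; \beta\,\mathbb{E}\big[\, V_i(s') \,\big|\, s,\; x_i,\; x_{-i}(s) \,\big] \Big\}
```

The cognitive and computational burden discussed next comes from the fact that the state s, and hence the policies, must track every rival’s characteristics.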

The complexity of the policies needed for these equilibria in more general settings both imposes strong cognitive demands on the agents making investment decisions and creates computational bottlenecks for researchers trying to analyse them. As a result there is a developing literature which emphasises equilibrium notions that are less demanding of agents (Fershtman and Pakes 2012), and/or employs approximations to more demanding notions (Weintraub et al. 2008). The jury is still out on how much further these developments allow us to go in analysing the evolution of actual industries.

References

Benkard, L (2004) “A dynamic analysis of the market for wide-bodied commercial aircraft”, Review of Economic Studies, 71(3): 581-611.

Berry, S, J Levinsohn and A Pakes (1995) “Automobile prices in market equilibrium”, Econometrica, 63(4): 841-890.

Berry, S, J Levinsohn and A Pakes (2004) “Estimating differentiated product demand systems from a combination of micro and macro data: The market for new vehicles”, Journal of Political Economy, 112(1): 68-105.

Crawford, G and A Yurukoglu (2012) “The welfare effects of bundling in multichannel television markets”, American Economic Review, 102(2): 643-685.

De Loecker, J and F Warzynski (2012) “Markups and firm-level export status”, American Economic Review, 102(6): 2437-2471.

De Loecker, J, P Goldberg, A Khandelwal and N Pavcnik (2016) “Prices, markups and trade reform”, Econometrica, 84(2): 445-510.

Eizenberg, A (2014) “Upstream innovation and product variety in the United States home PC market”, Review of Economic Studies, 81(3): 1003-1045.

Ericson, R and A Pakes (1995) “Markov perfect industry dynamics: A framework for empirical work”, Review of Economic Studies, 62(1): 53-82.

Fershtman, C and A Pakes (2012) “Dynamic games with asymmetric information: A framework for applied work”, The Quarterly Journal of Economics, 127(4): 1611-1662.

Gowrisankaran, G, A Nevo and R Town (2015) “Mergers when prices are negotiated: Evidence from the hospital industry”, American Economic Review, 105(1): 172-203.

Ho, K and R Lee (2016) “Insurer competition in health care markets”, Econometrica, forthcoming.

Horn, H and A Wolinsky (1988) “Bilateral monopolies and incentives for merger”, RAND Journal of Economics, 19(3): 408-419.

Kalouptsidi, M (2014) “Time to build and fluctuations in bulk shipping”, American Economic Review, 104(2): 564-608.

Maskin, E and J Tirole (1988) “A theory of dynamic oligopoly, II: Price competition, kinked demand curves, and Edgeworth cycles”, Econometrica, 56(3): 571-599.

McFadden, D (1974) “Conditional logit analysis of qualitative choice behavior”, in P Zarembka (ed), Frontiers in econometrics, Academic Press, New York.

Nosko, C (2015) “Competition and quality choice in the CPU market”, Chicago Booth working paper.

Olley, S and A Pakes (1996) “The dynamics of productivity in the telecommunications equipment industry”, Econometrica, 64(6): 1263-1298.

Pakes, A (2010) “Alternative models for moment inequalities”, Econometrica, 78(6): 1783-1822.

Pakes, A, J Porter, K Ho and J Ishii (2015) “Moment inequalities and their application”, Econometrica, 83(1): 315-334.

Pakes, A (2016) “Empirical tools and competition analysis: Past progress and current problems”, International Journal of Industrial Organization, forthcoming.

Pavcnik, N (2002) “Trade liberalization, exit, and productivity improvements: Evidence from Chilean plants”, Review of Economic Studies, 69(1): 245-276.

Weintraub, G, C L Benkard and B Van Roy (2008) “Markov perfect industry dynamics with many firms”, Econometrica, 76(6): 1375-1411.

Wollmann, T (2015) “Trucks without bailouts: Equilibrium product characteristics for commercial vehicles”, Chicago Booth working paper.
