VoxEU Column Monetary Policy

Monetary policy in times of uncertainty: a reappraisal of the Brainard principle

Dealing with uncertainty about the state of the economy is one of the main challenges facing monetary policymakers. In recent years there has been an extensive debate on the value of some of the deep parameters driving the economy, such as the natural rate of interest and the slope of the Phillips curve, estimates of which are quite uncertain. This column argues that when facing uncertainty about the structural relationships among macroeconomic variables, central banks should adopt a pragmatic and data-dependent approach to adjusting their monetary policy stance.

At the press conference on 7 March 2019, ECB President Draghi, explaining the decisions of the Governing Council, stated that “[t]he fact that the climate has become more uncertain doesn't mean that one has to stay put. You just do what you think is right and you temper, however, what you are doing with a consideration there is uncertainty. In other words, in a dark room you move with tiny steps”. This is indeed a popular argument among monetary policymakers, frequently referred to as the ‘Brainard conservatism principle’ (Brainard 1967). However, such a principle, which calls for a more cautious policy in the face of parameter uncertainty, has been challenged in the last two decades by a growing literature. The main argument brought against the Brainard principle hinges on the consideration that “it will pay to make sure current inflation is very stable by reacting more aggressively to shocks” (Walsh 2004).
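The logic of Brainard's original argument can be made concrete with a textbook static example (this is an illustrative sketch of the classic result, not the model used in the paper discussed below). Suppose the policy instrument $i$ moves the target variable through an uncertain multiplier $b$, so the outcome is $y = b\,i + \varepsilon$. Minimising the expected squared deviation from a target $y^*$ gives $i = \bar{b}\,y^*/(\bar{b}^2 + \sigma_b^2)$: the larger the variance of the multiplier, the more attenuated the response.

```python
def optimal_policy(b_mean: float, b_var: float, y_target: float) -> float:
    """Instrument setting that minimises E[(b*i - y_target)^2] when the
    policy multiplier b has mean b_mean and variance b_var.

    This is the classic Brainard (1967) attenuation result: multiplicative
    parameter uncertainty (b_var > 0) shrinks the optimal response relative
    to the certainty-equivalent policy.
    """
    return b_mean * y_target / (b_mean**2 + b_var)

# Certainty-equivalent response: with b known (variance 0), fully offset.
i_certain = optimal_policy(b_mean=1.0, b_var=0.0, y_target=2.0)

# With multiplicative uncertainty, the optimal response is smaller:
# aggressive moves risk overshooting when b turns out to be large.
i_uncertain = optimal_policy(b_mean=1.0, b_var=1.0, y_target=2.0)
```

Here `i_uncertain` is half of `i_certain`: doubling the effective denominator halves the response. The literature cited above shows that this attenuation logic need not survive in dynamic, forward-looking settings.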

Whether monetary policy should be more aggressive or more cautious when the structural relationships between macroeconomic variables are uncertain is indeed an important question for central bankers. Macroeconomic models, which are used to shape and inform policy action, not only necessarily provide a simplified and incomplete representation of reality, but are also surrounded by great uncertainty about the value of the deep parameters defining fundamental economic relationships. This is the case for the slope of the Phillips curve (Bulligan and Viviano 2017), for example, or the level of the natural rate of interest (Fiorentini et al. 2018, Neri and Gerali 2018).

We revisit the role of uncertainty in optimal monetary policymaking in a simple (two-equation) forward-looking New Keynesian framework, assuming that some parameters are time-varying and imperfectly observed by the central banker (Ferrero et al. 2019). Compared to the existing literature (Söderström 2002, Kimura and Kurozumi 2007), we are able to provide a complete analytical characterisation of the solution of the model and of the monetary policy reaction function. Our method is sufficiently general to account for uncertainty on any subset of parameters of the model. As an application, we focus on two parameters, namely the persistence of total factor productivity (TFP) shocks, which governs the dynamics of the natural rate of interest, and the slope of the Phillips curve.
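For readers unfamiliar with the setup, the two-equation forward-looking New Keynesian framework takes, in its standard textbook form, the following shape (the notation here is illustrative; the exact specification in Ferrero et al. 2019 may differ in detail):

```latex
\begin{aligned}
x_t   &= \mathbb{E}_t x_{t+1} - \tfrac{1}{\sigma}\left(i_t - \mathbb{E}_t \pi_{t+1} - r_t^{n}\right)
       && \text{(IS curve)} \\
\pi_t &= \beta\,\mathbb{E}_t \pi_{t+1} + \kappa_t\, x_t + u_t
       && \text{(Phillips curve)}
\end{aligned}
```

where $x_t$ is the output gap, $\pi_t$ inflation, $i_t$ the policy rate, $r_t^{n}$ the natural rate of interest (whose dynamics are governed by the persistence of TFP shocks), $\kappa_t$ the slope of the Phillips curve, and $u_t$ a cost-push shock. The uncertainty studied in the paper concerns the persistence of the TFP process behind $r_t^{n}$ and the value of $\kappa_t$.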

We reach two main results. First, when uncertainty involves the natural rate of interest, the reaction of monetary policy to exogenous shocks should be as in the full-information case. The rationale for this result is that the uncertainty on the persistence of TFP shocks – the main driver of the natural rate in such models – does not interact with the uncertainty on any of the other variables that play a role in the monetary policymaking process. Therefore, the central bank does not face a trade-off between the stabilisation of output and that of inflation, and accordingly does not change the optimal monetary policy with respect to the perfect information case.

When uncertainty concerns the slope of the Phillips curve instead, it is optimal for the central bank to alter its policy compared to the case of full information. The reason is that when the slope of the Phillips curve (i.e. the response of inflation to a change in the output gap) is uncertain, the central bank has to form joint expectations about both the output gap and its coefficient in order to infer the impact of its policies on inflation. In such a case, optimal monetary policy can be either more cautious or more aggressive than in the full information case, depending on the degree of persistence of the shocks affecting the Phillips curve.

This result is shown in Figure 1, where the reaction of monetary policy (on impact) is plotted for different levels of persistence of the shocks (on the x-axis) and for various levels of uncertainty on the slope of the Phillips curve. For low levels of persistence, the optimal policy should be more cautious than in the perfect information case. For high levels of persistence the opposite becomes true – the optimal monetary policy should be more aggressive than in the full information case, and the degree of aggressiveness should increase as uncertainty becomes larger. 

Figure 1 Optimal on-impact monetary policy reaction to a cost-push shock

Source:  Ferrero et al. (2019).

The intuition for this fairly general result is the following. The larger the uncertainty, the higher the probability that a more aggressive monetary policy response to shocks may move inflation and output away from target (due to the interaction between the slope parameter and the output gap) and so the larger the welfare cost. However, if the shock is not very persistent, the risk for the central bank of going off-target is temporary and offset by the benefits of being patient and waiting for the fog of uncertainty to dissipate. This is indeed the argument behind the Brainard principle. On the other hand, when shocks are more persistent, the Brainard principle is reversed – in such case a gradual reaction would imply bearing its costs for a long period in the future, and so it pays to be more aggressive, since in so doing the central bank keeps longer-term expectations well anchored. 
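A back-of-the-envelope calculation illustrates why persistence matters so much for the cost of inaction (this is a stylised illustration of the intuition above, not the paper's welfare computation). If a shock of size $u$ decays at rate $\rho$ and the central bank discounts the future at rate $\beta$, the discounted sum of squared deviations from leaving it entirely unoffset is $\sum_t \beta^t (\rho^t u)^2 = u^2/(1-\beta\rho^2)$, which rises sharply as $\rho$ approaches one.

```python
def discounted_cost(u: float, rho: float, beta: float = 0.99) -> float:
    """Discounted sum of squared deviations sum_t beta^t * (rho^t * u)^2
    from leaving a shock of size u unoffset, when it decays at rate rho.

    Closed form of the geometric series: u^2 / (1 - beta * rho^2).
    Requires 0 <= rho < 1 for convergence.
    """
    return u**2 / (1.0 - beta * rho**2)

# Transitory shock: the cost of waiting for uncertainty to dissipate is small.
cost_transitory = discounted_cost(u=1.0, rho=0.2)

# Persistent shock: the cost of inaction is an order of magnitude larger,
# which is why gradualism can become too expensive.
cost_persistent = discounted_cost(u=1.0, rho=0.95)
```

With these numbers the persistent shock is roughly nine times as costly to ignore as the transitory one, consistent with the argument that high persistence tilts the balance towards a more aggressive response.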

Our analysis therefore provides theoretical support to the view that a pragmatic, data-driven approach to monetary policy (Praet 2018) should be pursued in a changing economic environment where the central bank is uncertain about the true shape of macroeconomic relationships and the transmission mechanism.

Authors’ note: The views expressed in this column are those of the authors and not necessarily those of the Bank of Italy or the Eurosystem.

References

Brainard, W C (1967), “Uncertainty and the Effectiveness of Policy”, American Economic Review 57(2): 411-425.

Bulligan, G, and E Viviano (2017), “Has the wage Phillips curve changed in the euro area?”, IZA Journal of Labor Policy 6(1): 9.

Ferrero, G, M Pietrunti, and A Tiseno (2019), “Benefits of gradualism or costs of inaction? Monetary policy in times of uncertainty”, Bank of Italy, Economic Research and International Relations Area, no. 1205.

Fiorentini, G, A Galesi, G Pérez-Quirós, and E Sentana (2018), “The Rise and Fall of the Natural Interest Rate”, CEMFI Working Papers.

Kimura, T, and T Kurozumi (2007), “Optimal monetary policy in a micro-founded model with parameter uncertainty”, Journal of Economic Dynamics and Control 31(2): 399-431.

Neri, S, and A Gerali (2018), “Natural rates across the Atlantic”, Journal of Macroeconomics.

Praet, P (2018), “Assessment of quantitative easing and challenges of policy normalisation”, Speech at the “The ECB and Its Watchers XIX Conference”, Frankfurt am Main, 14 March.

Söderström, U (2002), “Monetary policy with uncertain parameters”, Scandinavian Journal of Economics 104(1): 125-145.

Walsh, C E (2004), “Implications of a Changing Economic Structure for the Strategy of Monetary Policy”, Center for International Economics, UC Santa Cruz, working paper no. 03-18.
