Policy analysis in a post-truth world

Charles Manski

24 December 2016



Estimating the impact of almost any economic policy change is fraught with uncertainty. As Alan Auerbach put it: “In many instances, the uncertainty is so great that one honestly could report a number either twice or half the size of the estimate actually reported” (Auerbach 1996). I couldn’t agree more. Over the past five years, I have repeatedly criticised the prevalent practice of economic policy analysis with incredible certitude (see Manski 2011a, 2011b, 2013, 2014, 2015).  I have specifically commented on official governmental practices in the US.  Exact predictions of policy outcomes and estimates of the state of the economy are routine.  Expressions of uncertainty are rare.  Yet predictions and estimates are often fragile, resting on unsupported assumptions and limited data.  So the certitude of exact predictions and estimates is not credible.

The US now approaches the beginning of an administration headed by a new president who appears either unable or unwilling to distinguish fact from fiction.  I worry that the incredible certitude of past governmental policy analysis will soon seem a minor concern relative to what lies ahead.  Whereas analysis with incredible certitude makes predictions and estimates that are possibly true, analysis in a post-truth world makes predictions and estimates that are clearly false.

To explain my concerns, I first reiterate some of my criticism of existing practices.  Considering prediction of policy outcomes, I have called attention to the influential Congressional Budget Office (CBO) predictions, called ‘scores’, of the federal debt implications of pending legislation.  CBO staff are dedicated civil servants who are aware that the budgetary impacts of complex changes to federal law are difficult to foresee.  Yet Congress has required the CBO to make point predictions ten years into the future, unaccompanied by measures of uncertainty.

Considering estimation of the state of the economy, I have documented incredible certitude in the official economic statistics published by leading federal statistical agencies including the Bureau of Economic Analysis, the Bureau of Labor Statistics, and the Census Bureau.  These agencies report point estimates of GDP growth, unemployment, and household income without accompanying measures of error.  Agency staff know that official statistics suffer from sampling and non-sampling errors.  Yet the practice has been to report official statistics with only occasional measurement of sampling errors and no quantification of non-sampling errors.
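To make concrete what reporting a measure of error would involve, here is a minimal sketch using invented survey numbers (not actual BLS or Census data). It computes the sampling standard error of an estimated unemployment rate under the simplifying assumption of simple random sampling, and turns the point estimate into an interval; actual agency surveys have complex designs that require design-based variance estimation, so this is purely illustrative:

```python
import math

# Hypothetical survey counts (illustrative only, not actual CPS/BLS data):
# a simple random sample of labour-force participants.
sample_size = 60_000
unemployed = 2_820
rate = unemployed / sample_size          # point estimate of the unemployment rate

# Sampling standard error of a proportion under simple random sampling.
se = math.sqrt(rate * (1 - rate) / sample_size)

# A 90% confidence interval (z = 1.645) around the point estimate --
# the kind of accompanying error measure the text argues agencies omit.
z = 1.645
lo, hi = rate - z * se, rate + z * se

print(f"unemployment rate: {rate:.1%}  (90% CI: {lo:.1%} to {hi:.1%})")
```

Even this crude interval conveys more than a bare point estimate does; note that it captures only sampling error, whereas the non-sampling errors mentioned above remain unquantified.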

Seeking to explain why incredible certitude has been the norm, I have found it natural as an economist to conjecture that analysts respond to incentives.  Many policymakers and members of the public resist facing up to uncertainty, so analysts have motivation to report certitude.  A story circulates about an economist’s attempt to describe his uncertainty about a forecast to US President Lyndon B. Johnson.  The economist is said to have presented the forecast as a likely range of values for the quantity under discussion.  Johnson is said to have replied: "Ranges are for cattle. Give me a number."

Although President Johnson did not want to hear the economist forecast a range of values, I think it safe to assume that he wanted to hear a number within the range that the economist thought plausible, perhaps the center of the range.  I expect that the economist interpreted Johnson's request this way and complied.  Likewise, I feel comfortable conjecturing that the government economists who produce CBO scores and official economic statistics have generally intended them to express the central tendency of their beliefs about the quantities in question.  Some corroborative empirical evidence is available in the Survey of Professional Forecasters, which elicits both point and probabilistic predictions of GDP growth and inflation from members of its panel of forecasters.  Analysis of these data shows that the point predictions typically are close to means or medians of the probabilistic ones (Engelberg et al. 2009).
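The Engelberg et al. comparison can be sketched in a few lines. The numbers below are hypothetical, not actual Survey of Professional Forecasters data: a forecaster's probabilistic prediction is a set of probabilities over bins of GDP growth, and the check asks whether the same forecaster's reported point prediction sits near the mean of that subjective distribution:

```python
# A hypothetical SPF-style probabilistic forecast for GDP growth:
# probabilities attached to bins, with bin midpoints as representative values.
bin_midpoints = [0.5, 1.5, 2.5, 3.5]      # percent growth, bin centres
probabilities = [0.10, 0.30, 0.40, 0.20]  # subjective probabilities, sum to 1

# Mean of the subjective probability distribution.
dist_mean = sum(p * m for p, m in zip(probabilities, bin_midpoints))

point_forecast = 2.2                       # the forecaster's reported number

# The check: is the point prediction close to the central tendency
# of the same forecaster's probabilistic prediction?
gap = abs(point_forecast - dist_mean)
print(f"distribution mean: {dist_mean:.2f}, "
      f"point forecast: {point_forecast:.2f}, gap: {gap:.2f}")
```

In this toy example the gap is zero by construction; the empirical finding cited above is that across the SPF panel such gaps are typically small.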

Looking ahead, I am deeply concerned about the future practice of policy analysis in the Trump administration.  So much has already been written about the tenuous relationship between the president-elect and reality that I shall not attempt to document the phenomenon afresh.  Instead, I will cite the clear and frightening writing of Ruth Marcus, who recently opened her periodic column in the Washington Post as follows (Marcus 2016):

“Welcome to – brace yourself for – the post truth presidency. ‘Facts are stubborn things’, said John Adams in 1770, defending British soldiers accused in the Boston Massacre, ‘and whatever may be our wishes, our inclinations, or the dictates of our passion, they cannot alter the state of facts and evidence.’  Or so we thought, until we elected to the presidency a man consistently heedless of truth and impervious to fact checking.”

Marcus went on to comment that Trump had an incentive not to respect truth. She wrote:

“The practice of post truth – untrue assertion piled on untrue assertion – helped get Donald Trump to the White House. The more untruths he told, the more supporters rewarded him for, as they saw it, telling it like it is.”

I have two worries about how the new administration will regard policy analysis.  One is that it will severely cut back funding for the regular data collection that makes possible the publication of official economic statistics.  The other is that the analysts who staff federal agencies, who have had a strong reputation for political neutrality and integrity, will be pressured to cook findings to suit whatever the president believes. Coherent policy discussion, which has already become difficult in an increasingly partisan governing environment, may become impossible when the White House considers even basic facts to be malleable.

A constructive way to mitigate the potential damage may be to establish research centres and statistical agencies outside the executive branch of the federal government that can provide honest and well-informed predictions of policy outcomes and estimates of the state of the economy.  Perhaps the Federal Reserve Board and Congress can provide part of what is necessary, but I expect that part will have to come from non-governmental entities.  The US presently does not have the requisite institutions.  A suitable exemplar may be the Institute for Fiscal Studies in the UK.

However we strive to provide honest and well-informed policy analysis, I continue to believe that our society would be better off if we were to face up to uncertainty.  Many of our contentious policy debates stem in part from our failure to admit what we do not know.  We would do better to acknowledge that we have much to learn than to act as if we already know the truth or can manipulate it at will.


Auerbach, A (1996), “Dynamic Revenue Estimation”, Journal of Economic Perspectives 10: 141-157.

Engelberg, J, C Manski, and J Williams (2009), “Comparing the Point Predictions and Subjective Probability Distributions of Professional Forecasters”, Journal of Business and Economic Statistics 27: 30-41.

Manski, C (2011a), “Policy Analysis with Incredible Certitude”, The Economic Journal 121: F261-F289.

Manski, C (2011b), “Should official forecasts express uncertainty? The case of the CBO”, VoxEU.org, 22 November.

Manski, C (2013), Public Policy in an Uncertain World: Analysis and Decisions, Cambridge: Harvard University Press.

Manski, C (2014), “Facing up to uncertainty in official economic statistics”, VoxEU.org, 21 May. 

Manski, C (2015), “Communicating Uncertainty in Official Economic Statistics: An Appraisal Fifty Years after Morgenstern”, Journal of Economic Literature 53: 631-653.

Marcus, R (2016), “Welcome to the Post-Truth Presidency”, Washington Post, 2 December.




Board of Trustees Professor in Economics, Northwestern University