Rich non-responders in surveys

Martin Ravallion 24 June 2021

Today we see the top end of the income distribution studied with a very different data tool than that used for the bottom and middle. Household surveys have long been used for measuring poverty and inequality, and for related tasks in distributional analysis – including policy evaluation. The sampling theory developed in the early 20th century gave surveys a firmer statistical foundation for such purposes. Yet, in the mid-20th century US, when Simon Kuznets wanted to measure income shares at the top end of the distribution, he turned instead to income tax records and national accounts (Kuznets and Jenks 1953). 

In the new millennium, tax records have been widely used in a way that resembles Kuznets’s method, in what can be dubbed the ‘top incomes literature’. Piketty’s (2003) study of top income shares in France was influential, as was the review by Atkinson et al. (2011). Compilations of top income shares, such as those described in Atkinson and Morelli (2014) and found in the World Inequality Database, have relied heavily on income tax records rather than surveys. Yet global assessments of poverty have relied mainly on surveys, such as in the World Bank’s PovcalNet.

There are a number of concerns about how reliable surveys are for measuring top incomes. Those who agree to be interviewed may understate their income or simply refuse to respond to a sensitive question on some income component. There are imputation/matching methods that can address such problems of ‘item nonresponse’ (as it is called in statistics) by drawing on the questions that are in fact answered.

Another problem is becoming a serious concern almost everywhere: low rates of compliance among rich households with an initially randomised assignment (often at the second stage of a two-stage sample design, the first stage being the sampling of geographic areas). In a new paper (Ravallion 2021), I critically examine the various ways this problem of ‘unit nonresponse’ can be addressed. The context includes developing countries with weak statistical systems, as well as rich countries with ‘state-of-the-art’ systems.

We should be clear first about what measurement theory tells us. Finding that richer households are less likely to participate in surveys when sampled does not imply that we underestimate inequality by any standard measure (satisfying the usual Pigou-Dalton transfer principle). The effect on measured inequality is an empirical question. We can be more confident on theoretical grounds that standard poverty measures will be overestimated when the likelihood of complying with the survey falls as income rises. It is less clear by how much.
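A minimal simulation (with hypothetical functional forms and parameter values) makes the poverty point concrete: when the probability of responding falls as income rises, poor households are over-represented among respondents, so the headcount poverty rate measured from the survey exceeds the true rate.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(3)
y = rng.lognormal(0.0, 1.0, 100_000)      # stylised 'true' income distribution
z = np.quantile(y, 0.40)                  # poverty line set so true headcount = 40%
p = expit(1.5 - np.log(y))                # response probability falls with income
y_resp = y[rng.random(y.size) < p]        # the sample actually observed

true_rate = np.mean(y < z)                # 0.40 by construction
naive_rate = np.mean(y_resp < z)          # overstated, since the poor over-comply
```

Under this stylised selection, the naive headcount rate comes out several percentage points above the true 40%, while the direction of the bias in any given inequality measure would depend on where in the distribution the nonresponse bites.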

What can be done about the problem of missing top income recipients? 

Any user of micro survey data for empirical analysis will see the appeal of correcting the bias in top-income shares internally to the survey data. Doing so retains both the statistical integrity of the survey design (with implications for statistical inference) and the many applications of the micro-data files in distributional analysis. How can that be done?

Non-compliance with survey sampling can take the form of outright refusal or not being home for interviews. Either way, it is an outcome of the constrained choices made by those sampled. I argue that thinking about behavioural responses helps to clarify concerns about the non-behavioural methods underlying current statistical practices, and helps in thinking about better methods and measures. 

Statistics offices often try to correct for selective nonresponse ex ante in the field; for example, by encouraging interviewers to return to nonrespondents. This assumes that there is a random (non-selective) failure to comply initially, which seems implausible behaviourally. Another option is to look for ‘observationally similar’ replacements for the non-compliers, but that is clearly challenging when the (observable or not) characteristics are covariates of compliance. Some major statistics offices have a practice of paying survey respondents, which probably increases response rates but could also increase bias due to income-selective responses. 

As has long been recognised, income-selective non-compliance with an initially randomised assignment can be corrected ex post under certain conditions by reweighting the survey data. Economists have given too little attention to how survey weights are estimated; indeed, it seems that the weights are almost always taken for granted. However, past methods of calculating these weights do not always take proper account of the behaviour of those selected for a sample, and thus do not deal fully with the survey compliance problem. Newer methods are available that can do a better job. 

In Korinek et al. (2007), my co-authors and I proposed a method of correcting for selective compliance using the geographic distribution of survey response rates (the proportion of the original random sample for each area that agreed to be interviewed) to infer how the household-level probability of agreeing to be interviewed varies with own-income and other covariates. One postulates a micro-level compliance function, treating the probability of responding if sampled as a function of household income (and potentially other covariates). Under certain identifying assumptions, a unique compliance function can be retrieved econometrically from the data. A key identifying assumption is that location does not matter to response rates independently of income and other covariates. Another important assumption is that the specific sample drawn includes at least some rich respondents (even though very few may be found in the final sample). Thus, the sample distribution has the same support as the true distribution. We can call this common support.  
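A stylised sketch of that logic (not the actual KMR estimator; the logistic form, parameter values, and sample sizes below are all hypothetical) chooses the compliance-function parameters so that the inverse response probabilities, summed over the respondents in each area, match the number of households originally sampled there:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# Stylised design: 100 areas, 200 households randomly sampled per area.
A, n = 100, 200
area_mu = rng.normal(0.0, 0.5, A)                 # area differences in mean log income
logy = np.repeat(area_mu, n) + rng.normal(0.0, 1.0, A * n)
area = np.repeat(np.arange(A), n)

a0, b0 = 2.0, 0.8                                 # 'true' compliance parameters
respond = rng.random(A * n) < expit(a0 - b0 * logy)   # richer -> less likely to respond
logy_r, area_r = logy[respond], area[respond]     # only respondents are observed

def loss(theta):
    a, b = theta
    inv_p = 1.0 / expit(a - b * logy_r)
    # Moment condition: inverse response probabilities summed over the
    # respondents in each area should match the number sampled there.
    fitted = np.bincount(area_r, weights=inv_p, minlength=A)
    return np.sum((fitted - n) ** 2)

res = minimize(loss, x0=[1.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = res.x
weights = 1.0 / expit(a_hat - b_hat * logy_r)     # corrective weights for respondents
```

The fitted weights scale up the under-represented rich respondents. Identification in this sketch rests on the assumptions stated above: location matters to response rates only through income (and other covariates), and common support holds.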

By identifying a behavioural micro model of compliance, our method can improve upon past, behaviourally ad hoc methods of reweighting survey data, so as to better reflect the missing top income recipients. Nor is the method computationally difficult, especially given the Stata command developed by Muñoz and Morelli (2021); their paper describes the programme, shows how to use it, and provides an application. There are also implications for the data that statistics offices provide, notably on response rates.

Does it make a difference? Using the geographic spread of response rates in modelling compliance with randomised assignments in the Current Population Survey (CPS) for the US, we find that the probability of being interviewed falls steadily as income rises, from 95% or higher for the poorest decile to only 50% for the richest, and under 20% among top incomes. Thus, the observations one has on rich households need to be weighted up quite a lot relative to those for poor households. 

We compare our estimated weights to those used by the Census Bureau to address nonresponse bias and find that the ‘canned’ CPS weights do not adequately adjust for the (strong) income effect on survey nonresponse implied by the behavioural model. The reweighting method we propose adds around 0.05 to the Gini index of income inequality in the US, when compared to the usual CPS data (Korinek et al. 2006). More recent estimates by Morelli and Muñoz (2019) indicate even larger upward revisions; over the period 2016–18, the mean CPS Gini index for the US is 0.46 without correcting for selective compliance, but rises to 0.53 with the corrections (using our method). Poverty measures are not significantly impacted.
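The mechanics of how inverse-probability reweighting raises the measured Gini index can be seen in a stylised illustration (hypothetical numbers; the true response probabilities are used as weights here, whereas in practice they must be estimated, for instance from area-level response rates as above):

```python
import numpy as np
from scipy.special import expit

def gini(y, w=None):
    """Weighted Gini index computed from the Lorenz curve (trapezoidal rule)."""
    y = np.asarray(y, float)
    w = np.ones_like(y) if w is None else np.asarray(w, float)
    order = np.argsort(y)
    y, w = y[order], w[order]
    p = np.concatenate(([0.0], np.cumsum(w) / w.sum()))            # population shares
    L = np.concatenate(([0.0], np.cumsum(y * w) / (y * w).sum()))  # income shares
    area_under_lorenz = np.sum((L[1:] + L[:-1]) * np.diff(p)) / 2.0
    return 1.0 - 2.0 * area_under_lorenz

rng = np.random.default_rng(1)
y = rng.lognormal(0.0, 1.0, 50_000)          # stylised 'true' income distribution
p = expit(1.5 - np.log(y))                   # response probability falls with income
resp = rng.random(y.size) < p                # who actually shows up in the survey

g_true = gini(y)                             # inequality in the full population
g_naive = gini(y[resp])                      # understated: the rich are missing
g_corr = gini(y[resp], w=1.0 / p[resp])      # inverse-probability weighted
```

The uncorrected Gini falls short of the truth because the top tail is thinned out, while the reweighted estimate recovers it closely; the size of the gap depends on how sharply compliance falls with income.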

When selective nonresponse is not dealt with properly, the usual standard errors for distributional shares and inequality measures can be very deceptive about the true level of imprecision. In my new paper (Ravallion 2021), I report simulations illustrating how the true income share belonging to the rich can be well outside the 95% confidence interval that is obtained if the bias is left uncorrected.
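The flavour of those simulations can be conveyed with a simple bootstrap sketch (hypothetical numbers, not the simulations in the paper): with income-selective nonresponse left uncorrected, a naive 95% confidence interval for the top-decile income share sits entirely below the true share.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(2)

def top_share(y, frac=0.10):
    """Share of total income received by the richest `frac` of observations."""
    ys = np.sort(np.asarray(y, float))
    k = max(1, int(len(ys) * frac))
    return ys[-k:].sum() / ys.sum()

y = rng.lognormal(0.0, 1.0, 40_000)              # stylised 'true' population incomes
true_share = top_share(y)                        # ~0.39 for this distribution
p = expit(1.5 - np.log(y))                       # compliance falls with income
y_resp = y[rng.random(y.size) < p]               # the selected sample observed

# Naive 95% bootstrap interval that ignores the selective nonresponse
draws = [top_share(rng.choice(y_resp, size=y_resp.size)) for _ in range(200)]
ci_lo, ci_hi = np.percentile(draws, [2.5, 97.5])
```

Because the bias is large relative to the sampling error, the interval [ci_lo, ci_hi] excludes the true top-decile share: the apparent precision is spurious.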

What about when common support fails to hold?

The literature on top incomes has assumed (often implicitly) that common support does not hold – that the rich never participate in surveys. We generally don’t know if that is correct, since there is very little evidence to which one can point. It may well be that the bias in income shares could be adequately corrected by reweighting the survey data. 

If common support does not hold, then income tax records offer help. Much progress has been made in obtaining public-use tabulations of the distributions of taxable income and taxes paid, and in carefully using these data to construct income distributions, sometimes in combination with survey data.

Nonetheless, there are also reasons for caution. Tax data come with their own concerns, such as tax avoidance/evasion, weak coverage of informal sectors, and illicit incomes – including capital flight, which is a serious problem for some poor countries as well as rich ones. There are also some poorly understood concerns about construct validity, given the limitations of taxable income as a basis for interpersonal comparisons of economic welfare. Unlike methods that work internally with the surveys, there is also a question about how to calculate the variance of estimates based on merging with external sources, such as income tax records. Further work is needed on these issues.

How much all this matters to policy remains to be seen. Just as top incomes are hard to reveal for measurement purposes – and we may never know just how unequally income and wealth are distributed – they can be hard to tax for financing and redistributive purposes. The hope remains that progress in advancing measurement will also help foster better policies. 

References

Atkinson, A B, T Piketty and E Saez (2011), “Top Incomes in the Long Run of History”, Journal of Economic Literature 49(1): 3–71.

Atkinson, A B and S Morelli (2014), “The Chartbook of Economic Inequality”, VoxEU.org. 26 March.

Korinek, A, J Mistiaen and M Ravallion (2006), “Survey Nonresponse and the Distribution of Income”, Journal of Economic Inequality 4(2): 33–55.

Korinek, A, J Mistiaen and M Ravallion (2007), “An Econometric Method of Correcting for Unit Nonresponse Bias in Surveys”, Journal of Econometrics 136: 213–235.

Kuznets, S and E Jenks (1953), Shares of Upper Income Groups in Income and Savings, Cambridge, Mass.: National Bureau of Economic Research.

Morelli, S and E Muñoz (2019), “Unit Nonresponse Bias in the Current Population Survey”, Stone Center, City University of New York, mimeo. 

Muñoz, E and S Morelli (2021), “KMR: A Command to Correct Survey Weights for Unit Nonresponse using Groups’ Response Rates”, The Stata Journal 21(1): 206–219.

Piketty, T (2003), “Income Inequality in France, 1901–1998”, Journal of Political Economy 111(5): 1004–42.

Ravallion, M (2021), “Missing Top Income Recipients,” NBER Working Paper 28890.

Edmond D. Villani Chair of Economics, Georgetown University
