This essay was first published in PS: Political Science and Politics, December 2000.


Partisan Politics at Work: Sampling and the 2000 Census

Margo Anderson, University of Wisconsin, Milwaukee
Stephen E. Fienberg, Carnegie Mellon University

In his preceding article, Brunell offers some background on Census 2000, the use of sampling and adjustment in a census context, the magnitude of the differential undercount from prior censuses, and the debate over the use of sample-based adjustments to the count. Much of his description of Census 2000 is correct, as far as it goes. It is what he leaves out that is problematic. His omissions lead him to draw the wrong conclusions about the undercount and the methodology for correcting it.

The origins of the peculiar American institution of the decennial census can be traced to the Founding Fathers and the federal Constitution of 1787. When the leaders of the American Revolution met in Philadelphia in the summer of 1787 and decided to apportion seats in the new House of Representatives among the states "according to their respective numbers," they invented a fundamental new instrument of republican government. The infant U.S. government of the Confederation Era had trouble raising taxes and making decisions, in part because representatives in the Continental Congress voted by states and the states were of very disparate sizes and populations. The framers recognized the need for another policy-making mechanism that took account of the fact that states deserved different numbers of representatives and, hence, votes in the House and Electoral College. The answer was the census, a periodic count of the population and consequent redistribution of House seats and economic resources to reflect the relative sizes of the populations of the states.

The framers realized that counting the population would be difficult to do. Even in the eighteenth century the country was big, diverse, and growing rapidly. The count needed to be done using uniform national procedures so it would be deemed fair to everyone. Fairness was key, since, as with those who lose elections, losers in the population growth game have to concede to shift power to the winners. The census is an essential element of the American political system, which must be seen as equitable to the variety of political, regional, and demographic communities of the nation. But what if the census is deemed to be "unfair" to a particular demographic or political group? How can it serve its political functions of distributing power and money if people are uncounted, double counted, or counted in the wrong location? This is the real story behind the debate over adjustment.

In the following sections we address some of the issues raised by Brunell, correct some of the most egregious errors in his argument, provide references to balance those he gives, and suggest that he has misread the history of the census and the technical aspects of the debate over adjustment.

How Accurate Is the Census?

As Brunell suggests, planning for the 2000 Census was mired in controversy and contention because of dissatisfaction with the quality and accuracy of previous censuses, and the inability of Republicans and Democrats to agree on what to do about improving census results.

In 1990, political officials who oversaw the census loudly trumpeted that the census accurately counted 98.2% of the residents of the U.S., and they spoke of a 1.8% undercount. This misrepresentation of the accuracy of the 1990 Census has taken on mythical proportions (see Anderson and Fienberg 1999). Brunell perceptively notes that it is not the net undercount that matters, but the differential undercount for subgroups in American society. But he stops short of explaining the magnitude of the accuracy problem and its implications.

The reality is that in 1990, approximately 25 million (or 1 in 10) people in the country were not properly counted, with the omissions in some locations being "balanced" by erroneous enumerations and other counting errors elsewhere. The sum of omissions and erroneous enumerations is the gross error to which Brunell refers in a note but whose magnitude he omits. The burden of being missed in the census fell disproportionately on members of minority groups–blacks, Hispanics, Asian Americans, and American Indians–while the erroneous enumerations occurred in excess numbers among nonminority Americans. Statisticians, demographers, and survey experts who evaluate census methodology expect that errors of a similar order of magnitude will occur in Census 2000.
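
To make the distinction concrete, the following Python sketch (our own illustration, using invented round numbers of roughly the magnitudes cited here, not official Census Bureau figures) shows how a census can post a small net undercount while the gross error remains enormous.

    # Illustration only: invented round numbers, not official Census Bureau figures.
    omissions = 15_000_000               # people missed by the enumeration (hypothetical)
    erroneous_enumerations = 10_000_000  # duplicates and other counting errors (hypothetical)
    census_count = 248_700_000           # approximate 1990 census count

    gross_error = omissions + erroneous_enumerations     # everyone involved in some counting error
    net_undercount = omissions - erroneous_enumerations  # what the "98.2% accurate" claim reflects

    print(f"gross error:    {gross_error:,}")     # ~25,000,000 people
    print(f"net undercount: {net_undercount:,}")  # ~5,000,000, on the order of 2% of the count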

The Census Bureau discovered the differential undercount of minorities in the 1940s and has meticulously documented it in every census since. The Bureau has worked to develop methods to supplement the enumeration and improve the count using a carefully conducted and executed sample survey. Beginning in 1950, and in every census since, the Bureau has used a sample of households to check on census coverage, and over the past two decades it developed special tools and methods to use sample-based data to adjust the census counts for both erroneous enumerations and omissions. From our perspective, sampling in a census context is not new in 2000, as Brunell suggests. What is new is the careful plan to integrate its use with the enumeration results to produce sample-adjusted counts of improved accuracy.

Who could oppose the production of better census numbers for the nation? The answer is political officials who believe that their interests are not best served by a more accurate count. Brunell notes that "a principal concern of the Republicans is that Democrats will gain seats if statistical methods are used." He acknowledges that "it is not perfectly clear that the Republicans will suffer electorally with an adjusted census," but he adds that "they clearly feel that the probability is sufficiently high to warrant a battle over the use of sampling." We do not see a clear partisan winner from improved accuracy. We find the Republican claim that the Constitution requires an "actual enumeration" to be a misreading of constitutional history, and the claim that sampling is unscientific and will be manipulated even more bizarre. Every federal court that has reviewed the matter over the past decade has ruled in support of this use of sampling, and every group of scientists assembled to review census methodology has supported the broad structure of the Census Bureau's plan for the use of sampling to supplement the count, including four panels of the Committee on National Statistics at the National Research Council.1

Finally, we note that statistical sampling and estimation methods have been used for more than 20 years to measure and correct for census coverage error in other Western industrialized nations, including Australia and England (e.g., Choi, Steel, and Skinner 1998; Diamond and Skinner 1994; Steel 1994), without the political bashing we have seen in the United States.

Did Adjustment Work in 1990?

Brunell argues: "The adjustment process did not work in the 1980 Census; it did not work in the 1990 Census." What is the evidence to support this contention?

First, while there was an effort to use sample data to assess census coverage in 1980, the effort was not an integral part of the census process. Nor was there any intent on the part of the Bureau to use this evaluation for adjustment; the director, Vincent Barabba, announced this in advance of the 1980 Census. Thus, to say that adjustment didn't work in 1980 is to raise a straw man.

When the problems with the 1980 Census became clear, Bureau statisticians launched a major research effort to devise a special sample design integrated with the census structure, and to develop methods for correcting for both erroneous enumerations and omissions. They devised a post-enumeration survey (PES) to be conducted with a sample of over 300,000 households in about 10,000 census blocks nationwide. The Bureau had been prepared to proceed with this plan when, in 1987, Undersecretary of Commerce Robert Ortner announced that the 1990 Census count would not be adjusted. New York City and a coalition of other government and civil rights organizations sued to reverse Ortner's decision. In the summer of 1989, the Commerce Department and the litigants signed a Stipulation and Agreement to reinstate a PES of 150,000 households and consider the matter of adjustment de novo in 1991. The Bureau successfully implemented the PES and, in the spring of 1991, concluded that the sample-adjusted counts were superior to the raw census enumeration counts, and recommended that the adjusted counts be used as the official census results. That recommendation was overturned by the Republican-appointed Secretary of Commerce Robert Mosbacher, in July 1991.2

Brunell's critique of the 1990 methodology draws selectively on papers by Breiman (1994), Stark (1999), and Brown et al. (1999), as well as selective statements in the report of the Bureau's Committee on Adjustment of Postcensal Estimates (CAPE 1992a). What it ignores are all of the original evaluations of adjustment by the Bureau (see Anderson and Fienberg 1999, app. E), the summary of the key analyses in Mulry and Spencer (1993), published responses and critiques of his own sources (e.g., Anderson et al. 2000; Belin and Rolph 1994; Ericksen, Fienberg, and Kadane 1994), and the important addendum to the CAPE report that appeared later that year (see CAPE 1992b).

What are we to make of the errors in the PES cited by Brunell, quoting Breiman (1994) and Stark (1999)? The claim that 50 to 80% of the population adjustment for 1990 resulted from error is utter nonsense. It compares errors attributable to the PES to the net census error of 5.3 million instead of the gross census error of 25 million. As Ericksen, Fienberg, and Kadane (1994) and others have noted, it is possible to have a net census error of essentially zero at a national level if large errors of omission and erroneous enumeration balance. The argument in favor of adjustment would then be especially compelling, but the Breiman-Stark-Brunell position would be that all of the adjustment resulted from error.
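
The arithmetic behind this point is easy to see. Using the gross and net error figures just cited, and a purely hypothetical amount of adjustment attributable to PES error, the sketch below shows how the choice of denominator alone manufactures an alarming-looking percentage.

    # Net and gross 1990 census error figures as cited in the text.
    net_census_error = 5_300_000     # 5.3 million
    gross_census_error = 25_000_000  # 25 million

    # Hypothetical amount of the adjustment attributable to PES error,
    # invented purely for illustration; it is not an official estimate.
    pes_attributable_error = 3_500_000

    print(f"relative to net error:   {pes_attributable_error / net_census_error:.0%}")    # ~66%, the alarming version
    print(f"relative to gross error: {pes_attributable_error / gross_census_error:.0%}")  # ~14%, the relevant comparison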

Brunell's description of the four cells of the capture-recapture model is not quite right. The people "not in the first but in the second" count are not "the undercount," but rather the counts of directly measured omissions. This count needs to be added to N₂₂, which represents additional estimated omissions. Further, his description of the quantity N₂₁ as "unresolved" unfairly portrays the raw census totals as "the real count" and the post-enumeration sample findings as some fraudulent artifact of statisticians. But at least his description is better than Stark's (1999), which erroneously states that the Bureau's methodology does not include N₂₁ in the adjusted estimate!
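
For readers unfamiliar with the notation, here is a minimal sketch of the textbook dual-systems (capture-recapture) calculation, with hypothetical cell counts and the indexing convention assumed in the paragraph above (first index: census; second index: PES). It is the simple Lincoln-Petersen form, not the Bureau's full post-stratified procedure.

    # Hypothetical cell counts for one small area.
    n11 = 900  # counted in the census and found by the PES
    n12 = 60   # counted in the census, not found by the PES
    n21 = 40   # found by the PES, missed by the census (directly measured omissions)

    # N22 (missed by both) cannot be observed directly; under the independence
    # assumption it is estimated from the other three cells.
    n22_hat = n12 * n21 / n11

    # Dual-systems estimate of the true population, versus the raw census count.
    dse = n11 + n12 + n21 + n22_hat
    print(f"omissions: {n21} measured directly + {n22_hat:.1f} estimated missed by both")
    print(f"dual-systems estimate: {dse:.1f}; raw census count: {n11 + n12}")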

Brunell makes a big point about the discrepancies between the PES-adjusted counts and those resulting from the method of demographic analysis (DA). What he glosses over is the level of bias and uncertainty associated with the DA counts that could easily swamp the reported differences (see Anderson et al. 2000). He also fails to acknowledge that while there may be an issue of sex ratios, both PES methods and DA would almost certainly produce comparable numbers in all demographic groups if analysts were able to use census numbers corrected for erroneous enumeration as the basis for analysis.

Brunell mentions the restratified adjusted counts produced by the Bureau in 1992 during its deliberations regarding intercensal estimates. He cites two examples of discrepancies at low levels of geography, but fails to note that in neither case was the adjustment large and that the revised figures have corrections for the computer error, which essentially make them noncomparable.3 At any rate, most statistical analysts who have examined the PES evaluations with care have concluded that both versions of the adjusted counts were superior to the original census counts.

What did the Census Bureau actually conclude in its own evaluation of the adjusted counts, originally in 1991, and in the 1992 CAPE reassessment? On this, Brunell is silent. The original loss function analyses carried out by the Bureau fully supported the conclusion that the adjusted counts were demonstrably superior to the raw census counts at the national, state, and some substate levels. The 1992 CAPE report echoed this conclusion for national and state estimates but raised questions about substate areas, particularly about areas with fewer than 100,000 residents (1992a, 3). In a November 1992 addendum, CAPE members indicated that adjustment would also improve the distribution of population shares for large areas with 100,000 or more residents compared to the balance of the state. No statistical analyses showed that the enumeration counts were superior to the adjusted counts at any level of geography for either distributive or numerical accuracy.4

In offering defenses of the accuracy and reliability of the 1990 PES and the resulting sample-adjusted counts, we do not mean to suggest that either was without error. This is far from the case. Errors of matching, heterogeneity, and correlation bias were all of major concern, and these concerns were raised in the Bureau's assessments of the accuracy of adjustment. Brunell mentions correlation bias, but does not mention the theoretical results showing that, in the presence of correlation bias, the methodology used by the Bureau moves the census counts in the correct direction, just not far enough (see, e.g., Kadane, Meyer, and Tukey 1999). Thus, when correlation bias was included in the Bureau's assessments of accuracy, as in Mulry and Spencer (1993), the accuracy of the sample-adjusted counts appeared even greater. At any rate, the work at the Census Bureau during the 1990s was largely focused on making improvements to the PES design, as we discuss next.
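
The direction of that result can be illustrated with a toy simulation, given below only as a sketch of the effect and not of the Bureau's methodology: when the same hard-to-count people tend to be missed by both the census and the PES (the ignored heterogeneity studied by Kadane, Meyer, and Tukey), the pooled dual-systems estimate falls between the raw count and the truth.

    import random
    random.seed(0)

    N = 100_000          # true (hypothetical) population size
    hard_to_count = 0.3  # share of people with low capture probability in BOTH systems

    census_in, pes_in = [], []
    for _ in range(N):
        if random.random() < hard_to_count:
            p_census = p_pes = 0.75  # the same people tend to be missed by both lists
        else:
            p_census = p_pes = 0.98
        census_in.append(random.random() < p_census)
        pes_in.append(random.random() < p_pes)

    C = sum(census_in)                                        # counted in the census
    P = sum(pes_in)                                           # found by the PES
    M = sum(1 for c, p in zip(census_in, pes_in) if c and p)  # matched in both

    dse = C * P / M  # pooled dual-systems estimate, ignoring the heterogeneity
    print(f"raw census: {C:,}   DSE: {dse:,.0f}   true N: {N:,}")
    # Expected ordering: raw census < DSE < true N -- the adjustment moves the
    # count in the right direction but not far enough.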

From 1990 to 2000

Brunell describes the method of capture-recapture, which is the basis of the Bureau's dual systems approach, and outlines the design of the post-enumeration survey for the 2000 Census, now named the Accuracy and Coverage Evaluation (ACE) survey. He claims that despite some changes, including doubling the sample size, "serious questions remain" about the adjustment model. He goes on to quote Stark (1999) on the possibility of serious errors in the new scanning process (a concern that, as best we can determine, has proved to be groundless). He also leans on Brown et al. (1999), who cite numerous concerns (cf., Anderson et al. 2000). And that's the evidence he presents.

What is the reality? The doubling of the sample size for ACE has major implications for the accuracy of sample-adjusted counts, both in terms of reducing the sampling error and allowing development of a post-stratification scheme that most agree provides for a variety of improvements over 1990. The ACE design includes many additional enhancements intended to control nonsampling error, all of which are described in Bureau documentation and summarized in Prewitt (2000). They include:

  • Enhancements to the matching process, including the use of new automated matching systems, changes in the treatment of people who have moved since Census Day that simplify the matching process, and the use of extended search areas.

  • New computer processing controls for software validation and verification that will protect against computer errors.

  • Refined field operations guidelines designed to minimize the occurrence of missing data.

  • The use of telephone interviewing and computer-assisted personal interviewing (CAPI), which will result not only in improved efficiency and data quality, but will also shorten the elapsed time between census enumeration and the ACE interviews.

These changes in the methods for sampling and sample-based adjustments are far from cosmetic. Further, the adjustment process for 2000 has been spelled out and documented in advance so that there is little room for ad hoc decision making (one of Brunell's expressed concerns) and virtually no room for manipulation (the allegation constantly raised by Republican political officials). The possibility for nonsampling errors associated with ACE and the adjustment process remains a concern at the Bureau, as it should. But unlike Brunell, we see considerable grounds for optimism.

Resolving the Tensions over Census Taking

There have been two debates over Census 2000: a political one and a technical one. The technical debate has focused on how best to measure the errors inherent in census taking and those associated with the statistical tools proposed to correct for the shortcomings of the enumeration process. It is here that sampling and adjustment have risen to the fore, though disagreements remain over many technical details. The political debate is about which party wins or loses in the allocation of seats in the House of Representatives and in the drawing of political boundaries for congressional and state legislative districts. The two debates have been joined because politicians have looked to technical arguments to bolster their claims for political gain.

We believe, as does much of the statistical community, that sample-adjusted census counts will prove superior to the raw census enumeration counts in 2000, as they did in 1990. Nevertheless, this is an empirical issue and should be judged on empirical grounds rather than political ones, as suggested by Skerry (2000). As of this writing, it remains to be seen whether we will, as a nation, agree that the 2000 Census is successful.

Only once in the history of the republic was the census so challenged that it was not used for its intended purpose. That was after the 1920 count, when Congress let stand the 1910 House apportionment until 1932 (Anderson 1988; Anderson and Fienberg 1999; Eagles 1990). The 1920 Census provided strong evidence of demographic trends that were not to the liking of the Republican majority, which could not muster support for any specific reapportionment bill. In the 1920s, Congress argued fruitlessly about apportionment formulas, counting procedures, the size of the House, and the population to be counted. Meanwhile, the population distribution continued to diverge from the distribution of power in the House. The looming constitutional crisis was averted in 1929, when Commerce Secretary Herbert Hoover became President Herbert Hoover. Hoover called Congress into special session and put sufficient pressure on recalcitrant members of his party to pass a census and reapportionment bill for the 1930 Census.

The political paralysis that followed from the reapportionment stalemate of the 1920s also ultimately led Congress to delegate authority over census taking to the Bureau in 1929. Perhaps the time has come to do this again. The 1999 Supreme Court decision on the issue has taken the use of sample-based adjusted census counts for apportionment off the table for 2000. But the debate over their use for all other purposes rages on. Rather than accepting this as appropriate, as we read Brunell's analysis to do, we would argue for legislation to insulate the Census Bureau from efforts at political manipulation and restore to it the authority to manage the technical details of how to fulfill the constitutional mandate for a census. Census taking is an inherently statistical activity, and to do it well in the context of modern society may well require the use of sampling and other statistical tools. The use of sampling to improve the "traditional" census enumeration process should not be a partisan issue but a professional decision based on professional expertise and judgment. This expertise resides largely in the Census Bureau and in the professional groups that regularly advise it.

In a political maneuver in June, the Secretary of Commerce promulgated a proposed regulation delegating authority over the use of sampling to adjust the census to the Bureau director and his senior technical staff.5 The regulation is designed to constrain the next administration from reasserting control over the adjustment decision after the November election. In the event that the Republicans regain the presidency in the November elections, they would have to rescind the regulation if they wished to reassert the authority of the incoming commerce secretary over the adjustment decision in early 2001.6 For our part, we believe that the time has come to foreclose the opportunity to air political responses to technical statistical issues, and would prefer that Congress pass a law authorizing the census director to make such technical decisions. Nevertheless, we also recognize that passing such legislation is not practical in the current legislative environment, and that, in the short term, passage of the regulation would at least allow census officials to make the decision regarding the 2000 count.

In the meantime, we agree with Brunell that the partisan political debate over the accuracy of Census 2000 and the use of sampling for adjusting census results will continue to be played out in Congress, in the media, and, ultimately, in various courts across the land.

Notes

1. See Citro and Cohen (1985), Cohen, White, and Rust (1999), Steffey and Bradburn (1994), Edmonston and Schultze (1995), and the 1992 report from the Census Bureau's Committee on Adjustment of Postcensal Estimates (CAPE 1992a), which Brunell selectively cites.

2. For further details, see Anderson and Fienberg (1999) and Choldin (1994).

3. If one were to take the revised methodology from 1992 as the appropriate one to consider as a benchmark for what to expect in 2000, the discrepancy between the PES undercount figures and the DA ones is much diminished (CAPE 1992b).

4. A recent reanalysis of CAPE results by Obenski and Fay (2000) reaffirms this conclusion and offers some evidence for the reliability of adjusted counts at relatively low levels of geography.

5. For the announcement of the regulation, see Daley (2000; cf., Anderson and Fienberg 2000). For the text of the regulation and public comment on it, see the link on the Census Bureau web site (www.census.gov/dmd/www/feasibility.htm).

6. This article was written before the outcome of the election was known.

References

Anderson, Margo. 1988. The American Census: A Social History. New Haven: Yale University Press.

Anderson, Margo, Beth O. Daponte, Stephen E. Fienberg, Joseph B. Kadane, Bruce D. Spencer, and Duane L. Steffey. 2000. "Sampling-based Adjustment of the 2000 Census: A Balanced Perspective." Jurimetrics 40(3): 341-56.

Anderson, Margo, and Stephen E. Fienberg. 1999. Who Counts? The Politics of Census-Taking in Contemporary America. New York: Russell Sage Foundation.

Anderson, Margo, and Stephen E. Fienberg. 2000. "Census 2000 Controversies." Chance 13(4).

Belin, Thomas R., and John E. Rolph. 1994. "Can We Reach Consensus on Census Adjustment?" Statistical Science 9(4): 486-508.

Breiman, Leo. 1994. "The 1991 Census Adjustment: Undercount or Bad Data?" Statistical Science 9(4): 458-75.

Brown, Lawrence D., Morris L. Eaton, David A. Freedman, Stephen P. Klein, Richard A. Olshen, Kenneth W. Wachter, Martin T. Wells, and Donald Ylvisaker. 1999. "Statistical Controversies in Census 2000." Jurimetrics 39(Summer): 347-75.

Brunell, Thomas L. 2000. "Using Statistical Sampling to Estimate the U.S. Population: The Methodological and Political Debate over Census 2000." PS: Political Science and Politics 33(December).

Choi, C.Y., D.G. Steel, and T.J. Skinner. 1998. "Adjusting the 1986 Australian Census Count for Underenumeration." Survey Methodology 14:173-89.

Choldin, Harvey. 1994. Looking for the Last Percent: The Controversy over Census Undercounts. New Brunswick: Rutgers University Press.

Citro, Constance F., and Michael L. Cohen, eds. 1985. The Bicentennial Census: New Directions for Methodology in 1990. Washington, DC: National Academy Press.

Cohen, Michael L., Andrew A. White, and Keith F. Rust, eds. 1999. Measuring a Changing Nation: Modern Methods for the 2000 Census. Washington, DC: National Academy Press.

Committee on Adjustment of Postcensal Estimates. 1992a. "Assessment of Accuracy of Adjusted versus Unadjusted 1990 Census Base for Use in Intercensal Estimates, 1992. Report of the Committee on Adjustment of Postcensal Estimates, August 7, 1992." Washington, DC: Department of Commerce, Bureau of the Census.

Committee on Adjustment of Postcensal Estimates. 1992b. "Additional Research on Accuracy of Adjusted versus Unadjusted 1990 Census Base for Use in Intercensal Estimates, 1992. Addendum to Report of the Committee on Adjustment of Postcensal Estimates, November 25, 1992." Washington, DC: Department of Commerce, Bureau of the Census.

Daley, William M. 2000. U.S. Commerce Secretary William M. Daley Delegates Decision to Census Bureau on Adjusting Census 2000. Washington, DC: Department of Commerce.

Diamond, Ian, and Chris Skinner. 1994. "Comment on Three Papers on Census Adjustment." Statistical Science 9(4): 508-10.

Eagles, Charles. 1990. Democracy Delayed: Congressional Reapportionment and the Urban-Rural Conflict of the 1920s. Athens: University of Georgia Press.

Edmonston, Barry, and Charles Schultze, eds. 1995. Modernizing the U.S. Census: Panel on Census Requirements in the Year 2000 and Beyond. Washington, DC: National Academy Press.

Ericksen, Eugene P., Stephen E. Fienberg, and Joseph B. Kadane. 1994. "Comment on Three Papers on Census Adjustment." Statistical Science 9(4): 511-15.

Kadane, Joseph B., Michael M. Meyer, and John W. Tukey. 1999. "Yule's Association Paradox and Ignored Stratum Heterogeneity in Capture-Recapture Studies." Journal of the American Statistical Association 94:855-59.

Mulry, Mary H., and Bruce D. Spencer. 1993. "Accuracy of the 1990 Census and Undercount Adjustments." Journal of the American Statistical Association 88(September): 1080-92.

Obenski, Sally M., and Robert E. Fay. 2000. "Analysis of CAPE Findings on PES Accuracy at Various Geographic Levels, 2000." In Accuracy and Coverage Evaluation: Statement on the Feasibility of Using Statistical Methods to Improve the Accuracy of the Census, by Kenneth Prewitt. Washington, DC: Department of Commerce, Bureau of the Census.

Prewitt, Kenneth. 2000. Accuracy and Coverage Evaluation: Statement on the Feasibility of Using Statistical Methods to Improve the Accuracy of the Census. Washington, DC: Department of Commerce, Bureau of the Census.

Skerry, Peter. 2000. Counting on the Census? Race, Group Identity, and the Evasion of Politics. Washington, DC: Brookings Institution Press.

Stark, P.B. 1999. "Differences between the 1990 and 2000 Census Adjustment Plans, and Their Impact on Error." Technical Report 550. Berkeley: University of California, Berkeley.

Steel, David. 1994. "Comment on Three Papers on Census Adjustment." Statistical Science 9(4): 517-19.

Steffey, Duane L., and Norman M. Bradburn, eds. 1994. Counting People in the Information Age. Washington, DC: National Academy Press.





