Florida 2000 and Washington 2004

A Study of Two Elections

A few basic principles of statistics are enough to reveal the flaws in this argument. For starters, Miami-Dade is not even remotely representative of Florida's general population. As reflected in the CVF, Florida was roughly 11 percent black and 79 percent white in 2000--proportions that have changed little since. Miami-Dade County, however, was about 21 percent black and 32 percent white. Its felon population is also different from the rest of the state's. Any argument based on the demographic breakdown of either, even a valid one, would not extrapolate to the rest of the state. Thernstrom and Redenbaugh referred to Miami-Dade simply because the USCCR did (the USCCR focused on Miami-Dade only because it provided the clearest example of the issues they wanted to highlight--not because they considered it representative of Florida in general). Having chosen a decidedly unrepresentative county, they proceed to use inadequate data and a statistical measure that is not relevant to their point. Their estimate of the felon list's error rate is based only on a survey of those who appealed their listings successfully. We already saw that this ignores those who did not appeal--likely a significant number--as well as counties such as Leon that never even contacted a majority of those on their lists. The data for a more thorough and representative comparison was available to them had they desired one. Their dissenting statement was released in June 2001. By that time the April 2001 CVF and county-level data on how the list was used were both available. Furthermore, the CVF contains racial data that the felon list alone did not, which allows error rates across the state to be broken out by race.
Though not exact, the actual retentions by race in counties that used the list would have provided them with a far more believable estimate of the list's error rate than a handful of appeals in a county that bore little demographic resemblance to the rest of the state. This was not done.

Stuart (2004) did provide this breakdown in a year 2002 preprint of his paper. Table 10 of his 2002 preprint shows the distribution of voters on the 2000 felon list and the April 2001 CVF by race and how the list was used. His Table 12 provides the same data for counties that made heavy use of the list, with and without Miami-Dade County included. Stuart provides Miami-Dade data alone for comparison precisely because he recognized what Thernstrom and Redenbaugh did not--that Miami-Dade was an outlier in felon list purge demographics and not representative of the state in general, or even of the sum total of all counties that relied on the list.

For Miami-Dade, Table 12 shows 816 listings for blacks, of which 108 were retained. The corresponding figures for whites are 209 and 30 (these figures will not correlate directly with those used by Thernstrom and Redenbaugh, as they reflect only the year 2000 list rather than the 1999 and 2000 lists combined). Because retention rates reflect errors discovered through a variety of methods by those using the list, they provide a more inclusive look at errors than appeals alone, and will likely capture many who did not appeal their listings along with those who did. Applying Thernstrom and Redenbaugh's method to this data yields racial "error rates" of 14 percent for whites and 13 percent for blacks--contrary to Thernstrom and Redenbaugh's claim. For the entire state--the real figures of interest--comparable figures are provided in Table 10 for all Florida counties that used the list. There we have 7,727 listings for blacks, of which 2,255 were retained, and 10,529 and 3,655 respectively for whites. This yields a rate of 35 percent for whites as compared to 29 percent for blacks--higher, but again, hardly by a factor of two.
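These retention-rate figures are easy to reproduce. A minimal sketch (my own arithmetic, using only the counts quoted above from Stuart's Tables 10 and 12):

```python
# Retention counts as quoted above from Stuart's 2002 preprint.
# Format: (listed, retained); retained/listed is the "error rate"
# under Thernstrom and Redenbaugh's own method.
data = {
    "Miami-Dade": {"black": (816, 108), "white": (209, 30)},       # Table 12
    "Statewide":  {"black": (7727, 2255), "white": (10529, 3655)}, # Table 10
}

for place, groups in data.items():
    for race, (listed, retained) in groups.items():
        print(f"{place} {race}: {100 * retained / listed:.0f}%")
```

As the output confirms (13 and 14 percent for Miami-Dade, 29 and 35 percent statewide), nothing here approaches a factor-of-two racial gap in either direction.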

In fact, none of these figures is meaningful, because Thernstrom and Redenbaugh are using a bogus statistic. Racial percentages alone are more likely to reflect felony convictions than errors. For error rate we must consider how the list was created. We saw earlier that the felon list was derived mainly from the FDLE Database. Likewise, the end result of list-driven purges will be manifested in the April 2001 CVF. The FDLE Database and the CVF are both drawn from the same pool--the general Florida population. If the errors in the felon list truly are random errors, they will be unbiased with respect to race. Given this, and the fact that the sample sizes in question are large (>> 1000), they will reflect the same demographics as the CVF. Note that because it is the errors that interest us--not the list makeup in general--this will be true regardless of the racial distributions of felony conviction rates by county or state. If the list truly is biased racially, we would expect wrongful listings to diverge from the racial demographics of the general population as reflected in the CVF--around 14 percent black and 86 percent white, omitting latinos--regardless of whether wrongful listings for whites are more numerous than those for blacks. Inspection of the USCCR data for Miami-Dade, Stuart's data for high-purge counties and all of Florida, and other estimates of felon list error rates shows clearly that this is not the case. No matter how the numbers are worked, blacks are disproportionately reflected in erroneous listings at far higher than the 14 percent rate we would expect statewide.
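One way to make this concrete is a one-sample proportion test (my own check, not part of any of the cited analyses), comparing the black share of the statewide retained listings from Stuart's Table 10 against the roughly 14 percent black share of the CVF:

```python
import math

# Statewide retained listings by race, from Stuart's Table 10 as quoted above.
black_retained, white_retained = 2255, 3655
n = black_retained + white_retained

p_hat = black_retained / n   # observed black share of retained (wrongful) listings
p0 = 0.14                    # expected share if errors mirrored the CVF

# One-sample z-test for a proportion (normal approximation to the binomial).
se = math.sqrt(p0 * (1 - p0) / n)
z = (p_hat - p0) / se
print(f"observed {p_hat:.1%}, expected {p0:.0%}, z = {z:.1f}")
```

The observed share is about 38 percent against an expected 14, and a z statistic in the dozens makes race-neutral errors untenable at any conventional significance level, consistent with the conclusion above.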

Thernstrom and Redenbaugh point out that the committee did not interview anyone who had been barred from voting due to an erroneous listing. Their wording implies not only that few if any voters were actually disenfranchised by a wrongful listing, but that they personally did not know of anyone who had been. Yet their own source names no fewer than five people who were wrongly listed and specifically describes how three of them had been barred from voting (Hiaasen et al., 2001). The first five paragraphs of the article tell the story of one of them. Surely these present a picture of the human impact of wrongful disenfranchisement. Thernstrom and Redenbaugh could not have read this article thoroughly and honestly without seeing this, making their attempts to minimize the issue evasive at best, if not downright callous. Furthermore, even if we grant their estimate of voters disenfranchised by the felon list, we're still left with 1,104 law-abiding citizens who were robbed of their rightful vote for president. This represents more than twice Bush's margin of victory. It has already been shown that close to two-thirds of those wrongly listed were Democratic voters, implying that if those people had not been denied their rightful vote, they might well have swung the election. I doubt Thernstrom and Redenbaugh would be so cavalier if the roles were reversed. The WSRP invested over five months and millions of dollars into challenging the Gregoire/Rossi race over a similar number of actual felon voters and fewer than 200 wrongful felon listings, none of whom were denied their vote--while countless Far-Right forums nationwide decried the travesty of it all. This is not a credible argument.

3)   Ballot Spoilage was not Racially Biased.

Of all the USCCR's claims, none provoked as much rage as their assertion that the impacts of Florida 2000 fell disproportionately on minorities. The largest part of their racial disparity evidence came from eyewitness testimony and several statistical analyses of ballot spoilage rates. The Commission concluded that statewide, ballot spoilage was nine times more likely to fall on poor and minority voters than on whites. Thernstrom and Redenbaugh devoted most of their dissent to a blistering attack on this conclusion, which they called incompetent and entirely political in motive. At times their criticisms border on outright slander. They were especially critical of the statistical analyses that provided the lion's share of the Commission's evidence, and devoted literally dozens of pages to debunking them. But in all, their case boils down to a handful of specific claims. In what follows I will present a brief overview of the Commission's analysis, followed by an examination of each claim.


The USCCR Statistical Model

The Commission based most of their conclusions on a series of multiple regression analyses by Allan Lichtman of American University (Lichtman, 2001). Lichtman analyzed ballot spoilage in 54 of Florida's 67 counties for which there were separate records of undervotes, overvotes, and total unrecorded votes. These covered 94 percent of all ballots cast in Florida 2000. Independent variables included county voting technology, race, education, income level, percentage of registered black voters by county based on year 2000 voter registration data, and a number of other socioeconomic variables. Miami-Dade, Duval, and Palm Beach Counties were examined at the precinct level because they were statistical outliers for ballot spoilage and racial demographics. Voter registration data for these three was obtained from 1998 county level registration records, which correlated with the comparable year 2000 records at the 0.996 level (near perfect). Unrecorded vote data for them was obtained from Hansen (2001). Data on election returns, voter registration, and voting technology for the remaining counties was obtained from the FDE website (FDE, 2001b). Information on unrecorded votes was obtained from the Governor's Select Task Force report (GSTF/CCPP, 2001, pgs. 31-32), court records from Siegel v. Lepore (2000), and CNN and the Associated Press (CNN, 2000). Data for other socio-economic variables was obtained from the 1990 U.S. Census (USBC, 1990). Estimated literacy rates were obtained from CASAS (CASAS, 1996; Reder, 1997).

Lichtman compared county and precinct racial compositions to overall unrecorded votes, overvotes, and undervotes using simple statistical, multiple regression, and ecological regression methods. Ecological regressions attempt to infer the behavior of individual data points from aggregate samples--in effect, inferring a subsample at "below grid" levels. In this case individual voter behavior by race was inferred from county and precinct level aggregates for which reliable data was available. Though useful if carefully prepared, they can be unreliable if the region being evaluated is strongly heterogeneous in one or more of the regression variables--as the saying goes, if Bill Gates shows up at a town hall meeting in a slum neighborhood, the average income in the room will suddenly go up by $1 million. Scale factors like this can be extraordinarily difficult to control for without introducing spurious correlations, so ecological regressions do not yield deterministic solutions (Schuessler, 1999). But there are ways to test their robustness. Where independent data is available for one or more variables at smaller scales than the aggregate averages being used, it's often possible to get a "clean look" at these variables without inferring anything about their aggregate behavior. A precinct that is 100 percent black or white, for instance, will give a clear look at the race variable that can be compared with the larger model even if other variables are present (Duncan, 1953). The method is limited in scope in that it can only be used where conditions allow for such a "clean" look. But where it can be used it provides a reliable robustness check of the model. This particular case, where such an analysis is done on precincts that are unusually homogeneous in one or more variables, is called extreme analysis (Lichtman, 1991; Grofman, et. al., 1994).
Lichtman supplemented his ecological regressions with extreme analyses of Miami-Dade, Duval, and Palm Beach County precincts that were 90 percent or more black or non-black. Because the race variable is essentially homogeneous in these areas, no ecological inferences were necessary with respect to other variables, so these provide an independent check of his models. Analyses like these seldom duplicate the results of the model they're testing, but if it's on solid ground in other respects they should be close.
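The core of an ecological regression is simple. A sketch of Goodman's classic single-variable formulation follows, using synthetic county data built from the statewide rates in Lichtman's Table I; this is an illustration of the technique only, not Lichtman's actual model, which included many more variables:

```python
import numpy as np

# Goodman's ecological regression: assume each county's spoilage rate is
#   rate_i = r_nonblack + (r_black - r_nonblack) * black_share_i
# Fitting a line through county-level points then recovers the two group
# rates, under the (strong) assumption that rates are constant across counties.

rng = np.random.default_rng(0)
black_share = rng.uniform(0.02, 0.6, size=54)   # 54 hypothetical counties
r_black, r_nonblack = 0.144, 0.016              # statewide rates from Table I
spoilage = r_nonblack + (r_black - r_nonblack) * black_share

slope, intercept = np.polyfit(black_share, spoilage, 1)
est_nonblack = intercept          # implied rate at 0 percent black
est_black = intercept + slope     # implied rate at 100 percent black
print(round(est_nonblack, 3), round(est_black, 3))  # → 0.016 0.144
```

With noiseless, perfectly linear data the fit recovers the group rates exactly; the pitfalls described above arise when the per-county rates themselves vary with demographics, which is precisely what the extreme-analysis check is designed to catch.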

Lichtman made several model runs under varying conditions testing ballot spoilage response to different variables. He found statewide ballot spoilage rates of 14.4 percent and 1.6 percent for blacks and non-blacks respectively (Lichtman, 2001, Table I). Broken out by type, these were 12.0/0.6 percent black/non-black for overvotes, and 2.3/1.2 percent black/non-black for undervotes. As expected, spoilage rates were considerably higher in punch card and centrally recorded optical scan counties than in those that used precinct recorded optical scan methods. Yet even in the latter, spoilage rates came in at 2.5/0.2 percent black/non-black for overvotes, 2.1/0.1 percent black/non-black for undervotes, and 5.2/0.4 percent black/non-black overall (Lichtman, 2001, Table I). For the three outlier counties the figures are considerably higher, reaching 23.6/5.5 percent black/non-black in the case of Duval County (all three used punch card technologies). In every case blacks were far more likely to suffer ballot spoilage than non-blacks, and overvotes were more prevalent than undervotes.

Thernstrom and Redenbaugh offered several criticisms of Lichtman's model and methods. These can be condensed to the following claims.


The Model's conclusions are invalidated by the ecological fallacy.

The "ecological fallacy" is a well-known statistical principle which states that the individual behavior of some variable cannot be predicted from larger averages. In this case, we would say that the voting behavior of any black or non-black voter cannot be predicted from average voting trends in his or her county or precinct. According to Thernstrom and Redenbaugh, because Lichtman's conclusions were based on ecological regressions of county and state level averages, we cannot draw any meaningful conclusions from them. On page 13 they state that,

"The majority report argues that race was the dominant factor explaining whose votes counted and whose were rejected. But the method used rests on the assumption that if the proportion of spoiled ballots in a county or precinct is higher in places with a larger black population, it must be African American ballots that were disqualified. That conclusion does not necessarily follow, as statisticians have long understood. This is the problem that is termed the ecological fallacy.

"We have no data on the race of the individual voters. And it is impossible to develop accurate estimates about how groups of individuals vote (or misvote) on the basis of county-level or precinct-level averages."

(Thernstrom & Redenbaugh, 2001)

This statement is incorrect on at least two counts. First, the analysis did not assume race was the sole factor in ballot spoilage. Variables for literacy, education level, and other socioeconomic factors were also included and tested separately to evaluate their contributions to the response. This was quite clear from Lichtman's statement of data and methods (Lichtman, 2001). Second, even setting that aside, the characterization of this as an ecological failure is both wrong and irrelevant. The principle actually says that we cannot predict the behavior of individuals--in this case, the behavior and experiences of individual voters--from larger aggregate averages. We can, however, estimate the most likely behavior or experience of "groups of individuals" with common demographic characteristics from larger averages, particularly if independent information exists for that group that isolates the variables of interest. If Thernstrom and Redenbaugh are to be believed, it would be "impossible to develop accurate estimates" of whether young Hollywood movie stars are more likely to vote liberal, or Caucasian Idaho Panhandle farmers are more likely to vote conservative, from any sort of aggregated data. This is patent nonsense.

It's also beside the point. The USCCR never claimed that all blacks were individually discriminated against. They claimed that minorities, as a community, were more likely than whites to experience unintentional ballot spoilage, for whatever reason. The behavior or experience of any individual minority voter has no bearing on this point. Lichtman was explicit about his use of extreme analysis in precincts that were nearly homogeneous by race as an independent check on his ecological methods. He provided details of how the method is applied, the absence of inferences in its use, and its relevance to ecological regressions (Lichtman, 2001; Lichtman, 1991; Grofman, et. al., 1994). Yet Thernstrom and Redenbaugh claim that,

"The report ignores the fact that the county-level and precinct-level data yielded quite different results. Ballot rejection rates dropped dramatically when the precinct numbers were examined, even though comparing heavily black and heavily non-black precincts should have sharpened the difference between white and black voters, rather than diminishing it. Dr. Lichtman obscures this point by shifting from ratios to percentage point differences.

"Dr. Lichtman’s precinct analysis is just as vulnerable to criticism as his county-level analysis. It employs the same methods, and again ignores relevant variables that provide a better explanation of the variation in ballot spoilage rates."

(Thernstrom & Redenbaugh, 2001)

Once again they misunderstand the methods and results. Extreme analysis does not use "the same methods" as ecological regression. The elimination of inferences involving the variable of interest alone significantly reduces the impact of ecological fallacies, which is why the method is useful as an independent robustness check. Nor is it true that comparing heavily black and heavily non-black precincts must sharpen the difference between white and black voters. Regression analyses like Lichtman's (or their own consultant's) make no statements about causality; they quantify correlations. Precinct level numbers would only be expected to sharpen racial disparities if the root causes were more sharply defined at the precinct rather than the county or state level. This is not at all clear. There is considerable evidence, for instance, that voting technology is a significant factor in these trends, and may even account for most of the disparities (Tomz & Van Houweling, 2003). Minorities are far more likely than whites to be concentrated in poor areas (NRC, 1989; 1990; 2001)--a fact that Thernstrom and Redenbaugh acknowledge on page 26. These areas are less likely than their wealthy counterparts to have access to state-of-the-art voting technologies and technical support. The impact of something like this would not be resolved at the precinct level. In any given precinct all voters will be using the same technology regardless of race, so a difference in spoilage rate between, say, precinct-recorded optical scan and punch card technologies would be unresolvable. But at the county or state level the difference would show up like a searchlight on a clear night. Given the importance of this variable alone, we expect lower trends at the precinct level. Something would have been wrong with Lichtman's analysis if he had come up with anything else.
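A toy model makes the point. All of the rates and precinct counts below are my own invented numbers, not Florida data; the only structural assumptions are that spoilage is driven mostly by voting technology plus a small race-linked effect, and that minority voters are concentrated in the punch-card county:

```python
# Assumed spoilage model: a large technology effect plus a small
# race-linked effect. All numbers are illustrative only.
tech_rate = {"punchcard": 0.040, "optiscan": 0.005}
race_bump = 0.010  # assumed extra spoilage in majority-black precincts

def rate(tech, black):
    return tech_rate[tech] + (race_bump if black else 0.0)

# Precincts: (technology, is_majority_black, voters). Black precincts
# are concentrated in the punch-card county.
precincts = [
    ("punchcard", True, 10000), ("punchcard", False, 2000),
    ("optiscan", False, 10000), ("optiscan", True, 1000),
]

# Within a single technology, the black/non-black ratio is modest:
print(round(rate("punchcard", True) / rate("punchcard", False), 2))  # → 1.25

# Aggregated by race across the whole state, the technology mix dominates:
def group_rate(is_black):
    rows = [(rate(t, b), n) for t, b, n in precincts if b == is_black]
    return sum(r * n for r, n in rows) / sum(n for _, n in rows)

print(round(group_rate(True) / group_rate(False), 2))  # → 4.32
```

The precinct-level comparison sees only the small race-linked bump (a ratio of 1.25), while the statewide comparison also picks up the technology effect (a ratio above 4), which is exactly why lower precinct-level disparities are expected rather than suspicious.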

It's important to note that Lichtman did not perform precinct level extreme analyses to validate his county or statewide ballot spoilage rates. He did so to determine whether his ecological regressions had properly isolated race from other variables. If they had, we should see good agreement between his precinct level ecological and extreme analysis results. In fact, they did agree very well, and Thernstrom and Redenbaugh's own figures demonstrate this. On page 25 they show a table of estimated racial disparities in ballot rejection rates taken from Lichtman's precinct level ecological and extreme analyses. They made much of the fact that the ratio of black/non-black ballot spoilage was lower at the precinct level, yet there is no reason to believe it shouldn't be. The relevant information is in how well the precinct level results compare for the two analysis methods. Of the six sets of comparable values they show for each, all agree to within 1.5 percentage points for the two methods, and four of the six agree to within 0.3 percentage points. This is exceptional agreement.


The analysis did not adequately account for non-race related variables.

Thernstrom and Redenbaugh claim that Lichtman's analysis assumed race to be the dominant variable in ballot spoilage rates without accounting for other variables. On page 16 they state that,

"The Commission’s report assumes race had to be the decisive factor determining which voters spoiled their ballots. Indeed, its analysis suggests that the electoral system somehow worked to cancel the votes of even highly educated, politically experienced African Americans.

In fact, the size of the black population (by Dr. Lichtman's own numbers) accounts for only one-quarter of the difference between counties in the rate of spoiled ballots (the correlation is .5). And Dr. Lichtman knows that we cannot make meaningful statements about the relationship between one social factor and another without controlling for or holding constant other variables that may affect the relationship we are assessing.

Although Dr. Lichtman claims to have carried out a 'more refined statistical analysis,' neither the Commission's report nor his report to the Commission display evidence that he has successfully isolated the effect of race per se from that of other variables that are correlated with race: poverty, income, literacy, and the like."

(Thernstrom & Redenbaugh, 2001)

We saw earlier that Lichtman's analysis did control for a number of socioeconomic variables, including literacy. Thernstrom and Redenbaugh are aware of this. In fact, they reference a table from Appendix I of Lichtman's report where this information is presented, but dismiss it all on the grounds that they had no "proof" he actually used it--in essence, calling him a liar. On page 18 we're told that,

"Appendix I of Dr. Lichtman's report gives county level values for such variables as median income and percent living in poverty, and the reader naturally assumes that all of these were examined in his "more refined statistical analysis." Perhaps they were, but since Dr. Lichtman does not provide the actual results of the regression analyses, it is impossible to tell....

The 'refined statistical analysis' provided by Dr. Lichtman, we conclude after careful study, consists of nothing more than adding two measures of education (very inadequate measures, we shall argue below) and controlling for voting technology. And we have to take Dr. Lichtman's word about even those results, since he does not supply the details....

The supposed refinements in Dr. Lichtman's regression analysis did not include using poverty rates as a variable, as far we can tell. Nor did they include measures of median family income, population density, proportions of first-time voters, or age structure, to name a few about which census data is readily available."

(Thernstrom & Redenbaugh, 2001)


