

Florida 2000 and Washington 2004

A Study of Two Elections
  • Percent of population that is English speaking and mono-lingual
  • County population density
  • Increase in percent of registered voters who are black from 1996 to 2000
  • Percent of blacks with less than 9th grade education
  • Percent of blacks with less than a high school diploma
  • Percent of blacks with at least some college education
    The latter three variables would be expected to show some overlap, but Klinker used them across his various runs in a manner that avoided multicollinearity. In all, Klinker used all of Lichtman's original data, supplemented it from other sources, and included more variables than either Lichtman or Lott did. He found that 87.5 percent of all ballot spoilage could be explained by race, voting technology, precinct density, and voter turnout, at statistical significance levels below 0.1 percent. Of these, race and voting technology carried the bulk of the impact. This confirms Lichtman's analyses at a correlation level 15 percent higher than that achieved by Lott, and at much higher levels of statistical significance (Klinker, 2001).
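Multicollinearity among overlapping predictors like these three education variables can be diagnosed with variance inflation factors (VIFs). The following is a minimal sketch in Python using synthetic, deliberately correlated data; the variable names and values are illustrative stand-ins, not Klinker's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 67  # Florida has 67 counties

# Synthetic stand-ins for the three overlapping education variables,
# built to be strongly correlated, as the real ones would be.
base = rng.normal(size=n)
lt_9th_grade  = base + rng.normal(scale=0.3, size=n)
lt_hs_diploma = base + rng.normal(scale=0.3, size=n)
some_college  = -base + rng.normal(scale=0.3, size=n)

def vif(target, others):
    """Variance inflation factor: 1 / (1 - R^2), where R^2 comes from
    regressing `target` on the other predictors (plus an intercept)."""
    X = np.column_stack([np.ones(len(target))] + others)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    r2 = 1 - resid.var() / target.var()
    return 1.0 / (1.0 - r2)

cols = [lt_9th_grade, lt_hs_diploma, some_college]
names = ["<9th grade", "<HS diploma", "some college"]
for i, name in enumerate(names):
    others = [c for j, c in enumerate(cols) if j != i]
    print(f"{name}: VIF = {vif(cols[i], others):.1f}")
```

A VIF well above 5 or so flags a predictor as collinear with the others; using such variables in separate model runs, as Klinker did, is one standard way to sidestep the problem.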

    Klinker ran one other test, not done by Lichtman or Lott, which was particularly revealing. His first model runs found that ballot spoilage was likely to be highest in predominantly Republican counties with comparatively high percentages of black voters. He pointed out that racial disenfranchisement of blacks that was intentional at any significant level would be least likely to occur in strongly Democratic counties, as they have the least to gain from it. Likewise, we would expect a low incentive for disenfranchising activity in heavily Republican counties with small black populations, as such activity would stand out more starkly if found out (it's easier to hide in a crowd than in a living room). But there would be incentive for disenfranchisement of blacks--or at least a low level of concern for preventing it--in heavily Republican counties with relatively large black populations that carried a potential threat to the majority constituency. To test this hypothesis, Klinker ran a final model that included an interactive variable: the percent of registered black voters multiplied by the winning vote margin for Bush. After all other variables had been controlled for, this variable ended up explaining over 92 percent of all ballot spoilage. This does not prove an intentional Republican conspiracy against black voters in these counties, but it is entirely consistent with at least some level of intentionally disenfranchising activity. Even if there were none, this is exactly what we would expect to find in heavily non-black Republican areas if there were a general lack of concern with ballot spoilage in the black community where it comprises a sizeable part of the electorate but lacks the local political power to ensure that its ballots are counted accurately and fairly.
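The mechanics of an interactive variable of this kind can be sketched in a few lines of Python. The data below are synthetic and the coefficients invented purely to show how such a term is constructed and fit; this is not Klinker's model or his data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 67  # one synthetic row per Florida county

pct_black   = rng.uniform(0.02, 0.55, size=n)   # share of registered voters who are black
bush_margin = rng.uniform(-0.30, 0.40, size=n)  # Bush's vote margin (negative = Gore won)

# The interactive variable: large black electorates *in* heavily
# Republican counties produce the largest values.
interaction = pct_black * bush_margin

# Fabricated spoilage rates that respond to the interaction, plus noise,
# purely to demonstrate the fitting mechanics.
spoilage = 0.02 + 0.10 * interaction + rng.normal(scale=0.005, size=n)

# Ordinary least squares with intercept, main effects, and the interaction.
X = np.column_stack([np.ones(n), pct_black, bush_margin, interaction])
beta, *_ = np.linalg.lstsq(X, spoilage, rcond=None)
pred = X @ beta
r2 = 1 - ((spoilage - pred) ** 2).sum() / ((spoilage - spoilage.mean()) ** 2).sum()

print("coefficients:", beta)
print(f"R^2 = {r2:.2f}")
```

The key design point is that the interaction term captures the *joint* condition (Republican-dominated and sizeable black electorate) that neither main effect captures on its own.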

    Rhetoric about "the myth of a nefarious plot" aside, disenfranchisement through carelessness and apathy is still disenfranchisement. If the sanctity of every vote truly were a top priority (as was repeatedly claimed by Harris, the FDE, and Jeb Bush), it's difficult to see how 92 percent of all ballot spoilage among blacks could be explained simply by identifying relatively large black minorities in predominantly Republican and non-black areas.

    Lichtman was unsuitable as the Commission's sole consultant


    Turning their attention to what they call "procedural irregularities" in the Commission's investigation, Thernstrom and Redenbaugh reach a disturbing low when they escalate from attempts to debunk Lichtman's work to outright character assassination. They accuse Lichtman of being highly partisan and unqualified to act as a consultant to the USCCR. Throughout their dissent he is portrayed as professionally questionable, and at several points they even hint that he was dishonest and/or negligent. By contrast, they refer to their own consultant, John Lott, as "a first-rate economist," portraying him as completely non-partisan and as having an exemplary professional record. On page 54 they claim that,

    "The choice of Dr. Lichtman to carry out this work is problematic. When he appeared at the June 8, 2001, meeting of the commission to present his findings, he took pains to present himself as a scholar above party, who had 'worked for Democratic interests... and for Republican interests.' At the time, the American University web site identified him as a 'consultant to Vice-President Albert Gore, Jr.' His partisan commitment was evident in his media appearances throughout the campaign and the period of post-election uncertainty.

    Moreover, although Dr. Lichtman claimed (at the June 8 Commission meeting) that he began his study of possible racial bias in the Florida election with an open—even 'skeptical'—mind, in fact, evidence suggests the contrary. As early as January 11, at the very beginning of his investigation and prior to conducting any detailed statistical analysis of his own, Dr. Lichtman stated publicly that he was already convinced, on the basis of what he had read in the New York Times, that in Florida 'minorities perhaps can go to the polls unimpeded, but their votes are less likely to count because of the disparate technology than are the votes of whites.' He concluded: 'In my view, that is a classic violation of the Voting Rights Act.' Long before he examined any of the statistics, Dr. Lichtman had already concluded that Florida had disenfranchised minority voters and violated the Voting Rights Act.

    A social scientist with strong partisan leanings might conceivably still conduct an evenhanded, impartial analysis of a body of data. Unfortunately, that is not the case in the present instance."

    (Thernstrom & Redenbaugh, 2001)

    It's surprising that respected members of the USCCR would stoop to this kind of unprovoked ad hominem attack. The fallacies in such an argument (not to mention the lack of professionalism) speak for themselves and would require no further comment but for one thing--in this case they reveal much about the one-sided nature of Thernstrom and Redenbaugh's dissent.

    Lichtman, they argue, was unsuitable for the Commission's study because of his "strong partisan leanings." Their proof?

    • He "consulted" for Al Gore, and
    • He made statements in some "media appearances" about racial bias in technology-driven ballot spoilage that they consider partisan (statements that, as we've seen, were correct).

    From this alone they conclude that his analyses were not "evenhanded and impartial." But of course, they carefully avoided any similar discussion of their own consultant.

    The severity of the charges and the assumed moral high ground from which they were made merit a closer look. At the time of the USCCR report, John R. Lott held a research post at Yale. From 1998-99 he was the John M. Olin Fellow at the University of Chicago Law School and a strong advocate of the Chicago School theories on law and economics, which occupied much of his research energy. The "Olin" in this title is a reference to the Olin Foundation, which was started in 1953 by chemical industry magnate John M. Olin. Since the early 1980s it has been one of America's largest and wealthiest far-Right foundations. Over the last 50 years it has funneled millions into a wide range of lobbies and think tanks representing ultra-conservative special interests, including polluting and extraction industries, antienvironmental front groups, pro-gun groups, and more. The foundation has consistently denied any ties with the Olin Corporation, but tax and stock sales records reveal many millions in direct financial support and Olin stock holdings. Throughout its history the foundation's board of directors has included numerous Olin executives, and it has refused to provide full disclosure of its holdings (VPC, 1999). Today Lott is a resident scholar at the American Enterprise Institute, one of America's top far-Right think tanks and a key player in many of the same lobbies the Olin Foundation supports. There he devotes his work to various issues involving econometrics, law and economics, public choice theory (including elections), microeconomics, and environmental regulation. But the issue that has defined his career more than any other is gun control, which he vociferously opposes (not surprisingly, the Olin Corporation also happens to own Winchester firearms).

    Turning to Lott's publishing history, the story gets deeper. In addition to his ultra-conservative political and financial ties, he has a long and well documented history of flawed multiple regression analyses and unethical academic conduct. In 1997 Lott and co-author David Mustard published a controversial study in which they claimed to have proven that in any given population a one percent increase in gun ownership results in a 3.3 percent decrease in homicide rates (Lott & Mustard, 1997). Their conclusions were based on a series of multiple regression analyses of "shall issue" concealed weapons laws and crime rates that were almost identical in method and construct to those he did for Thernstrom and Redenbaugh. The study was quickly embraced by the pro-gun lobby as irrefutable proof that guns reduce crime. The NRA and numerous politicians in Congress and across the nation began citing Lott and Mustard's work in support of various efforts to block firearms regulations of all types. Emboldened by this response, in 1998 Lott released the first edition of his book More Guns, Less Crime (Lott, 1998), which was soon followed by a second edition (Lott, 2000). The book has sold over 100,000 copies and has become, quite literally, the "scientific" Bible of the pro-gun lobby.

    It wasn't long before numerous problems were found in Lott and Mustard's model, including multicollinearity, flawed data, and questionable data manipulations--problems that have an eerie similarity to the models Lott did for Thernstrom and Redenbaugh. Within one year it had been shown that simply removing Florida from their dataset made their results vanish (Black & Nagin, 1998). Examination of their national dataset at the county and municipality level revealed numerous creative manipulations of crime rates, and in some cases omission of data that was both available and crucial to demonstrating their claims (Lambert, 2005). One of the more blatant examples was their treatment of rural crime rates. Lott and Mustard ran their regressions with logarithmic data for crime rates. The homicide rate h, for instance, can be expressed in terms of a transformed variable x as,

    h = e^x     (that is, x = ln h)

    This technique is commonly used in regression analyses because it lends itself to regression methods better than the raw data. But it cannot always be used: wherever h has a zero value, x = ln(h) is minus infinity, making model calculations impossible. A significant portion of Lott and Mustard's datasets from rural areas had zero homicide rates. So how was this problem avoided? By assigning murder rates to regions that had none. The assumed rates were small (on the order of 0.1 murders per 100,000 population) but numerous enough to significantly impact their results (Goertzel, 2002). They also used variable inputs that led to demonstrably false conclusions, not the least of which was that crime rates are strongly correlated with the population of African American women but hardly at all with African American men (Wikipedia, 2005e). Numerous other problems turned up with Lott and Mustard's work and with More Guns, Less Crime. In the years since, few of Lott's conclusions have been independently reproducible, and most have fallen apart as more data became available (Ayres & Donohue, 2003). It was even discovered that there were basic programming code errors in his models affecting many of Lott and Mustard's variables. Once again, when these were corrected, all of his results vanished (Ayres & Donohue, 2003b; Lambert, 2005b). Since then, coding errors have also turned up in other analyses of Lott's, including some used to support claims made in his later book The Bias Against Guns (Lott, 2003). After repeated and strident denials, Lott eventually acknowledged the coding errors but claimed they were minor and did not affect his results. But when he provided a "corrected" version of the data to prove this, it contained the same errors his original dataset had. When these were corrected, once again, his results vanished (Mooney, 2003).
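The log-of-zero problem, and the kind of imputation attributed to Lott and Mustard, can be illustrated in a few lines of Python (the rates below are invented for illustration):

```python
import numpy as np

# County homicide rates per 100,000; many rural counties report zero.
rates = np.array([5.2, 0.0, 1.1, 0.0, 0.0, 7.8])

# Taking logs directly: every zero becomes -inf, which no
# regression can use.
with np.errstate(divide="ignore"):
    x = np.log(rates)
print(x)

# The imputation: assign a small nonzero rate (0.1 per 100,000)
# to counties that in fact had no homicides at all.
imputed = np.where(rates == 0.0, 0.1, rates)
x_imputed = np.log(imputed)
print(x_imputed)  # finite everywhere, but the zero counties are fiction
```

The imputed values look harmless individually, but when a large share of the rural dataset consists of such invented rates, the transformed data no longer reflect the crime rates actually observed.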

    To most social scientists problems like these would be an unmistakable red flag that would send them back to their data and models to make corrections. But not Lott. From the beginning he has vociferously denied any problems with his work, even to the point of slandering some of his critics (Wikipedia, 2005e; Lambert, 2005c). When confronted with outright errors in his published works and numerous op-ed pieces, he has blamed them on editing errors by the media or denied them outright. He has been caught denying errors in his online data after he secretly replaced the erroneous data with corrected versions at his web site (this sort of thing can be determined from system-level time-stamps on the data files themselves). More seriously, he has been caught fabricating data to support his arguments. In the 1998 edition of More Guns, Less Crime, Lott argued that according to his data only 2 percent of all incidents involving defensive use of a gun necessitated a firing of the weapon, either at the perpetrator or as a warning. The claim was tangential to Lott's larger case, and only one sentence in the 1998 edition mentions it. But he has since referred to this study result numerous times in print, in public, and in sworn testimony before legislative bodies (Wikipedia, 2005e). The statement raised many eyebrows, however, because it flatly contradicts a large body of evidence, and in addition to his own data Lott had cited "national surveys" as supporting it--surveys that he did not directly cite and no one else could find (published studies have generally obtained figures of at least 20 percent or more). Furthermore, even if the survey was real, the sample size Lott reported (25 subjects) was too small by at least a factor of ten to resolve the 2 percent result he quotes (2 percent of 25 subjects is half a subject). When confronted with this fact, Lott claimed to have "weighted" his results (the method, of course, was not available), but weighting would have exacerbated the margin of error even further and made the result less, not more, statistically significant. All of this should have been obvious to an experienced econometric researcher.
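The arithmetic here is easy to check. A rough sketch using the standard normal-approximation margin of error for a proportion (the approximation itself is unreliable at this sample size, which is part of the point):

```python
import math

n = 25   # sample size Lott reported
p = 0.02 # the claimed 2 percent result

# 2 percent of 25 respondents is half a respondent -- the claimed
# proportion cannot even correspond to a whole number of subjects.
print(n * p)

# Normal-approximation 95% margin of error for a proportion:
# moe = 1.96 * sqrt(p * (1 - p) / n)
se = math.sqrt(p * (1 - p) / n)
moe = 1.96 * se
print(f"standard error = {se:.3f}, 95% margin of error = +/-{moe:.1%}")
```

The margin of error works out to roughly plus-or-minus 5.5 percentage points, more than twice the size of the 2 percent estimate itself; a sample this small simply cannot resolve such a figure.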

    When pressed to make the data supporting this conclusion available, or at least to cite the national surveys he referred to, Lott became evasive. Eventually, it was discovered that Lott's results were based entirely on a survey he had conducted himself, for which no data was available. According to Lott, the data, methodology, analysis work, and results were all lost in a computer crash. The work had been done by volunteer students that Lott himself had recruited and paid in cash out of his own pocket. No cancelled checks or records of any kind existed, nor was there any record of his having claimed the expense as a tax write-off. According to social science protocol and law, the institutional Committee on Human Experimentation was required to review the study. This was never done. Despite a fair amount of national news media coverage, not one person ever came forward claiming to have been either a paid student volunteer or a subject in the study (Wikipedia, 2005e). Lott later claimed to have located some of the students and at least one of the survey subjects. A check of his sources revealed that these students had only indicated that they might have worked on a project for him; nothing suggested that the project was a survey. The subject he identified, former Minneapolis assistant city prosecutor David Gross, was active in defending that city's concealed carry law, having devoted four years and $1M in lost income toward getting it passed (Lambert, 2005b). In the absence of any records on Lott's part, it is far too coincidental that someone this important to the pro-gun lobby he represents would have been chosen in a random national survey. Lott has also been caught massaging his data on other occasions as well--both to obtain results he wanted and to "refute" criticisms that data used to support his conclusions had been flawed (Mooney, 2003; Lambert, 2004).



