Econometric Modeling as Junk Science
These cases may be enough to persuade most readers that multiple regression is not of much use in proving causal arguments, at least about the historical impact of social policies. In fact, the problem is broader than that, and many specialists doubt that multiple regression is a valid way of settling any kind of theoretical argument. In 1985, Stanley Lieberson (1985: ix), a distinguished professor at the University of California, wrote, “I am fully sympathetic with the empirical research goals found in much of contemporary sociology, with its emphasis on rigor and quantification. However…I have reluctantly reached the conclusion that many of the procedures and assumptions in this enterprise are of no more merit than a quest for a perpetual motion machine.” In 1991, David Freedman, a distinguished statistician at the University of California at Berkeley and the author of textbooks on quantitative research methods, shook the foundations of regression modeling in the social sciences when he frankly stated, “I do not think that regression can carry much of the burden in a causal argument. Nor do regression equations, by themselves, give much help in controlling for confounding variables” (Freedman, 1991: 292).
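Freedman's point about confounding can be illustrated with a small simulation. The sketch below is my own illustration, not from the article: all variable names (`policy`, `outcome`, `confounder`) are hypothetical. A "policy" variable with no causal effect on an outcome nevertheless produces a large regression coefficient, simply because both are driven by an unobserved common cause.

```python
# Hypothetical simulation of omitted-variable bias (not from the article).
# The "policy" has NO causal effect on the outcome; both depend on a
# shared, unobserved confounder.
import random

random.seed(0)
n = 20000

confounder = [random.gauss(0, 1) for _ in range(n)]           # unobserved common cause
policy = [c + random.gauss(0, 1) for c in confounder]         # adoption driven by confounder
outcome = [2.0 * c + random.gauss(0, 1) for c in confounder]  # outcome driven by confounder only

def ols_slope(x, y):
    """Simple least-squares slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    var = sum((a - mx) ** 2 for a in x) / len(x)
    return cov / var

# Naive regression of outcome on policy: slope is near 1.0, a seemingly
# strong "effect", even though the true causal effect is exactly zero.
naive = ols_slope(policy, outcome)
print(f"naive slope: {naive:.2f}")

# Partialling out the confounder (Frisch-Waugh): regress the residuals of
# outcome and policy on the confounder against each other. The "effect"
# vanishes -- but only because we happened to observe the confounder,
# which in real policy studies we usually do not.
b_pc = ols_slope(confounder, policy)
b_oc = ols_slope(confounder, outcome)
policy_resid = [p - b_pc * c for p, c in zip(policy, confounder)]
outcome_resid = [o - b_oc * c for o, c in zip(outcome, confounder)]
adjusted = ols_slope(policy_resid, outcome_resid)
print(f"adjusted slope: {adjusted:.2f}")
```

The design choice here is deliberate: the correction works only because the simulation lets us observe the confounder. Freedman's argument is precisely that in observational policy studies the important confounders are typically unmeasured, so no amount of equation-fitting can perform this adjustment.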
Freedman's article provoked a number of strong reactions. Richard Berk (1991: 315) observed that Freedman's argument “will be very difficult for most quantitative sociologists to accept. It goes to the heart of their empirical enterprise and in so doing, puts entire professional careers in jeopardy.”
The social science community does not have good procedures for acknowledging the failure of a widely used research method. Methods that are entrenched in graduate programs at leading universities and published in prestigious journals tend to be perpetuated. Many laymen assume that if a study has been published in a good, peer-reviewed journal, there is some assurance of its validity. The cases we have examined here show that this is not so. Peer review assures that established practices have been followed, but it is of little help when those practices themselves are faulty.
Finding the flaws in regression studies is difficult. Usually, the only way to be sure of them is to obtain the data set and reanalyze it. This is more than can be expected of a reviewer for a professional journal. It takes time, usually a year or two, for a multiple regression study to be replicated, and many studies never get replicated because they are not of sufficient interest to anyone. Even when a replication is done and fails to confirm a study, journal editors may treat this as simply a case of normal scientific debate. The problem, however, is that no real progress occurs. We are no closer to having a useful mathematical model for predicting homicide rates than we were when Ehrlich published his paper in 1975.
There are no important findings in sociology or criminology that are so complex that they cannot be communicated with graphs and tables that are intelligible to intelligent laymen and policy makers. It is time to admit that the emperor has no clothes. Multiple regression and other mathematical modeling techniques have simply not lived up to their promise as a method of evaluating the impact of public policies. Studies that claim otherwise are junk science.
References
Ayres, I. and Donohue, J. (1999). "Nondiscretionary Concealed Weapons Laws: A Case Study of Statistics, Standards of Proof, and Public Policy." American Law and Economics Review 1: 436-470.
Berk, R.A. (1991). "Toward a Methodology for Mere Mortals." Sociological Methodology 21: 315-324.
Black, D. and Nagin, D. (1998). "Do Right-to-Carry Laws Deter Violent Crime?" Journal of Legal Studies 27: 209-219.
Blumstein, A. and Wallman, J. (eds.) (2000). The Crime Drop in America. New York: Cambridge University Press, pp. 13-44.