Pain in the Economics Discipline


The economics profession has not reformed despite painful failings. For example, much formal econometric work has no influence outside of the economics discipline. Such work has continued despite a sensational declaration of this failing at an important economics conference in 1987. Moreover, economic theory and empirical evidence indicate that leading economic journals have frequently been publishing invalid results. Across about two decades, this knowledge has produced little change in publication practices. A third example: in 1991, a Commission on Graduate Education in Economics, whose members were prominent professors within the discipline, reported that coursework was neither encouraging creativity nor developing communication skills. The economics discipline essentially has not responded to these failings.

Some aspects of work within the economics discipline have changed significantly. In leading economic journals, the average length of articles, the number of references per article, and the lag between submission and acceptance have all roughly doubled over the last quarter of the twentieth century. Neither economic theory nor plausible empirical explanations account well for these changes in disciplinary practices.^ These changes do not appear to have increased the average quality of papers. They have made publishing papers a more time-consuming and less enjoyable ordeal.^

It was as if they were in a cage whose door was wide open without their being able to escape. Nothing outside the cage had any importance, because nothing else existed any more. They stayed in the cage, estranged from everything except the cage, without even a flicker of desire for anything outside the bars.^

The communications problem in economics is being registered through surprising channels. In 2004 at a public conference in Washington, DC, Ronald Coase, the winner of the 1991 Nobel Prize in Economic Sciences, declared:

They {economists} don’t study the economic system, they study other economists’ writings. The economic literature consists of a discussion of discussions and so it could go on. And it’s not really dealing with what happens in the real world, it’s dealing with this imaginary world that is economics.^

Economics students in France have engaged in protests, formed an alternative economics collective, and issued public demands:

If serious reform does not take place rapidly, the risk is great that economics students, whose numbers are already decreasing, will abandon the field en masse, not because they have lost interest, but because they have been cut off from the realities and debates of the contemporary world.

We no longer want to have this autistic science imposed on us.^

On March 11, 2004, terrorists in Madrid detonated bombs that killed 191 innocent persons and wounded about 1800 others. Terrorist acts are dramatic, evil forms of mass symbolic communication. One of the leaders of the terrorists came to Madrid to study on an economics scholarship.^ He apparently did not appreciate the usefulness and joy of seeking true economic knowledge and sharing it with others through peaceful communication.

Program Evaluation in Rehabilitation and Education


Since Lipton, Martinson, and Wilks surveyed the effects of rehabilitation programs on recidivism, program evaluation has become much more sophisticated. A leading scholar in the field recently presented textbooks in education as an example:

the question that is sometimes left unanswered is, “Do the textbooks make a difference in children’s learning?” … What we want to do here is compare what actually happens with the textbooks to what would have happened without textbooks.^

In considering this type of question with respect to rehabilitating prisoners, Lipton, Martinson, and Wilks limited their review of rehabilitation programs to findings of evaluation research. Martinson described evaluation research as:

a special kind of research which was applied to criminal justice on a wide scale for the first time in California during the period immediately following World War II. This research is experimental – that is, offenders are often randomly allocated to treatment and nontreatment groups so that comparison can be made of outcomes.^

This experimental structure remains at the foundations of scientific program evaluation. In terms of leading scholarly evaluation techniques for the hypothetical textbook program:

We argue that we need to follow the example of medicine and set up randomized experiments. Since resources are generally limited at the beginning of a program, it makes sense to select twice the number of people, or schools, and introduce the program to half the sample, randomly selected. In this way, we can be sure that those who benefited from the program are no different from those who did not. If we collect data on both groups and find a difference between those who were exposed and those who weren’t, we can conclude it’s the effect of the program. Everybody can then use this evidence to decide whether to take this program up in other contexts – the knowledge becomes a shared resource.^

Such an evaluation procedure provides a propitious structure for the competitive development of scientific treatment expertise. It has generated highly successful research programs. Unlike the received understanding of Martinson’s evaluation (“nothing works”), more recent program evaluation tends to produce specific, nuanced results that aren’t easily summarized in a slogan. Leading research programs do, however, show clear concern for poverty, inequality, and oppression.
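A minimal sketch can make the logic of that procedure concrete. The following Python fragment uses invented outcome data and an assumed effect size, not results from any actual study, to show how random assignment lets a simple difference in mean outcomes estimate a program’s average effect:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical evaluation of a textbook program: schools are randomly
# assigned to treatment (receive textbooks) or control (do not).
n_schools = 200
treated = rng.permutation(np.repeat([True, False], n_schools // 2))

# Simulated test-score outcomes; the +2.0 is an assumed program effect.
scores = rng.normal(50, 10, n_schools) + 2.0 * treated

# Because assignment was random, the treated and untreated groups differ
# systematically only in exposure to the program, so a simple difference
# in mean outcomes estimates the program's average effect.
effect = scores[treated].mean() - scores[~treated].mean()
se = np.sqrt(scores[treated].var(ddof=1) / treated.sum()
             + scores[~treated].var(ddof=1) / (~treated).sum())
print(f"estimated effect: {effect:.2f} (standard error {se:.2f})")

Random assignment is what licenses the comparison; without it, the difference in means confounds the program’s effect with preexisting differences between the groups.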

Meaningfully interpreting and using program evaluations in new contexts is difficult. The Nobel Prize in Economic Sciences in 2000 was awarded for work that, among other contributions, discovered “evidence on the pervasiveness of heterogeneity and diversity in economic life.” This work emphasized carefully separating two questions:

(1) “What is the effect of a program in place on participants and nonparticipants compared to no program at all or some alternative program?”

This is what is now called the “treatment effect” problem. The second and the more ambitious question raised is

(2) “What is the likely effect of a new program or an old program applied to a new environment?”

The second question raises the same type of problems as arise from estimating the demand for a new good. Its answer usually requires structural estimation.^

The problem in practice is to distinguish between a “new program” and an “old program,” and between the “same environment” and a “new environment.” The pervasiveness of heterogeneity and diversity in economic life underscores exactly this problem. Does a new staff make for a new program? Does the passage of time produce a new environment? Such questions are crucial for evaluating program evaluations. An economist rationally answers such questions by calling for more economic research.
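A small simulation, again with invented numbers, illustrates why the second question is harder than the first. A randomized experiment identifies a program’s effect in the environment where the experiment was run; if the true effect differs across environments, that estimate by itself says little about a new environment:

import numpy as np

rng = np.random.default_rng(1)

def run_experiment(true_effect, n=1000):
    """Randomized evaluation in one environment with a given true effect."""
    treated = rng.permutation(np.repeat([True, False], n // 2))
    outcome = rng.normal(0.0, 1.0, n) + true_effect * treated
    return outcome[treated].mean() - outcome[~treated].mean()

# Assumed true effects differ across environments (heterogeneity).
evaluated_env_effect = 0.5   # environment where the program was evaluated
new_env_effect = 0.1         # environment considering adoption

estimate = run_experiment(evaluated_env_effect)
print(f"estimated effect in the evaluated environment: {estimate:.2f}")
print(f"true effect in the new environment:            {new_env_effect:.2f}")
# The experiment answers question (1) for the evaluated environment;
# it does not by itself answer question (2) for a new environment.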

Specific actions that can be easily measured daily are more amenable to program evaluation than are broad purposes realized over years. For example, a website owner might seek a greater number of visitors and a higher click-through rate on ads. Content and ad experiments, along with measurements of resulting traffic and ad-click-through rates, can be cheaply and quickly realized. Some such local optimization steps, e.g. duping and exploiting visitors, can generate bad long-term results. Nonetheless, at least the instrumental, short-term effects of the experiments can be meaningfully measured.
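A sketch of such an experiment, with hypothetical traffic numbers, shows how cheaply its short-term effect can be measured. The Python code below compares click-through rates on two versions of a page using a standard two-proportion test:

import math

# Hypothetical daily data from a content experiment: ad impressions and
# clicks for the current page (A) and a variant page (B).
impressions_a, clicks_a = 50_000, 600
impressions_b, clicks_b = 50_000, 690

rate_a = clicks_a / impressions_a
rate_b = clicks_b / impressions_b

# Two-proportion z-test for the difference in click-through rates.
pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
z = (rate_b - rate_a) / se

print(f"CTR A: {rate_a:.3%}  CTR B: {rate_b:.3%}  z = {z:.2f}")
# A |z| above about 1.96 is conventionally read as a detectable difference,
# but it says nothing about the experiment's long-term effects on visitors.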

Treatment instruments are much more difficult to evaluate with respect to broad purposes realized over years. Reducing recidivism and improving education are purposes with a time horizon of years. Recidivism and education outcomes might be measured over years using incarceration and earnings records. Doing so would require highly sophisticated controls for changes in circumstances over years. Even if that could be done convincingly and generalizably, the purposes of reducing recidivism and improving education are not merely to keep persons out of jail and earning income. Free, knowledgeable persons are at the core of ideals of persons well-governed personally and collectively. Programs of punishment and education cannot be adequately evaluated using just feasible measurements of their instrumental effects.

Reforming and Rehabilitating Prisoners:
Communicative and Consequentialist Challenges


Reasoning about criminal punishment and reformation is challenging. Despite the Age of Enlightenment’s great influence on Western civilization, the eighteenth-century foundations of economic analysis haven’t brought enlightenment to actual practices of punishment and reformation. Penal policy seems to be formed in other ways.

Mass persuasion and folk wisdom are alternatives to progressive reason. Late in the twentieth century, Robert Martinson argued for the importance of mass communication:

here is the public demanding some substantive knowledge about how to reduce crime and all it gets from Palmer is the dry crust of “middle base expectancy” and interminable intramural bickering about the esoteric mysteries of research design and significance tests and such-like oddities. … My neighbors in the 20th precinct are mystified by Palmer’s obscurantism. … Correctional research must get out of the sandbox and speak straight to the American people.^

More recent scholarly research has advocated a common-person attitude:

we offer an attitude rather than an algorithm: one that trusts collective, commonsense judgments, and is humble in the face of uncertainty, steadfast in confronting urgent problems, and committed to fairness within and beyond this generation.^

Formal learning’s contribution to that attitude is probably small, if not negative. Moreover, that attitude probably wouldn’t inspire a difficult program of new intellectual work. It tends to encourage conservatism and a “precautionary principle.”^ Popular interpretation of a precautionary principle in the field of crimes and punishments tends to discourage mercy and second chances. A precautionary principle favors preemptive, discriminatory state control and mass incarceration.

Another alternative to structuring punishment and reformation is credentialism. In a scholarly article entitled “Beyond Correctional Quackery – Professionalism and the Possibility of Effective Treatment,” three university-based authors forcefully advocated “evidence-based corrections.” They identified “four sources of correctional quackery”: “failure to use research in designing programs,” “failure to use effective treatment models,” “failure to follow appropriate assessment and classification practices,” and “failure to evaluate what we do.” According to the authors, fostering evidence-based corrections requires more appreciation for duly credentialed authority:

To move beyond quackery and accomplish these goals, the field of corrections will have to take seriously what it means to be a profession. In this context, individual agencies and individuals within agencies would do well to achieve what Gendreau et al. (forthcoming) refer to as the “3 C’s” of effective correctional policies: First, employ credentialed people; second, ensure that the agency is credentialed in that it is founded on the principles of fairness and the improvement of lives through ethically defensive [sic] means; and third, base treatment decisions on credentialed knowledge (e.g., research from meta-analyses).^

Ordinary persons typically have a sense of fairness and seek to act ethically, or at least in an ethically defensible way. More insights into the importance of credentials come from “eight principles of effective correctional intervention.” The third principle concerns “management/staff characteristics”:

The program director and treatment staff are professionally trained and have previous experience working in offender treatment programs. Staff selection is based on their holding beliefs supportive of rehabilitation and relationship styles and therapeutic skill factors typical of effective therapies.

Professional training, previous experience, supportive beliefs, and “skill factors typical of effective therapies” all separate credentialed persons from a random sample of ordinary persons. The description of “core correctional practice” casts patterns of human interaction in technical terms:

Program therapists engage in the following therapeutic practices: anti-criminal modeling, effective reinforcement and disapproval, problem-solving techniques, structured learning procedures for skill-building, effective use of authority, cognitive self-change, relationship practices, and motivational interviewing.^

Caring, empathetic persons who are committed to helping prisoners and who have completed courses in such practices undoubtedly help prisoners. The contribution of the credentialed techniques themselves to that effectiveness is far from clear. Credentials and credentialed techniques, in this field as in others, create barriers to entry and raise the costs of caring for prisoners. They also devalue prisoners’ ordinary communication with their families and friends.

In the nineteenth century, correctional experts worldwide promoted complete suppression of prisoners’ communication. Competitive fields of knowledge and authority, like markets for goods, can fail badly.