Reading this post makes me so enraged that I must find all bad scientists and kill them!
[Thanks, Ted!]
Media Violence, Aggression, and Policy
There's no solid evidence that violence in media causes violence in society, certainly not at the level that would warrant any kind of policy response. Here at Terra Nova, this has been discussed again and again and again and again and again. Yet the issue will not die, or, more accurately, a misguided conversation continues and at times certain points need to be reiterated. The immediate spurs to this post include a) getting an email about videogame violence effects from an undergraduate at another school, b) seeing one of Indiana's PhD students give a talk on videogame violence, and c) seeing media effects being debated at the International Communication Association meeting in Chicago this past weekend. Researchers continue to pursue evidence for a causal link between violence in media and real-world violence, and important people in the real world still think there's some sort of emergency.
Common sense objections to the agenda and the urgency are legion, best summed up here and here. Yet there are deeper issues, of a scholarly nature, that need to be addressed as well. Research in the field of media violence effects is generally ill-conceived, poorly executed, and result-driven. I have seen few things that I would describe as findings - results that become a permanent part of my view of the world and how it works. Before any more PhD students waste their careers on bad science, let's once again put the cards on the table.
To begin at the end: Scientific research should not be framed as the pursuit of evidence for something. To do so violates the important norm of disinterestedness. You are not supposed to care how the numbers turn out. The proper way to think of things is "What causes Y?" not "Can I find evidence that X might affect Y?" The Y here is violence in society. We know that the main causes of violence in society are parents and peers. A disinterested scholar would stop there. Yet in media violence research, the norm is to go looking for a link. One senses that in most papers, nothing would be sent to the journal until some evidence for the link was found.
How does one get that sense? This is the second major issue: significance. In scholarly writing, the term significance refers to a very specialized statistical feature known in most fields as statistical significance. It is a measure of the accuracy of a finding. It is also widely misunderstood and improperly applied. (How do I know? Training under econometrician Arthur Goldberger.) Look at it this way: You are the captain of the ship. The engineer comes and says that some rivets in the hull are weakening and are about to pop. Yet you can only fix them one at a time. Your first question is, what rivet is weakest? That is where the engineer should start. Oddly, in psychology and the social sciences, one insists that the engineer start working instead on the rivet whose weakness is most accurately measured. "We think rivet 12 is weakest, but we know more about rivet 34, so let's start there. By the way, rivet 34 seems to be pretty strong. [glub glub glub]"
In media violence research, it appears to be a universal practice that the accuracy of an effect's measurement is always presented first, and often exclusively. The size of the effect is treated as secondary, if it is considered at all. In my experience of articles and presentations in this field, I have yet to see a sentence of the following form: "All else equal, a 10 percent increase in this measure of media violence leads to an X% increase in this measure of social violence." This is a very simple simulation sentence, and it seems never to be written.
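To make the rivet point concrete, here is a toy sketch (all numbers invented, nothing to do with any actual study): one simulated effect is tiny but estimated from a huge sample, the other is large but estimated from a small, noisy one. The p-value and the effect size are different animals, and the "simulation sentence" is trivial to produce once you have the slope in hand.

```python
# Toy illustration only; every number here is made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Effect A: tiny true effect, huge sample (very precisely measured).
n = 10_000
exposure_a = rng.normal(size=n)
aggression_a = 0.03 * exposure_a + rng.normal(size=n)

# Effect B: large true effect, small noisy sample (imprecisely measured).
m = 15
exposure_b = rng.normal(size=m)
aggression_b = 0.50 * exposure_b + rng.normal(scale=2.0, size=m)

for label, x, y in [("A (tiny, precise)", exposure_a, aggression_a),
                    ("B (large, noisy)", exposure_b, aggression_b)]:
    slope, intercept, r, p, se = stats.linregress(x, y)
    # The "simulation sentence": what a 0.1-standard-deviation rise in exposure buys you.
    print(f"{label}: slope = {slope:+.3f}, p = {p:.3f}; "
          f"a 0.1-SD rise in exposure predicts a {0.1 * slope:+.3f}-SD change in aggression")
```

The pattern you typically get is a tiny effect with a tiny p-value and a large effect with a large one; the point is simply that the p-value tells you nothing about which rivet is the weak one.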
Here's how the first two issues are related: If the research paradigm is to hunt for effects, and the standard of a "finding" is based on statistical significance, it is usually easy to produce the desired result. The nature of statistical significance is such that if you mess around with the data set enough, eventually some set of controls and procedures causes the computer to pop out an asterisk indicating statistical significance on the media violence variable. This is why the paper says "Although no overall media-aggression link was found, a link was found among children who identify with a violent character." Meaning, if you split the data into those-who-identify and those-who-don't, you find the desired link in the former group. In any reasonably complex data set, there will be some sub-group or some tweak that generates statistical significance. It's a mechanical thing in the end. And thus, when a researcher produces an entire career of papers showing the same result over and over, you get the sense that the disinterestedness norm is being violated. This scholar is not in the least disinterested. He knows what he is after and he is going to find it. The only way that disinterestedness could be restored in this field would be for scholars to forget about statistical significance and examine instead the real-world significance of findings, by means of these simple simulation sentences. Let's talk about the rivets that seem weakest. Assertions of real-world significance are not popped out by SPSS. They cannot be cooked. If media effects researchers want to be trusted, they should abandon statistical significance as the measure of truth.
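If a sketch helps, the subgroup hunt can be simulated directly. In the toy below (again, made-up data, not anyone's actual study design), exposure and aggression are pure noise with no link anywhere, yet if the standard for a "finding" is an asterisk in some split of the sample, the asterisk turns up most of the time.

```python
# Toy illustration: subgroup fishing on pure noise. All assumptions invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def one_fishing_expedition(n=400, n_traits=20):
    exposure = rng.normal(size=n)                     # hypothetical media-violence exposure
    aggression = rng.normal(size=n)                   # pure noise: no true link anywhere
    traits = rng.integers(0, 2, size=(n, n_traits))   # arbitrary ways to split the sample
    for j in range(n_traits):
        for val in (0, 1):                            # e.g. identifies-with-character: yes/no
            mask = traits[:, j] == val
            _, _, _, p, _ = stats.linregress(exposure[mask], aggression[mask])
            if p < 0.05:
                return True                           # an asterisk, somewhere
    return False

trials = 500
hits = sum(one_fishing_expedition() for _ in range(trials))
print(f"A 'significant' subgroup link appeared in {hits / trials:.0%} of {trials} pure-noise data sets.")
```

With forty-odd subgroup tests per data set and a 5 percent false-positive rate on each, the surprise would be not finding something.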
The issue of significance goes beyond statistical significance, however, into the realm of policy significance. The media violence field gets its energy from its ostensible policy relevance. Yet the research questions are not framed in a way that is helpful for policy. The policy question is simple: If we regulate media violence, will social violence fall? But the research asks: If we expose this person to violent media, how will he act in the next hour? The latter is not relevant to the former. Or, there does not seem to be a good theory explaining why the latter is relevant. Yes, there are diagrams of boxes and arrows known as theories, but they are really just conceptual overviews, informal and heuristic, and cannot be used to measure or explain how a social effect emerges from a lab effect. As an example, suppose we use an Aggressometer to measure a person's aggressive mental state, and find that viewing Star Wars increases the Aggressometer reading by 20 percent. The question then becomes: if we show Star Wars to the public generally, Aggressometer readings will generally rise by 20 percent. What theory tells me how this is specifically going to change the crime rate? I need to know that, because I need to evaluate policy in a common-sense way. Keeping a million kids from watching Star Wars costs society $7m in lost entertainment value. Is the purported value of crime reduction more or less than that? A box-and-arrow diagram does not help. If the research is going to stay focused on the mind, we need a good theory to connect mind outcomes to policy outcomes - otherwise the research isn't relevant for policy and should be labeled as such: "Warning: Not For Use By Legislators."
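For what it's worth, the arithmetic a policy evaluation needs is not hard; what is missing is any defensible number to plug into it. In the back-of-envelope sketch below, the $7m figure comes from the paragraph above and everything else, most importantly the "pass-through" from Aggressometer readings to actual crimes, is invented, which is precisely the quantity the box-and-arrow diagrams never supply.

```python
# Back-of-envelope policy arithmetic. Every figure except the $7m is invented.
kids_restricted = 1_000_000      # kids kept from watching
value_per_kid = 7.00             # lost entertainment value per kid ($), i.e. the $7m above
policy_cost = kids_restricted * value_per_kid

aggressometer_rise = 0.20        # the lab effect: a 20% bump in aggressive mental state
pass_through = 0.001             # hypothetical extra crimes per kid per unit of bump (made up)
cost_per_crime = 5_000.00        # hypothetical social cost of one crime ($)

crimes_avoided = kids_restricted * aggressometer_rise * pass_through
policy_benefit = crimes_avoided * cost_per_crime

print(f"cost: ${policy_cost:,.0f}   benefit: ${policy_benefit:,.0f}   "
      f"net: ${policy_benefit - policy_cost:,.0f}")
# Without evidence for the pass-through rate, the comparison is arbitrary, which is the point.
```

Change the made-up pass-through by a factor of ten and the verdict flips; that is what it means for the lab result to be unusable for policy.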
Of course, the research could move away from the mind and frame itself where the policy questions live. There are some papers that do this: one piece by some economists, and the longitudinal study by Huesmann, Eron, and colleagues. These papers are worthy of examination, because they state their findings in terms of the issues that motivate the research. But, of course, the findings conflict.
Why would they conflict? Why is it so hard to find answers in this area? Fuzziness. The media-violence-and-aggression field has chosen to study two things that do not admit accurate observation. What is media violence? What kind of a thing is it? The policy debate seems to assume it is a continuous variable that acts as a gloss on a piece of media. Thus, you can apparently make a movie less violent by taking away an explosion. Similarly, what is aggression? It appears to be taken as some sort of negative gloss on a person, such that if you make them more aggressive you make the world a worse place. Needless to say, taking aggression and violence as separable from the whole entities in which they are observed is a fuzzy and probably fundamentally wrong-headed way to approach things. You could, if you wanted, study the relationship of dogs' ears to the sounds of motors, but you'll never find solid evidence that dogs get happy when their people drive into the garage. You need to study dogs and people, not ears and motors. In fact, the only reason you might study ears and motors separately is that you had some agenda to promote the motor industry by showing that it makes dogs happy. But of course, that wouldn't be disinterested.
I cooked up a silly example. Consider the following report:
"Textiles scholars have studied the effect of softness in cloth on affection. Children rubbed with soft cloth as opposed to scratchy cloth self-report significantly higher levels of affection and exhibit more affectionate behaviors (hugging teddy bears, for example).Responding to these findings, and acting out of a concern about the dramatic declines in affection in recent decades, the American Academy of Pediatrics recommends that children's exposure to soft cloth be maximized. The State of California has mandated that all cloth sold to minors must meet a minimum standard of SS+ (from the industry's cloth softness self-rating system). Unfortunately, the laws have been struck down as an improper extension of government authority, as stated in the 28th amendment ("Congress shall make no law abridging the freedom of the textile manufacturer"). Nonetheless, pressure continues for some sort of government response to the softness-affection crisis."
Ridiculous, of course. The PTA's insistence that school kids wear velvet boots would last one rainy day, and that would be it. But to be more specific about what's wrong here:
1. The research deals with vague value-laden concepts, not objective observables.
2. The findings are not disinterested. Somebody's looking for something.
3. There is no evidence of a crisis at the social level.
4. The pediatricians' recommendation to parents assumes thoroughly incompetent parents.
5. So does the policy.
6. The policy asserts an unrealistic level of measurement and control.
7. The relevance of the findings for the policy is nowhere demonstrated.
8. "Significantly" refers to statistical significance, not real-world significance.
There's not much difference between the cloth-softness debate and the media violence debate, unfortunately.
People and their Art are certainly worthy of study. But if we are going to be scientific about it, there are certain rules that must be followed. Following those rules might mean that some questions simply elude us. They cannot be answered in the way that Science-Capital-S demands. In such cases it is better to pursue other rhetorical strategies.
If you want me to believe that regulating violence in media would make our world a better place, you'll have to walk me around the world and through history, and help me to imaginatively experience a culture in which control of expression led to more happiness. I wander around in history a lot - it's been a hobby for decades - and I don't know of any such culture. Even fantasizing about the future, I am not seeing anything good.
In the end, I suspect that media violence research has been motivated primarily by aesthetic concerns. The Three Stooges are disgusting and vulgar, whereas King Lear is sublime. Why are we watching so much crap? Back in the day, you could make the aesthetic plea directly: Look here, you are watching bad art, and you shouldn't - just because it is bad. Today, aesthetic disgust gets channeled into sciency-sounding condemnations of entire media forms for their "effects." In our free-thinking age, no one can effectively change anyone's mind by asserting that Grand Theft Auto is simply adolescent, an 1812 Overture of bullying and nastiness, of low appeal. But because the age is also utilitarian, you can make the case that Grand Theft Auto has "bad effects": like cigarettes, you say, its use harms others.
Edward Castronova on May 25, 2009 in Academia