Bechdel vs. Science
A recent video from Feminist Frequency has been making the internet rounds.
The gist of the video is that the majority of Best Picture nominees for this year’s Oscars fail the Bechdel test. For those new to the idea, the Bechdel test is a way of determining whether or not a movie contains meaningful female characters. It was initially presented as a tongue-in-cheek shot at the gender bias in movies in the comic strip Dykes to Watch Out For. To pass the test, a movie has to have three things: (1) two [named] female characters who (2) talk to each other (3) about something other than a man.
However, being mostly a joke rather than a carefully constructed, double-blinded study, it lacks a certain amount of critical credibility.
Ms. Sarkeesian seems to have explicitly dismissed an important failing of the test. Starting at 7:45 in her video, she says:
“In response to the Bechdel test I’m often asked ‘Well, what about the reverse?’ Why isn’t there a test to determine whether two men talk to each other about something other than a woman? The answer to that is simple. The test is meant to indicate a problem and there isn’t a problem with a lack of men interacting with one another.”
That right there, that is bad science. And we at Mad Art Lab cannot abide bad art science. Why is it bad science? Let us break it down.
The Bechdel test only checks for female interaction. Failing the Bechdel test is meant to indicate a lack of female character presence, but it does not necessarily indicate an institutional bias. Simply failing the Bechdel test does not indicate anything except that a movie has failed to meet that standard. Buried in the discussion of the test, however, is the unstated premise that most movies pass the male version of the test, the ‘Reverse Bechdel.’ This is not logically necessary. It is possible for a movie to pass both the male and female versions of the test, just as it is possible for a film to fail both. For example, March of the Penguins utterly fails both tests simply because there is no dialogue between any characters, while Underworld: Awakening manages to pass on both counts. There are lots of ways for a movie to fail the test in either direction, and passing the test or its reverse at all isn’t actually that easy.
Claiming that it is a problem that only two Best Picture nominees this year pass the test is equivalent to claiming that your shampoo will increase shininess by 27% or that your mouthwash makes your breath twice as fresh. Such claims are meaningless without some basis for comparison. There needs to be some baseline or benchmark. Therefore, rather than being irrelevant, running the Reverse Bechdel test is necessary in order to have something meaningful against which we can compare. It is technically possible for there to be even fewer films that pass the Reverse Bechdel test. It is pointless to get angry about how few films pass the Bechdel test until a basis for comparison has been established.
Let’s do some science now. We’re going to do the same thing Feminist Frequency tried to do, but with a bit more scientific rigor and a lot more academic pretension.
Hypothesis: There is a male character bias in films being nominated for Best Picture Academy Awards.
Test Method: Apply the Bechdel method to both male and female characters in Best Picture Nominees and compare the ratio of films that pass each test.
Sample: 2011 Academy Award Best Picture Nominees
Results:
Bechdel – Two pass easily, two ambiguously so. The Descendants and The Help both passed the test easily, while Hugo and Midnight in Paris squeaked through with one- or two-sentence exchanges.
Reverse Bechdel – The only film that clearly fails the Reverse Bechdel test is The Help. The other eight have the requisite conversation between males (some of them young boys) about something besides women.
Well, that’s pretty definitive. Eight to two; pitchfork and torch time, right? Not quite. Unfortunately, we have taken only a very small sample. From these data, we can only really comment on the male-female role balance in this year’s Best Picture nominees. It could easily be an unusual year, an outlier. To have the mathematical grounding for drawing any conclusions about trends or patterns, we’d have to do this for several years. The only conclusion I can confidently make at this point is that this was a pretty rough year for female characters in Oscar-bait films. However, the results of this small study do indicate that there is reason for further investigation, and they give us no cause to reject our hypothesis.
Now that I’ve ranted for this long, I think it’s worth mentioning that I believe there is a terrible imbalance in Hollywood cinema along gender lines. But I believe that from personal recollection and through whatever biases I carry. I do not know it to be true. We can only know by actually checking.
Bechdel Test and Minorities
The video above also discusses the Bechdel test as applied to minorities. This, too, lacks a bit of mathematical insight.
The Bechdel test is meaningful when comparing male to female characters because it is easy to argue that there should be a pretty clear 50/50 split. Almost exactly half of the population is female, so it follows that half of the characters should be as well, and consequently we could expect roughly a quarter of non-romantic interactions to be between women. One can argue pretty easily that the same number of movies should pass the Bechdel as the Reverse Bechdel.
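To make that arithmetic explicit, here is a minimal sketch. It assumes, purely for illustration, that the two sides of a conversation are drawn independently from the cast; real films obviously aren’t written that way.

```python
# Minimal sketch of the 50/50 reasoning above, assuming conversation
# partners are drawn independently -- a simplification, not a claim
# about how real films are written.
p_female = 0.5                   # assumed share of female characters
p_both_female = p_female ** 2    # chance both sides of a random pairing are women
print(p_both_female)             # 0.25 -- roughly a quarter of interactions
```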
This, however, is not true for minority groups. As a group gets smaller, it becomes exponentially less likely that it will pass its version of the Bechdel test. Let’s use African-Americans as an example, as that group was directly discussed in the video.
According to the 2010 Census of the United States, roughly 12.4% of Americans identify as black. That is convenient, as it means about one in eight people in the US is black, which makes for easy math.
I would argue that to be represented totally fairly, without racial bias, that same proportion should show up in cinema. Thus, for every eight movie characters, we can expect one to be black. Therefore, in a film with a small cast it isn’t unlikely that there would be no black characters at all. Furthermore, having two named black characters wouldn’t be an expectation until a movie had a cast of around sixteen. And in the rare case that a film hits a cast that large, what are the odds that those two particular characters will have a meaningful exchange?
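To give a rough sense of how quickly those odds fall off, here is a sketch that treats each named character as an independent draw from the population. The 12.4% figure comes from the census number above; the independence assumption and the cast sizes are simplifications of mine.

```python
def prob_at_least_two(p, n):
    """Chance that at least two of n independently cast characters
    belong to a group making up proportion p of the population."""
    p_zero = (1 - p) ** n                # no members of the group in the cast
    p_one = n * p * (1 - p) ** (n - 1)   # exactly one member in the cast
    return 1 - p_zero - p_one

for group, p in [("women", 0.5), ("black Americans", 0.124)]:
    for cast_size in (4, 8, 16):
        print(f"{group}, cast of {cast_size}: "
              f"{prob_at_least_two(p, cast_size):.0%} chance of two or more")
```

Under those assumptions, even a cast of sixteen clears the “two named black characters” bar only a little over half the time, before we even ask whether those two share a scene, while the same cast size makes two named women a near certainty.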
So the Bechdel test is pretty useless for evaluating the character presence of minority groups. A group can actually be well over-represented before it starts passing the test at all. So if you actually want to evaluate the representation of a particular group, you will need to come up with a different test. I would lean towards something that can be compared against actual presence in the population.
Some examples:
What is the proportion of homosexual protagonists in Oscar nominated films?
What is the proportion of black characters that survive to the end of horror movies?
Is there a trend in the proportion of inter-racial relationships in film over the past 20 years?
These are all things that can be tested and can give mathematically meaningful data. Unfortunately, what cannot be quantified is how fairly any group is being represented. Presence alone does not imply goodness. That discussion involves value judgments and interpretation, which lie outside the safety of basic statistics and are therefore well beyond the scope of this article.
A Final Note
I have shredded Ms. Sarkeesian’s arguments a fair bit here, but it is important to point out that I haven’t refuted her conclusions, just her methods. She may have missed a few mathematical nuances, but that doesn’t mean she’s wrong.
Special thanks to Amy, Smashley, Anne, Maria, Melissa, Donna and Cloe for helping me test the films.
Comments

You seem to be applying proportional representation of minorities on a film-by-film basis. But the problem is systemic. Assuming that representation should be strictly proportional based on population statistics allows EVERY movie with a small all-white cast to say, ‘Well, there are fewer than eight of us; why do you expect minority representation in such a small cast?’ Then you end up with a raft of movies which all together contain 32 characters total and fewer than 4 black characters among them. In terms of minority representation, I don’t think you’ve established quite the right methodology. Fair representation would involve a number of MOVIES which pass the minority-type Bechdel test that is scaled based on the representation of the minority in the general population, rather than simply going by the appearance of minority characters as you seem to have done.
@ladydreamgirl, the interpretation of statistics you presented above is, indeed, the sort of thing one might hear. But it is entirely incorrect, as you pointed out, and is not what I intended in the article. Let me expand.
The rate of passing the Bechdel test falls off exponentially as the group being tested gets smaller, because you need multiple representatives in exclusive interactions. For small minorities, it would not be statistically surprising to never find a major motion picture that passes the test, regardless of whether or not that group is actually present and fairly represented.
Hence the Bechdel test can make it appear that there is a major bias where there may be none. It only really works well for the comparison between equal sized groups. As you said, fair representation of minorities has to be judged across films, but the Bechdel test is applied to films individually.
I’m not sure that even Bechdel-testing the totality of Oscar-nominated films would give a true picture of whether or not there was a misogynist bias in their selection. You would also need to test the entire population of films that were available for selection and compare that rate of failure to the rate of failure in the Oscar population.
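As a sketch of what that comparison might look like, the two pass rates could be set side by side and tested for a difference. The counts below are hypothetical placeholders (only the “two clear passes out of nine nominees” figure comes from the article above), and with samples this small an exact test would really be more appropriate than the normal approximation used here.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical counts for illustration only.
oscar_pass, oscar_total = 2, 9      # this year's nominees (from the article above)
pool_pass, pool_total = 90, 250     # the wider pool of eligible films (made up)

p_oscar = oscar_pass / oscar_total
p_pool = pool_pass / pool_total
p_pooled = (oscar_pass + pool_pass) / (oscar_total + pool_total)

# Two-proportion z-test: do the nominees pass at a different rate than the pool?
se = sqrt(p_pooled * (1 - p_pooled) * (1 / oscar_total + 1 / pool_total))
z = (p_oscar - p_pool) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"nominee pass rate {p_oscar:.0%}, pool pass rate {p_pool:.0%}, p = {p_value:.2f}")
```

A small p-value would suggest the nominees fail the test at a different rate than the pool they were drawn from; a large one would point to the systemic problem described below.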
If the pool from which the Oscars are drawn has the same rate of failure as the Oscar selections, then the problem is even worse. It’s a systemic problem with Hollywood movies as a whole rather than a specific problem with the Oscars. Of course, you then get into causality questions and it gets progressively messier to figure out.
And to be clear: I’m not trying to defend the Oscars and Hollywood movies against the charge of misogyny. It seems pretty likely that they would fail even the most rigorously applied test of that.
On a side note: I struggled with the notion that the Reverse Bechdel test was necessary for this argument. It was setting off my “What about the Menz!!1!!??” alarum bells. I think I can see a logical reason for it, but I’m very, very wary of privilege-induced biases in these things and would love to hear the take of someone much more versed in feminist issues.
I hope I was able to explain clearly enough why we need to run the test both ways in order to draw meaningful conclusions from it. I think it’s pretty telling that running the male version didn’t unseat the primary feminist argument even a tiny bit. Rather, I think the extra effort made the point more strongly.
I’ll admit that this initially rang some alarm bells for me too, even if I get the point that’s being made.
Part of the problem with all of this is that the Bechdel test doesn’t really hold up as a valid scientific test of anything when one really examines it. It’s possible for very sexist movies to pass it on a technicality, just as it’s possible for movies that have nothing to do with sexism to fail entirely.
I’d love to see a more rigorous test that would stand up to scrutiny and actually would be able to measure for sexism. It would be great to be able to quantify just how sexist Hollywood is in the stories that it produces. But I think the appeal of the Bechdel test lies in the fact that it’s NOT a real test, that passing it really doesn’t mean a whole hell of a lot, and most movies still can’t manage to pass it anyway.
I think something really important to remember about the Bechdel test is that it doesn’t tell us anything about how feminist a given film or filmmaker is. Just because a film passes doesn’t mean it portrays women positively, and just because it fails doesn’t mean that it is sexist. It really says very little about individual films and more about general trends. The fact that so few films pass the test is what’s interesting/shameful, not that any particular film does or doesn’t.
Ryan conducting the reverse Bechdel test demonstrates that that trend is actually meaningful.
@ Ryan
I guess part of the problem is that characters from different movies can’t interact with each other; each film (or, in the case of film franchises, each franchise) is self-contained, so representation at population-proportionate levels can occur without the minority characters ever interacting, because each one is in the walled garden of a separate film. I don’t think that minorities being present but never having a chance to interact with fellow minority members really counts as representation. Some effort has to be made to have minority characters interact with each other, even if it leads to over-representation of minorities in the aggregate character population. So long as minority representation is framed as getting the number of minority characters in the aggregate character pool to match their proportion in the general population, there is STILL a problem with fair representation, in my opinion. The stats are useful for indicating when there is a problem, but they aren’t actually good as a guideline for what should be going on.
@wundergeek:
Determining whether a movie is sexist or not is a HELL of a lot more difficult (and subjective) than determining whether women/POC in movies were represented as important in ways other than their relation to a man/white person. This isn’t meant to study sexism, but casting and character development trends on a systemic level.