
Sexual violence in conflict: Part II

Critical State, our foreign policy newsletter, takes a deep dive into the ethics of studying wartime sexual violence.

This analysis was featured in Critical State, a weekly newsletter from The World and Inkstick Media. Subscribe here.

Last week, we began our look at the study of wartime sexual violence through the lens of a review article in the Annual Review of Political Science by field leaders Ragnhild Nordås and Dara Kay Cohen. We read new research on how organizational ideology drives when, where, and why armed groups choose to engage in sexual violence. This week, we’ll take a step back and look at new thinking on the ethics of studying wartime sexual violence and new methods for studying political violence generally.

Related: Sexual violence in conflict: Part I

In the big-data age, there has been a shift in social science toward studying large numbers of small events — individual incidents of sexual violence, for example — in hopes of learning more about how those events are affected by other variables. A recent article in the Journal of Global Security Studies by Cohen and political scientist Amelia Hoover Green examines how those data sets are actually assembled and the ethical considerations that accompany them.

One of the most appealing things about quantitative studies of large data sets of individual events is how ethically uncomplicated they seem. Compared to traveling to a conflict zone, directly interviewing traumatized survivors, or designing experiments with vulnerable subjects, quantitative desk research looks like an ethical layup. Yet, as Hoover Green and Cohen point out, quantitative research doesn't eliminate research ethics concerns; it just conceals them.

Data sets come from somewhere. Often, the data are gathered by aggregating media reports or through large-scale fieldwork done by nongovernmental organizations. By the time the data have been aggregated into a set and researchers are working with them, there are few opportunities left to influence their accuracy or the methods used to gather the inputs. This creates (at least) two types of problems when studying topics as narrow but consequential as wartime sexual violence.

The first is a question of accuracy. In large data sets (say, ones that cover all instances and forms of political violence), the underlying assumption is that the sheer size of the set washes out concerns about random errors in individual data points. Once researchers begin to disaggregate those sets to look at only some categories of violence, such as wartime sexual violence, those errors become magnified. Hoover Green and Cohen report an example in which a typo in a State Department report almost led to a data set recording that the Israeli military raped minors in its custody, a charge for which there is no evidence (the typo is hardly exculpatory, however: the report was supposed to say that Israel Defense Forces troops employed "threats of rape" against minors in their custody). When dealing with acts that shock the conscience, like sexual violence, accuracy is crucial.

The second, thornier problem concerns the methods of data collection. Hoover Green and Cohen cite multiple cases in which journalists acted unethically to produce stories about sexual violence. A Pakistani 17-year-old who was gang-raped was, in the words of a contemporary commentator, "hounded by journalists" for details of the attack. Another study, by Midnight Oil alumna Sherizaan Minwalla and Johanna Foster, found that many journalists who wrote about ISIS abuses against Yazidis treated victims of wartime sexual violence atrociously. Yazidi women said that eager reporters made empty promises of financial rewards for telling their stories, and some even threatened to identify the women without their consent if they did not grant interviews. When reports gathered this way are aggregated into data sets, researchers can become implicated in those methods without knowing it.

Hoover Green and Cohen urge researchers both to learn about the informed consent procedures used by the organizations that gather the data they rely on and to disclose those procedures to readers. To better understand those procedures, interviews with the data gatherers could become a standard part of desk research, an important step toward giving readers and researchers alike a qualitative sense of how quantitative studies get made.


Critical State is your weekly fix of foreign policy without all the stuff you don't need. It's top news and accessible analysis for those who want an inside take without all the insider bs. Subscribe here.
