Soraya Chemaly: Who Defines Risk and Why It Matters

The following transcript is from Soraya Chemaly’s opening remarks at the launch of the Center for Media at Risk in April 2018. She spoke on April 19 at Perry World House, University of Pennsylvania.


Good evening. First I’d like to thank the Center for Media at Risk and the Annenberg School for inviting me to participate tonight. This is a topic of incredible urgency, and I’m honored to be asked. When Barbie first contacted me and gave me the topic of talking about risk and media, I really had to ask, “well, what isn’t at risk?” We don’t have a lot of time, and there are so many problems that we now recognize need our urgent attention. And so I thought, alright, how can we approach this problem and consider its scope? What if we talk instead about how risk is perceived and assessed?

My own background – I’ll just take a minute to summarize it briefly, because it’s relevant. I started as a writer in the early nineties, working first as an editor and writer for some magazines, and then I went to the Gannett Corporation, but on the business side. In that capacity I worked on some of the industry’s first databases, both business-to-business on the advertising side and business-to-consumer on the subscriber acquisition and retention side. At that point, I was also tasked with analyzing what was then called “new media,” which was the Internet, and it was very clear that some very senior people really considered the Internet to be a playground, a place for children. To that end, we established business practices that essentially gave away content, gave away advertising online, and created a baseline lack of value for the information that we were producing.

Simultaneously, what grew to be valuable instead was the data: the data of who was reading our papers, what their behaviors were, and so on. So I left Gannett and went to work for a company called Claritas, which was a pioneer in customer acquisition, segmentation, and targeting using data – what we think of as Big Data now.

So I did that for 10, 15 years in the media and data industries, and then, fast-forward to 2010, I started writing again. Once I started writing with a focus on gender, with a heavy focus on sexual violence, the online harassment began almost immediately, and that harassment takes many forms. Sometimes it’s words, gendered slurs and the type of thing you might expect, but other times it’s rape threats and death threats, graphic pornography. I’ve seen live rapes being recorded. It was really disturbing and quite shocking to me, the degree to which that kind of interaction online was affecting people doing the work that I was doing. Most of them were women, often talking about feminism or sexual violence, and it was disturbing the degree to which women were policing themselves: taking themselves off of platforms, not writing about certain topics, or absorbing these levels of threat and violence as individuals.

And so, this question of risk perception was front and center for me as an individual and as a professional, and also in considering how institutions were responding. How were media institutions responding? Were they recognizing these risks? How were tech companies responding? The question of how we assess and perceive risk, I would argue, is even more important than naming the risks that we face.

Among the most pressing questions are those related to who defines safety, who defines harm, who defines danger, who defines legitimate threat. In the case of Facebook, which at this point is the world’s largest distributor of news – moderating content for well over 2 billion users – that question becomes even more urgent, because there’s no transparency or accountability in answering it.

In 2013, Laura Bates, founder of the Everyday Sexism Project, Jaclyn Friedman, founder of the organization Women, Action and the Media, and I initiated a large-scale global campaign demanding that Facebook recognize gendered violence, and the threat of gendered violence, on its platform as a form of hate and violence according to its own guidelines. The company had rules against depictions of graphic violence and against abusive or harassing speech on the basis of identity, but it really wasn’t enforcing them when it came to depictions of violence against women. So, for example, there was a page called “I Kill Bitches,” there were pages that joked about rape, there were impersonator pages, there were pages identifying women who should be raped. And all of those were handily dealt with by slapping on a parenthetical that said “controversial humor,” which was supposed to be a sort of legal out.

We confronted them, and within a couple of days we had put several million dollars’ worth of ad revenue at risk by going straight to advertisers and showing them how their ads were sponsoring these pages. This is important because it really drew attention to the fact that people are making decisions, people are assessing risk, and then they’re deciding how speech is regulated online. The function that moderators and policymakers at large Internet and social media companies like Facebook perform mirrors that of editors, judges, and – in certain situations now having to do with algorithms and algorithmic accountability – linguists and programmers. These are people arbitrating social norms at immense scale.

And this kind of moderation of language, speech, and content – inherent in which is the assessment of risk – has global impact. It’s global, and it’s very skewed towards Western patriarchal values. It is, from that perspective, really ethnocentric. There are ways that malevolent actors can take advantage of the jurisdictional issues we encounter with global media platforms. And I would add, for those of us interested in free speech and media, that the way a lot of the platforms and products we use are constructed materializes a very libertarian and atomistic sense of free speech: the idea that there is one person standing alone with the right to speak, as opposed to a more relational form of autonomy in which we speak in context and are autonomous insofar as we are in relation to other people.

And we can see those two models, the more atomistic and the more relational, even in the way that data flows work and the way that people have to report harassment. All of what I just described is about to be put into hyperdrive by algorithms and machine learning – and probably already has been, without our really realizing the degree to which it’s true. The computational methods being developed right now to assess risk and to identify harm, harassment, or abuse are simply insufficient for understanding the nuances behind language and these assessments of risk.

And so, to go back to this question of risk perception and risk assessment: very often we reach for technology solutions, or for the idea that there must be legal mechanisms that can be put in place to help us, and that’s all true. But in fact what we’re talking about are intangibles, and the intangibles are trust and credibility and truth and confidence. Our approach in the U.S. to those things tends to be focused on consumers, not on the idea of citizenship, and none of them are profit engines. The profit engines we are confronting right now, the ones contributing to our risks, are engagement and consumer behavior, which set up a completely different set of motivations for addressing the problems that we face in media.

So for example, when a person is harassed, particularly by a mob or in a trending hashtag, it’s usually – in almost every major case you can think of – a woman. And the harassment, whether it comes from the group doing the harassing or the group trying to fight back against it, generates engagement regardless of content, which in turn generates revenues and profits. From the perspective of the platforms, which are still largely not liable for the content produced by users, the nature of the content becomes largely irrelevant, because the profit comes from the engagement and not from the quality of the language or the images being used.

And so, when we assess risk, we need to be doing it at multiple levels at the same time – the individual level, the institutional level, the societal level – and right now, at each of those levels, we tend to fall back conceptually on a very dominant ordinal frame. It is gendered and binary. We organize ideas very deeply into categories like male and female, political and personal, public and private. So when something happens to a woman that has echoes of traditional gender-based violence, it’s essentially treated as a personal and private matter, and not understood as a political matter or something that should be of public concern.

For 10 years, for example, as women have written about and talked about profuse online harassment – harassment that tends to be sustained, sexualized, and tied to offline abuses such as intimate partner violence or stalking – that entire conversation has been treated in mainstream media as a matter of keeping women safe, and not as a matter of the proper functioning of democracy or of women’s ability to participate civically or politically in the public sphere. Those divides have governed the way we have assessed risk and the way we have tried to address it.

So the question is, “what is really worth knowing?” In our business, for example – the news business, the media business – we make a distinction between hard news and soft news, hard content and soft content. That again is very gendered: even though women make up the majority of journalism students, men still dominate newsrooms, upwards of 65 to 68 percent. If we’re talking about politics, the share of male bylines is even higher. If we’re talking about sports – which is relevant because sports so often has to cover rape and sexual harassment – that number is up in the nineties: roughly 90 percent of the writers are men. Studies have found that those distributions, men in hard news and women in the softer areas of arts and health and education, happen by assignment, and the assignments are made by editors.

And the higher up the food chain you go, the fewer women and people of color there are, so there’s a lot of homogeneity at the top in media. That homogeneity at the top is determining what’s worth knowing, what’s worth covering, what language is used, what headlines are written, what stories are covered, who the sources are, what the pictures will be – all of that comes together to give us a dominantly male perspective on the problems that we encounter.

And so, what we think of as a threatening political condition often comes down to identity, and social science shows us that that is the case. Identity matters hugely to risk perception, and that’s true whether we are talking about an individual’s assessment of risk or the system’s justifications for decisions made pertaining to risk.

So let’s go back for a minute to individual perceptions. We know that identity matters, and we know that in terms of assessments of physical risk, there is a global gendered safety gap. Women are roughly 20 points more likely than men to say that they do not feel safe in their own neighborhoods, a gap that has stayed in the double digits for years, if not decades. What that means is that if you ask a man and a woman whether they feel safe in their neighborhoods, the majority of men will say they do, but the majority of women will say they don’t. And that’s actually truer in developed nations and our peer nations than it is in very unstable and militarized areas; in those places, men and women tend to have much closer assessments of risk and safety.

And that safety gap becomes relevant in media because, when you are trying to diversify the groups of people who are producing information, writing stories, and making videos, you are often dealing with experiential differences. If there are more women, more ethnic minorities, more religious or sexual minorities, those people have a different sense of safety. So when they are harassed or targeted online, their risk perception is different, and the level of emotional resonance with which they respond may be different.

So, for example, I’ve been in editorial departments where most if not all of the editors are older white men and most if not all of the writers are women and part of a very diverse group of younger people. The younger reporters and journalists will explain why online harassment is a problem for them – why, for example, a young woman might not read the comments on a piece she writes about rape and war – and an older editor will say, “Well, that’s just part of the job; you just have to do it and kind of get over it.” But in fact, a young woman whose chances of being sexually assaulted are between one in three and one in five has a very different sense of what matters, in terms of risk, than an older man whose chances of being sexually violated are pretty slim. For men the chance is one in seventy, and that mainly refers to men assaulted under the age of 18.

And so, understanding that people are going to respond differently to harassment because of threats of offline violence is really important. It’s important to the vigilance that an institution engages in, to its level of tolerance for this kind of harassment, and to its commitment to diversifying storytelling by making sure that people with different experiences are equally able to tell their stories. We know that women worry more than men, that minorities worry more than whites, and that people with an egalitarian mindset worry more than people with a strong individualist mindset. And this is true not only of a problem like online harassment, but also of other issues like environmental pollution, guns and gun regulation, food toxicity, and abortion rights.

A lot of social scientists looking at the systemic ramifications of these differences in risk perception arrived, about 25 years ago, at the question, “why is it that white men (as they put it) fear various risks less than women and minorities?” That assessment has been measured over and over again. White men with strong individualist orientations and confident, often conservative, worldviews have what are thought of as outlier risk assessments. They literally cannot see the risks that other people are saying really matter, and this comes down to something called identity-protective cognition. Generally, a person with a very hierarchical mindset and a very skeptical orientation – strongly individualistic, confident, and relatively conservative – will consistently minimize the risks and threats that others feel are urgent and pressing.

In the U.S., that’s been dubbed “the white male effect”: a posture of extreme doubt regarding social dangers related to activities that are integral to one’s social roles and cultural commitments. The short way of saying that is that when a problem is identified whose solution might actually reduce one’s status in a hierarchical system, that solution is dubbed unnecessary. What we end up with are systems that tend to be extremely insensitive to risk. We see that in tech, and we see that in media.

So if we go back to this question of binaries and divides, and we know that we have persistently been unable to diversify or be inclusive at the highest levels of our own institutions, what do we see? We see right now that we’re worried about risk. And why? Because we’ve just had the debacle of Cambridge Analytica. We know that there was a great deal of media manipulation by the Russians, through bots, for example, to sway public opinion and to generate racial discord and gendered animosity. People are now really worried – much more so than they were maybe two years ago – about issues related to privacy, surveillance, digital security, the spread of misinformation, the extortion of political and journalistic players, hate speech, provocations to violence, and the erosion of trust and truth.

Now, if we think about all those things, they are literally the public, more masculinized versions of problems that women experience in the private sphere. Surveillance, for example, is stalking. Digital security, as an issue, is identity manipulation and privacy infringement by, for example, abusive spouses. When we think of misinformation, what we’re really talking about is denigration, rumor mongering, gossip, the use of targeted technology for purposes of revenge porn. All of those typically abusive tactics used against women in the private sphere are now infusing our institutions in the public sphere. And the more women there are in those spaces, the greater the risk that they experience them, both as individuals and as professionals and members of these institutions. So if institutions don’t understand the risk and harm that come with, for example, the greater ease with which people can harass and abuse women journalists, they won’t be prepared to fend off risks to themselves or, frankly, to the industry.

And so this goes back to the question of how we are organized. We’re organized hierarchically, in meritocracies with a really stubborn lack of inclusivity in leadership. That’s true in media, in tech, in politics, and in the production of information. If you think of something like Wikipedia: 85 to 90 percent of Wikipedians are men, they tend to be white men from the U.K. and the U.S., and that informs the structure of the information, what gets shared, what gets considered notable – and yet we don’t really address that. The overall effect is what I would categorize as a series of epistemic voids, both testimonial and hermeneutic. We lack language for the problems we’re encountering, and we tend not to believe people who come forward with their stories. I think we’ve seen that through the recent example of “Me Too.” But we also see it with the reporting of harassment, and with the solutions currently being proposed.

So I’ll give you a good example. Women are much more likely to be harassed through the use of photographs, memes, and videos – photo manipulation, non-consensual sexualization, being turned into memes – with that content then used to extort, harass, or threaten them. And yet a lot of the dominant mechanisms for dealing with harassment on platforms are almost entirely text-based. If we take the example of Wikipedia and Google working together to create a machine-learning tool to help moderate harassment, we can see what this looks like.

A few years ago, Jigsaw, formerly Google Ideas, began working with a dataset of Wikipedia comments so that they could train machines to recognize harassment and attacks.

They started with a whole corpus of comments from Wikipedia and then had an external group of people evaluate it, assigning each comment a rating: this content is neutral, this content is a personal attack, this content is aggressive.

In its first iteration, when I looked at it, the platform gave you the ability to put in an expression or some words, and it would give you a score back. I was asked to look at it, and I did, and one of the first tests I put in was the expression “nice tits.”

Because that’s the sort of thing that happens to a woman if she’s a writer, for example, or if she’s contributed a page, or is being harassed for her efforts on any number of platforms. And in some contexts, some people might think that’s a compliment.

But in this context, it’s really not a compliment. So I put in “nice tits,” and it came back as neutral. Then I escalated and added the phrase “you should be raped.” And “you should be raped” initially also came back as neutral – not as an attack.

So then I went one step further and put in “you’re a dick.” And “you’re a dick” came back as pretty much 100 percent a personal attack.

That kind of difference reflects a complex range of things. It has to do with linguistics, with the scoring of the data, with context.

But it also has to do with understanding why a group of people would perceive “you’re a dick” as more threatening than “you should be raped.” Those are the types of questions that we need to be looking at.
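To make that probe concrete, here is a minimal sketch in Python of the kind of test described above: sending short phrases to a machine-scoring endpoint and comparing the scores that come back. The request shape follows the publicly documented Perspective API, the Jigsaw tool that grew out of this Wikipedia work; treat the endpoint, the attribute name, and whatever scores you get today as illustrative assumptions, not as the exact interface or results of the first iteration discussed here.

```python
import json
import urllib.request

# Placeholder credential: the real Perspective API requires a Google
# Cloud API key; "YOUR_API_KEY" is a stand-in.
API_KEY = "YOUR_API_KEY"
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    f"comments:analyze?key={API_KEY}"
)

def toxicity_score(text: str) -> float:
    """Return the model's 0-1 toxicity score for `text`."""
    body = json.dumps({
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }).encode("utf-8")
    req = urllib.request.Request(
        URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# The three probes from the talk. What matters is the relative ordering:
# early versions reportedly scored "you're a dick" as a near-certain
# attack while rating the gendered threats as close to neutral.
for phrase in ["nice tits", "you should be raped", "you're a dick"]:
    print(f"{phrase!r}: {toxicity_score(phrase):.2f}")
```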

Now, the language of harassment is also very gendered. A lot of words used abusively against women are also used in a “positive” way.

On Facebook, for example, the number one use of the word “bitch” is to say “happy birthday, bitch.” We need systems that can really appreciate that kind of nuance, and we don’t have them yet.

Similarly, in another case, OpenAI, which was co-founded by Elon Musk, made an arrangement to use Reddit’s comments for machine learning in natural language acquisition.

I personally think this is really problematic. Reddit, despite its claim to be, I think, the front page of the internet, is largely dominated by men across its different spaces and forums.

And what that captures is not really natural language acquisition. Reddit is actually a place that’s fairly hostile to women when they voice opinions or do Ask Me Anything threads.

So assuming that natural language acquisition can be conflated with the particular language and linguistic culture of that platform is really a problem. The alternative, for example, would be to go to Pinterest – again, heavily image-based, with roughly the inverse gender makeup of Reddit: Pinterest is probably somewhere between 72 and 75 percent women.

And no one, I think, would assume that they should take Pinterest comments and content and train machines on natural language acquisition in the same way – or, frankly, have dominantly women-run groups evaluating content for what constitutes an attack or an assault.
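One way to make that sensitivity operational, before training on any platform’s comments, is to audit who is doing the labeling and how their judgments differ. The sketch below is a hypothetical example of such an audit: the records, field names, and numbers are invented for illustration, though the public Wikipedia annotation release did, as I understand it, include rater demographics that would support exactly this kind of analysis.

```python
from collections import defaultdict

# Toy annotation records (hypothetical shape): each row is one rater's
# judgment of one comment (1 = personal attack, 0 = neutral).
annotations = [
    {"comment_id": 1, "rater_gender": "male",   "attack": 0},
    {"comment_id": 1, "rater_gender": "female", "attack": 1},
    {"comment_id": 2, "rater_gender": "male",   "attack": 1},
    {"comment_id": 2, "rater_gender": "female", "attack": 1},
]

def attack_rate_by_group(rows):
    """Fraction of annotations labeled 'attack', split by rater group."""
    totals, attacks = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["rater_gender"]] += 1
        attacks[row["rater_gender"]] += row["attack"]
    return {group: attacks[group] / totals[group] for group in totals}

print(attack_rate_by_group(annotations))
# -> {'male': 0.5, 'female': 1.0} on the toy data. If one group
# systematically rates gendered threats as "neutral" and dominates the
# rater pool, the trained model inherits that blind spot.
```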

Those are the types of sensitivities that we need to be aware of when we embrace automation and machine learning tools.

So I’d like to go from that idea to talking about what it means, in terms of risk, for some of the institutions that we feel are really under threat right now.

When women move into an industry, we tend to expect them to work and succeed in the way that men traditionally have. In the United States in particular – one of only three countries without mandated family-supportive policies – we have clung to a career arc pegged to men’s life stages. And not all men’s, but those of a certain kind of idealized worker: a super outdated ideal that assumes someone else is doing the care work at home.

And so in the media industry, we have very high levels of burnout among women, who are getting squeezed from both sides. Even though they make up the bulk of journalism students, they tend to churn out of the industry pretty quickly.

And the same is true in tech: something like 50 percent of women in STEM drop out of their respective fields after 10 years.

And yet despite that churn, despite the fact that women are leaving faster, we have many, many more women in institutions, in public space, and in leadership ranks.

And when that happens, we may expect them to arrange their lives the way men traditionally have, but we are not really paying attention to the ways in which institutions like the media take on feminized vulnerabilities – vulnerabilities that are not easily recognized in traditional frameworks or by status quo hierarchies primarily made up of elite men.

So when we think of how comments proliferate and how photos are used abusively, when it happens to women it’s still treated as something women need to handle as a private matter of personal safety, instead of as a critical issue facing the industry or a brand issue for a particular company.

If we think about the way platforms deal with verification and authentication, we can see, again, the ways in which our expectations are pegged to the normative experiences of men.

Take verification: Twitter started off by looking at already established hierarchies, which meant that automatically there was a huge preponderance of men – people with job titles of director and above, celebrities, sports figures, prominent media makers.

So instead of broadening notability requirements, or understanding why women and minority speakers might not have the kinds of credentials other people did, the verification process actually reinforced and expanded the inequities by giving verified people tools to protect themselves and build their audiences that were not available to anyone else.

And there’s the additional fact that when a woman is verified, there is a different effect on her. We know from gaming studies, for example, that when men play games against other men and lose, they lose more gracefully than when they lose to a more skilled and talented woman.

The minute a woman actually wins, the loser’s response is almost automatically quite violent and misogynistic. So when we verify people with a blue check mark, it’s very probable that a verified woman is visibly higher status in a world in which status is generally taken to accrue to men – status that certain men feel entitled to.

And so they respond in a very hostile manner to that signal of verification, which is a signal of status.

If we can think about the ways in which women experience violence more globally – as public issues with political content – I think we would be better equipped to evaluate, assess, and deal with the risks facing our institutions.

There are many examples of how malevolent actors interested in disinformation – in destabilizing elections, for example – make women their first point of contact: women journalists, women politicians, women activists.

Because they can practice on women. Because tolerance for violence against women, as something that is personal and not political, is easily leveraged. Whether the bad actor is a government, law enforcement, or another media company is really sort of irrelevant. We just live in a culture in which this violence is tolerated.

And so it’s easier to use against women, and then to escalate and turn that abuse into an abuse of an institution.

And so we need to step back, I think, and look at our own ideas about what constitutes authority and truth, and at how incredibly hierarchical those ideas are – and at what it means for our platforms to then take what we think of as valuable news and verified information: information that is extremely contextualized, in text that is edited and, we hope, rigorously fact-checked, produced in an institutional hierarchy of publishers and editors and copy editors and writers.

If we’re lucky, at this stage. All of that information then goes into platforms where data flows, knowledge production, and systems are not hierarchical but nonlinear and quite acontextual, and where information is spread across decentralized, distributed networks.

And where the production of news and information is now much more collaborative, in that comments and tweets and Facebook posts all become part of the knowledge and information embedded there.

Think about the clash of cultures that represents in terms of how we understand confidence and information and trust. And that brings us back to this idea of text and image.

This is, I think, one of the greatest risks we face right now: not thinking hard about what that means for us as storytellers and media makers who really want people to have trust and confidence in what we’re saying.

A lot of work goes into making sure that what we’re saying contributes, for example, to our democratic ideals. And if we don’t have that baseline assumption, we find ourselves in the insecure position that we’re in now.

But the shift from text to image has always been privately presaged by abuses against women. Look at our women politicians: they have always been turned into pornography. I don’t really understand why pornography is not considered fake news and a strategically valuable political weapon.

Because the purpose of pornography, especially when you nonconsensually turn a woman into a pornographic object, is to undermine voter confidence in her abilities as a moral person, as a person who can make ethical judgments, as a person who has authority.

But it just doesn’t count as fake news, frankly, because it really doesn’t happen to men. And so editors sitting in a room thinking about fake news, having eliminated this entire category of defamation and threat to political actors, should, I think, be asking themselves what it means that we overlooked the manipulation of that visual medium in the treatment of, for example, Hillary Clinton, or Sarah Palin, or Michelle Obama.

And so there are several other examples along these lines, but we know that there are biases in facial recognition and photo matching. And we know that right now we already have a concern about deepfakes, which are incredibly realistic fabricated videos.

That, of course, started with putting celebrities’ faces onto pornography, way before it got to the point where Jordan Peele made a video about Barack Obama just to show people what deepfakes were.

And so I would just caution that, in this sense of what risk is and what these terrible risks are that we need to understand, we need to be introspective and take a step back and ask: okay, what have we missed already? What is embedded in this process of risk assessment that is making things worse?

So, to sum up: institutionally, across sectors, we have what I would categorize as the lowest risk-assessment scenarios possible at the highest levels. And that’s buttressed by sex segregation in the workplace, by implicit biases in our interactions, and by increasing inequality in terms of our digital divide.

We seem to be really reluctant to acknowledge that a lot of the spaces we’re talking about – whether it’s politics, national or international, or tech, or finance, or sports – are fundamentally fraternal spaces. They’re filled with people who are educated in similar ways, engage with a similar purpose, and are often ethnically homogeneous.

This creates huge blind spots in our perceptions and our understanding. And a lot of the issues we face when we look at these risks are imbued with a sense of American exceptionalism.

A lot of countries have already dealt with the risks that feel new to us. We should be looking to them and talking to them about what their responses have been, in both legal and social terms.

The harassment I’ve been talking about is obviously not just a problem that affects us privately as individuals. It fundamentally challenges our civil rights and our very ability and capacity to work.

It affects whether or not we go to school. It affects our freedom of movement. And yet none of this is considered a form of political intimidation, when it clearly is.

When you want to draw entire categories of people out of the public sphere with the underlying threat of violence, that’s always political.

And I think that what we count as political intimidation is a fundamentally gendered and racialized equation.

I would also argue that even when we talk about, for example, creeping authoritarianism or the authoritarian beliefs of certain elements of the voting population in the U.S., what we’re really not saying is that strict rules about gender within the home are at the core of authoritarian beliefs as a feature of a political system.

So when you have, for example, religiously conservative families and households – which in the U.S. tend to be mainly evangelical white Protestants and Roman Catholics – they have a belief system that requires submission and obedience. There are rigid rules, especially about gender and sexuality.

These cultures tend to be highly punitive, both in what goes on at home and in domestic and international policy.

And this fallback onto authoritarian beliefs is psychologically palliative in times of disorder and inequality. Again, the canary in the coal mine for that is the gender equality gap.

In neighborhoods, in households, in countries where there’s pronounced gender inequality, you see higher levels of acceptance of authoritarianism – particularly among women, because women’s response to being confronted with their own inequality is to value a strong man in the public sphere.

And I think we can see that reflected in the election that we just went through.

And so, looking at the intersection of media, technology, and these risks, we need to be forward-looking. But we also need to understand, at the most fundamental level, that our media is embedded with the values of its makers – and of the consumer-producers we used to think of simply as our audiences.

And insofar as we don’t culturally think of women’s issues as political issues, we need to shift that. We also need to stop thinking in terms of technical solutions and start thinking in terms of sociotechnical solutions.

So understanding media at risk requires asking ourselves why a lack of commitment to inclusivity is not itself seen as a fundamental risk. Why is it not, for example, a failure of journalistic ethics, recognized as a danger to the integrity of the role our media plays in ensuring the proper functioning of democracy?

Resistance to risk has to come from introspection, assessment, and a reorientation of these ideas – which I would say includes, first and foremost, a dissolution of the stark divide between public and private.

That dissolution is happening online anyway; the entire medium of social media dissolves the public-private divide.

And then we need to be thinking in terms of collaboration and activism – particularly in supporting people who have been doing this work globally for a long time, and leveraging their expertise when we can.

So I’m going to close here. I’d like to thank you so much for being here tonight and thank you to Barbie for inviting me to speak to you at the launch of this wonderful program.