Transparent and Trauma-Informed: Notes from Facebook’s 2021 Summit on Non-Consensual Intimate Image Sharing

Today’s social networking platforms bear scars from a decade of trolling, abuse and disinformation. With “toxic technocultures” presenting enormous challenges for the tech industry, how are companies responding to abuse on their platforms? For this installment of our Review series, I report back from Facebook’s 2021 summit on Non-Consensual Intimate Image Sharing (NCII). Often mislabeled “revenge porn”, this is one of many abuses that networks seek to remove from their platforms. Here I explore how Facebook is approaching this problem, and what we can learn from its efforts.

Sophie Maddocks, PhD Student and Research Fellow, Center for Media at Risk


The Zoom room was flooded with sunrises and sunsets as participants from many time zones logged into Facebook’s virtual NCII summit on June 9th and 10th, 2021. Activists from 24 countries including Malawi, Palestine, Australia, Italy, Hong Kong and Denmark were joined by tech industry professionals from Bumble, Google, Twitter and Facebook for a two-day deep dive into the issue of non-consensual intimate image sharing (NCII). Defined as the non-consensual creation or distribution of intimate images, this harm—commonly referred to as “revenge porn”—has become a major issue for social media users globally and a top priority for social network safety teams. From suspending Donald Trump’s account to implementing fact-checking on user content, Facebook has launched a range of initiatives to respond to bad actors on its platform. But how is it addressing NCII?

Are Social Networks Really Listening?

Five years ago, Facebook launched partnerships with victim-advocacy groups in three countries as part of its efforts to challenge NCII. This summit is one of several convenings held since, designed to give activists and tech professionals the opportunity to solve NCII-related problems collaboratively. In 2017, this collaboration led to the implementation of photo-matching technology, in which victims flag images for Facebook to ‘hash’ and permanently remove, as well as machine learning and artificial intelligence used to proactively detect nude or near-nude imagery shared without consent. Facebook’s safety teams are trying to facilitate a direct line of dialogue between victims of NCII and those who design the technology that led to their victimization. This model of partnership is a promising indication that social networks are listening to NCII activists.
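To make the ‘hash and remove’ idea concrete, the sketch below shows how hash-based photo matching can work in principle. It is an illustration only: it uses the open-source Python imagehash library, and the function names, file paths and distance threshold are my own assumptions. Facebook’s production pipeline, hash format and thresholds are not public, so this should not be read as the platform’s actual implementation.

```python
# Illustrative sketch only; not Facebook's real NCII pipeline.
# Requires the open-source libraries Pillow and imagehash.
from PIL import Image
import imagehash

# Perceptual hashes of images a victim has flagged (hypothetical store).
flagged_hashes = set()

def register_flagged_image(path: str) -> None:
    """Fingerprint a flagged image and remember only its hash.

    Storing the hash rather than the picture means the intimate image
    itself does not need to be retained once it has been fingerprinted.
    """
    flagged_hashes.add(imagehash.phash(Image.open(path)))

def is_known_ncii(path: str, max_distance: int = 8) -> bool:
    """Compare a new upload against every flagged hash.

    Perceptual hashes of visually similar images differ by only a few
    bits, so a small Hamming distance counts as a match; the threshold
    here is an arbitrary illustrative value.
    """
    upload_hash = imagehash.phash(Image.open(path))
    return any(upload_hash - known <= max_distance for known in flagged_hashes)

# Hypothetical usage:
# register_flagged_image("reported_by_victim.jpg")
# if is_known_ncii("new_upload.jpg"):
#     ...  # block the upload and route it to the safety team
```

The design point worth noting in a scheme like this is that matching operates on fingerprints rather than on the images themselves, which is what allows a flagged picture to be blocked from re-upload without the platform needing to keep the original.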

Why Are We Talking About NCII?

Every day platforms navigate myriad manifestations of online incivility. From account impersonation and cyberbullying to swatting and trolling, social networks’ safety teams are faced with constantly evolving harms. Why, then, is NCII such a high priority? Is “revenge porn” prevalent enough to warrant so much attention? According to Nicola Henry, one of the world’s leading experts, one in three people report experiencing NCII. Henry also notes that for 70% of the NCII victims surveyed in her international sample, the perpetrator was a current or former intimate partner. These statistics set NCII apart from other online abuses. Rather than the work of an anonymous troll, it is more often a tool of intimate partner violence, used in the context of a current or former relationship. When we think about NCII as a form of domestic abuse taking place on platforms, its severity and urgency become obvious.

Unlike other forms of intimate partner violence, this harm is public and permanent. Henry’s respondents describe it as “torture for the soul” and a form of “social rupture” that radically disrupts victims’ lives. Like other forms of intimate partner violence, this harm disproportionately targets women. Advocates at the UK Revenge Porn Helpline report that the average male victim has two images shared non-consensually, while for the average female victim that number rises to 79. Moreover, incidents of NCII have increased significantly during the COVID-19 pandemic. This picture underscores why NCII has become such a priority for social networks. It is a highly prevalent, rapidly escalating and deeply gendered harm that ‘uploads’ some of the worst abuses onto our newsfeeds.

Solutions: Trauma-informed and Transparent

This summit didn’t feel like a ‘tech’ event: there was no lengthy discussion of innovation, platform design or AI tools. Instead, every presentation centered on the experiences of victims, reversing the norm that separates supposedly dispassionate “experts” from emotion-driven “activists.” Here victims were deemed experts, and their experiences were recognized as sources of testimony, pieces of evidence and directions for policymaking. The summit’s trauma-informed approach, one that seeks to realize the impact of trauma, recognize its signs and respond accordingly, highlights the extent of the trauma NCII induces and allows policy solutions to accommodate it.

Expert testimony was both wide-ranging and pointed. Lulú Barrera from Luchadoras in Mexico recounted the case of an NCII victim without computer access who undertook the intricate photo-matching process from her cell phone, while Shmyla Khan from Digital Rights Foundation in Pakistan focused on the cumulative impact of NCII as photos resurface over the years. Michelle Gonzalez from Cyber Civil Rights Initiative in the United States told the story of an empowered victim who thought this harm would not impact his life, reflecting a broader pattern in which those with more social power feel this harm less acutely. These experiences raise many questions: Are solutions to NCII being developed that can accommodate the needs and expertise of victims without computer access? Can social networks think longitudinally and track the cumulative impact on users of their platforms over time? Can users rebuild and recover their digital identities after experiencing NCII? Can social networks account for the different social positions of their users when responding to NCII? Sarai Chisala-Templehoff from the Gender and Justice Unit in Malawi emphasized the role of cell phones in spreading NCII, describing how the leaking of victims’ cell phone numbers (often called doxxing) led to secondary “cyber-mobbing.” All these examples demonstrate the need for cross-platform collaboration among tech companies. When victims become expert witnesses, we expose the deeper and more complex questions that must be answered to generate truly effective solutions.

Ultimately, summit participants grappled with one of the most pressing questions on their agenda: How can platforms be transparent with victims about removing their images without allowing perpetrators to game the system? Participants stressed that victims must receive full transparency about the policies and processes that remove NCII. When policy is led by people who have been silenced and deceived on these platforms, total transparency becomes paramount.

Can We Reach Consensus?

While the summit constitutes a valuable marker on terrain that has not yet been traveled enough, it comes with its own complications. To begin with, tech professionals, activists and academics typically approach issues of digital abuse differently. Activists prioritize justice-seeking and harm reduction. Academics are concerned with how we understand these issues and what they tell us about our emerging digital cultures. Tech professionals seek practical clarification on which tools work to prevent abuse and how to implement them across contexts.

While all agree that NCII is a serious harm that must be urgently addressed, the issue of terminology resists consensus. NCII is a morally clear issue with a shared definition, yet it has no universally accepted name, and participants find the development of a common vocabulary challenging. Some tech industry professionals are keen for a specific name, while activists prefer broader terminologies such as image-based sexual abuse, digital sexual violence and online gender-based violence. Use of the word ‘sexual’ is contested. For some, it connects NCII to existing sex crimes and could secure better legal protections for victims; for others, excluding it removes cis-heteronormative associations about women and reduces sexualizing and victim-blaming. Still others find that victims communicate their experiences more easily when the focus is on privacy and consent rather than sex, even though a shift from ‘sexual’ to ‘sexual violence’ can mobilize public empathy. Some suggest avoiding names altogether and instead allowing victims to define and label their own experiences in their own fashion. Although participants were invited to join a Facebook group called “The Interveners,” it is questionable how successful such an intervention can be if we can’t name it.

A Model for Future Intervention

There is no doubt that the trauma-informed approach of this summit has genuine advantages. It centers the needs and expertise of victims in the process of technology development and policy implementation. But this trauma-informed approach also complicates NCII: It cannot be reduced to one terminology or understood in isolation from offline patterns of violence and abuse. This approach also requires total transparency from tech companies, demanding that the “black box” of algorithms, moderation rules and take-down technologies be shared with victims. Centering the experiences of the most marginalized users urgently necessitates renewed creativity, transparency and cross-platform collaboration among tech professionals. While the summit raised more questions than answers, it may be that this response is right for the time. This summit no doubt brings us closer to meaningful change and harm reduction because it is helping to reorient focus from the average user to the most heavily victimized.


Sophie Maddocks is a doctoral student at the Annenberg School for Communication. She is broadly interested in cyber civil rights, gender and sexuality, youth media literacy, and popular culture.

Her current work examines individual, organizational and legislative responses to image-based online abuse. A Fellow and Steering Committee member at the Center for Media at Risk, Maddocks conducts research that’s qualitative, participatory and youth-led. Maddocks holds an M.A. in Media Studies from The New School where she was a Fulbright Scholar; a Postgraduate Certificate in Education from the University of Warwick; and a B.A. in Social and Political Sciences from the University of Cambridge. Before joining the Annenberg School, Maddocks held various teaching and advocacy roles in the British education system. Follow her on Twitter @Sophie_J_J