Young Mie Kim: Professor & Stealth Media Researcher

With the November 2020 election only weeks away, Young Mie Kim’s research on social media disinformation campaigns is as critical and timely as ever. For this Reflection, Center for Media at Risk Steering Committee member Jeanna Sybert spoke with Kim about her work and objectives as the Center’s Fall Visiting Scholar.


Your research has centered on the nature of political communication in media environments and particularly on how it manifests in digitally mediated environments. Some of your most recent work here has been on disinformation campaigns during the 2016 election. Can you give us a sense of the trajectory that brought you to this research?

For my entire career, I have been studying passionate publics who care intimately about political matters because of their values, identities, or self-interest, and how they find information and organize themselves in the digital media environment. Take, for example, someone like “Joe,” who deeply worries about Muslims coming to America because of his religious values. Joe takes a strong, extreme position on this issue, so he wants a candidate who strongly opposes immigration. Joe describes himself as a Republican, but he puts this issue ahead of the party. Joe doesn’t believe that mainstream media cover the issue fairly. He spends time on Facebook and subscribes to news outlets that he thinks are doing a better job in discussing the issue he cares about. I’d say people like Joe are part of conviction publics.

I find there are many small groups of people like Joe, each with different issue concerns. Some people care about the abortion issue because of their religious values. Others care about racial issues because of their racial/ethnic identity. Gun owners care about gun rights because the issue has direct implications for their everyday lives.

This means that it is difficult for political groups to find a single issue everyone cares about. Political groups would love to go after people like Joe because they are committed to a political position and more likely to be responsive to campaigns (including disinformation campaigns). However, it would be incredibly difficult for political groups to identify people like Joe, with such narrow interests, because they are usually small, unorganized groups scattered all around the country. Until now, that is.

With the “big data revolution” of the past decade, our media environment has become more data-driven and algorithm-based. This data-driven, algorithm-based, digitally mediated environment, in fact, provides political groups (even low-resourced, non-party groups) with increased opportunities to identify, target, and (de)mobilize conviction publics for their own interests. Social media platforms like Facebook, for instance, offer convenient tools and drop-down menu options for political groups to identify and micro-target people like Joe, “Joe-lookalike 1,” “Joe-lookalike 2,” “Joe-lookalike 3,” and so on.

My research has turned to examining the production and distribution sides of political communication as well (in addition to the consumption side). Whom do political groups identify, target, and (de)mobilize, and how do they utilize the opportunities provided by the data-driven, algorithm-based, digitally mediated environment? How do they identify, target, and (de)mobilize conviction publics, and with what messages? What are the strategies and tactics of their campaigns? What does this mean for the functioning of democracy?

In answering those research questions, I discovered the prevalence of targeted disinformation campaigns on social media.

You are the Principal Investigator for Project DATA (Digital Ad Tracking & Analysis). Can you talk a bit about that project and what it is currently working on?

To address the aforementioned research questions, I decided to focus on digital political advertising. By definition, advertising is a manifestation of purposeful and deliberate targeting, messaging, organizing, and (de)mobilizing strategies and tactics. So, by examining ads, we can come closer to understanding campaigns’ strategies and tactics. Advertising also has a clear intention to influence people, and it involves money. This means advertising analysis can provide more direct evidence to inform specific regulatory policies. Independent of the digital platform companies, Project DATA (Digital Ad Tracking & Analysis) tracks digital ads with a user-based, real-time, “reverse-engineering” tool and examines the targets, sponsors, and content of digital political campaigns.

Ahead of the 2016 elections, we collected 87 million ads served to 17,000 individuals who consented to participate in the research and who were representative of the US voting-age population. As a multidisciplinary team, we have been working on various topics with a wide range of analytical methods: for example, unidentifiable, unattributable sponsors’ divisive issue campaigns and their targets; the network structure of covert coordination; targeting patterns of voter suppression campaigns; the effects of voter suppression on election outcomes; and so on.

As part of the project, we were among the first to discover Russian election interference on social media. Since then, I have been doing more “forensic” research, trying to develop methods to detect and identify covertly coordinated foreign election interference while also studying the implications of data-driven, algorithm-based, digitally mediated politics for democracy.

In a recent report published through the Brennan Center for Justice, you discuss your findings that the Internet Research Agency (IRA), the Kremlin-linked company that interfered in the 2016 U.S. presidential election, is again trying to influence political messaging in the 2020 election, but now its tactics are more sophisticated. What is different about these tactics? What factors have enabled the IRA to refine its approach since the 2016 election? And given that the report was published back in March, can you tell us anything new you have learned since its release?

The most distinctive new tactics of foreign influence campaigns on social media in this year’s election are the use of domestic campaigns and commercial campaigns. In 2016, the IRA, posing as domestic political groups or individuals, shared the same or similar narratives as domestic political campaigns, but it tried hard to establish its own (fake) identities. For example, the IRA groups always used their own group names, even though those names sounded so generic that the groups could be mistaken for domestic political groups in the US. They always put their own logos on their ads and organic posts in 2016, as if trying to prove that they had no direct link to or coordination with domestic political campaigns. By contrast, in 2020, they used domestic materials containing legitimate domestic political groups’ names and logos. Similarly, the foreign groups increasingly use the material of seemingly apolitical, domestic commercial actors who target a particular segment of the population, as if their campaigns were run by domestic commercial actors.

I don’t know yet whether this indicates some direct link or coordination with those domestic groups, whether it’s identity theft, or whether they are simply recycling the material of domestic groups, because I have not yet examined large-scale, population-level data in a systematic way with a clear time order. At the least, however, the function of foreign influence campaigns’ use of domestic identities and materials is clear to me: identity and (dis)information laundering. This enables foreign actors to easily evade the current regulatory and transparency measures, which focus heavily on combating foreign sources rather than disinformation campaigns in general. Unfortunately, it also makes it even more difficult for people to detect foreign influence campaigns and distinguish them from domestic political campaigns.

Such tactics are not “new.” A more accurate description is that foreign election interference is continually evolving as it adapts to the changing political and media environment. And, I believe, such tactics will be widely used in this election: evolving tactics that exploit the fragmented and polarized political environment in times of uncertainty.

If we assume that scholars, politicians, social media companies, and much of the public know about Russian influence on the 2016 election, are we better equipped to combat these efforts, even though their methods have become more sophisticated?

Certainly, increased awareness of election interference and an understanding of its strategies and tactics will help us be better prepared. Large tech platforms such as Google, Facebook, and Twitter now have self-regulatory policies, including identity verification and transparency measures. These are notable, important first steps. However, public awareness and self-regulatory measures are not sufficient to combat evolving foreign election interference. We need to be better equipped at all levels. We need consistent, conspicuous, and enforceable regulatory policies that adequately address data-driven, algorithm-based, and digitally mediated campaigns to ensure election integrity in general.

Much of your research is public facing. Not only has your work received coverage by major media outlets, but you have also offered your expertise at FEC hearings and congressional briefings on internet policymaking. What are some tips you can offer those who want to be public scholars and communicate their work to different audiences?

I believe social science research must have both rigor and relevance. Regrettably, because much of student training focuses on methodological and technical rigor, students often pull themselves away from the real world. So, my first and foremost advice for students is to engage deeply in your community and let your own observations and experiences inspire and guide your research. It would also be helpful to think about what kind of public scholar you want to be. Do you want to be an honest broker or a passionate public advocate? Both are valuable models of public scholarship, and having a model of public scholarship as your guide will help. Some more technical tips: set your target audience and speak at their level; clearly define your subject matter; avoid technical details and field-specific jargon; and acknowledge the limits and constraints of your observations or experience.

How has the COVID-19 outbreak impacted the way you conduct research? More importantly, perhaps, how has it shaped the research itself? For instance, have you had to alter any current projects—such as your work on political messaging in the 2020 election—to account for public discourse about COVID-19?

I wouldn’t say I have shifted the focus of my research. But, generally speaking, I study the interplay among technology, people, and contexts, and the COVID-19 outbreak has situated us in a different context than before; therefore, I would say COVID-19 definitely influences my research. For example, in studying this year’s elections, I need to examine voter suppression tactics that take advantage of uncertainty around mail-in voting and cast doubt on the legitimacy of this election, and to ask what role digital media play there.

How will you be spending your time at the Annenberg School for Communication this semester? Can you give us a preview of any upcoming projects?

I am thrilled to be part of the Annenberg School and the University of Pennsylvania community more broadly. My goal is to engage with students, faculty/staff, and people in the community as much as I can! I am teaching a graduate seminar on dis/misinformation this semester and have very much enjoyed interacting with students from across disciplines in the class. I have been attending talks, colloquia, and roundtables across campus that are offered to the Penn community.

While I continue to work on the unique trove of data I collected in the 2016 elections, I will study the mechanisms and effects of targeted campaigns in the 2020 elections, focusing on the cases of digital political advertising and coordinated inauthentic campaigns. I invite students who are interested in these topics to join me!

It is unfortunate that all of these activities have to be carried out online, and I don’t get the same experience I would have had if I were physically in Philadelphia. Hopefully, this virtual visit seeds long-term, sustainable, collaborative relationships with the ASC and UPenn.


Young Mie Kim is a Professor at the University of Wisconsin-Madison School of Journalism and Mass Communication and an Andrew Carnegie Fellow. Kim’s research concerns media and politics in the age of data-driven digital media, specifically the role digital media play in political communication among political leaders, non-party groups (issue advocacy groups), and citizens. For the full study, see Kim’s “The Stealth Media? Groups and Targets Behind Divisive Issue Campaigns on Facebook.”

Jeanna Sybert is a steering committee member of the Center and a doctoral student at the Annenberg School for Communication. Her current research explores the workings of U.S. journalism, digital culture, and visual communication. She is particularly interested in what the intersections of these areas reveal about contemporary public discourse and culture. Follow her on Twitter @jeanna_sybert.