Smirity Kaushik's Dissertation Defense

Smirity Kaushik

PhD candidate Smirity Kaushik will present her dissertation defense, "Digital Trust, Safety, and Privacy in the Age of Emerging Technologies." Kaushik's dissertation committee includes Professor Yang Wang (chair and co-director of research), Assistant Professor Madelyn Sanfilippo (co-director of research), Professor Michael B. Twidale, Assistant Professor Camille Cobb, and Yixin Zou, tenure-track faculty member at the Max Planck Institute for Security and Privacy.

Abstract

In today's digital global economy, social media platforms serve as powerful tools for disseminating information and bringing global communities closer together. However, these platforms, whose underlying business models are driven by advertising revenue, also expose users to significant digital harms such as scams, misinformation, and privacy-invasive targeted ads. These risks are amplified by the integration of Generative AI, which enables hyper-personalized and scalable synthetic content generation. Furthermore, these risks disproportionately affect at-risk and understudied populations, such as young adults and users from non-Western regions, whose needs are often neglected in platform design and governance.

My work aims to make people's online experiences safe and trustworthy, while ensuring inclusive privacy, with a particular focus on social media. Specifically, I examine three interconnected digital harms: a) the privacy risks associated with targeted advertising; b) the prevalence of fraudulent content and scams on short-form video platforms (SVPs) like TikTok; and c) the emerging role of Generative AI in perpetuating fraudulent content. To address these challenges, I use a mixed-methods approach that integrates interviews, surveys, content analysis of user-generated videos, and policy reviews. My research makes four key contributions:

  1. Non-Western users exhibit novel perceptions of targeted ads: People from India and Bangladesh (the South Asian region) prefer emerging ad formats such as influencer-based ads, hold novel mental models of how targeted ads work, and rarely use ad settings.
  2. Privacy behaviors are shaped by local social, cultural, and religious norms: Cross-country comparisons reveal that culture and religion influence users' perceptions of targeted ads on social media and their privacy management behaviors. These findings underscore the need to contextualize privacy beyond Western-centric norms.
  3. A taxonomy of fraudulent content systematizes the study of scams: Fraudulent content is a widespread and increasingly sophisticated problem on social media. The development of a taxonomy of fraudulent content on social media is a foundational step toward systematically studying these scams.
  4. Governance of Generative AI-driven manipulative content lacks consensus: Existing governance for the use of Generative AI to produce and scale synthetic fraudulent content lacks enforceable rules, especially for young adults. Variability across platform policies also signals a lack of consensus on the ethical use of Generative AI in social media.

Overall, my work highlights the inequitable impact of digital harms, particularly on at-risk and understudied populations, proposes actionable design and policy interventions to enhance digital safety and privacy, and advances a more responsible and equitable vision for the development and governance of emerging information and communication technologies (ICTs).

Questions? Contact Smirity Kaushik.