AI Ethics in Social Media
Abstract
The rapid integration of Artificial Intelligence (AI) into social media has revolutionized user experiences, enabling personalized content delivery and creative media generation. Yet it also introduces profound ethical challenges, including AI-driven harassment and bullying and the spread of synthetic media such as deepfakes, which threaten user trust and societal cohesion. This paper addresses a critical gap in understanding user perceptions of AI ethics, the technical limitations of detecting harmful AI-generated content, and the severe psychological and societal impacts of that content. Employing a mixed-methods approach, we administered the "AI Ethics in Social Media Questionnaire," collecting 200 responses from predominantly young, undergraduate users, complemented by qualitative analysis of real-world case studies and an extensive literature review. Integrating the empirical survey data with these case studies reveals that 97% of users are aware of AI features (66% highly aware, 31% somewhat aware), yet 53% express significant concern about ethical issues, including privacy violations, algorithmic bias, and inadequate content moderation. Alarmingly, 61% reported encountering AI-generated harassment, and 36% experienced direct or indirect impact, underscoring the pervasiveness of the problem. Case studies, including those of Sewell Setzer III, Molly Russell, Chase Nasca, a Belgian man, and victims of deepfakes, illustrate AI's role in exacerbating psychological distress, reputational harm, and societal distrust through algorithmic amplification and inadequate detection mechanisms. The analysis highlights the persistent “arms race” between AI content generation and detection, compounded by algorithmic biases and the difficulty of moderating content at scale. We propose a multi-stakeholder framework comprising enhanced user control over AI interactions, robust platform policies with mandatory labeling of AI-generated content, advanced detection technologies, international regulatory collaboration, and public education in media literacy. This work advances AI ethics by offering a comprehensive strategy for responsible AI governance that fosters a safer digital environment and safeguards user well-being and public trust. Failure to implement these measures risks escalating online harms, undermining public discourse, and eroding the trust that underpins digital interactions.