Media Content Policy-Making in the Age of Linguistic and Visual Artificial Intelligence: A Theoretical–Applied Analysis with a Case Study of Two Domestic Media Outlets
Keywords: Generative AI, Media Ethics, Algorithmic Governance, Editorial Integrity, Content Authenticity, Recommender Systems, Epistemic Justice, AI Policy

Abstract
This study evaluates the structural and operational impacts of linguistic and visual artificial intelligence (AI) on media systems and proposes a three-tier governance model to enhance transparency, editorial integrity, and ethical accountability in AI-integrated content ecosystems. The research employs a mixed-methods design combining comparative conceptual analysis, scenario-based sensitivity modeling, and two case studies conducted in Iranian private media organizations. Data were collected over six months through qualitative thematic coding, algorithmic configuration experiments, and expert panel reviews. The study assessed three key dimensions of AI-media interaction: content production, content distribution, and policy-level governance. Quantitative metrics such as semantic error rates, discursive diversity indices, and user complaint frequencies were triangulated with editorial satisfaction surveys and internal policy evaluations to validate the proposed framework. Results indicate that unreviewed AI-generated content produces a high incidence of factual and semantic inaccuracies (up to 34%), whereas multi-stage human oversight reduces the error rate to 3% and improves editorial satisfaction. Engagement-only recommender systems significantly decreased discursive diversity (the index dropped to 0.38) and increased cognitive polarization (62%), while hybrid algorithms improved diversity (0.63) and reduced polarization (29%). Media organizations implementing a formal AI ethics charter reported a 64% reduction in content-related complaints and a 27% increase in public trust. The study confirms that internal governance frameworks and ethical transparency significantly enhance audience perception, editorial control, and institutional resilience. The integration of AI into media demands proactive, data-driven, and ethics-oriented governance.
The proposed three-tier model offers a scalable framework for managing AI’s risks while fostering editorial responsibility and content authenticity. Institutions that embed human oversight and algorithmic transparency are better positioned to preserve public trust and adapt to the evolving information landscape.
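The abstract does not define how the discursive diversity index or the hybrid recommender were computed. As a purely illustrative sketch (not the study's actual method), a diversity index of this kind is often implemented as normalized Shannon entropy over the topics in a recommendation slate, and a hybrid ranker as a blend of an engagement score with a topic-novelty bonus; the function names, the `alpha` weight, and the item representation below are all hypothetical.

```python
import math
from collections import Counter

def discursive_diversity(topics):
    """Normalized Shannon entropy over the topic labels of a recommendation
    slate: 0.0 means a single topic dominates, 1.0 means a uniform spread."""
    counts = Counter(topics)
    n = sum(counts.values())
    if len(counts) < 2:
        return 0.0
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return entropy / math.log(len(counts))  # divide by max possible entropy

def hybrid_rank(items, alpha=0.6):
    """Greedy re-ranking that blends engagement with topic novelty.
    items: list of (item_id, topic, engagement_score) tuples.
    alpha=1.0 reproduces an engagement-only ranking."""
    ranked, seen_topics = [], set()
    pool = list(items)
    while pool:
        # novelty bonus of 1 if the item's topic is not yet in the slate
        best = max(pool, key=lambda it: alpha * it[2]
                   + (1 - alpha) * (it[1] not in seen_topics))
        ranked.append(best)
        seen_topics.add(best[1])
        pool.remove(best)
    return ranked
```

With this kind of scheme, lowering `alpha` trades raw engagement for topical spread, which is one plausible mechanism behind the reported diversity gain of hybrid algorithms over engagement-only ranking.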
License
Copyright (c) 2025 Abbas Taheri (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.