AI Use in Armenian Media: Ethical Considerations

October 25, 2024, 13:59

Anna Barseghyan

with editorial assistance from Andranik Shirinyan, Sona Mkhitaryan, and Roza Melkumyan

Introduction 

Artificial Intelligence (AI) is a disruptive technology that is transforming the media industry and reshaping the way stories are told, produced, and consumed.1 Generative AI—the type of AI that can create content based on users’ prompts—automates routine tasks: it generates audiovisual content, analyzes vast amounts of data, assists in fact-checking, and delivers highly personalized content to audiences based on their preferences.2 It opens up new avenues for productivity and efficiency while also lowering the costs of content creation.

Concurrently, the use of AI in media content creation introduces significant challenges.4 It raises important ethical concerns: ensuring transparency with the audience about the use of AI in content creation, preserving media and information integrity, and defining who the author of content generated by or with the help of AI is, as well as who is responsible if that content is incorrect or misleading. This matters because reliance on generative AI can lead to biased reporting, unintentional plagiarism, or the spread of misinformation, all of which can compromise public trust in the media and damage outlets’ credibility. AI can also be used by malicious actors to create and disseminate disinformation.

Armenian media outlets have already started to explore AI’s capabilities, and its use will only grow as the technology becomes more accessible and accurate and as its support for the Armenian language improves.

Since AI is still a new tool for Armenian media outlets, there are currently no broadly established rules or standards that can guide media professionals on how to use AI responsibly. Therefore, it is essential to develop comprehensive guidelines tailored to media outlets’ specific AI-use cases to ensure that they maintain credibility as well as their audience’s trust.

During the drafting of this policy brief, the Media Ethics Observatory (known as Ditord Marmin in Armenian), Armenia’s self-regulatory body that unites over 80 media outlets and organizations, developed a framework of ethical principles related to the use of AI in content production.5 This is an important and well-timed step in establishing self-regulation in AI use. Media outlets can use these principles as a starting point to develop their own customized and comprehensive guidelines.

To inform this policy brief, the author spoke to fifteen professionals, including journalists, editors of prominent media outlets, media advocates, and policymakers; explored the challenges related to AI use; and reviewed best practices in the use of AI at leading media outlets worldwide. This research, conducted from April 17 to May 17, 2024, represents the first effort to collect such detailed information in the Armenian context and aims to guide the ethical integration of AI in Armenian media.

AI Adoption in Armenian Media

Armenian media began exploring AI for content creation after OpenAI publicly released ChatGPT in late 2022, although adoption remains limited. This is partly because AI tools do not yet fully support the Armenian language—although they have improved since the time of this research—and partly because not all journalists have the foreign-language proficiency and technical skills required to use the technology effectively.6 Some outlets have nevertheless successfully incorporated AI into their work, accelerating daily journalistic tasks.

For example, Hetq uses AI tools to generate illustrations for articles, analyze large datasets, and simplify complex topics for its audience, while News.am’s tech department leverages AI for rewriting and summarizing texts, generating ideas, and drafting articles tailored to Armenian audiences.7

CivilNet’s fact-checking team utilizes AI to conduct research and verify the authenticity of images and videos.8 Additionally, Infocom has used AI to simplify complex subjects, formulate interview questions, and generate an image of a planned academic city that currently exists only as a concept, giving the tool creative freedom to visualize a project that cannot yet be photographed. AI is also used to process audiovisual content: generating subtitles, translating, and transcribing.9

At the time the interviews were conducted (before the release of GPT-4o, which has better support for the Armenian language), journalists were working with AI systems mainly in English, primarily using publicly available free tools such as OpenAI’s ChatGPT and DALL-E, Google’s Gemini, and Microsoft Copilot. Only a few also employed paid tools like Midjourney, Canva, and various Adobe programs.

Media professionals state that AI tools have significantly enhanced the speed, efficiency, and productivity of their work by automating tasks. 

The author believes it is only a matter of time before Armenian media use AI more extensively, as journalists begin to recognize the value this technology offers their work. It is therefore essential to understand the technical limitations and ethical risks AI presents in order to develop effective guidelines for its responsible use and prevent its potential negative impact on media integrity and public trust.

Challenges of AI in Content Production

The primary challenges that AI introduces to the media include maintaining transparency and honesty with the audience, managing the technical limitations of AI that can raise ethical concerns, and resolving questions of authorship and accountability for AI-generated or AI-assisted content.

Transparency and Integrity

Transparency is a fundamental ethical principle in journalism, which entails that the media be open about their practices and decision-making processes, such as how they gather or verify information, or that they admit and correct any mistakes they make. This openness contributes to trust, which is the foundation of the relationship between media and the audience. 

However, studies show that disclosure of AI use in media content production is a double-edged sword that media outlets must handle carefully. While being transparent with the audience about the use of AI demonstrates an outlet’s integrity and ethical commitment, it may also lead to distrust.10

A recent survey by the Reuters Institute for the Study of Journalism reveals that audiences are generally uncomfortable with news created with the help of AI, although those familiar with the technology are more accepting of its use.11 Audiences prefer AI to be employed “in ways that are not directly visible” to them, such as transcribing interviews or summarizing content. Where AI is used visibly, they find it more acceptable for the media to produce text, illustrations, graphics, and animations than realistic-looking photos or videos, even when the AI use is disclosed. According to the survey, audiences are against AI generating completely new content, believe that fully automated processes should not be permitted, and consider human oversight essential at all stages.

Nevertheless, failing to disclose AI use in content production risks damaging the audience’s trust should they discover it later, especially given declining global trust in media.12 Media outlets must strike a balance between using an innovative technology and preserving their integrity and their audience’s trust.

Currently, Armenian media outlets inform their audiences about the use of AI mainly when it involves images or videos entirely generated by the technology. The media professionals the author spoke to agree that it is crucial to be transparent and honest with audiences about AI use, especially for audiovisual content, which can have a profound impact on viewers and even mislead them into believing that AI-generated imagery is real. Regarding textual content, however, journalists are uncertain whether, and to what extent, they should disclose AI use. What does the audience gain by knowing about the tools used behind the scenes? Journalists often find themselves confused by such questions due to the lack of AI guidelines within media outlets.

These are complex questions that require detailed answers, based on in-depth discussions with journalists about the case-by-case use of AI and conversations with the audience to understand their expectations. As people become more knowledgeable about AI, they will likely become more accepting of AI-generated and AI-assisted content. Media outlets can take on an active role in increasing this awareness and provide clear and specific explanations about AI’s role in their work, particularly concerning visual content, rather than simply labeling the content as “AI generated.” This approach can help educate audiences about AI and its use, thereby reducing concerns or misunderstanding of the technology among those who are less familiar with it.

Technical Limitations and Ethical Implications

AI systems have technical flaws and ethical problems, and media professionals must be aware of them before publishing any content produced by these tools. AI can generate content that violates journalistic standards and can become a source of misinformation and fake news.

AI systems can generate:

Inaccurate Content

AI systems are prone to “hallucinating”—generating factually inaccurate, harmful, misleading, or outright false information, or fabricating things that do not exist.13

The experience of the prominent tech news outlet CNET clearly illustrates the risk of AI “hallucination” and the essential need to verify AI-generated content. CNET had been publishing AI-generated articles without proper disclosure until they were found to contain factual errors and plagiarism.14 As a result, Wikipedia downgraded the outlet’s trustworthiness rating to “generally unreliable” on its list of “reliable sources,” causing reputational damage.15

Plagiarized Content

AI systems can memorize content they were trained on and reproduce nearly exact excerpts from copyrighted materials,16 raising ethical and legal concerns. This can also undermine the financial stability of media outlets, as unauthorized use of their content may lead to financial losses.

The New York Times is currently suing OpenAI for copyright infringement, claiming that ChatGPT gave users nearly verbatim copies of the outlet’s articles without permission, causing financial losses, as the content is normally accessible only through paid subscriptions.17

Biased Content

AI can reflect political, cultural, gender-based or other biases of the content it was trained on.18 

A study by UNESCO found that several large AI systems tend to associate male names with professional roles such as “career” and “executive” while associating female names with domestic roles like “home” and “children.”19 

The study also revealed biases in how AI systems portray people from different cultural backgrounds and sexual identities, associating British men with roles like “bank clerks” and “teachers” while linking Zulu men to jobs such as “gardeners” and producing mostly negative content about LGBTQI+-related topics.

Biases in AI systems can reinforce stereotypes and contribute to the spread of misinformation; therefore, media professionals must be vigilant in identifying them to ensure impartial and fair reporting.

These technical limitations demonstrate that human oversight of AI-generated content is necessary to thoroughly fact-check its accuracy and identify and address potential plagiarism and biases, ensuring that all content complies with ethical standards. If not properly verified, these technical flaws can rapidly evolve into ethical problems for media outlets, damaging their reputation and the trust they hold with their audience. 

Authorship and Accountability 

The use of AI in content production raises ethical questions about authorship and accountability. Who is the author of such content? Is it the journalist who provides the prompts to the AI system or the media outlet that publishes the content? Or perhaps the author is the AI system itself? Moreover, who is accountable for the final product, especially if it contains misleading, harmful, or problematic information?

Ethical authorship goes beyond copyright issues; it links authorship to moral responsibility and accountability for the content.20 Since AI cannot assume responsibility for its output, it cannot be considered an author.21 

Most of the media professionals interviewed believe that the author of AI-generated and AI-assisted content is the journalist providing the prompts, although some think that AI systems could also be considered authors. All agree, however, that accountability for the final content lies with the journalists, the media outlets, or both. This aligns with the journalistic principle that the media outlet always assumes responsibility for all published content. Furthermore, transparent disclosure of AI use addresses the ethical question of who created or contributed to the content.

Legally, the situation is more straightforward. Although Armenia has no specific law on authorship of and accountability for AI-generated content, existing legislation states that only real, identifiable individuals or entities can be authors, and that responsibility always lies with the author or those who disseminate the content. This was established by the Cassation Court of Armenia in 2012 through two rulings on insult and defamation.22 From a legal perspective, according to the legal experts interviewed, AI will be viewed as a tool, and responsibility for AI-generated content will fall on those who publish it.

Given these ethical and legal considerations, it is important for media outlets to create clear guidelines on authorship and responsibility for AI-generated and AI-assisted content, ensuring that everyone is fully aware of their roles and responsibilities.

Building Trust in the Age of AI: Global Best Practices 

To build and maintain trust in the age of AI, media outlets and organizations are increasingly adopting guidelines that address ethical concerns around the use of this technology. There are joint efforts to set common standards in the media. For example, Reporters Without Borders, along with sixteen other leading media organizations, collaboratively developed the Paris Charter on AI and Journalism—a set of principles which journalists and media outlets worldwide can follow and implement.23 

Additionally, the Partnership on AI, a coalition of media organizations, tech companies, non-profit organizations, and academic institutions, has designed the Responsible Practices for Synthetic Media framework to provide guidelines for the ethical creation and distribution of audiovisual content generated or modified using AI.24 

Many individual media outlets, including the BBC, the New York Times, the Guardian, and the Associated Press, have created their own customized guidelines suited to their unique contexts and needs.25 To promote transparency and help their audiences understand how AI is used in content production, these outlets have published the guidelines on their websites. The guidelines outline not only the ethical principles for AI use but also how the outlets intend to use the technology, and they specify areas where the outlets commit not to use it.

Common uses of AI in media outlets include generating story ideas, headlines, social media posts, and interview questions, as well as conducting research and analysis. AI is also used to explain unfamiliar concepts, summarize older news stories, and generate images and videos.

Approaches to AI adoption differ across the industry. Several media outlets are exploring advanced applications of AI: the New York Times, for example, uses AI to create digitally voiced versions of its articles, while the BBC has employed a deepfake (a synthetic image, video, or audio recording) to preserve the anonymity of an individual in a documentary.26

Some outlets, however, have established strict boundaries for AI usage. Wired and Business Insider, for example, explicitly prohibit the use of AI for writing full articles or editing text. Others, such as Bayerischer Rundfunk and the German Press Agency (Deutsche Presse-Agentur, DPA), are more open to experimenting with AI in content production.

Despite the differences in AI use cases, all of these guidelines share common ethical principles and agree on the following key points:27

Transparency and Disclosure of AI Use

Media outlets should communicate their use of AI to audiences and label AI-generated content in a manner that is understandable; a minimal machine-readable labeling sketch follows these principles.

Human Oversight

Humans should always verify AI-generated content to prevent misinformation, plagiarism, and bias.

Accountability

Journalists and media outlets are always accountable for the accuracy, fairness, and quality of the content they publish, regardless of whether AI tools were used in its creation.
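The sketch below illustrates what such labeling could look like in practice. It assumes a hypothetical newsroom content-management system and attaches both a standardized machine-readable tag and a specific human-readable note to an article. The IPTC “digital source type” vocabulary referenced is a real industry standard for marking AI-generated media; the Article class, the field names, and the label_ai_use helper are illustrative assumptions, not an established tool.

    from dataclasses import dataclass, field

    # Real IPTC NewsCodes "digital source type" term for content produced by
    # a generative model; everything else in this sketch is hypothetical.
    IPTC_AI_GENERATED = (
        "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    )

    @dataclass
    class Article:
        headline: str
        body: str
        metadata: dict = field(default_factory=dict)

    def label_ai_use(article: Article, role: str) -> Article:
        """Attach a machine-readable tag and a human-readable disclosure.

        `role` names AI's concrete contribution (e.g. "generating the
        illustration") instead of a bare "AI generated" label.
        """
        article.metadata["digitalSourceType"] = IPTC_AI_GENERATED
        article.metadata["aiDisclosure"] = (
            f"AI was used for {role}; a journalist reviewed the result."
        )
        return article

    # Example: disclose that a story's illustration was AI-generated.
    story = label_ai_use(
        Article(headline="...", body="..."), role="generating the illustration"
    )
    print(story.metadata["aiDisclosure"])

A label of this kind serves audiences, who see a note explaining AI’s specific role, as well as platforms and verification tools, which can read the standardized tag automatically, in line with the transparency principle above.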

Media outlets often consider their AI guidelines “living documents” that will be updated as AI continues to advance and as they learn more about this technology. 

The principles for AI use adopted by the Media Ethics Observatory align with global guidelines and best practices. They emphasize transparency, verification, accountability for AI-generated content, and responsible handling of sensitive information, setting a standard for the ethical use of AI in Armenian media.

Democratization of Content Creation and Disinformation Risks

On the one hand, the democratization of AI tools—making them accessible to anyone—helps media outlets efficiently generate and distribute content. On the other hand, it raises concerns about the potential for the technology to be misused by bad actors to spread disinformation.28

As AI systems become more advanced, they will be able to create convincing fake news and highly realistic deepfakes, making it challenging for the public to distinguish genuine information from fabricated information. They can amplify the reach of mis- and disinformation and target specific audiences with personalized misleading information, posing a potential threat to public safety and democracy.29 Moreover, widespread mis- and disinformation can decrease trust in the media, causing people to question the reliability of all news sources.30 

Journalists play a vital role in fighting AI-powered disinformation, helping to prevent the decline of trust and protect the information environment. To do so effectively, they must develop new skills to identify and verify AI-generated content. However, journalists cannot combat disinformation alone; doing so requires a broad, collaborative approach.

Acknowledging the threat of disinformation and the roles of different actors in countering it, the Armenian government approved a concept and action plan to combat disinformation in December 2023.31 However, the concept does not sufficiently address the specific challenges posed by AI-powered disinformation.

The approach to AI outlined in the plan is too general, only briefly mentioning deepfakes and bots as means of spreading disinformation on social media. It lacks a specific focus on AI’s role in creating and disseminating fake content and on the unique challenges posed by the technology’s scale, speed, sophistication, and rapid advancement. The plan highlights the need to develop tools to detect disinformation but does not specify the type of tools or their intended users—whether government agencies, media outlets, fact-checkers, or the general public. Creating user-friendly tools that allow the public to verify online content would enhance public participation in combating disinformation, in line with the concept’s goals.

Furthermore, while the concept recognizes the importance of media literacy, it does not address the specific need for AI literacy. Integrating AI literacy into media literacy programs is important given AI’s role in modern disinformation: understanding how AI can be used to create and spread false information would prepare the public to identify and critically assess AI-generated content. This should be reflected in the media literacy concept and action plan that is planned to be developed as part of the fight against disinformation.

Recommendations 

Based on the analysis presented in this brief, the author offers recommendations to the following entities:

To Media Outlets and Organizations:

1. Develop guidelines regarding the use of AI

It is crucial for media outlets and organizations to establish comprehensive, customized guidelines for the ethical use of AI that address different use scenarios and levels of AI involvement in audiovisual and textual content creation and that incorporate best practices from existing ethical frameworks. These guidelines should ensure that media outlets disclose the use of AI by labeling AI-generated content and explaining AI’s specific role in its creation. They should also require human oversight of AI-generated content to catch problems inherent to AI systems, such as plagiarism, inaccuracies, and bias. Crucially, the guidelines must make clear that accountability and responsibility for AI-generated content always rest with a human, not the AI system.

It is important, however, to develop the guidelines through a collaborative process involving journalists themselves, rather than imposing them top-down, to ensure acceptance and successful adoption. Each journalist must understand the AI tools, their impact and risks, and their own role in upholding these standards.

2. Provide capacity building and AI literacy training to journalists

In today’s rapidly evolving media landscape, the use of AI is becoming more prevalent. It is therefore important that journalists learn how to technically use AI tools and understand their real capabilities, limitations, and associated risks. To support responsible AI usage, media organizations should implement training programs focused on AI literacy.

Responsible experimentation with AI should be encouraged rather than stigmatized or viewed as a form of lazy journalism. This technology has the potential to increase journalists’ efficiency and productivity if utilized properly, and it is crucial for journalists to familiarize themselves with AI to understand how to leverage it effectively while complying with ethical standards.

It is also essential that media outlets equip journalists with necessary tools and provide training in advanced verification techniques for detecting AI-generated disinformation.

To the Government:

1. Integrate AI literacy into the media literacy concept and foster collaboration between different actors to combat mis- and disinformation

As anyone with access to AI tools can create deceptive content online, raising AI literacy among the public is vital to countering disinformation. The government, academic institutions, the media industry, civil society, and tech companies should collaborate to develop comprehensive AI literacy programs, games, quizzes, and public awareness campaigns that address the nature of AI-generated deceptive content. These initiatives should inform the public about the benefits and risks of AI and teach people how to critically evaluate AI-generated content. Targeting diverse societal groups, starting with schools, will foster critical thinking and enable people to identify credible information and avoid misleading content, enhancing society’s resilience to misinformation.

2. Ensure media stakeholders’ participation in developing future AI regulation

As countries around the world begin to regulate AI to ensure safety, uphold ethics, and protect human rights, Armenia will inevitably establish similar regulatory measures. When developing AI regulations, it is essential to actively involve journalists, media advocates, and industry experts in the process. A collaborative, open, multi-stakeholder approach would help produce balanced and fair solutions to AI-related issues. Future AI regulation must protect press freedom and encourage ethical AI use in media without placing unnecessary restrictions on it.

Endnotes

  1. McKinsey Global Institute, “AI, automation, and the future of work: Ten things to solve for,” McKinsey, June 1, 2018, https://www.mckinsey.com/featured-insights/future-of-work/ai-automation-and-the-future-of-work-ten-things-to-solve-for.
  2. Kim Martineau, “What is generative AI?” IBM Research, April 20, 2023, https://research.ibm.com/blog/what-is-generative-AI; David Caswell, “AI and journalism: What’s next?” Reuters Institute for the Study of Journalism, September 19, 2023, https://reutersinstitute.politics.ox.ac.uk/news/ai-and-journalism-whats-next; Nic Newman, “Journalism, media, and technology trends and predictions 2024,” Reuters Institute for the Study of Journalism, January 9, 2024, https://reutersinstitute.politics.ox.ac.uk/journalism-media-and-technology-trends-and-predictions-2024#header–7.
  3. Nic Newman, “Journalism, media, and technology trends and predictions 2023,” Reuters Institute for the Study of Journalism, January 10, 2023, https://reutersinstitute.politics.ox.ac.uk/journalism-media-and-technology-trends-and-predictions-2023.
  4. Charlie Beckett and Mira Yaseen, Generating Change: A global survey of what news organisations are doing with AI, JournalismAI, June 26, 2023, https://www.journalismai.info/research/2023-generating-change.
  5. Code of Ethical Principles for Armenian Media and Journalists, https://ypc.am/wp-content/uploads/2024/06/Code-of-Ethics_arm_edites_May-18-2024.docx.pdf.
  6. Samvel Martirosyan, “AI Tools: Delivering Concrete Benefits To Journalists And Newsrooms,” Media.am, December 14, 2023, https://media.am/en/critique/2023/12/14/37108/.
  7. Aren Nazaryan and Nare Petrosyan, “Նոր տարի, նոր գովազդ․ հունվարի 1-ից Երևանում նոր չափորոշիչներին չհամապատասխանող գովազդը կապամոնտաժվի” [“New Year, New Ads: From January 1, Advertising in Yerevan That Does Not Meet the New Standards Will Be Dismantled”], Hetq, December 29, 2023, https://hetq.am/hy/article/163167; Nare Petrosyan, “Արհեստական բանականությունը ավելացրել է ջրի սպառումը տեխնոլոգիական ընկերություններում” [“Artificial Intelligence Has Increased Water Consumption at Technology Companies”], Hetq, September 13, 2023, https://hetq.am/hy/article/159991; News.am, “Ինչպե՞ս բարձրացնել սմարթֆոնի կատարողականությունը” [“How to Improve Smartphone Performance”], News.am, January 22, 2024, https://tech.news.am/arm/news/2768/inchpes-bardzracnel-smartfoni-kataroxakanutyuny.html.
  8. Hayk Hovhannisyan, “Արևի խավարման լուսանկարն իրական չէ” [“The Photo of the Solar Eclipse Is Not Real”], CivilNet, April 10, 2024, https://www.civilnet.am/news/771472/%D5%A1%D6%80%D6%87%D5%AB-%D5%AD%D5%A1%D5%BE%D5%A1%D6%80%D5%B4%D5%A1%D5%B6-%D5%AC%D5%B8%D6%82%D5%BD%D5%A1%D5%B6%D5%AF%D5%A1%D6%80%D5%B6-%D5%AB%D6%80%D5%A1%D5%AF%D5%A1%D5%B6-%D5%B9%D5%A7/.
  9. Anna Sahakyan et al., “Գիտությունն ընդդեմ վիրուսների․ Հակավիրուսային դեղամիջոցների հայտնաբերման լաբորատորիայից ներս” [“Science Versus Viruses: Inside an Antiviral Drug Discovery Laboratory”], Infocom, https://infocom.am/hy/article/110568; Infocom, “Երեկոյան հոթ դոգ #16 Սատիրան ու երգիծանքը, թաղումների լուսաբանումն ու պարզաբանման նախարարությունը” [“Evening Hot Dog #16: Satire and Sarcasm, the Coverage of Funerals, and the Ministry of Clarification”], YouTube video, April 4, 2024, https://www.youtube.com/watch?v=Jj6JyIFSkYo; Anna Sahakyan, “«Կրթությունը բարձրացնել գիտական կազմակերպությունների հաշվին»․ Էկոկենտրոնի տնօրենը՝ «Ակադեմիական քաղաքի» մասին” [“‘Raising Education at the Expense of Scientific Organizations’: The Ecocenter’s Director on the ‘Academic City’”], Infocom, November 7, 2023, https://infocom.am/hy/article/117015.
  10. Benjamin Toff and Felix M. Simon, “‘Or they could just not use it?’: The Paradox of AI Disclosure for Audience Trust in News,” SocArXiv Papers, December 2, 2023, https://doi.org/10.31235/osf.io/mdvak.
  11. Nic Newman et al., Digital News Report 2024, Reuters Institute for the Study of Journalism, https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2024-06/RISJ_DNR_2024_Digital_v10%20lr.pdf.
  12. Christina Veiga, “To build trust in the age of AI, journalists need new standards and disclosures,” Poynter, April 28, 2023, https://www.poynter.org/commentary/2023/jouranlism-artificial-intelligence-ethical-uses/; Nic Newman et al., Digital News Report 2024, Reuters Institute for the Study of Journalism, https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2024-06/RISJ_DNR_2024_Digital_v10%20lr.pdf.
  13. MIT Sloan Teaching & Learning Technologies, “When AI Gets It Wrong: Addressing AI Hallucinations and Bias,” https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/.
  14. Jon Christian, “CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism,” Futurism, January 23, 2023, https://futurism.com/cnet-ai-plagiarism.
  15. Benj Edwards, “AI-generated articles prompt Wikipedia to downgrade CNET’s reliability rating,” Ars Technica, February 29, 2024, https://arstechnica.com/information-technology/2024/02/wikipedia-downgrades-cnets-reliability-rating-after-ai-generated-articles/; Stephen Barrett, “Wikipedia:Reliable sources/Perennial sources,” Wikipedia,  https://en.wikipedia.org/wiki/Wikipedia:Reliable_sources/Perennial_sources.
  16. Nick Diakopoulos, “Finding Evidence of Memorized News Content in GPT Models,”  Generative AI in the Newsroom, Medium, September 5, 2023, https://generative-ai-newsroom.com/finding-evidence-of-memorized-news-content-in-gpt-models-d11a73576d2.
  17. Ryan Mac and Michael M. Grynbaum, “The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work,” The New York Times, December 27, 2023,  https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html.
  18. Melissa Heikkilä, “AI language models are rife with different political biases,” MIT Technology Review, August 7, 2023, https://www.technologyreview.com/2023/08/07/1077324/ai-language-models-are-rife-with-political-biases.
  19. UNESCO and International Research Centre on Artificial Intelligence, Challenging systematic prejudices: an investigation into bias against women and girls in large language models, UNESCO Digital Library, 2024, https://unesdoc.unesco.org/ark:/48223/pf0000388971.
  20. International Federation of Journalists, “#IFJBlog - Artificial intelligence: ‘a human author must be responsible for every item of news published,’” International Federation of Journalists, June 11, 2024, https://www.ifj.org/media-centre/blog/detail/category/tadam/article/ifjblog-artificial-intelligence-a-human-author-must-be-responsible-for-every-item-of-news-published.
  21. Committee on Publication Ethics, “Authorship and AI tools | COPE,” Committee on Publication Ethics, February 13, 2023, https://publicationethics.org/cope-position-statements/ai-author.
  22. Cassation Court Ruling EKD/2293/02/10, https://www.arlis.am/DocumentView.aspx?DocID=76836; Cassation Court Ruling LD/0749/02/10, https://www.arlis.am/DocumentView.aspx?DocID=76813.
  23. Reporters Without Borders, Paris Charter on AI and Journalism, Reporters Without Borders, January 8, 2024, https://rsf.org/en/paris-charter-ai-and-journalism.
  24. Partnership on AI, Responsible Practices for Synthetic Media, February 27, 2023, https://syntheticmedia.partnershiponai.org/.
  25. BBC, “Guidance: The use of Artificial Intelligence,” BBC, March 1, 2024,  https://www.bbc.co.uk/editorialguidelines/guidance/use-of-artificial-intelligence; Katherine Viner and Anna Bateson, “The Guardian’s approach to generative AI,” The Guardian, June 16, 2023, https://www.theguardian.com/help/insideguardian/2023/jun/16/the-guardians-approach-to-generative-ai; Amanda Barrett, “AP Definitive Source | Standards around generative AI,” AP Definitive Source, August 16, 2023, https://blog.ap.org/standards-around-generative-ai.
  26. Partnership on AI, “How the BBC used face swapping to anonymize interviewees,” Partnership on AI, https://partnershiponai.org/bbc-framework-case-study/.
  27. Hannes Cools and Nick Diakopoulos, “Writing guidelines for the role of AI in your newsroom? Here are some, er, guidelines for that,” NiemanLab, July 11, 2023,  https://www.niemanlab.org/2023/07/writing-guidelines-for-the-role-of-ai-in-your-newsroom-here-are-some-er-guidelines-for-that/.
  28. Heather Dannyelle Thompson, “Generative AI is the ultimate disinformation amplifier,” DW Akademie, March 21, 2024, https://akademie.dw.com/en/generative-ai-is-the-ultimate-disinformation-amplifier/a-68593890.
  29. Noémi Bontridder and Yves Poullet, “The role of artificial intelligence in disinformation,” Data & Policy, November 21, 2021, https://www.cambridge.org/core/journals/data-and-policy/article/role-of-artificial-intelligence-in-disinformation/7C4BF6CA35184F149143DE968FC4C3B6.
  30. Shangying Hua, Shuangci Jin, and Shengyi Jiang, “The Limitations and Ethical Considerations of ChatGPT,” Data Intelligence 6 (1) (2024): 201–239, https://doi.org/10.1162/dint_a_00243.
  31. The Concept of the Struggle against Disinformation 2024-2026, https://foi.am/wp-content/uploads/2023/12/THE-CONCEPT-OF-THE-STRUGGLE-AGAINST-DISINFORMATION-2024-2026.pdf; The Action Plan of the Concept of the Struggle against Disinformation 2024-2026, https://foi.am/wp-content/uploads/2023/12/ACTION-PLAN-OF-THE-CONCEPT-OF-THE-STRUGGLE-AGAINST-DISINFORMATION-2024-2026.pdf.

Interviewees

  1. Sasun Khachatryan – Editor of Tech Department, News.am
  2. Ani Hovhannisyan – Editor of the Investigative, Research, and Data-driven Journalism Team, Infocom
  3. Nare Petrosyan – Data Journalist and AI Content Producer, Hetq
  4. Garik Harutyunyan – Data Journalist, Ampop; Fact-checking Journalist, Factor; Journalism Lecturer, Yerevan State University
  5. Gayane Yenokyan – Journalist, Mediamax
  6. Ani Grigoryan – Editor of the Fact-Checking Team, CivilNet
  7. Christian Ginosyan – Freelance Journalist
  8. Gegham Vardanyan – Editor, Media.am
  9. Armen Sargsyan – Media and Education Expert, Media Initiatives Center
  10. Artur Papyan – President, Yerevan Press Club
  11. Samvel Martirosyan – Media Expert, Co-founder, Cyberhub
  12. Mane Madoyan – Communications specialist, Project Coordinator, Freedom of Information Center
  13. Gevorg Hayrapetyan – Media Lawyer, Data Protection Expert; Former Head, Personal Data Protection Agency
  14. Narek Babayan – Member of Parliament; Member of Standing Committee on Science, Education, Culture, Diaspora, Youth, and Sport responsible for Media Laws and Regulations
  15. Gevorg Mantashyan – First Deputy Minister, Ministry of High-Tech Industry

___________________________________ 

Anna Barseghyan is a journalist specializing in the impact of artificial intelligence on the media industry. She holds a Master’s degree in Leadership in Creative Industries from Darmstadt University of Applied Sciences, where her thesis explored AI’s influence on the future of media production, and a Bachelor’s degree in Journalism from Yerevan State University. Anna has worked as a multimedia journalist for various media outlets and at the Media Initiatives Center, where she also managed the “Tsantsar” digital game on online security and human rights. She co-authored the “Media Literacy” handbook and an online course on media transformation and the influencer phenomenon.
