Report March 2025
Your organisation description
Empowering Users
Commitment 17
In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.
We signed up to the following measures of this commitment
Measure 17.1, Measure 17.2, Measure 17.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 17.1
Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.
QRE 17.1.1
Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.
SLI 17.1.1
Relevant Signatories will report, at the Member State level, on metrics pertinent to assessing the effects of the tools described in the qualitative reporting element for Measure 17.1, which will include: the total count of impressions of the tool; and information on the interactions/engagement with the tool.
| Country | Total count of the tool’s impressions | Interactions/ engagement with the tool | Other relevant metrics |
|---|---|---|---|
| Austria | 0 | 0 | 0 |
| Belgium | 0 | 0 | 0 |
| Bulgaria | 0 | 0 | 0 |
| Croatia | 0 | 0 | 0 |
| Cyprus | 0 | 0 | 0 |
| Czech Republic | 0 | 0 | 0 |
| Denmark | 0 | 0 | 0 |
| Estonia | 0 | 0 | 0 |
| Finland | 0 | 0 | 0 |
| France | 0 | 0 | 0 |
| Germany | 0 | 0 | 0 |
| Greece | 0 | 0 | 0 |
| Hungary | 0 | 0 | 0 |
| Ireland | 0 | 0 | 0 |
| Italy | 0 | 0 | 0 |
| Latvia | 0 | 0 | 0 |
| Lithuania | 0 | 0 | 0 |
| Luxembourg | 0 | 0 | 0 |
| Malta | 0 | 0 | 0 |
| Netherlands | 0 | 0 | 0 |
| Poland | 0 | 0 | 0 |
| Portugal | 0 | 0 | 0 |
| Romania | 0 | 0 | 0 |
| Slovakia | 0 | 0 | 0 |
| Slovenia | 0 | 0 | 0 |
| Spain | 0 | 0 | 0 |
| Sweden | 0 | 0 | 0 |
| Iceland | 0 | 0 | 0 |
| Liechtenstein | 0 | 0 | 0 |
| Norway | 0 | 0 | 0 |
Measure 17.2
Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.
QRE 17.2.1
Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.
Measure 17.3
For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.
QRE 17.3.1
Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.
Empowering Researchers
Commitment 28
COOPERATION WITH RESEARCHERS Relevant Signatories commit to support good faith research into Disinformation that involves their services.
We signed up to the following measures of this commitment
Measure 28.1, Measure 28.2, Measure 28.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 28.1
Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set-up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.
QRE 28.1.1
Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.
In 2024, AI Forensics led a collaborative effort with civil society organizations, scholars, and media to analyze algorithm-driven content dissemination across YouTube, TikTok, and Microsoft Copilot during the EU elections. This initiative produced critical reports exposing the role of recommendation systems in shaping the electoral landscape.
Our research on AI-generated imagery during the EU and French elections uncovered 51 instances of unlabeled AI images, often amplifying anti-EU and anti-immigrant narratives. Additionally, in partnership with SNV, we assessed misleading TikTok search suggestions that distorted election-related information.
In collaboration with Nieuwsuur, we investigated AI chatbot responses to political campaign strategy prompts in the Netherlands. The follow-up report analyzed the effectiveness of content moderation across different chatbots, evaluating how electoral safeguards varied based on factors such as platform, language, electoral context, and interface.
Measure 28.2
Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.
QRE 28.2.1
Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.
Measure 28.3
Relevant Signatories will not prohibit or discourage genuinely and demonstratively public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.
QRE 28.3.1
Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged to run such research.
Commitment 29
Relevant Signatories commit to conduct research based on transparent methodology and ethical standards, as well as to share datasets, research findings and methodologies with relevant audiences.
We signed up to the following measures of this commitment
Measure 29.1, Measure 29.2
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 29.1
Relevant Signatories will use transparent methodologies and ethical standards to conduct research activities that track and analyse influence operations, and the spread of Disinformation. They will share datasets, research findings and methodologies with members of the Task-force including EDMO, ERGA, and other Signatories and ultimately with the broader public.
QRE 29.1.1
Relevant Signatories will provide reports on their research, including topics, methodology, ethical standards, types of data accessed, data governance, and outcomes.
Monitoring of the Code
Commitment 38
The Signatories commit to dedicate adequate financial and human resources and put in place appropriate internal processes to ensure the implementation of their commitments under the Code.
We signed up to the following measures of this commitment
Measure 38.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 38.1
Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.
QRE 38.1.1
Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.
Commitment 39
Signatories commit to provide to the European Commission, within 1 month after the end of the implementation period (6 months after this Code’s signature) the baseline reports as set out in the Preamble.
We signed up to the following measures of this commitment
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Crisis and Elections Response
Elections 2024
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
1. French Elections (Artificial Elections: Exposing the Use of Generative AI Imagery in the Political Campaigns of the 2024 French Elections): AI Forensics investigated how AI-generated images were used in French political campaigns during the 2024 European Parliament and legislative elections. In May and June of 2024, we collected data from a variety of sources to get a comprehensive look at the use of AI imagery. We explored official party websites and their social media accounts on platforms such as Facebook, Instagram, X (formerly Twitter), TikTok, YouTube, and LinkedIn.
Main threats:
The lack of transparency is alarming and highlights several critical concerns. Firstly, political parties and social media platforms are failing to adequately disclose the use of AI-generated imagery, which undermines public trust. Additionally, there is a pressing need for stricter content labelling to ensure the integrity of political campaigns and prevent the spread of misleading information. Finally, our findings underscore the necessity of reinforcing EU-wide policies on the use of generative AI in elections to safeguard democratic processes and maintain electoral integrity.
2. TikTok Search (Analyzing TikTok’s “Others searched for” Feature): This investigation examined TikTok’s impact on public discourse among young users in Germany, focusing on the influence of search suggestions in the context of the 2024 elections. Conducted in collaboration between AI Forensics and interface’s TikTok Audit Team, the study aimed to determine whether TikTok’s algorithm promotes misleading or sensational content. The “Others searched for” feature suggests search terms to users, which could lead them to questionable information or politically biased content, posing significant risks to public discourse.
Main threats: The study highlights that TikTok's "Others Searched For" feature can distort reality for young users, especially during critical electoral periods. This distortion can negatively affect public political discourse, making it imperative for social media platforms to implement more robust oversight of and transparency around their algorithms, including less prominent algorithmic features such as search suggestions. Our findings emphasize the need for improved measures to ensure that search suggestions do not perpetuate misinformation or political bias, thus contributing to a more informed and balanced media environment.
3. Chatbot (s)elected moderation: Measuring the Moderation of Election-Related Content Across Chatbots, Languages and Electoral Contexts
This report evaluates and compares the effectiveness of the electoral safeguards deployed by AI chatbots across different scenarios. In particular, we investigate the consistency with which electoral moderation is triggered, depending on (i) the chatbot, (ii) the language of the prompt, (iii) the electoral context, and (iv) the interface.
Main threats: The effectiveness of the moderation safeguards deployed by Copilot, ChatGPT, and Gemini varies widely. Gemini's moderation was the most consistent, with a moderation rate of 98%. For the same sample, Copilot's rate was around 50%, while the OpenAI web version of ChatGPT applied no additional election-related moderation. Moderation is strictest in English and highly inconsistent across languages: when prompting Copilot about the EU elections, the moderation rate was highest for English (90%), followed by Polish (80%), Italian (74%), and French (72%), and it falls below 30% for Romanian, Swedish, Greek, and Dutch, and even for German (28%), despite German being the EU’s second most spoken language. For a given language, asking analogous prompts about the EU and US elections can yield substantially different moderation rates, confirming the inconsistency of the process. Moderation is also inconsistent between the web and API versions: the electoral safeguards on the web version of Gemini have not been implemented on the API version of the same tool. A brief sketch of how such per-language moderation rates can be tabulated follows this list.
4. No Embargo in Sight: Meta lets pro-Russian propaganda flood the EU: This investigation sheds light on a significant loophole in the moderation of political advertisements on Meta platforms, highlighting systemic failures just as the European Union heads into crucial parliamentary elections. Our findings uncover a sprawling pro-Russian influence operation that exploits these moderation failures, risking the integrity of democratic processes in Europe.
Main threats: Widespread non-compliance: less than 5% of undeclared political ads are caught by Meta's moderation system. Ineffective moderation: 60% of ads moderated by Meta do not adhere to their own guidelines concerning political advertising. Significant reach: a specific pro-Russian propaganda campaign reached over 38 million users in France and Germany, with most ads not being identified as political in a timely manner. Rapid adaptation: the influence operation has adeptly adjusted its messaging to major geopolitical events to further its narratives.
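To make the consistency measurements in item 3 concrete, the minimal sketch below shows how per-language (or per-context, or per-interface) moderation rates can be tabulated once each prompt/response pair has been collected and manually labeled as moderated or not. The names and sample records are illustrative assumptions, not the pipeline actually used in the report; collection and annotation are assumed to happen upstream.

```python
from collections import defaultdict

# Each record is one prompt/response pair labeled by an annotator:
# (language, electoral_context, interface, was_moderated).
# Illustrative data only; real figures come from the collected corpus.
records = [
    ("English", "EU", "web", True),
    ("English", "EU", "web", True),
    ("German", "EU", "web", False),
    ("German", "EU", "api", False),
    ("Polish", "EU", "web", True),
    ("Polish", "US", "web", False),
]

def moderation_rate_by(records, key_index):
    """Share of prompts that triggered electoral moderation, grouped
    by one dimension of the study design (language, context, or interface)."""
    totals = defaultdict(int)
    moderated = defaultdict(int)
    for record in records:
        key = record[key_index]
        totals[key] += 1
        if record[3]:  # was_moderated flag
            moderated[key] += 1
    return {key: moderated[key] / totals[key] for key in totals}

print(moderation_rate_by(records, 0))  # per language
print(moderation_rate_by(records, 1))  # per electoral context (EU vs US)
print(moderation_rate_by(records, 2))  # per interface (web vs API)
```

Grouping by each study dimension separately makes gaps such as the web/API discrepancy visible directly in the resulting rates.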
Mitigations in place
Policies and Terms and Conditions
1. Transparency Requirements: There is a critical need for greater transparency from political parties and social media platforms regarding the use of AI-generated imagery. Current policies must enforce clear disclosure when synthetic content is used in campaigns, ensuring the public is fully informed about AI-altered visuals. This should include a requirement for political actors to label AI-generated materials and for platforms to flag such content when shared on social media.
2. Stricter Content Labelling: To combat the spread of misleading or deceptive AI-generated content, platforms must enhance their content moderation policies. Automated tools and human oversight should work in tandem to identify and remove manipulated or misleading images that distort political discourse. Policies should also include stringent checks to ensure that AI-generated content used in political contexts complies with electoral laws and ethical standards.
3. Translating Codes of Conduct into regulatory obligations: The findings underline the necessity of strengthening EU-wide policies on the use of generative AI in elections. Current frameworks, like the Code of Conduct for the 2024 European Parliamentary Elections, should be reinforced with mandatory regulations, penalties for violations, and robust enforcement mechanisms. This will safeguard democratic processes from the undue influence of misleading, AI-generated content and maintain electoral integrity across member states.
4. Amplification of Misinformation: Generative AI has been used to produce content that spreads misinformation, emotionally manipulates voters, and supports extremist ideologies. The ease and low cost of creating such content exacerbate the risk of misleading narratives dominating electoral campaigns.
Our report on TikTok’s “Others Searched For” feature suggests several solutions to address the threats:
1. Stronger Oversight to prevent algorithmic harms: Social media platforms, especially TikTok, should strengthen their content moderation systems to prevent misleading or biased search suggestions. This includes actively identifying and removing dog whistles, misinformation, and content designed to manipulate users' political views.
2. Transparency in Algorithms: Platforms must be more transparent about how their algorithms generate search suggestions. Clear policies are needed to explain how suggestions are ranked, especially during election periods, to ensure that users aren't steered toward specific political narratives or parties.
3. Reducing Political Bias: TikTok should implement safeguards to ensure that search suggestions do not disproportionately promote one political party or viewpoint. By doing so, they can help foster a more balanced media environment that avoids distorting electoral discourse.
Our report on “Chatbot (s)elected moderation” suggests the following solutions to address the threats posed by inconsistent chatbot moderation and misinformation in sensitive contexts such as elections:
1. Consistency in Moderation: Platforms must ensure that chatbot moderation mechanisms are applied uniformly across all languages and geographies, preventing gaps in protection for non-English users and elections in various regions.
2. Transparency of Moderation Systems: Platforms should publish clear documentation explaining the design, implementation, and functioning of their moderation systems, helping users and researchers understand how content is managed and ensuring safeguards are in place.
3. Accountability through External Scrutiny: Introducing research APIs that allow third parties to test and scrutinize chatbot moderation layers is essential for improving accountability. This would enable external experts to assess the effectiveness of the moderation mechanisms and identify potential biases or inconsistencies.
4. Improved Moderation for Sensitive Prompts: Platforms should develop robust safeguards for sensitive topics, such as elections, ensuring that chatbots do not spread harmful misinformation or propaganda. Enhanced moderation must be implemented systematically across all contexts.
Political Advertising
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
1. Launch infringement proceedings against Meta under the Digital Services Act (DSA) for systemic risks, emphasizing Meta's failure to address Coordinated Inauthentic Behavior that threatens election integrity.
2. Enforce stricter application of DSA Article 39 to require platforms to provide comprehensive metadata in their ad registries, enabling external scrutiny of political ads. Platforms like X should improve transparency in line with Meta's standards.
3. Immediate action by Meta to neutralize the ongoing "Doppelgänger" influence operation and preemptively moderate any new similar activity.
4. Automate the labelling of political ads with systems that flag political content at scale; a minimal illustration of such a screening step follows this list.
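As a rough, hypothetical illustration of the automated screening that item 4 calls for, the sketch below flags ads whose text matches simple political keywords but that carry no political declaration in the registry metadata, then computes the undeclared share. The record fields (`text`, `declared_political`) and the keyword list are invented for illustration and do not reflect Meta's actual Ad Library schema; a production system would rely on trained multilingual classifiers rather than keyword matching.

```python
# Hypothetical ad-registry records; field names are illustrative only.
ads = [
    {"id": "a1", "text": "Vote for peace, stop the sanctions", "declared_political": False},
    {"id": "a2", "text": "Summer shoes, 20% off", "declared_political": False},
    {"id": "a3", "text": "Support our candidate in the EU elections", "declared_political": True},
]

# Toy keyword screen standing in for a real multilingual classifier.
POLITICAL_KEYWORDS = {"vote", "election", "elections", "candidate", "sanctions", "party"}

def looks_political(text: str) -> bool:
    """Crude heuristic: does the ad text contain any political keyword?"""
    words = {w.strip(",.!?").lower() for w in text.split()}
    return bool(words & POLITICAL_KEYWORDS)

political = [ad for ad in ads if looks_political(ad["text"])]
undeclared = [ad for ad in political if not ad["declared_political"]]
share = len(undeclared) / len(political)
print(f"{len(undeclared)} undeclared political ads; share {share:.0%}")
```

Such automated screening supports external scrutiny under DSA Article 39 only if the ad registry exposes the underlying metadata in the first place.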
Empowering Users
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
User Education: Platforms should provide educational tools to help users critically assess the information they encounter, promoting media literacy and a deeper understanding of potential biases within search suggestions.
Empowering the Fact-Checking Community
Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.
Fact-Checking and Flagging of Sensitive Content: Implementing robust fact-checking mechanisms that flag potentially misleading or biased search suggestions would help young users navigate political content more responsibly.