Meta

Report March 2025

Submitted

Your organisation description

Empowering Users

Commitment 17

In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.

We signed up to the following measures of this commitment

Measure 17.1 Measure 17.2 Measure 17.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

As mentioned in our baseline report, the key part of our approach to combating misinformation is providing tools and products that contribute to a more resilient digital society, where people are able to critically evaluate information, make informed decisions about the content they see, and self-correct. Below are some examples of that work relevant to the European Union.

During the reporting period, Meta ran a range of media literacy campaigns focusing on several areas, including Youth, the EU Elections, Gen AI, and EU national elections. These campaigns are outlined in more detail in QRE 17.2.1, with reach metrics in SLI 17.2.1.

In the second half of 2024, Meta undertook several initiatives aimed at promoting digital literacy and combating misinformation in the EU. 

As part of these efforts, in November 2024, Meta launched a global Fraud and Scams campaign covering several EU markets, including France, Germany, Poland, Romania, Belgium, and Spain. The campaign featured ads across Facebook, Instagram, and WhatsApp, emphasizing our commitment to user safety. It educated users on how to identify, avoid, and report scams while highlighting our ongoing efforts to protect them on our platforms.

In addition to these campaigns, we continued our collaboration with the European Disability Forum (EDF) by launching a media literacy initiative focused on accessible elections. This program aimed to promote inclusive and accessible electoral processes for all citizens, including those with disabilities.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

In January 2025, Meta launched a Youth campaign running in France, Ireland, Spain, Italy and the Netherlands.  

Commitment 18

Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.

We signed up to the following measures of this commitment

Measure 18.1 Measure 18.2 Measure 18.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

As mentioned in our baseline report, we continue to enforce our policies to combat the spread of misinformation.

In December 2024, we globally deprecated the feature on Instagram that displayed a pop-up when an account attempted to tag or mention another account that had been repeatedly fact-checked.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. 

Commitment 18 covers the current practices for Instagram in the EU. In keeping with Meta’s public announcements on 7 January 2025, we will continue to assess the applicability of this chapter to Instagram and we will keep under review whether it is appropriate to make alterations in light of changes in our practices, such as the deployment of Community Notes.

Commitment 19

Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.

We signed up to the following measures of this commitment

Measure 19.1 Measure 19.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

There have been no significant updates since the last submitted report.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our transparency and recommender tools.

Commitment 21

Relevant Signatories commit to strengthen their efforts to better equip users to identify Disinformation. In particular, in order to enable users to navigate services in an informed way, Relevant Signatories commit to facilitate, across all Member States languages in which their services are provided, user access to tools for assessing the factual accuracy of sources through fact-checks from fact-checking organisations that have flagged potential Disinformation, as well as warning labels from other authoritative sources.

We signed up to the following measures of this commitment

Measure 21.1 Measure 21.2 Measure 21.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our previous report, we updated our fact-checking program guidelines to clarify that our existing policies allow fact-checkers to rate digitally created or edited content, including content created or edited with artificial intelligence (AI), when it risks misleading people about something consequential that has no basis in fact. We also employed measures to improve fact-checkers' ability to apply their ratings to fake or manipulated audio content.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. 

Commitment 21 covers the current practices for Instagram in the EU. In keeping with Meta’s public announcements on 7 January 2025, we will continue to assess the applicability of this chapter to Facebook and Instagram and we will keep under review whether it is appropriate to make alterations in light of changes in our practices, such as the deployment of Community Notes.

Commitment 23

Relevant Signatories commit to provide users with the functionality to flag harmful false and/or misleading information that violates Signatories' policies or terms of service.

We signed up to the following measures of this commitment

Measure 23.1 Measure 23.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our baseline report, we maintain a specific report category for users to flag to us what they believe is false information (in addition to content that they believe violates any of our other Community Standards). 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our user reporting tools or processes. 

Commitment 24

Relevant Signatories commit to inform users whose content or accounts have been subject to enforcement actions (content/accounts labelled, demoted or otherwise enforced on) taken on the basis of violation of policies relevant to this section (as outlined in Measure 18.2), and provide them with the possibility to appeal against the enforcement action at issue and to handle complaints in a timely, diligent, transparent, and objective manner and to reverse the action without undue delay where the complaint is deemed to be founded.

We signed up to the following measures of this commitment

Measure 24.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our baseline report, we’re committed to fighting the spread of misinformation on our platforms, but we also believe it’s critical to enable expression, debate and voice. We let users know when we remove a piece of content for breaching our Community Standards or when a fact-checker rated their content. In June 2023, we also took steps to improve our penalty system to make it fairer and more effective.

Relevant updates to user notice and appeal processes were also made in 2023, in line with DSA requirements.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our processes. 

Commitment 25

In order to help users of private messaging services to identify possible disinformation disseminated through such services, Relevant Signatories that provide messaging applications commit to continue to build and implement features or initiatives that empower users to think critically about information they receive and help them to determine whether it is accurate, without any weakening of encryption and with due regard to the protection of privacy.

We signed up to the following measures of this commitment

Measure 25.1 Measure 25.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

N/A

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

N/A

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Empowering Researchers

Commitment 26

Relevant Signatories commit to provide access, wherever safe and practicable, to continuous, real-time or near real-time, searchable stable access to non-personal data and anonymised, aggregated, or manifestly-made public data for research purposes on Disinformation through automated means such as APIs or other open and accessible technical solutions allowing the analysis of said data.

We signed up to the following measures of this commitment

Measure 26.1 Measure 26.2 Measure 26.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our previous reports, Meta rolled out the Content Library and API tools to provide access to near real-time public content on Instagram. Details about the content, such as the number of reactions, shares, comments, and, for the first time, post view counts, are also available. Researchers can search, explore and filter that content on a graphical User Interface (UI) or through a programmatic API.

Together, these tools provide comprehensive access to publicly accessible content across Facebook and Instagram.

Individuals, including journalists, affiliated with qualified institutions pursuing scientific or public interest research topics can apply for access to these tools through partners with deep expertise in secure data sharing for research, starting with the University of Michigan's Inter-university Consortium for Political and Social Research (ICPSR). This is a first-of-its-kind partnership that will enable researchers to analyse data from the API in ICPSR's Social Media Archives (SOMAR) Virtual Data Enclave.

Meta continues to publish reports with relevant data regarding content on Instagram via its Transparency Centre, where we shared our quarterly reports throughout 2024.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We continue to add new features and functionality to the Meta Content Library, including improvements to the application processes for access to the research tools. In addition, we regularly seek feedback from the research community on critical updates.

Measure 26.1

Relevant Signatories will provide public access to non-personal data and anonymised, aggregated or manifestly-made public data pertinent to undertaking research on Disinformation on their services, such as engagement and impressions (views) of content hosted by their services, with reasonable safeguards to address risks of abuse (e.g. API policies prohibiting malicious or commercial uses).

Instagram

QRE 26.1.1

Relevant Signatories will describe the tools and processes in place to provide public access to non-personal data and anonymised, aggregated and manifestly-made public data pertinent to undertaking research on Disinformation, as well as the safeguards in place to address risks of abuse.

As mentioned in our baseline report, we publish a wide range of regular reports on our Transparency Centre, including reports that give our community visibility into how we enforce our policies or respond to some requests: https://transparency.fb.com/data/. We also publish extensive reports on our findings about coordinated behaviour in our newsroom, and we have a dedicated public website hosting our Ad Library tools.

QRE 26.1.2

Relevant Signatories will publish information related to data points available via Measure 25.1, as well as details regarding the technical protocols to be used to access these data points, in the relevant help centre. This information should also be reachable from the Transparency Centre. At minimum, this information will include definitions of the data points available, technical and methodological information about how they were created, and information about the representativeness of the data.

Ad Library Tools: The dedicated Ad Library website allows users to search all of the ads currently running across Meta technologies. For every ad currently running on Meta technologies, it shows the ad content and basic information, such as when the ad started running and which advertiser is running it. For ads that have run anywhere in the European Union in the past year, it includes additional transparency specific to the EU. For ads about social issues, elections or politics that have run in the past seven years, it shows the ad content, the same basic information, and additional transparency about spend, reach and funding entities.
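The Ad Library data described above can also be queried programmatically through the Graph API's ads_archive endpoint. The sketch below illustrates a typical query for social issue, electoral and political ads; the parameter and field names reflect our reading of the public developer documentation and may differ between API versions, so they should be verified against the current documentation before use.

    # Hedged sketch: querying the Ad Library via the Graph API "ads_archive"
    # edge for social issue, electoral and political ads in EU markets.
    # Parameter and field names may vary by API version; verify against the
    # current Meta developer documentation.
    import requests

    ACCESS_TOKEN = "YOUR_TOKEN"  # placeholder; API access requires identity confirmation
    URL = "https://graph.facebook.com/v19.0/ads_archive"

    params = {
        "access_token": ACCESS_TOKEN,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",   # the 7-year archive described above
        "ad_reached_countries": '["FR","DE"]',  # EU markets of interest
        "search_terms": "election",
        "fields": "page_name,funding_entity,spend,impressions,ad_delivery_start_time",
        "limit": 25,
    }

    resp = requests.get(URL, params=params, timeout=30)
    resp.raise_for_status()
    for ad in resp.json().get("data", []):
        print(ad.get("page_name"), ad.get("funding_entity"), ad.get("spend"))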

As mentioned in our baseline report, we publish numerous reports on our Transparency Centre:
  • Community Standards Enforcement Report: We publish this report publicly in our Transparency Centre on a quarterly basis to more effectively track our progress and demonstrate our continued commitment to making our services safe and inclusive. The report shares metrics on how we are doing at preventing and taking action on content that goes against our Community Standards (across 12 policies on Instagram).
  • Quarterly Adversarial Threat Report: We publicly share our findings about coordinated inauthentic behaviour (CIB) that we detect and remove from our platforms. As part of our quarterly adversarial threat reports, we publish information about the networks we take down to make it easier for people to see the progress we're making in one place.

Measure 26.2

Relevant Signatories will provide real-time or near real-time, machine-readable access to non-personal data and anonymised, aggregated or manifestly-made public data on their service for research purposes, such as accounts belonging to public figures such as elected officials, news outlets and government accounts, subject to an application process which is not overly cumbersome.

Instagram

QRE 26.2.1

Relevant Signatories will describe the tools and processes in place to provide real-time or near real-time access to non-personal data and anonymised, aggregated and manifestly-made public data for research purposes as described in Measure 26.2.

Meta Content Library includes public posts and data on Instagram. Data from the Library can be searched, explored, and filtered on a graphical UI or through a programmatic API. 

Meta Content Library is a web-based, controlled-access environment where researchers can perform deeper analysis of public content using the Content Library API in a secure clean-room environment:
  • Searching and filtering: searching public posts across Facebook and Instagram is easy with comprehensive sorting and filtering options. Post results can be filtered by language, view count, media type, content producer and more.
  • Multimedia: Photos, videos and reels are available for dynamic search, exploration and analysis.
  • Producer lists: customizable collections of content producers can be used to refine search results. Researchers can apply custom producer lists to a search query to surface public content from specific content owners on Facebook or Instagram.

Content Library API allows programmatic queries of the data and is designed for computational researchers. Data pulled from the API can be analysed in a secure platform: 
  • Endpoints and data fields: With 8 dedicated endpoints, the Content Library API can search across over 100 data fields from Instagram posts, including posts from a subset of personal Instagram accounts.
  • Search indexing and results: Powerful search capabilities can return up to 100,000 results per query.
  • Asynchronous search: allows queries to run in the background while a researcher works on other tasks. Query progress is monitored and tracked by the API; a sketch of this submit-and-poll pattern appears below.

For more details, see here.
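To make the asynchronous search workflow concrete, the following is a minimal sketch of a submit-and-poll client. The endpoint paths, parameter names and response fields used here (async_search, job_id, state) are illustrative assumptions for this sketch only, not the documented Content Library API surface; the actual interface is described in the developer documentation referenced above.

    # Illustrative sketch of the submit-and-poll pattern behind asynchronous
    # search. Endpoint paths, parameters and response fields are hypothetical.
    import time
    import requests

    BASE_URL = "https://example.invalid/content_library"  # placeholder host
    TOKEN = "ACCESS_TOKEN"  # placeholder credential
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    def submit_async_search(query: str, media_type: str = "POST") -> str:
        """Submit a background search and return its (hypothetical) job id."""
        resp = requests.post(
            f"{BASE_URL}/async_search",
            headers=HEADERS,
            json={"q": query, "media_type": media_type},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["job_id"]

    def poll_results(job_id: str, interval: float = 5.0) -> list:
        """Poll the job until it completes, then return its results."""
        while True:
            status = requests.get(
                f"{BASE_URL}/async_search/{job_id}", headers=HEADERS, timeout=30
            ).json()
            if status.get("state") == "COMPLETE":
                return status.get("results", [])
            time.sleep(interval)  # the query keeps running server-side meanwhile

    job = submit_async_search("climate", media_type="REEL")
    print(f"Retrieved {len(poll_results(job))} public posts")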

QRE 26.2.2

Relevant Signatories will describe the scope of manifestly-made public data as applicable to their services.

Meta Content Library and API provide near real-time public content from Facebook and Instagram. Details about the content, such as the post owner and the number of reactions and shares, are also available. The scope includes:
  • Posts shared by, and information about, Instagram business and creator accounts, as well as a subset of personal accounts.
  • Most countries and territories; countries where Meta is still evaluating legal and compliance requirements are excluded.
  • The number of times a post or reel was displayed on screen.

For more details, see here.

QRE 26.2.3

Relevant Signatories will describe the application process in place in order to gain access to the non-personal data and anonymised, aggregated and manifestly-made public data described in Measure 26.2.

Individuals, including journalists, affiliated with qualified institutions pursuing scientific or public interest research topics are able to apply for access to these tools through a partner with deep expertise in secure data sharing for research, the University of Michigan's Inter-university Consortium for Political and Social Research (ICPSR).

For more details on the application process, see here.

Measure 26.3

Relevant Signatories will implement procedures for reporting the malfunctioning of access systems and for restoring access and repairing faulty functionalities in a reasonable time.

Instagram

QRE 26.3.1

Relevant Signatories will describe the reporting procedures in place to comply with Measure 26.3 and provide information about their malfunction response procedure, as well as about malfunctions that would have prevented the use of the systems described above during the reporting period and how long it took to remediate them.

We provide comprehensive developer documentation and in-depth technical guides that walk through how to use the different tools directly on our website, which also includes a dedicated help centre.

Commitment 27

Relevant Signatories commit to provide vetted researchers with access to data necessary to undertake research on Disinformation by developing, funding, and cooperating with an independent, third-party body that can vet researchers and research proposals.

We signed up to the following measures of this commitment

Measure 27.1 Measure 27.2 Measure 27.3 Measure 27.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our baseline report, we are actively engaged in the EDMO working group on Platform-to-Researcher data sharing to develop standardised processes for sharing data with researchers.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We will continue to participate in the EDMO working group to further support the development of an independent intermediary body to enable GDPR-compliant data sharing. This will include feeding learnings from the EDMO pilot described above into the EDMO working group.

We continue to provide new and existing researchers with access to the Meta Content Library, while also evaluating and working towards improvements to access methods and application processes.

Measure 27.1

Relevant Signatories commit to work with other relevant organisations (European Commission, Civil Society, DPAs) to develop within a reasonable timeline the independent third-party body referred to in Commitment 27, taking into account, where appropriate, ongoing efforts such as the EDMO proposal for a Code of Conduct on Access to Platform Data.

Instagram

QRE 27.1.1

Relevant Signatories will describe their engagement with the process outlined in Measure 27.1 with a detailed timeline of the process, the practical outcome and any impacts of this process when it comes to their partnerships, programs, or other forms of engagement with researchers.

As mentioned in our baseline report, we’ve been actively engaged in the EDMO working group on Platform to Researcher data sharing to develop standardised processes for sharing data with researchers since 2019, and in 2020, we shared extensive comments in response to EDMO call for comment on the GDPR and sharing data for independent social scientific research.

We are participating in the EDMO working group for the Creation of an Independent Intermediary Body to Support Research on Digital Platforms, and we are continuing that involvement in 2025.

Measure 27.2

Relevant Signatories commit to co-fund from 2022 onwards the development of the independent third-party body referred to in Commitment 27.

Instagram

QRE 27.2.1

Relevant Signatories will disclose their funding for the development of the independent third-party body referred to in Commitment 27.

As mentioned in our baseline report, while the EDMO process was initially funded by the European Commission, we have actively supported it through skills-based sponsorship and participation in the EDMO pilot. Separately, we have funded a third party (CASD) to act as a data-sharing intermediary as part of the pilot.

Measure 27.3

Relevant Signatories commit to cooperate with the independent third-party body referred to in Commitment 27 once it is set up, in accordance with applicable laws, to enable sharing of personal data necessary to undertake research on Disinformation with vetted researchers in accordance with protocols to be defined by the independent third-party body.

Instagram

QRE 27.3.1

Relevant Signatories will describe how they cooperate with the independent third-party body to enable the sharing of data for purposes of research as outlined in Measure 27.3, once the independent third-party body is set up.

N/A at this stage

SLI 27.3.1

Relevant Signatories will disclose how many of the research projects vetted by the independent third-party body they have initiated cooperation with or have otherwise provided access to the data they requested.

At this time, the EDMO process has not yet vetted research proposals. We are engaging with another highly experienced third party, ICPSR, which is vetting researchers and hosting access to datasets about the US 2020 election, as well as to the Meta Content Library and API.

Country Nr of research projects for which they provided access to data
Austria 0
Belgium 0
Bulgaria 0
Croatia 0
Cyprus 0
Czech Republic 0
Denmark 0
Estonia 0
Finland 0
France 0
Germany 0
Greece 0
Hungary 0
Ireland 0
Italy 0
Latvia 0
Lithuania 0
Luxembourg 0
Malta 0
Netherlands 0
Poland 0
Portugal 0
Romania 0
Slovakia 0
Slovenia 0
Spain 0
Sweden 0
Iceland 0
Liechtenstein 0
Norway 0

Measure 27.4

Relevant Signatories commit to engage in pilot programs towards sharing data with vetted researchers for the purpose of investigating Disinformation, without waiting for the independent third-party body to be fully set up. Such pilot programmes will operate in accordance with all applicable laws regarding the sharing/use of data. Pilots could explore facilitating research on content that was removed from the services of Signatories and the data retention period for this content.

Instagram

QRE 27.4.1

Relevant Signatories will describe the pilot programs they are engaged in to share data with vetted researchers for the purpose of investigating Disinformation. This will include information about the nature of the programs, number of research teams engaged, and where possible, about research topics or findings.

As mentioned in our baseline report, since 2018, we have been sharing information with independent researchers about our network disruptions relating to coordinated inauthentic behaviour (CIB). Since 2021, we have been expanding access to our Influence Operations (IO) Archive dataset, which provides information on coordinated inauthentic behaviour and contains more than 100 removed networks, to more researchers studying influence operations worldwide. This dataset provides access to raw data, allowing researchers to visualise and assess these network operations both quantitatively and qualitatively. In addition, we share our own internal research and analysis.

Commitment 28

Relevant Signatories commit to support good faith research into Disinformation that involves their services.

We signed up to the following measures of this commitment

Measure 28.1 Measure 28.2 Measure 28.3 Measure 28.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Meta continues to explore options for sharing insights with research groups on these issues, in addition to our sharing through the IO Research Archive and in our public quarterly threat reports.

As part of our ongoing efforts to enhance the Meta Content Library tool and incorporate feedback from researchers, we've introduced several improvements. We've made searching more efficient by adding exact phrase matching and text-in-image search, and researchers can now share content producer lists with their peers, enabling quick filtering of public data from specific content producers on Instagram.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We continue to add new features and functionality to the Meta Content Library, including enhancements to the application processes for access to the research tools. In addition, we regularly seek feedback from the research community on critical updates. By developing these tools and supporting the research community, we continue to support good faith research.

Measure 28.1

Relevant Signatories will ensure they have the appropriate human resources in place in order to facilitate research, and should set up and maintain an open dialogue with researchers to keep track of the types of data that are likely to be in demand for research and to help researchers find relevant contact points in their organisations.

Instagram

QRE 28.1.1

Relevant Signatories will describe the resources and processes they deploy to facilitate research and engage with the research community, including e.g. dedicated teams, tools, help centres, programs, or events.

As mentioned in our baseline report, Meta has a team dedicated to providing academics and independent researchers with the tools and data they need to study Meta’s impact on the world.

Relevant details about research tools are available on our Transparency Centre.

Measure 28.2

Relevant Signatories will be transparent on the data types they currently make available to researchers across Europe.

Instagram

QRE 28.2.1

Relevant Signatories will describe what data types European researchers can currently access via their APIs or via dedicated teams, tools, help centres, programs, or events.

As mentioned in our baseline report, Meta provides a variety of data sets and tools for researchers, who can consult a chart to verify whether the data they need is available for request. All data access opportunities for independent researchers are logged in one place.

The main data available only to researchers are: 
  • Meta Content Library and API. For Instagram, this includes public posts and data. Data from the Library can be searched, explored, and filtered on a graphical user interface or through a programmatic API. 700+ researchers globally now have access to the Meta Content Library.
  • Ad Targeting Data Set, which includes detailed targeting information for social issue, electoral, and political ads that have run globally since August 2020. 150+ researchers globally have accessed the Ads Targeting API since it launched publicly in September 2022.
  • Influence Operations Research Archive for coordinated inauthentic behaviour (CIB) Network Disruptions, as outlined in QRE 27.4.1.

Measure 28.3

Relevant Signatories will not prohibit or discourage genuinely and demonstrably public interest good faith research into Disinformation on their platforms, and will not take adversarial action against researcher users or accounts that undertake or participate in good-faith research into Disinformation.

Instagram

QRE 28.3.1

Relevant Signatories will collaborate with EDMO to run an annual consultation of European researchers to assess whether they have experienced adversarial actions or are otherwise prohibited or discouraged to run such research.

No reporting possible at this stage 

Measure 28.4

As part of the cooperation framework between the Signatories and the European research community, relevant Signatories will, with the assistance of the EDMO, make funds available for research on Disinformation, for researchers to independently manage and to define scientific priorities and transparent allocation procedures based on scientific merit.

Instagram

QRE 28.4.1

Relevant Signatories will disclose the resources made available for the purposes of Measure 28.4 and procedures put in place to ensure the resources are independently managed.

No reporting possible at this stage 

Empowering fact-checkers

Commitment 30

Relevant Signatories commit to establish a framework for transparent, structured, open, financially sustainable, and non-discriminatory cooperation between them and the EU fact-checking community regarding resources and support made available to fact-checkers.

We signed up to the following measures of this commitment

Measure 30.1 Measure 30.2 Measure 30.3 Measure 30.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

In the first half of 2024, Meta provided all third-party fact-checkers (3PFCs) participating in our fact-checking programs with access to the Meta Content Library (MCL). This initiative aimed to enhance the fact-checking workflow and provide users with a more comprehensive toolset.

Throughout the second half of 2024, Meta continued to release new features and improvements to the MCL, including collaborative dashboards, text-in-image search, and expanded data scope. These enhancements have been designed to support our users and promote best practices in fact-checking.

To facilitate a seamless transition of our 3PFCs to the MCL, we initiated a proactive outreach and education program. This comprehensive program included a targeted e-Newsletter series, training calls, and live tutorials. 

The education program has yielded encouraging results, with notable increases in usage by 3PFCs. We will continue to monitor the impact of our initiatives and make adjustments as needed to ensure that our users have the support and resources they need to effectively utilize our tools and contribute to a safer and more informed online community.

As part of our stakeholder engagement initiatives, Meta participated in the EFCSN Conference in Brussels, where we were joined by over 40 of our third-party fact-checking partners from the European Fact-Checking Program. During the conference, we also conducted 20 strategic partner meetings to further strengthen our collaborations and advance our shared goals.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As currently drafted, this chapter covers the current practices for Facebook and Instagram in the EU. In keeping with Meta’s public announcements on 7 January 2025, we will continue to assess the applicability of this chapter to Facebook and Instagram and we will keep under review whether it is appropriate to make alterations in light of changes in our practices, such as the deployment of Community Notes.

Measure 30.1

Relevant Signatories will set up agreements between them and independent fact-checking organisations (as defined in whereas (e)) to achieve fact-checking coverage in all Member States. These agreements should meet high ethical and professional standards and be based on transparent, open, consistent and non-discriminatory conditions and will ensure the independence of fact-checkers.

Instagram

QRE 30.1.1

Relevant Signatories will report on and explain the nature of their agreements with fact-checking organisations; their expected results; relevant quantitative information (for instance: contents fact-checked, increased coverage, changes in integration of fact-checking as depends on the agreements and to be further discussed within the Task-force); and such as relevant common standards and conditions for these agreements.

As mentioned in our baseline report, Meta’s fact-checking partners all go through a rigorous certification process with the International Fact-Checking Network (IFCN). As a subsidiary of the journalism research organisation Poynter Institute, the IFCN is dedicated to bringing fact-checkers together worldwide. All fact-checking partners follow the IFCN’s Code of Principles, a series of commitments they must adhere to in order to promote excellence in fact-checking.

The detail of our partnership with fact-checkers (i.e., how they rate content and what actions we take as a result) is outlined in QRE 21.1.1 and here.

QRE 30.1.3

Relevant Signatories will report on resources allocated where relevant in each of their services to achieve fact-checking coverage in each Member State and to support fact-checking organisations' work to combat Disinformation online at the Member State level.

As mentioned in our baseline report, the list of fact-checkers with whom we partner across the EU is in QRE 30.1.2. 

SLI 30.1.1

Relevant Signatories will report on Member States and languages covered by agreements with the fact-checking organisations, including the total number of agreements with fact-checking organisations, per language and, where relevant, per service.

Number of individual agreements we have with fact-checking organisations. Each agreement covers both Facebook and Instagram. 

Country Nr of agreements with fact-checking organisations
Austria 0
Belgium 0
Bulgaria 0
Croatia 0
Cyprus 0
Czech Republic 0
Denmark 0
Estonia 0
Finland 0
France 0
Germany 0
Greece 0
Hungary 0
Ireland 0
Italy 0
Latvia 0
Lithuania 0
Luxembourg 0
Malta 0
Netherlands 0
Poland 0
Portugal 0
Romania 0
Slovakia 0
Slovenia 0
Spain 0
Sweden 0
Iceland 0
Liechtenstein 0
Norway 0

Measure 30.2

Relevant Signatories will provide fair financial contributions to the independent European fact-checking organisations for their work to combat Disinformation on their services. Those financial contributions could be in the form of individual agreements, of agreements with multiple fact-checkers or with an elected body representative of the independent European fact-checking organisations that has the mandate to conclude said agreements.

Instagram

QRE 30.2.1

Relevant Signatories will report on actions taken and general criteria used to ensure the fair financial contributions to the fact-checkers for the work done, on criteria used in those agreements to guarantee high ethical and professional standards, independence of the fact-checking organisations, as well as conditions of transparency, openness, consistency and non-discrimination.

As mentioned in our baseline report, Meta’s fact-checking partners all go through a rigorous certification process with the IFCN. All our fact-checking partners follow IFCN’s Code of Principles, a series of commitments they must adhere to in order to promote excellence in fact-checking.

From 2024, third-party fact-checkers may also be onboarded to Meta's fact-checking program if they are certified by the European Fact-Checking Standards Network (EFCSN).

QRE 30.2.2

Relevant Signatories will engage in, and report on, regular reviews with their fact-checking partner organisations to review the nature and effectiveness of the Signatory's fact-checking programme.

As mentioned in our baseline report, Meta has a team in charge of maintaining our relationships with our fact-checking partners, understanding their feedback and improving our fact-checking program together. 

Meta has also dedicated the necessary resources to engage with the Task-force, including on workstreams related to fact-checking.

QRE 30.2.3

European fact-checking organisations will, directly (as Signatories to the Code) or indirectly (e.g. via polling by EDMO or an elected body representative of the independent European fact-checking organisations) report on the fairness of the individual compensations provided to them via these agreements.

QRE 30.2.3 applies to fact-checking organisations

Measure 30.3

Relevant Signatories will contribute to cross-border cooperation between fact-checkers.

Instagram

QRE 30.3.1

Relevant Signatories will report on actions taken to facilitate their cross-border collaboration with and between fact-checkers, including examples of fact-checks, languages, or Member States where such cooperation was facilitated.

As outlined in QRE 30.2.2, Meta has a team in charge of our relationships with fact-checking partners, through which we take on feedback, including on ways to support cooperation between fact-checkers.

Measure 30.4

To develop the Measures above, relevant Signatories will consult EDMO and an elected body representative of the independent European fact-checking organisations.

Instagram

QRE 30.4.1

Relevant Signatories will report, ex ante on plans to involve, and ex post on actions taken to involve, EDMO and the elected body representative of the independent European fact-checking organisations, including on the development of the framework of cooperation described in Measures 30.3 and 30.4.

As mentioned in our baseline report, Instagram is in touch with several EDMO regional hubs and looks forward to engaging with EDMO on our fact-checking efforts.

Commitment 31

Relevant Signatories commit to integrate, showcase, or otherwise consistently use fact-checkers' work in their platforms' services, processes, and contents; with full coverage of all Member States and languages.

We signed up to the following measures of this commitment

Measure 31.1 Measure 31.2 Measure 31.3 Measure 31.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

There have been no updates since the last submitted report.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As currently drafted, this chapter covers the current practices for Facebook and Instagram in the EU. In keeping with Meta’s public announcements on 7 January 2025, we will continue to assess the applicability of this chapter to Facebook and Instagram and we will keep under review whether it is appropriate to make alterations in light of changes in our practices, such as the deployment of Community Notes.

Measure 31.1

Relevant Signatories that showcase User Generated Content (UGC) will integrate, showcase, or otherwise consistently use independent fact-checkers' work in their platforms' services, processes, and contents across all Member States and across formats relevant to the service. Relevant Signatories will collaborate with fact-checkers to that end, starting by conducting and documenting research and testing.

Instagram

Measure 31.2

Relevant Signatories that integrate fact-checks in their products or processes will ensure they employ swift and efficient mechanisms such as labelling, information panels, or policy enforcement to help increase the impact of fact-checks on audiences.

Instagram

Measure 31.3

Relevant Signatories (including but not necessarily limited to fact-checkers and platforms) will create, in collaboration with EDMO and an elected body representative of the independent European fact-checking organisations, a repository of fact-checking content that will be governed by the representatives of fact-checkers. Relevant Signatories (i.e. platforms) commit to contribute to funding the establishment of the repository, together with other Signatories and/or other relevant interested entities. Funding will be reassessed on an annual basis within the Permanent Task-force after the establishment of the repository, which shall take no longer than 12 months.

Instagram

QRE 31.3.1

Relevant Signatories will report on their work towards and contribution to the overall repository project, which may include (depending on the Signatories): financial contributions; technical support; resourcing; fact-checks added to the repository. Further relevant metrics should be explored within the Permanent Task-force.

There have been no significant updates since the last submitted report.

Measure 31.4

Relevant Signatories will explore technological solutions to facilitate the efficient use of this common repository across platforms and languages. They will discuss these solutions with the Permanent Task-force in view of identifying relevant follow up actions.

Instagram

QRE 31.4.1

Relevant Signatories will report on the technical solutions they explore and insofar as possible and in light of discussions with the Task-force on solutions they implemented to facilitate the efficient use of a common repository across platforms.

There have been no significant updates since the last submitted report.

Commitment 32

Relevant Signatories commit to provide fact-checkers with prompt, and whenever possible automated, access to information that is pertinent to help them to maximise the quality and impact of fact-checking, as defined in a framework to be designed in coordination with EDMO and an elected body representative of the independent European fact-checking organisations.

We signed up to the following measures of this commitment

Measure 32.1 Measure 32.2 Measure 32.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

As mentioned in our baseline report, fact-checkers can identify hoaxes based on their own reporting, and Meta also surfaces potential misinformation to fact-checkers using signals, such as feedback from our community or similarity detection. Our technology can detect posts that are likely to be misinformation based on various signals, including how people are responding and how fast the content is spreading. We may also send content to fact-checkers when we become aware that it may contain misinformation.
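Purely as an illustration of how such signals can be combined (Meta's actual models, features and weights are not public, and everything in this sketch is invented), a review-prioritisation heuristic over community feedback and spread velocity might look like the following toy example.

    # Toy illustration of combining signals into a review-priority score for
    # surfacing potential misinformation to fact-checkers. All signals,
    # weights and scales are invented for this sketch; this is not Meta's
    # production system.
    from dataclasses import dataclass

    @dataclass
    class PostSignals:
        false_news_reports: int   # community "false information" reports
        shares_per_hour: float    # how fast the content is spreading
        disbelief_comments: int   # e.g. comments expressing disbelief

    def review_priority(s: PostSignals) -> float:
        """Weighted sum of capped, normalised signals; higher = review sooner."""
        return (
            0.5 * min(s.false_news_reports / 50.0, 1.0)
            + 0.3 * min(s.shares_per_hour / 500.0, 1.0)
            + 0.2 * min(s.disbelief_comments / 20.0, 1.0)
        )

    # Highest-priority posts would be surfaced to fact-checkers first.
    queue = sorted(
        [PostSignals(12, 800.0, 5), PostSignals(40, 90.0, 18)],
        key=review_priority,
        reverse=True,
    )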

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As currently drafted, this chapter covers the current practices for Facebook and Instagram in the EU. In keeping with Meta’s public announcements on 7 January 2025, we will continue to assess the applicability of this chapter to Facebook and Instagram and we will keep under review whether it is appropriate to make alterations in light of changes in our practices, such as the deployment of Community Notes.

Measure 32.1

Relevant Signatories will provide fact-checkers with information to help them quantify the impact of fact-checked content over time, such as (depending on the service) actions taken on the basis of that content, impressions, clicks, or interactions.

Instagram

Measure 32.2

Relevant Signatories that showcase User Generated Content (UGC) will provide appropriate interfaces, automated wherever possible, for fact-checking organisations to be able to access information on the impact of contents on their platforms and to ensure consistency in the way said Signatories use, credit and provide feedback on the work of fact-checkers.

Instagram

Measure 32.3

Relevant Signatories will regularly exchange information between themselves and the fact-checking community, to strengthen their cooperation.

Instagram

QRE 32.3.1

Relevant Signatories will report on the channels of communications and the exchanges conducted to strengthen their cooperation - including success of and satisfaction with the information, interface, and other tools referred to in Measures 32.1 and 32.2 - and any conclusions drawn from such exchanges.

There have been no significant updates since the last submitted report.

Transparency Centre

Commitment 35

Signatories commit to ensure that the Transparency Centre contains all the relevant information related to the implementation of the Code's Commitments and Measures and that this information is presented in an easy-to-understand manner, per service, and is easily searchable.

We signed up to the following measures of this commitment

Measure 35.1 Measure 35.2 Measure 35.3 Measure 35.4 Measure 35.5 Measure 35.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

As mentioned in our baseline report, Meta (representing Facebook, Instagram, WhatsApp and Messenger) commits to uploading its reports to the Transparency Centre in due course.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, Meta (representing Facebook, Instagram, WhatsApp and Messenger) commits to uploading its reports to the Transparency Centre in due course.

Measure 35.1

Signatories will list in the Transparency Centre, per each Commitment and Measure that they subscribe to, the terms of service and policies that their service applies to implement these Commitments and Measures.

Facebook, Instagram, WhatsApp, Messenger

Measure 35.2

Signatories provide information on the implementation and enforcement of their policies per service, including geographical and language coverage.

Facebook, Instagram, WhatsApp, Messenger

Measure 35.3

Signatories ensure that the Transparency Centre contains a repository of their reports assessing the implementation of the Code's commitments.

Facebook, Instagram, WhatsApp, Messenger

Measure 35.4

In crisis situations, Signatories use the Transparency Centre to publish information regarding the specific mitigation actions taken related to the crisis.

Facebook, Instagram, WhatsApp, Messenger

Measure 35.5

Signatories ensure that the Transparency Centre is built with state-of-the-art technology, is user-friendly, and that the relevant information is easily searchable (including per Commitment and Measure). Users of the Transparency Centre will be able to easily track changes in Signatories' policies and actions.

Facebook, Instagram, WhatsApp, Messenger

Measure 35.6

The Transparency Centre will enable users to easily access and understand the Service Level Indicators and Qualitative Reporting Elements tied to each Commitment and Measure of the Code for each service, including Member State breakdowns, in a standardised and searchable way. The Transparency Centre should also enable users to easily access and understand Structural Indicators for each Signatory.

Facebook, Instagram, WhatsApp, Messenger

Commitment 36

Signatories commit to updating the relevant information contained in the Transparency Centre in a timely and complete manner.

We signed up to the following measures of this commitment

Measure 36.1 Measure 36.2 Measure 36.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

As mentioned in our baseline report, Meta (representing Facebook, Instagram, WhatsApp and Messenger) will both upload this report in due course and support other signatories in their efforts to upload their own reports.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, Meta (representing Facebook, Instagram, WhatsApp and Messenger) will upload all future reports in due course.

Measure 36.1

Signatories provide updates about relevant changes in policies and implementation actions in a timely manner, and in any event no later than 30 days after changes are announced or implemented.

Facebook, Instagram, WhatsApp, Messenger

Measure 36.2

Signatories will regularly update Service Level Indicators, reporting elements, and Structural Indicators, in parallel with the regular reporting foreseen by the monitoring framework. After the first reporting period, Relevant Signatories are encouraged to also update the Transparency Centre more regularly.

Facebook, Instagram, WhatsApp, Messenger

Measure 36.3

Signatories will update the Transparency Centre to reflect the latest decisions of the Permanent Task-force, regarding the Code and the monitoring framework.

Facebook, Instagram, WhatsApp, Messenger

QRE 36.1.1

With their initial implementation report, Signatories will outline the state of development of the Transparency Centre, its functionalities, the information it contains, and any other relevant information about its functioning or operations. This information can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.

We continue to upload our reports according to the approved deadlines.

QRE 36.1.2

Signatories will outline changes to the Transparency Centre's content, operations, or functioning in their reports over time. Such updates can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.

The administration of the Transparency Centre website has been transferred fully to the community of the Code’s signatories, with VOST Europe taking the role of developer.

SLI 36.1.1

Signatories will provide meaningful quantitative information on the usage of the Transparency Centre, such as the average monthly visits of the webpage.

The common Transparency Centre was visited by 20,255 unique users between 01/07/2024 and 31/12/2024, and 1,275 users downloaded reports 5,626 times during this period. For Meta specifically, our most recent and previous reports were downloaded 776 times (combined) by 373 unique users.

Country Nr of IFCN-certified fact-checkers
Austria 0
Belgium 0
Bulgaria 0
Croatia 0
Cyprus 0
Czech Republic 0
Denmark 0
Estonia 0
Finland 0
France 0
Germany 0
Greece 0
Hungary 0
Ireland 0
Italy 0
Latvia 0
Lithuania 0
Luxembourg 0
Malta 0
Netherlands 0
Poland 0
Portugal 0
Romania 0
Slovakia 0
Slovenia 0
Spain 0
Sweden 0
Iceland 0
Liechtenstein 0
Norway 0

Permanent Task-Force

Commitment 37

Signatories commit to participate in the permanent Task-force. The Task-force includes the Signatories of the Code and representatives from EDMO and ERGA. It is chaired by the European Commission, and includes representatives of the European External Action Service (EEAS). The Task-force can also invite relevant experts as observers to support its work. Decisions of the Task-force are made by consensus.

We signed up to the following measures of this commitment

Measure 37.1 Measure 37.2 Measure 37.3 Measure 37.4 Measure 37.5 Measure 37.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

There have been no significant updates since the last submitted report.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 37.1

Signatories will participate in the Task-force and contribute to its work. Signatories, in particular smaller or emerging services will contribute to the work of the Task-force proportionate to their resources, size and risk profile. Smaller or emerging services can also agree to pool their resources together and represent each other in the Task-force. The Task-force will meet in plenary sessions as necessary and at least every 6 months, and, where relevant, in subgroups dedicated to specific issues or workstreams.

Facebook, Instagram, WhatsApp, Messenger

Measure 37.2

Signatories agree to work in the Task-force in particular – but not limited to – on the following tasks: Establishing a risk assessment methodology and a rapid response system to be used in special situations like elections or crises; Cooperate and coordinate their work in special situations like elections or crisis; Agree on the harmonised reporting templates for the implementation of the Code's Commitments and Measures, the refined methodology of the reporting, and the relevant data disclosure for monitoring purposes; Review the quality and effectiveness of the harmonised reporting templates, as well as the formats and methods of data disclosure for monitoring purposes, throughout future monitoring cycles and adapt them, as needed; Contribute to the assessment of the quality and effectiveness of Service Level and Structural Indicators and the data points provided to measure these indicators, as well as their relevant adaptation; Refine, test and adjust Structural Indicators and design mechanisms to measure them at Member State level; Agree, publish and update a list of TTPs employed by malicious actors, and set down baseline elements, objectives and benchmarks for Measures to counter them, in line with the Chapter IV of this Code.

Facebook, Instagram, WhatsApp, Messenger

Measure 37.3

The Task-force will agree on and define its operating rules, including on the involvement of third-party experts, which will be laid down in a Vademecum drafted by the European Commission in collaboration with the Signatories and agreed on by consensus between the members of the Task-force.

Facebook, Instagram, WhatsApp, Messenger

Measure 37.4

Signatories agree to set up subgroups dedicated to the specific issues related to the implementation and revision of the Code with the participation of the relevant Signatories.

Facebook, Instagram, WhatsApp, Messenger

Measure 37.5

When needed, and in any event at least once per year the Task-force organises meetings with relevant stakeholder groups and experts to inform them about the operation of the Code and gather their views related to important developments in the field of Disinformation.

Facebook, Instagram, WhatsApp, Messenger

Measure 37.6

Signatories agree to notify the rest of the Task-force when a Commitment or Measure would benefit from changes over time as their practices and approaches evolve, in view of technological, societal, market, and legislative developments. Having discussed the changes required, the Relevant Signatories will update their subscription document accordingly and report on the changes in their next report.

Facebook, Instagram, WhatsApp, Messenger

QRE 37.6.1

Signatories will describe how they engage in the work of the Task-force in the reporting period, including the sub-groups they engaged with.

There have been no significant updates since the last submitted report.

Monitoring of the Code

Commitment 38

The Signatories commit to dedicate adequate financial and human resources and put in place appropriate internal processes to ensure the implementation of their commitments under the Code.

We signed up to the following measures of this commitment

Measure 38.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Measure 38.1

Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.

Facebook, Instagram, WhatsApp, Messenger

QRE 38.1.1

Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.

Globally, we have around 40,000 people working on safety and security, including around 15,000 content reviewers. All of these investments work to combat the spread of harmful content, including disinformation and misinformation, and thereby contribute to our implementation of the Code. Teams with expertise in content moderation, operations, policy design, safety, market specialists, data and forensic analysis, stakeholder and partner engagement, threat investigation, cybersecurity and product development all work on these challenges. These teams are distributed globally, and draw from the local expertise of their team members and local partners.

Commitment 39

Signatories commit to provide to the European Commission, within 1 month after the end of the implementation period (6 months after this Code’s signature) the baseline reports as set out in the Preamble.

We signed up to the following measures of this commitment

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

If yes, which further implementation measures do you plan to put in place in the next 6 months?

Commitment 40

Signatories commit to provide regular reporting on Service Level Indicators (SLIs) and Qualitative Reporting Elements (QREs). The reports and data provided should allow for a thorough assessment of the extent of the implementation of the Code’s Commitments and Measures by each Signatory, service and at Member State level.

We signed up to the following measures of this commitment

Measure 40.1 Measure 40.2 Measure 40.3 Measure 40.4 Measure 40.5 Measure 40.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

For this report, Facebook, Instagram, WhatsApp and Messenger provided QREs and SLIs across the different chapters.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

As mentioned in our baseline report, Facebook, Instagram, WhatsApp and Messenger will continue to provide relevant QREs and SLIs across the chapters of this Code.

Commitment 41

Signatories commit to work within the Task-force towards developing Structural Indicators, and publish a first set of them within 9 months from the signature of this Code; and to publish an initial measurement alongside their first full report.

We signed up to the following measures of this commitment

Measure 41.1 Measure 41.2 Measure 41.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

We continue to engage with the Task-force Monitoring Working Group.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

Yes

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We continue to engage with the Task-force Monitoring Working Group.

Crisis and Elections Response

Elections 2024

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated


Over many years, Meta has developed a comprehensive approach for elections on its platforms. While each election is unique, we have used our experience working on more than 200 elections around the world to build a robust election program that includes mature processes, tools, and policies to protect speech on our platform and safeguard the integrity of the elections. We continuously improve these measures to make sure they remain responsive to risks as they emerge, and we have reinforced these efforts in light of the regulatory framework set out under the Digital Services Act, the Election Guidelines, and our commitments under this Code.


We outlined our comprehensive approach for elections, and its particular relevance to the 2024 European Parliament (“EP”) elections, in our public post-elections report for the EP elections available on our Transparency Center.  This work continued in earnest for the snap legislative elections in France, which were called on 9 June 2024 following the results of the EP elections, and which occurred shortly thereafter. Additionally, similar efforts were made for the Presidential and Parliamentary elections in Romania, held on 24 November 2024, and 1 December 2024, respectively.


Meta's approach to elections is outlined in full across the following pillars:

  1. Utilising and deploying our policies, and our overall content moderation efforts, to remove policy-violating content and help keep people safe on our platforms
  2. Our election risk management processes
  3. Cooperation with external stakeholders
  4. Tools to support civic engagement
  5. Preventing interference and disinformation
  6. Reducing the spread of misinformation
  7. Safeguards and transparency efforts related to political advertising
  8. Responsible approach to Generative AI



This work continued in earnest for the European national elections, including the snap legislative elections in France. Below we provide a summarised overview of our support for the legislative elections in France and the impact of our efforts during this period, focusing on two key aspects:

  • Cooperation with external stakeholders in advance of the elections:
    • Working Group on Elections & Rapid Response System
    • Engagement with national authorities

  • Our work in the Generative AI space

Mitigations in place


Cooperation with External Stakeholders

Meta engages with a full range of external stakeholders to inform our processes and procedures as part of day-to-day business, and this practice continued during our election preparation. Meta values the networks and channels we have with our external stakeholders to work together in identifying risks on our platforms, and as such, we have welcomed many of the Election Guidelines recommending cooperation and points of contact with national authorities, civil society organisations, and others.


France: Pre-Election Engagements with National Authorities and Civil Society:
As part of the Working Group, Meta participated in the various sessions organised ahead of the legislative elections in France to discuss election readiness with the signatories of the EU CoP on Disinformation, including fact checkers and civil society organisations. In these engagements, along with other signatory platforms, we presented the efforts and tools we were deploying to fight against misinformation and foreign interference, and to provide more transparency on political ads. In addition, we shared information on our civic products aimed at informing users. Meta also responded to questions from the different participants on escalation channels and approaches.

Digital Service Coordinator (“DSC”) - Arcom:
Meta conducted outreach and delivered comprehensive training to Autorité de régulation de la communication audiovisuelle et numérique (Arcom), as France’s appointed DSC. Arcom, as well as other onboarded DSCs, have access to Meta’s government reporting channels. 

We provided step-by-step guidance to help Arcom navigate the “Single Point of Contact” (SPOC) Form for EU Member States’ authorities, the EU Commission, and the EU Board for Digital Services, as well as the onboarding process, where required, in order to access the relevant contact forms. During the electoral period, we received no reports from Arcom through this dedicated reporting channel. 

We have a long-standing relationship with Arcom and are in regular touch on various topics. We maintained continuous communication and engagement ahead of the EP elections through regular check-ins on election preparedness. In addition, we joined the industry roundtable hosted by Arcom on 2 May 2024 in their headquarters, along with VIGINUM (France’s agency in charge of tackling online foreign interference) and other tech platforms to present our work on election integrity, with a particular focus on misinformation and foreign interference. 

Meta also participated in a roundtable co-organised by Arcom, the European Commission, and VIGINUM on 24 June 2024 ahead of the election, bringing together industry partners to discuss elections preparations and mitigations to address systemic risks around the French snap elections. Meta continued direct engagements with Arcom throughout the electoral period.

VIGINUM
In addition to our engagements with VIGINUM at the roundtables discussed above, we held an engagement with them on 21 May 2024 to discuss our investments to prevent foreign interference, protect the elections, and establish the appropriate communication channels between our teams to ensure we could identify and tackle potential operations efficiently.

Political Parties:
Ahead of the EP elections, Meta organised training sessions and office hours on our policies and products with French government organisations, political parties, and civil society organisations. Political parties were provided an email alias to contact for any urgent escalations around the election. We additionally launched an EU Election Center (https://www.facebook.com/government-nonprofits/eu) in all 24 EU official languages, including French, to support our government partners. For the legislative elections in France, these same resources were available and further office hours were offered to ensure provision of best practices and support.


Romania
As part of our elections preparation efforts, Meta has engaged with a full range of Romanian stakeholders to inform our processes and procedures and to hear their concerns. Engagements with government and non-government partners started ahead of the 2024 EP elections and continue to date.

  • Romanian government stakeholders: We are in regular contact with ANCOM (the Romanian Digital Services Coordinator), the Ministry of Digitalisation, the electoral body and the Romanian cybersecurity agency on election-related topics. All of them are onboarded to our direct escalation channels, where they have been reporting content to us.
  • Election Engagements with the European Commission, National Authorities and Civil Society: Similar to what we did in France, Meta participated in the various sessions organised ahead of and after the 2024 elections to discuss election readiness with the signatories of the EU CoP on Disinformation, including fact checkers and civil society organisations. In these engagements, along with other signatory platforms, we presented the efforts and tools we were deploying to fight against misinformation and foreign interference, and to provide more transparency on political ads. In addition, we shared information on our civic products aimed at informing users. Meta also responded to questions from the different participants on escalation channels and approaches.


Working Group on Elections & Rapid Response System:

Meta is also an active member of the EU Code of Practice (“CoP”) on Disinformation Task-force’s Working Group on Elections and took part in its Rapid Response System. This was first piloted for the European Parliamentary elections, and the CoP Task-force decided to have it in place for the legislative elections in France as well.

France
As outlined above under Cooperation with External Stakeholders, Meta participated in the various sessions organised ahead of the legislative elections in France to discuss election readiness with the signatories of the EU CoP on Disinformation, presented the efforts and tools we were deploying to fight misinformation and foreign interference, and responded to questions from participants on escalation channels and approaches.


Romania
Rapid Alert System:
Meta participated in the Rapid Alert System and has been in regular touch with civil society organisations from Romania through various meetings and roundtables organised by the Disinfo working group. Meta created a direct escalation channel for five Romanian partners to report Community Standards violations and unlawful content.


Political Parties:
Meta started engaging with Romanian political parties ahead of the European Parliamentary elections. Ahead of the 2024 Presidential and Parliamentary elections, Meta organised online training sessions on our policies and products, and on how to contact Meta in case of an escalation.



Responsible Approach to Gen AI


Meta’s approach to responsible AI is another way that we are safeguarding the integrity of elections globally, including for the EU national elections.

Community Standards, Fact-Checking, and AI Labelling:


Meta’s Community Standards and Advertising Standards apply to all content, including content generated by AI. AI-generated content is also eligible to be reviewed and rated by Meta’s third-party fact-checking partners, whose rating options allow them to address various ways in which media content may mislead people, including but not limited to media that is created or edited by AI. 


Meta labels photorealistic images created using Meta AI, as well as AI-generated images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock that users post to Facebook and Instagram.


Meta has begun labelling a wider range of video, audio, and image content when we detect industry-standard AI image indicators or when users disclose that they’re uploading AI-generated content. Meta requires people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and may apply penalties if they fail to do so. If Meta determines that digitally created or altered image, video, or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label, so that people have more information and context.
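To illustrate in principle how an industry-standard AI indicator can be detected, below is a minimal Python sketch that scans an image file's raw bytes for the IPTC "trainedAlgorithmicMedia" digital-source-type marker that some generators embed in metadata. This is purely illustrative (a crude byte scan over hypothetical file names), not Meta's detection system, which relies on more robust signals and proper metadata parsing at scale.

```python
# Illustrative sketch only: one way a service *could* check for the
# IPTC digital-source-type marker that some AI generators embed in
# image metadata. NOT Meta's implementation.

# IPTC NewsCodes value used to mark fully AI-generated media.
AI_SOURCE_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's embedded metadata contains the
    'trainedAlgorithmicMedia' marker (a crude byte scan; real
    parsers read the XMP/C2PA structures properly)."""
    with open(path, "rb") as f:
        data = f.read()
    return AI_SOURCE_MARKER in data

if __name__ == "__main__":
    for name in ["upload1.jpg", "upload2.png"]:  # hypothetical uploads
        try:
            flag = looks_ai_generated(name)
            print(f"{name}: {'label as AI-generated' if flag else 'no marker found'}")
        except FileNotFoundError:
            print(f"{name}: file not found")
```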


Political Ads and Meta’s AI Disclosure Policy:

Meta announced in November 2023 a disclosure policy to help people understand when a SIEP ad (as described in Section 6) on Facebook or Instagram has been digitally created or altered, including through the use of AI. This policy went into effect in January 2024 and was active during the legislative elections in France. 

Advertisers have to disclose whenever a SIEP ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered to:

  • Depict a real person as saying or doing something they did not say or do; or
  • Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
  • Depict a realistic event that allegedly occurred, but that is not a true image, video or audio recording of the event.


If advertisers do not disclose these specified scenarios, the ad may be disapproved. Repeated failure to disclose may result in further penalties to the account.
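For illustration only, the disclosure rule described above can be thought of as a simple decision function. The field names below (is_siep, has_realistic_synthetic_media, disclosed, prior_violations) are hypothetical; this is a sketch of the stated logic, not Meta's ad review system.

```python
# Hypothetical sketch of the disclosure logic described above.
# Field names are illustrative, not Meta's actual schema.
from dataclasses import dataclass

@dataclass
class Ad:
    is_siep: bool                        # social issue, electoral or political ad
    has_realistic_synthetic_media: bool  # photorealistic image/video or realistic
                                         # audio, digitally created or altered as
                                         # described in the scenarios above
    disclosed: bool                      # advertiser used the disclosure flow
    prior_violations: int                # repeated failures to disclose

def review(ad: Ad) -> str:
    if not (ad.is_siep and ad.has_realistic_synthetic_media):
        return "no disclosure required"
    if ad.disclosed:
        return "accept and label as digitally created"
    if ad.prior_violations > 0:
        return "reject; apply further penalties to the account"
    return "reject (disapproved for missing disclosure)"

print(review(Ad(True, True, True, 0)))   # accept and label as digitally created
print(review(Ad(True, True, False, 2)))  # reject; apply further penalties...
```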
AI Content Around the French Elections:

As a result of our policies and measures relating to AI-generated content, between 1 June and 21 July 2024, over 50 SIEP ads created by users in France across Facebook and Instagram were labelled with the “digitally created” AI disclaimer as a result of self-disclosure, providing enhanced transparency to users.

SIEP Ads and Enforcement Around the French Elections:

The below table shows the number of ads accepted and run with SIEP disclaimers, as well as the number of ads removed for non-compliance with Meta’s SIEP policy, between 1 June and 21 July 2024, where the inferred advertiser location at the time of enforcement was France. This reflects application of the above-mentioned policies and measures.

Number of SIEP ads accepted & labelled on Facebook and Instagram combined | Over 10,000
Number of SIEP ads removed for not complying with our SIEP ads policy on Facebook and Instagram combined | Over 20,000

Continuing to Foster AI Transparency through Industry Collaboration:

Meta has also been working with other companies in the tech industry on common standards and guidelines. Meta Platforms, Inc. is a member of the Partnership on AI, for example, and signed onto the tech accord designed to combat the spread of deceptive AI content in 2024 elections globally. Meta receives information from Meta Platforms, Inc. on the progress of these initiatives, and benefits from these partnerships when addressing the risks of manipulated media.

Policies and Terms and Conditions

All the measures outlined in this report were in place ahead of the European Parliament elections, as well as the national elections. In addition, we made the policy change outlined below.

Policy
Prohibited Ads Policy

Changes (such as newly introduced policies, edits, adaptation in scope or implementation)
We've established measures under which ads related to voting around elections (this includes primary, general, special, and run-off elections) are subject to additional prohibitions and will be rejected if in violation of our policies. This policy applies to the Member States of the EU.

Rationale

Ads targeting the EU with the following content aren't allowed:

  • Ads that discourage people from voting in an election. This includes ads that portray voting as useless/meaningless and/or advise people not to vote.
  • Ads that call into question the legitimacy of an upcoming or ongoing election.
  • Ads with premature claims of election victory.


This prohibition includes ads that call into question the legitimacy of the methods and processes of elections, as well as their outcomes.


Scrutiny of Ads Placements

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

The measures outlined in Chapters 1 to 3 of this report were in place for the European national elections. They were complemented by the prohibited ads policy outlined above. Most pertinently, under these policies, content rated false by third-party fact-checkers cannot be used in ads under our Advertising Standards.

Political Advertising

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

We continue to enforce our policy for Ads about social issues, elections or politics (“SIEP ads”) as outlined in chapters 4 to 13 of this report. As a result of those policies and measures, we removed over 20,000 SIEP ads in France around the time of the French elections for non-compliance with Meta’s SIEP policy. 


Policy updates regarding digitally altered content

Meta helps users understand when a social issue, election or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI. 

Advertisers must disclose whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered to:

  • Depict a real person as saying or doing something they did not say or do; or
  • Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
  • Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.

Meta will add information on the ad when an advertiser discloses in the advertising flow that the content is digitally created or altered. This information will also appear in the Ad Library. If it is determined that an advertiser did not disclose as required, Meta will reject the ad. Repeated failure to disclose may result in penalties against the advertiser.


The expected impact of this policy is to increase users' awareness of when they are viewing advertisements related to social issues, elections or politics that are digitally altered. It will also increase the transparency of these ads by requiring that advertisers disclose this information. 

Integrity of Services

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.


All the measures outlined in Chapters 14 to 16 of this report were in place ahead of the European national elections.


Empowering Users

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

All the measures outlined in Chapters 17 to 25 of this report to combat disinformation and misinformation were in place ahead of the European national elections. In addition, we had the measures outlined below.

Reminders
We proactively point users to reliable information on the electoral process through in-app ‘Election Day Information’ notices. These appear at the top of Feed on both Facebook and Instagram, reminding people of the day they can vote and redirecting them to national authoritative sources on how and where to vote.

For the legislative elections in France, the ‘Election Day Information’ feature ran on 29-30 June and 6-7 July 2024, and directed users to a voting information page on the Ministry of the Interior's website. Users in metropolitan France and the overseas territories clicked on these in-app notifications more than 599,000 times on Facebook and more than 496,000 times on Instagram, as shown in the table below:

Election Day Information; Facebook clicks | Over 599,000
Election Day Information; Instagram clicks | Over 496,000


Media Literacy Partnerships

Around the EP elections, Meta engaged in several media literacy efforts. This included two campaigns in France to combat misinformation and prevent electoral interference:


A collaboration with the local fact-checking partner AFP Fact Check, producing a Reel featuring the popular French astronaut Thomas Pesquet reviewing a series of pictures and videos that had been shared online as hoaxes. He explains best practices and tools people should leverage when faced with a piece of news that seems unlikely. According to AFP, the video generated nearly 2.5 million views on Instagram and Facebook.


Participation in a multi-platform campaign operated by the French partner NGO Génération Numérique, consisting of a series of educational short videos gathering tips and recommendations on avoiding becoming a victim of misinformation. According to Génération Numérique, the videos reached over 200k users and generated nearly 300k impressions on Instagram and Facebook alone.


Additional efforts included wider campaigns with the European Fact-Checking Standards Network (EFCSN) on how to spot AI-generated and digitally altered media, and with the European Disability Forum (EDF). We refer readers to our EP post-elections report for further detail on these and other initiatives.


Ahead of the French legislative elections, Meta continued this investment in media literacy by launching a campaign on Meta owned channels (Facebook and Instagram). This campaign aimed to increase awareness of the tools and processes that Meta deploys on its own platforms (Facebook, Instagram, and WhatsApp) in advance of an election, to help inform French users how Meta works to combat misinformation, prevent electoral interference, and protect electoral candidates. The campaign ran from 20 June until 4 July 2024, a few days before the second round of the election. It reached 2.1 million users in France, generating 10.6 million impressions.


Training political candidates
As noted above under Cooperation with External Stakeholders, ahead of the EP elections Meta organised training sessions and office hours on our policies and products with French government organisations, political parties, and civil society organisations, provided political parties with an email alias for urgent escalations, and launched an EU Election Center (https://www.facebook.com/government-nonprofits/eu) in all 24 EU official languages. For the legislative elections in France, these same resources were available and further office hours were offered to ensure provision of best practices and support.



Empowering the Research Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

Since 2023, researchers in Europe have had access to the Meta Content Library, enabling them to study various topics, including disinformation.


Empowering the Fact-Checking Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.


In France, as a result of our misinformation policies and measures, we labelled over 1.8 million pieces of content on Facebook, and over 65K pieces of content on Instagram, with fact checks in the month leading up to and including the electoral period. 

Content Treated with Misinformation Labels Around the French Elections

The below table shows information on content viewed by users in France which was treated with misinformation labels on Facebook and Instagram between 1 June and 21 July 2024, as well as attempted reshares.

Content treated with fact checks after being rated by third-party fact-checkers (3PFCs) | Facebook: Over 1,800,000 | Instagram: Over 65,000
% of attempted reshares that were not completed on treated content | Facebook: 55.9% | Instagram: 46.6%



In addition, we had the measures outlined below.

Policy expansion to EFCSN
As noted above, additional efforts included wider campaigns with the European Fact-Checking Standards Network (EFCSN) on how to spot AI-generated and digitally altered media, and with the European Disability Forum (EDF).

Crisis 2024

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated

Reporting on the service’s response during a crisis

[War of aggression by Russia on Ukraine]


As outlined in our baseline report, we took a variety of actions with the objectives of:

  • Helping to keep people in Ukraine and Russia safe: We’ve added several privacy and safety features to help people in Ukraine and Russia protect their accounts from being targeted.
  • Enforcing our policies: We are taking additional steps to enforce our Community Standards, not only in Ukraine and Russia but also in other countries globally where content may be shared.
  • Reducing the spread of misinformation: We took steps to fight the spread of misinformation on our services and consulted with outside experts. 
  • Transparency around state-controlled media: We have been working hard to tackle disinformation from Russia coming from state-controlled media. Since March 2022, we have been globally demoting content from Facebook Pages and Instagram accounts of Russian state-controlled media outlets and making them harder to find across our platforms. In addition to demoting, labelling, demonetising and blocking ads from Russian state-controlled media, we are also demoting and labelling any posts from users that contain links to Russian state-controlled media websites.
  • In addition to these global actions, in Ukraine, the EU and UK, we have restricted access to Russia Today, Sputnik, NTV/NTV Mir, Rossiya 1, REN TV and Perviy Kanal and others.
  • On 15 June 2024, we added restrictions to further state-controlled media organisations targeted by the EU broadcast ban under Article 2f of Regulation 833/2014. These included: Voice of Europe, RIA Novosti, Izvestia, Rossiyskaya Gazeta.
  • On 17 September 2024, we expanded our ongoing enforcement against Russian state media outlets. Rossiya Segodnya, RT, and other related entities were banned from our apps globally due to foreign interference activities.






[Israel - Hamas War]
In the spirit of transparency and cooperation, we share below the details of some of the specific steps we are taking to respond to the Israel - Hamas War.

Mitigations in place

[War of aggression by Russia on Ukraine]

Our main strategies are in line with what we outlined in our baseline report, with a focus on safety features in Ukraine and Russia, extensive steps to fight the spread of misinformation (including through media literacy campaigns), tools to help our community access crucial resources, transparency around state-controlled media, and monitoring and taking action against any coordinated inauthentic behaviour.


This means (as outlined in previous reports) we will continue to: 

  • Monitor for coordinated inauthentic behaviour and other adversarial networks  (See commitment 16 for more information on behaviour we saw from Doppelganger during the reporting period). 
  • Enforce our Community Standards  
  • Work with fact-checkers 
  • Strengthen our engagement with local experts and governments in the Central and Eastern Europe region 






[Israel - Hamas War]
In the wake of the 07/10/2023 terrorist attacks in Israel and Israel’s response in Gaza, expert teams from across Meta took immediate crisis response measures, while protecting people’s ability to use our apps to shed light on important developments happening on the ground. As we did so, we were guided by core human rights principles, including respect for the right to life and security of the person, the protection of the dignity of victims, and the right to non-discrimination - as well as balancing those with the right to freedom of expression. We looked to the UN Guiding Principles on Business and Human Rights to prioritise and mitigate the most salient human rights risks: in this case, that people may use Meta platforms to further inflame an already violent conflict. We also looked to international humanitarian law (IHL) as an important source of reference for assessing online conduct. We have provided a public overview of our efforts related to the war in our Newsroom. The following are some examples of the specific steps we have taken:

Taking Action on Violating Content:


Safety and Security:
  • Our teams have detected and taken down a cluster of activity linked to a Coordinated Inauthentic Behaviour (CIB) network attributed to Hamas that we first removed in 2021; these fake accounts attempted to re-establish their presence on our platforms.
  • In Q3 2024, we also removed 15 Facebook accounts, 15 Pages, and 6 accounts on Instagram for violating our policy against coordinated inauthentic behavior. This network originated in Lebanon and targeted primarily Israel. This network posted original content in Hebrew about news and geopolitical events in Israel with generic hashtags like #Israel, #Jerusalem, #Netanyahu, among others. It included posts about Israel’s dependence on US support, claims that Israeli people are leaving the country, claims of food shortages in Israel, and criticism of the Israeli government and its military strikes in the Middle East. 
  • We memorialise accounts when we receive a request from a friend or family member of someone who has passed away, to provide a space for people to pay their respects, share memories and support each other.

Reducing the Spread of Misinformation:
  • We’re working with third-party fact-checkers in the region to debunk false claims. Meta’s third-party fact-checking network includes coverage in both Arabic and Hebrew, through AFP, Reuters and Fatabyyano. When they rate something as false, we move this content lower in Feed so fewer people see it. 
  • We recognise the importance of speed in moments like this, so we’ve made it easier for fact-checkers to find and rate content related to the war, using keyword detection to group related content in one place (see the illustrative sketch after this list).
  • We’re also giving people more information to help them decide what to read, trust, and share, by adding warning labels on content rated false by third-party fact-checkers and applying labels to state-controlled media publishers. 
  • We also have limits on message forwarding, and we label messages that did not originate with the sender so people are aware that the content comes from a third party.
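For illustration only, the grouping idea in the keyword-detection bullet above can be sketched as follows. The keyword list and posts are hypothetical, and Meta's production systems are far more sophisticated (multilingual matching, classifiers, media matching).

```python
# Minimal illustrative sketch of keyword-based grouping, assuming a
# simple list of post texts; NOT Meta's production detection.
from typing import Iterable

WAR_KEYWORDS = {"gaza", "hamas", "hostage", "israel"}  # illustrative terms

def group_war_related(posts: Iterable[str]) -> list[str]:
    """Collect posts containing any tracked keyword into one queue so
    reviewers (e.g. fact-checkers) can triage them in one place."""
    queue = []
    for text in posts:
        words = set(text.lower().split())
        if words & WAR_KEYWORDS:
            queue.append(text)
    return queue

posts = ["Breaking news from Gaza today", "My cat video", "Hostage release reported"]
print(group_war_related(posts))  # the first and third posts are grouped
```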

User Controls:
We continue to provide tools to help people control their experience on our apps and protect themselves from content they don’t want to see. These include but aren’t limited to:
  • Hidden Words: This tool filters offensive terms and phrases from DM requests and comments.
  • Limits: When turned on, Limits automatically hide DM requests and comments on Instagram from people who don’t follow you, or who only recently followed you.
  • Comment controls: You can control who can comment on your posts on Facebook and Instagram and choose to turn off comments completely on a post by post basis. 
  • Show More, Show Less: This gives people direct control over the content they see on Facebook. 
  • Facebook Reduce: Through the Facebook Feed Preferences settings, people can increase the degree to which we demote some content so they see less of it in their Feed. 
  • Sensitive Content Control: Instagram’s Sensitive Content Control allows people to choose how much sensitive content they see in places where we recommend content, such as Explore, Search, Reels and in-Feed recommendations. 

More detail on these tools can be found in the chapter sections below.

Oversight Board cases: 
The Oversight Board remains another avenue for review of Meta’s crisis response, and during the reporting period the Board reviewed and decided on two cases relating to the Israel - Hamas war. Details of these cases are published on the Oversight Board’s website.

Policies and Terms and Conditions

War of aggression by Russia on Ukraine

Policy
No further policy updates since our baseline report

Rationale
We continue to enforce our Community Standards and prioritise people’s safety and well-being through the application of these policies alongside Meta’s technologies, tools and processes. There are no substantial changes to report on for this period. 




Israel - Hamas War
For the duration of the ongoing crisis, Meta has taken various actions to mitigate the possible content risks emerging from the crisis. This includes, inter alia, under the Dangerous Organisations and Individuals policy: removing imagery depicting the moment an identifiable individual is abducted, unless such imagery is shared in the context of condemnation or a call to release, in which case we allow it with a Mark as Disturbing (MAD) interstitial; and removing Hamas-produced imagery of hostages in captivity in all contexts. Meta also has further discretionary policies which may be applied when content is escalated to us.

Scrutiny of Ads Placements

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

[War of aggression by Russia on Ukraine]
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.

Measures taken to demonetise disinformation related to the crisis
(Commitment 1 and Commitment 2)
As mentioned in our baseline report, our Advertising Standards prohibit ads that include content debunked by third-party fact-checkers, and advertisers that repeatedly attempt to run content rated false by fact-checkers may also incur restrictions on advertising across Meta technologies.

For the monetisation of initially organic content: (1) per our Content Monetisation Policies, any content that's labelled as false by our third-party fact-checkers is ineligible for monetisation; and (2) any actor found in violation of our Community Standards, including our misinformation policies, may lose the right to monetise their content, per our Partner Monetisation Policies.

As mentioned in our baseline report, we prohibited ads or monetisation from all Russian state-controlled media. Before Russian authorities blocked access to Facebook and Instagram, we paused ads targeting people in Russia, and advertisers in Russia are no longer able to create or run ads anywhere in the world.



[Israel - Hamas War]
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.

Political Advertising

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

War of aggression by Russia on Ukraine

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.




Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.


AI Generated or altered SIEP ads disclosure
(Commitment 3)

Meta announced in November 2023 an AI Disclosure policy to help people understand when a social issue, election, or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI. This policy went into effect in early 2024 and applies globally.

Advertisers now have to disclose whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered to:

  • Depict a real person as saying or doing something they did not say or do; or
  • Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
  • Depict a realistic event that allegedly occurred, but that is not a true image, video or audio recording of the event.

Meta will add information on the ad when an advertiser discloses in the advertising flow that the content is digitally created or altered. This information will also appear in the Ad Library. If it is determined that an advertiser did not disclose as required, Meta will reject the ad. Repeated failure to disclose may result in penalties against the advertiser.

The AI Disclosure policy helps inform people about digitally created or altered ads. In this way, people will be more aware of the authenticity of messaging, which will help combat disinformation.

Integrity of Services

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

War of aggression by Russia on Ukraine
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.


Measures taken in the context of the crisis to counter manipulative behaviours/TTPs (Commitment 14)
As mentioned in our baseline report, we have technical teams building scaled solutions to detect and prevent these behaviours, and are partnering with civil society organisations, researchers, and governments to strengthen our defences. We also improved our detection systems to more effectively identify and block fake accounts, which are the source of a lot of the inauthentic activity.

Since the invasion began, we have shared the measures we’ve taken to help keep Ukrainians and Russians safe, our approach to misinformation and state-controlled media, and how we ensure reliable access to trusted information.

As mentioned in our baseline report, our security teams took down three distinct networks in Russia targeting discourse on the war (announced here, here, and here) and have continued to monitor and enforce against Russian threat actors engaged in coordinated inauthentic behaviour (CIB). We also took action to secure accounts that we believe were targeted by Ghostwriter, a threat actor that has been tracked for some time by the security community. In August 2023, we provided updated analysis on the work we’ve done to remove efforts by a Russian CIB network, known in the security field as “Doppelganger,” to return to our platforms. We also published recommendations on how to improve cross-Internet responses to the domain name abuse we’ve observed in this case. Similarly, our Q4 2023 adversarial threats report detailed how we removed 1,020 Facebook accounts, five Pages, two Groups and 711 Instagram accounts for violating our policy against coordinated inauthentic behaviour. This network originated in Ukraine and targeted audiences in Ukraine and Kazakhstan. The people behind this activity posted primarily in Russian about political events in Ukraine and Kazakhstan.

The Q3 2024 adversarial threats report shared a detailed assessment and breakdown of Doppelganger’s behaviour: as a result of our ongoing aggressive enforcement against recidivist efforts by Doppelganger, its operators have been forced to keep adapting and making tactical changes in an attempt to evade takedowns. These changes have degraded the quality of the operation’s efforts, rendering many of these attempts incomprehensible to the average online user. In addition, many of the adversarial shifts that appear primarily on our platforms do not show up elsewhere on the internet, where the operators continue using some of their older known tactics. This suggests agility in response to detection by various services, and we expect to see more changes over time.


Relevant changes to working practices to respond to the demands of the crisis situation and/or additional human resources procured for the mitigation of the crisis (Commitments 14-16)
As mentioned in the baseline report, throughout the war, we have mobilised our teams, technologies and resources to combat the spread of harmful content, especially disinformation and misinformation as well as adversarial threat activities such as influence operations and cyber-espionage.

We continue to work with a cross-functional team of experts from across the company, including native Ukrainian and Russian speakers, who are monitoring the platform around the clock, allowing us to respond to issues in real time.





Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.


Removing a Coordinated Inauthentic Behaviour Network
(Commitment 14, Commitment 16)
In Q3, 2024, we removed 15 Facebook accounts, 15 Pages, and 6 accounts on Instagram for violating our policy against coordinated inauthentic behavior. This network originated in Lebanon and targeted primarily Israel. This network posted original content in Hebrew about news and geopolitical events in Israel with generic hashtags like #Israel, #Jerusalem, #Netanyahu, among others. It included posts about Israel’s dependence on US support, claims that Israeli people are leaving the country, claims of food shortages in Israel, and criticism of the Israeli government and its military strikes in the Middle East.


We removed this network before it was able to build authentic audiences on our apps.



Empowering Users

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

War of aggression by Russia on Ukraine

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.


Actions taken against dis- and misinformation content (for example de-amplification, labelling, removal, etc.) (Commitment 17)


State controlled media:
We continue to take the actions we outlined in our baseline report. We have taken further action to limit the impact of state-controlled media, as described above.

Escalation channel:
This channel continues to operate as outlined in our baseline report.

Covert influence campaigns: We have continued to monitor for and remove recidivist attempts by coordinated inauthentic behaviour (CIB) networks that target discourse about the war in Ukraine. Specifically, while we originally removed two Russian covert influence campaigns, we’ve since seen thousands of recidivist attempts to create fake accounts. This covert activity is aggressive and persistent, constantly probing for weak spots across the internet, including setting up hundreds of new spoofed news organisation domains.

Promotion of authoritative information, including via recommender systems and products and features such as banners and panels (Commitment 19)

As mentioned in our baseline report, we provided tools to help our community access crucial resources and take action to support people in need.


We continued supporting the HALO Trust and the State Emergency Service of Ukraine to spread authoritative, factual information about the risks in contaminated areas, risks related to unexploded ordnance, and life-saving information around shelters. Notably, we sponsored the targeted ad campaigns of the HALO Trust and improved the WhatsApp chatbot run by the State Emergency Service of Ukraine to ensure a safe and secure infoline.


In addition, we provided an ad credits budget to 'Ty Yak?', a national mental health awareness campaign, to promote mental health resources for people affected by the war.


We continue to see funds raised on Facebook and Instagram for nonprofits in support of humanitarian efforts for Ukraine.

We continue to work through our Data for Good program, which empowers humanitarian organisations, researchers, UN agencies, and European policymakers to make more informed decisions on how to support the people of Ukraine.







Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.

Warning Screens on sensitive content, Sensitive Content Control and Facebook Reduce: (Commitment 17)
The 07/10/2023 attack by Hamas was designated as a Terrorist Attack under Meta’s Dangerous Organisations and Individuals policy. Consistent with that designation, we removed all content showing identifiable victims at the moment of the attack. Following that, people began sharing this type of footage in order to raise awareness and condemn the attacks. Meta’s goal is to allow people to express themselves while still removing harmful content. In turn, we began allowing people to post this type of footage within that context only, with the addition of a warning screen to inform users that it may be disturbing. If the user’s intent in sharing the content is unclear, we err on the side of safety and remove it.

However, there are additional protections in place to ensure people have choices when it comes to this content. 

Instagram’s Sensitive Content Control allows people to choose how much sensitive content they see in places where we recommend content, such as Explore, Search, Reels and in-Feed recommendations. We try not to recommend sensitive content in these places by default, but people can also choose to see less, to further reduce the possibility of seeing this content from accounts they don’t follow. 

Through the Facebook Feed Preferences settings, people can increase the degree to which we demote some content so they see less of it in their Feed. Or if preferred, they can turn many of these demotions off entirely. They can also choose to maintain Meta’s current demotions.

These actions ensure that we balance the protection of voice with removing harmful content. In this context, it has allowed for important discussion and condemnation of violence, while also empowering people to make choices in reaction to the content they see on Facebook and Instagram. 


Hidden Words Filter
(Commitment 18, Commitment 19)
When turned on, Hidden Words filters offensive terms and phrases from DM requests and comments, so people never have to see them. People can customise this list, to make sure the terms they find offensive are hidden. 

Hidden Words help people choose offensive terms and phrases to hide, so they are protected from seeing them. 
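As a toy illustration of this kind of user-customisable filter (not Instagram's implementation, which also handles variants, emoji and misspellings), a minimal sketch might look like this:

```python
# Toy sketch of a user-customisable hidden-words filter; illustrative
# only, NOT Instagram's implementation.
def is_hidden(message: str, hidden_words: set[str]) -> bool:
    """Hide a DM request or comment if it contains any term the user
    has chosen to filter."""
    text = message.lower()
    return any(term in text for term in hidden_words)

user_filter = {"spamword", "offensiveterm"}  # user-customised list
print(is_hidden("Buy now, SPAMWORD inside!", user_filter))  # True
print(is_hidden("Lovely photo!", user_filter))              # False
```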

Limits (Commitment 18, Commitment 19)
When turned on, Limits automatically hide DM requests and comments on Instagram from people who don’t follow you, or who only recently followed you.

This tool gives people choice about DM and requests they receive, which may be important when engaging online around sensitive topics. 


Comment Controls (Commitment 18, Commitment 19)
People can control who can comment on their posts on Facebook and Instagram and choose to turn off comments completely on a post by post basis. 

This tool gives people control over engagement with what they post on Facebook and Instagram. 


Show More, Show Less (Commitment 18, Commitment 19)
Show More, Show Less gives people direct control over the content they see on Facebook. Selecting “Show more” will temporarily increase the amount of content that is like the post a user gave feedback on, while selecting “Show Less” means a user will temporarily see fewer posts like the one that feedback was given on.

This tool provides people with more direct control over what they see, which is important for protecting people's well-being during high profile crisis events. 

Empowering the Research Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

War of aggression by Russia on Ukraine

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools and processes.


Measures taken to support research into crisis related misinformation and disinformation
(Commitment 17-25)

As mentioned in our baseline report, the Data for Good program shares privacy-protected data externally to help tackle social issues like disasters, pandemics, poverty and climate change. In support of the Ukraine humanitarian response, the program's maps have been used to provide valuable assistance.

As mentioned in our baseline report, we continued providing baseline population density maps (the High Resolution Settlement Layer) of Ukraine and surrounding countries to humanitarian organisations for supply-chain planning and to aid demining efforts. These are the most accurate in the world, with 30-metre resolution and demographic breakdowns, built by combining updated census estimates with satellite imagery (i.e., no Facebook user data is used).
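
Such datasets are typically distributed as tabular grids of coordinates with population estimates. As an illustrative sketch only (the file name and column names below are assumptions, not a documented schema), a planner could aggregate population within an area like this:

import pandas as pd

# Hypothetical file of latitude/longitude cells with population estimates.
df = pd.read_csv("ukraine_population.csv")

# Sum estimated population within a bounding box, e.g. for supply planning.
box = df[df["latitude"].between(49.0, 50.0) & df["longitude"].between(29.0, 31.0)]
print(f"Estimated population in area: {box['population'].sum():,.0f}")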

Our Social Connectedness Index has been used by leading researchers, including the European Commission's Joint Research Centre unit on Demography, Migration and Governance, to quantify the rate at which Ukrainian refugees seek shelter in European regions with an existing Ukrainian diaspora.
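
Public Social Connectedness Index releases are pairwise tables of regions with a scaled connectedness value. The sketch below shows the kind of diaspora analysis described above; the file path and region code are hypothetical placeholders, and the column names should be checked against the actual release.

import pandas as pd

sci = pd.read_csv("social_connectedness_index.tsv", sep="\t")  # hypothetical path

origin = "hypothetical_ukr_region"  # placeholder region code
# Rank destination regions by connectedness to the origin region.
top = (sci[sci["user_loc"] == origin]
       .sort_values("scaled_sci", ascending=False)
       .head(10))
print(top[["fr_loc", "scaled_sci"]])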


Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.


Content Library and API tools
(Commitment 26)
As we previously reported, Meta has opened access to the Meta Content Library and Content Library API to provide access to near real-time public content from Pages, Posts, Groups and Events on Facebook and public content on Instagram. Details about the content, such as the number of reactions, shares, comments and, for the first time, post view counts, are also available. Researchers can search, explore and filter that content through either a graphical user interface (UI) or a programmatic API. Together, these tools provide the most comprehensive access to publicly-accessible content across Facebook and Instagram of any research tool built to date.

Individuals from qualified institutions, including journalists, who are pursuing scientific or public interest research topics are able to apply for access to these tools through partners with deep expertise in secure data sharing for research, starting with the University of Michigan’s Inter-university Consortium for Political and Social Research (ICPSR). This is a first-of-its-kind partnership that will enable researchers to analyse data from the API in ICPSR’s Social Media Archives (SOMAR) Virtual Data Enclave.
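
To give a flavour of the search-and-filter workflow described above, here is a purely hypothetical sketch. The class and function names are placeholders and do not reflect the actual Content Library API, which is only accessible to approved researchers from controlled environments such as the SOMAR Virtual Data Enclave.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    reactions: int
    shares: int
    comments: int
    views: int  # post view counts, newly available per the report

def search_posts(posts: list[Post], keyword: str, min_views: int = 0) -> list[Post]:
    """Filter public posts by keyword and a minimum view count."""
    return [p for p in posts
            if keyword.lower() in p.text.lower() and p.views >= min_views]

corpus = [
    Post("Discussion of the upcoming election", 120, 40, 15, 9_000),
    Post("Holiday photos", 900, 300, 80, 50_000),
]
print(search_posts(corpus, "election", min_views=1_000))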


Qualified individuals pursuing scientific or public interest research, including journalists, can gain access to the tools if they meet all the requirements.



Empowering the Fact-Checking Community

Outline approaches pertinent to this chapter, highlighting similarities/commonalities and differences with regular enforcement.

War of aggression by Russia on Ukraine

As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.

Cooperation with independent fact-checkers in the crisis context, including coverage in the EU (Commitment 30-33)

As mentioned in our baseline report, for misinformation that does not violate our Community Standards, but undermines the authenticity and integrity of our platform, we work with our network of independent third-party fact-checking partners.

As mentioned in our baseline report, the details of this network and our cooperation with fact-checkers are outlined in the Empowering the Fact-Checking Community chapter above.

In Europe, we partner with 46 fact-checking organisations, covering 36 languages. This includes 29 partners covering 26 countries and 23 different languages in the EU.


Israel - Hamas War
As noted in our baseline report, our policies are based on years of experience and expertise in safety combined with external input from experts around the world. We are continuously working to protect the integrity of our platforms and adjusting our policies, tools, and processes.

Working with fact-checkers in the region and deploying keyword detection (Commitment 30)
Meta is working with third-party fact-checkers in the region to debunk false claims. Meta’s third-party fact-checking network includes coverage in both Arabic and Hebrew, through AFP, Reuters and Fatabyyano. We recognise the importance of speed in moments like this, so we’ve made it easier for fact-checkers to find and rate content related to the war, using keyword detection to group related content in one place.
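
Conceptually, the grouping step can be as simple as routing keyword-matched posts into a single review queue. The sketch below is illustrative only, not Meta’s internal system, and the keywords are placeholders.

from collections import defaultdict

WAR_KEYWORDS = {"placeholder_keyword_1", "placeholder_keyword_2"}  # placeholders

def group_for_review(posts: list[str]) -> dict[str, list[str]]:
    """Bucket posts containing any tracked keyword into one review queue."""
    queues = defaultdict(list)
    for post in posts:
        if set(post.lower().split()) & WAR_KEYWORDS:
            queues["war_related"].append(post)
        else:
            queues["other"].append(post)
    return queues

queues = group_for_review(["update with placeholder_keyword_1", "unrelated post"])
print(queues["war_related"])  # surfaced in one place for fact-checker review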

When they rate something as false, we move this content lower in Feed so fewer people see it.
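
The effect on distribution can be pictured as a score multiplier applied at ranking time. This is a hypothetical illustration with an invented factor, not Meta’s ranking code.

FALSE_RATING_DEMOTION = 0.2  # invented factor for illustration

def feed_score(base_score: float, rated_false: bool) -> float:
    """Demote content rated false by fact-checkers so fewer people see it."""
    return base_score * FALSE_RATING_DEMOTION if rated_false else base_score

print(feed_score(10.0, rated_false=True))   # 2.0: ranked lower in Feed
print(feed_score(10.0, rated_false=False))  # 10.0: unaffected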


Content Warning Labels (Commitment 31) 
Meta is adding warning labels to content rated false by third-party fact-checkers and applying labels to state-controlled media publishers. We also limit message forwarding and label messages that did not originate with the sender, so people are aware the information comes from a third party.

By adding warning labels to relevant content, Meta is supporting people in the region with more information to decide what to read, trust and share.