Report March 2025
Your organisation description
Advertising
Commitment 1
Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.
We signed up to the following measures of this commitment
Measure 1.1, Measure 1.2, Measure 1.3, Measure 1.5, Measure 1.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 1.3
Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.
QRE 1.3.1
Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.
Google Ads also provides advertisers with additional controls and helps them exclude types of content that, while in compliance with AdSense policies, may not fit their brand or business. These controls let advertisers apply content filters or exclude certain types of content or terms from their video, display, and search ad campaigns. Advertisers can exclude content such as politics, news, sports, beauty, fashion and many other categories. These categories are listed in the Google Ads Help Centre.
Measure 1.5
Relevant Signatories involved in the reporting of monetisation activities inclusive of media platforms, ad networks, and ad verification companies will take the necessary steps to give industry-recognised relevant independent third-party auditors commercially appropriate and fair access to their services and data in order to:
- First, confirm the accuracy of first party reporting relative to monetisation and Disinformation, seeking alignment with regular audits performed under the DSA.
- Second, accreditation services should assess the effectiveness of media platforms' policy enforcement, including Disinformation policies.
QRE 1.5.1
Signatories that produce first party reporting will report on the access provided to independent third-party auditors as outlined in Measure 1.5 and will link to public reports and results from such auditors, such as MRC Content Level Brand Safety Accreditation, TAG Brand Safety certifications, or other similarly recognised industry accepted certifications.
- Google Ads display and Search click measurement methodology and AdSense ad-serving technologies adhere to the industry standards for click measurement.
- Google Ads video impression and video viewability measurement, as reported in the Video Viewability Report, adheres to the industry standards for video impression and viewability measurement.
- The processes supporting these technologies are accurate. This applies to Google’s measurement technology, which is used across all device types (desktop, mobile, and tablet) in both browser and mobile-app environments.
QRE 1.5.2
Signatories that conduct independent accreditation via audits will disclose areas of their accreditation that have been updated to reflect needs in Measure 1.5.
Measure 1.6
Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals:
- To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information e.g., on the sources of Disinformation campaigns to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies.
- Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation.
- Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar that they do not release commercially sensitive information or divulge trade secrets, and that they establish a mechanism for customer feedback and appeal.
QRE 1.6.1
Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.
Since April 2021, advertisers have been able to use dynamic exclusion lists that can be updated seamlessly and continuously over time. These lists can be created by advertisers themselves or by a third party they trust, such as brand safety organisations and industry groups. Once advertisers upload a dynamic exclusion list to their Google Ads account, they can schedule automatic updates as new web pages or domains are added, ensuring that their exclusion lists remain effective and up-to-date.
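The list itself is simply a set of domains or page URLs that Google Ads re-reads on the advertiser's chosen schedule. As a purely illustrative sketch (the helper names and file handling here are hypothetical, not part of any Google Ads API), a list maintainer might merge and normalise entries from several sources before publishing the list:

```python
"""Hypothetical helper for maintaining a dynamic exclusion list.

This only illustrates how an advertiser or trusted third party might
merge and sanity-check entries before publishing a list for Google Ads
to re-fetch; the format and helpers are assumptions, not a Google API.
"""
from urllib.parse import urlparse


def normalise(entry: str) -> str | None:
    """Return a bare domain suitable for an exclusion list, or None to skip."""
    entry = entry.strip().lower()
    if not entry or entry.startswith("#"):  # skip blank lines and comments
        return None
    # Accept either bare domains ("example.com") or full URLs.
    parsed = urlparse(entry if "://" in entry else f"//{entry}", scheme="https")
    return parsed.netloc or None


def merge_lists(*sources: list[str]) -> list[str]:
    """Merge several exclusion sources, deduplicated and sorted."""
    return sorted({d for src in sources for e in src if (d := normalise(e))})


if __name__ == "__main__":
    own_entries = ["Flagged-Example-Site.com", "# internal note", ""]
    third_party = ["https://rated-domain.example.org/article"]
    for domain in merge_lists(own_entries, third_party):
        print(domain)
```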
QRE 1.6.2
Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.
QRE 1.6.3
Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.
QRE 1.6.4
Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.
SLI 1.6.1
Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.
Commitment 2
Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.
We signed up to the following measures of this commitment
Measure 2.1, Measure 2.2, Measure 2.3, Measure 2.4
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
- In July 2024, Google updated the disclosure requirements for synthetic content under its Political Content Policy, requiring advertisers to disclose election ads that contain synthetic or digitally altered content inauthentically depicting real or realistic-looking people or events, by selecting the checkbox in the ‘Altered or synthetic content’ section of their campaign settings. Google then generates an in-ad disclosure based on that checkbox for certain ad formats.
- In February 2024, Google joined the Coalition for Content Provenance and Authenticity (C2PA), a cross-industry effort to help provide more transparency and context for people on AI-generated content, and it has since announced that it has begun integrating C2PA metadata into its ads systems. Google aims to use C2PA signals to inform how it enforces key policies.
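To illustrate the kind of provenance signal C2PA metadata carries, the sketch below reads a file's manifest store with the open-source `c2patool` CLI maintained by the C2PA community (https://github.com/contentauth/c2patool). The exit-code and JSON-shape assumptions are noted in the comments; this is a sketch of the standard's signals, not of Google's ads enforcement pipeline.

```python
"""Sketch: inspecting C2PA provenance metadata with the open-source c2patool.

Assumptions: `c2patool` is installed and on PATH; it prints the manifest
store as JSON and exits non-zero when a file carries no C2PA metadata.
"""
import json
import subprocess
import sys


def read_manifest_store(path: str) -> dict | None:
    """Return the C2PA manifest store for `path`, or None if none is present."""
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:  # assumed: no manifest found or tool error
        return None
    return json.loads(result.stdout)


if __name__ == "__main__":
    store = read_manifest_store(sys.argv[1])
    if store is None:
        print("No C2PA metadata found.")
    else:
        # "active_manifest" names the most recent claim in the store.
        print("Active manifest:", store.get("active_manifest"))
```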
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 2.2
Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.
QRE 2.2.1
Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.
Google identifies ads that violate these policies through a combination of:
- Automated mechanisms; and
- Manual reviews performed by human reviewers.
For more information on how the ad review process works, please see the ‘About the ad review process’ page.
Measure 2.3
Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.
QRE 2.3.1
Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.
SLI 2.3.1
Signatories will report quantitatively, at the Member State level, on the ads removed or prohibited from their services using procedures outlined in Measure 2.3. In the event of ads successfully removed, parties should report on the reach of violatory content and advertising.
The figures below cover actions taken under the following policy areas:
- Destination Requirements (Insufficient Original Content);
- Inappropriate Content (Dangerous or Derogatory Content, Shocking Content, Sensitive Events);
- Misrepresentation (Unacceptable Business Practices, Coordinated Deceptive Practices, Misleading Representation, Manipulated Media, Unreliable Claims, Misleading Ad Design, Clickbait Ads, Unclear Relevance, Unavailable Offers, Dishonest Pricing Practices).
| Country | Actions taken: Destination Requirements | Actions taken: Inappropriate Content | Actions taken: Misrepresentation |
|---|---|---|---|
| Austria | 7,422,101 | 60,174 | 66,717 |
| Belgium | 12,660,562 | 59,045 | 116,586 |
| Bulgaria | 6,971,115 | 88,399 | 155,851 |
| Croatia | 2,727,827 | 34,436 | 37,895 |
| Cyprus | 52,668,089 | 113,444 | 963,259 |
| Czech Republic | 22,154,687 | 309,514 | 219,848 |
| Denmark | 156,943,475 | 136,645 | 395,612 |
| Estonia | 2,021,982 | 16,377 | 108,880 |
| Finland | 2,956,655 | 43,135 | 60,524 |
| France | 196,126,998 | 540,361 | 2,367,010 |
| Germany | 131,475,890 | 955,572 | 2,443,336 |
| Greece | 2,720,410 | 30,688 | 135,403 |
| Hungary | 4,030,059 | 87,838 | 138,459 |
| Ireland | 40,613,267 | 1,040,422 | 25,643,951 |
| Italy | 55,368,074 | 328,135 | 2,220,113 |
| Latvia | 1,961,748 | 49,753 | 127,796 |
| Lithuania | 7,357,129 | 149,638 | 198,308 |
| Luxembourg | 1,904,111 | 48,285 | 639,716 |
| Malta | 2,342,282 | 3,807 | 153,093 |
| Netherlands | 75,660,484 | 540,200 | 1,733,070 |
| Poland | 19,165,056 | 714,955 | 2,112,907 |
| Portugal | 2,438,751 | 44,576 | 183,139 |
| Romania | 5,415,231 | 118,864 | 343,813 |
| Slovakia | 3,671,184 | 32,633 | 101,007 |
| Slovenia | 5,550,505 | 28,316 | 53,231 |
| Spain | 107,768,933 | 380,582 | 5,457,434 |
| Sweden | 19,021,742 | 343,419 | 248,193 |
| Iceland | 90,480 | 1,296 | 25,059 |
| Liechtenstein | 1,220,132 | 322 | 1,442 |
| Norway | 3,432,920 | 18,154 | 128,489 |
| Total EU | 949,118,347 | 6,299,213 | 46,425,151 |
| Total EEA | 953,861,879 | 6,318,985 | 46,580,141 |
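Taken together, the three policy areas account for 1,001,842,711 actions at EU level; Destination Requirements (Insufficient Original Content) alone represents roughly 95% of that total (949,118,347 actions).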
Measure 2.4
Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.
QRE 2.4.1
Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.
SLI 2.4.1
Signatories will report quantitatively, at the Member State level, on the number of appeals per their standard procedures they received from advertisers on the application of their policies and on the proportion of these appeals that led to a change of the initial policy decision.
The appeals data below covers the following policy areas:
- Destination Requirements (Insufficient Original Content);
- Inappropriate Content (Dangerous or Derogatory Content, Shocking Content, Sensitive Events);
- Misrepresentation (Unacceptable Business Practices, Coordinated Deceptive Practices, Misleading Representation, Manipulated Media, Unreliable Claims, Misleading Ad Design, Clickbait Ads, Unclear Relevance, Unavailable Offers, Dishonest Pricing Practices).
| Country | Number of Ads Appeals | Number of Successful Appeals | Number of Failed Appeals |
|---|---|---|---|
| Austria | 14,234 | 4,207 | 10,027 |
| Belgium | 18,261 | 10,279 | 7,982 |
| Bulgaria | 15,513 | 4,350 | 11,163 |
| Croatia | 5,071 | 2,472 | 2,599 |
| Cyprus | 113,836 | 39,665 | 74,171 |
| Czech Republic | 46,001 | 9,706 | 36,295 |
| Denmark | 48,601 | 32,199 | 16,402 |
| Estonia | 14,257 | 6,882 | 7,375 |
| Finland | 4,739 | 2,199 | 2,540 |
| France | 66,428 | 20,094 | 46,334 |
| Germany | 200,343 | 47,937 | 152,406 |
| Greece | 3,758 | 1,407 | 2,351 |
| Hungary | 15,212 | 5,850 | 9,362 |
| Ireland | 23,656 | 13,854 | 9,802 |
| Italy | 93,382 | 32,128 | 61,254 |
| Latvia | 5,108 | 1,555 | 3,553 |
| Lithuania | 55,362 | 23,029 | 32,333 |
| Luxembourg | 1,215 | 440 | 775 |
| Malta | 31,292 | 8,573 | 22,719 |
| Netherlands | 323,775 | 137,084 | 186,691 |
| Poland | 141,849 | 37,149 | 104,700 |
| Portugal | 14,029 | 5,704 | 8,325 |
| Romania | 34,736 | 13,454 | 21,282 |
| Slovakia | 8,169 | 6,016 | 2,153 |
| Slovenia | 32,944 | 9,524 | 23,420 |
| Spain | 130,730 | 36,745 | 93,985 |
| Sweden | 39,057 | 12,096 | 26,961 |
| Iceland | 68 | 22 | 46 |
| Liechtenstein | 1,748 | 248 | 1,500 |
| Norway | 3,172 | 1,127 | 2,045 |
| Total EU | 1,501,558 | 524,598 | 976,960 |
| Total EEA | 1,506,546 | 525,995 | 980,551 |
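Across the EU, 524,598 of 1,501,558 appeals (roughly 35%) led to a reversal of the initial policy decision; the EEA-wide proportion is essentially the same (525,995 of 1,506,546).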
Commitment 3
Relevant Signatories involved in buying, selling and placing digital advertising commit to exchange best practices and strengthen cooperation with relevant players, expanding to organisations active in the online monetisation value chain, such as online e-payment services, e-commerce platforms and relevant crowd-funding/donation systems, with the aim to increase the effectiveness of scrutiny of ad placements on their own services.
We signed up to the following measures of this commitment
Measure 3.1, Measure 3.2, Measure 3.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
If yes, list these implementation measures here
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Measure 3.1
Relevant Signatories will cooperate with platforms, advertising supply chain players, source-rating services, services that provide indicators of trustworthiness, fact-checking organisations, advertisers and any other actors active in the online monetisation value chain, to facilitate the integration and flow of information, in particular information relevant for tackling purveyors of harmful Disinformation, in full respect of all relevant data protection rules and confidentiality agreements.
QRE 3.1.1
Signatories will outline how they work with others across industry and civil society to facilitate the flow of information that may be relevant for tackling purveyors of harmful Disinformation.
Measure 3.2
Relevant Signatories will exchange among themselves information on Disinformation trends and TTPs (Tactics, Techniques, and Procedures), via the Code Task-force, GARM, IAB Europe, or other relevant fora. This will include sharing insights on new techniques or threats observed by Relevant Signatories, discussing case studies, and other means of improving capabilities and steps to help remove Disinformation across the advertising supply chain - potentially including real-time technical capabilities.
QRE 3.2.1
Signatories will report on their discussions within fora mentioned in Measure 3.2, being mindful of not disclosing information that is confidential and/or that may be used by malicious actors to circumvent the defences set by Signatories and others across the advertising supply chain. This could include, for instance, information about the fora Signatories engaged in; about the kinds of information they shared; and about the learnings they derived from these exchanges.
Measure 3.3
Relevant Signatories will integrate the work of or collaborate with relevant third-party organisations, such as independent source-rating services, services that provide indicators of trustworthiness, fact-checkers, researchers, or open-source investigators, in order to reduce monetisation of Disinformation and avoid the dissemination of advertising containing Disinformation.
QRE 3.3.1
Signatories will report on the collaborations and integrations relevant to their work with organisations mentioned.
Crisis and Elections Response
Elections 2024
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
Overview
Ahead of the 2024 elections, Google worked to:
- Safeguard its platforms;
- Inform voters by surfacing high-quality information;
- Equip campaigns and candidates with best-in-class security features and training; and
- Help people navigate AI-generated content.
Mitigations in place
Across Google, various teams support democratic processes by connecting people to election information, such as practical tips on how to register to vote, or by providing high-quality information about candidates. In 2024, a number of key elections took place around the world. In June 2024, voters across the 27 Member States of the European Union went to the polls to elect Members of the European Parliament (MEPs). In H2 2024, voters also cast their ballots in the Romanian presidential election and in the second round of the French legislative election. Google was committed to supporting these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse, and equipping campaigns with best-in-class security tools and training. Across these efforts, Google also had an increased focus on the role of artificial intelligence (AI) and the part it can play in the misinformation landscape, while also leveraging AI models to augment its abuse-fighting efforts.
- Enforcing Google policies and using AI models to fight abuse at scale: Google has long-standing policies that inform how it approaches areas like manipulated media, hate and harassment, and incitement to violence, along with policies around demonstrably false claims that could undermine democratic processes, for example in YouTube’s Community Guidelines and its Political Content Policies for advertisers. To help enforce these policies, Google’s AI models are enhancing its abuse-fighting efforts. With recent advances in Google’s Large Language Models (LLMs), Google is building faster and more adaptable enforcement systems that enable it to remain nimble and take action even more quickly when new threats emerge.
- Working with the wider ecosystem: Since Google’s inaugural contribution of €25 million to help launch the European Media & Information Fund, an effort designed to strengthen media literacy and information quality across Europe, 70 projects have been funded across 24 countries so far. Google also supports numerous civil society, research and media literacy efforts from partners, including the Civic Resilience Initiative, Baltic Centre for Media Excellence, CEDMO and more.
- Ads disclosures: Google expanded its Political Content Policies to require advertisers to disclose when their election ads include synthetic content that inauthentically depicts real or realistic-looking people or events. Google’s ads policies already prohibit the use of manipulated media to mislead people, like deep fakes or doctored content.
- Content labels on YouTube: YouTube’s Misinformation Policies prohibit technically manipulated content that misleads users and could pose a serious risk of egregious harm. YouTube also requires creators to disclose when they have created realistic altered or synthetic content, and displays a label indicating to viewers that the content they are watching is synthetic. For sensitive content, including election-related content, that contains realistic altered or synthetic material, the label appears on the video itself as well as in the video description.
- A responsible approach to Generative AI products: In line with its principled and responsible approach to its Generative AI products like Gemini, Google has prioritised testing across safety risks ranging from cybersecurity vulnerabilities to misinformation and fairness. Out of an abundance of caution on such an important topic, Google is restricting the types of election-related queries for which Gemini will return responses.
- Provide users with additional context: 'About This Image' in Search helps people assess the credibility and context of images found online.
- Digital watermarking: SynthID, a tool in beta from Google DeepMind, directly embeds a digital watermark into AI-generated images, audio, text, or video. Google recently expanded SynthID’s capabilities to watermark AI-generated text in the Gemini app and web experience, as well as video in Veo, Google’s recently announced and most capable generative video model (see the sketch after this list).
- Industry collaboration: Google joined the C2PA coalition and standard, a cross-industry effort to help provide more transparency and context for people on AI-generated content. Alongside other leading tech companies, Google also pledged to help prevent deceptive AI-generated imagery, audio or video content from interfering with 2024’s global elections. The ‘Tech Accord to Combat Deceptive Use of AI in 2024 Elections’ is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters.
- Voting details and Election Results on Google Search: Google put in place a ‘How to Vote’ and ‘How to Register’ feature for the national parliamentary elections in France, which featured aggregated voting information from the French Electoral Commission on Google Search.
- High-quality Information on YouTube: For news and information related to elections, YouTube’s systems prominently surface high-quality content on the YouTube homepage, in search results, and in the ‘Up Next’ panel. YouTube also displays information panels at the top of search results and below videos to provide additional context. For example, YouTube may surface various election information panels above search results or on videos related to election candidates, parties or voting.
- Ongoing transparency on Election Ads: All advertisers who wish to run election ads in the EU on Google’s platforms are required to go through a verification process and have an in-ad disclosure that clearly shows who paid for the ad. These ads are published in Google’s Political Ads Transparency Report, where anyone can look up information such as how much was spent and where it was shown. Google also limits how advertisers can target election ads.
- Security tools for campaign and election teams: Google offers free services like its Advanced Protection Program — Google’s strongest set of cyber protections — and Project Shield, which provides unlimited protection against Distributed Denial of Service (DDoS) attacks. Google also partners with Possible, The International Foundation for Electoral Systems (IFES) and Deutschland sicher im Netz (DSIN) to scale account security training and to provide security tools including Titan Security Keys, which defend against phishing attacks and prevent bad actors from accessing users’ Google Accounts.
- Tackling coordinated influence operations: Google’s Threat Intelligence Group helps identify, monitor and tackle emerging threats, ranging from coordinated influence operations to cyber espionage campaigns against high-risk entities. Google reports on actions taken in its quarterly bulletin, and meets regularly with government officials and others in the industry to share information on threats and suspected election interference. Mandiant also helps organisations build holistic election security programmes and harden their defences with comprehensive solutions, services and tools, including proactive exposure management, proactive intelligence threat hunts, cyber crisis communication services and threat intelligence tracking of information operations. A recent publication from the team gives an overview of the global election cybersecurity landscape, designed to help election organisations tackle a range of potential threats.
- Helpful resources at euelections.withgoogle: Google launched an EU-specific hub at euelections.withgoogle with resources and trainings to help campaigns connect with voters and manage their security and digital presence. In advance of the European Parliamentary elections in 2019, Google conducted in-person and online security training for more than 2,500 campaign and election officials, and, for the 2024 EU Parliamentary elections, Google built on these numbers by directly reaching 3,500 campaigners through in-person trainings and briefings on election integrity and tackling misinformation across the region.
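As a concrete illustration of the SynthID text watermarking referenced above: Google DeepMind has open-sourced the technique, and it is integrated into Hugging Face `transformers` (from v4.46) as `SynthIDTextWatermarkingConfig`. The sketch below assumes that integration; the model name is interchangeable, and the keys are placeholder demo values rather than anyone's production watermarking keys.

```python
# Sketch: applying SynthID-style text watermarking at generation time,
# using the open-sourced integration in Hugging Face transformers (>=4.46).
# The keys below are placeholder demo values; real deployments keep their
# key sequence secret, since the same keys are needed later for detection.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

model_name = "google/gemma-2-2b"  # any causal LM works for this sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

watermark_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29,
          590, 639, 13, 715, 468, 990, 966, 226, 324, 585],
    ngram_len=5,  # watermark statistics are computed over 5-token n-grams
)

inputs = tokenizer(["The weather today is"], return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermark_config,
    do_sample=True,   # the watermark is applied during sampling
    max_new_tokens=50,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```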
Crisis 2024
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
War in Ukraine
- Continued online services manipulation and coordinated influence operations;
- Advertising and monetisation linked to state-backed disinformation concerning Russia and Ukraine;
- Threats to security and protection of digital infrastructure.
Israel-Gaza conflict
In response to the conflict, Google's efforts have focused on:
- Humanitarian and relief efforts;
- Supporting Israeli tech firms and Palestinian businesses; and
- Platforms and partnerships to protect its services from coordinated influence operations, hate speech, and graphic and terrorist content.
Mitigations in place
War in Ukraine
In response to the war in Ukraine, Google has worked to:
- Elevate access to high-quality information across Google services;
- Protect Google users from harmful disinformation;
- Continue to monitor and disrupt cyber threats;
- Explore ways to provide assistance to support the affected areas more broadly.
Israel-Gaza conflict
- Natal - Israel Trauma and Resiliency Centre: In the early days of the war, calls to Natal’s support hotline went from around 300 a day to 8,000 a day. With Google’s funding, Natal was able to scale its support to patients by 450%, including multidisciplinary treatment and mental and psychosocial support to direct and indirect victims of trauma due to terror and war in Israel.
- International Medical Corps (IMC): As of October 2024, Google’s support helped fund the delivery of two mobile operating theatres, doubling the surgical capacity of IMC’s field hospital and enabling it to provide over 210,000 health consultations and well over 7,000 (often lifesaving) surgeries, as well as other support, such as access to safe drinking water for nearly 200,000 people.