Google Ads

Report March 2025

Submitted

Your organisation description

Advertising

Commitment 1

Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.

We signed up to the following measures of this commitment

Measure 1.1, Measure 1.2, Measure 1.3, Measure 1.5, Measure 1.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 1.3

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.

QRE 1.3.1

Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.

Google sets a high bar for information quality on services that involve advertising and content monetisation. Given that bad actors may seek to make money by spreading harmful content, raising the bar for monetisation can also diminish their incentives to misuse Google services. For example, Google prohibits deceptive behaviour on Google advertising products.

Google Ads also provides advertisers with additional controls that help them exclude types of content which, while compliant with AdSense policies, may not fit their brand or business. These controls let advertisers apply content filters or exclude certain types of content or terms from their video, display, and search ad campaigns. Advertisers can exclude content in categories such as politics, news, sports, beauty, and fashion, among many others. These categories are listed in the Google Ads Help Centre.
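
For illustration only, the sketch below shows how a category exclusion of this kind might be applied programmatically with the Google Ads API Python client (the google-ads package). The specific content label used (TRAGEDY, i.e. 'Tragedy and conflict') and all identifiers are assumptions for the example, not a description of Google's internal systems.

```python
# Illustrative sketch only (assumption: the google-ads Python client and the
# public Google Ads API). It excludes one content category from a campaign.
from google.ads.googleads.client import GoogleAdsClient

def exclude_content_label(client: GoogleAdsClient, customer_id: str,
                          campaign_id: str) -> str:
    """Attach a negative content-label criterion to a campaign."""
    campaign_path = client.get_service("CampaignService").campaign_path(
        customer_id, campaign_id
    )
    operation = client.get_type("CampaignCriterionOperation")
    criterion = operation.create
    criterion.campaign = campaign_path
    criterion.negative = True  # an exclusion, not a targeting criterion
    criterion.content_label.type_ = client.enums.ContentLabelTypeEnum.TRAGEDY

    service = client.get_service("CampaignCriterionService")
    response = service.mutate_campaign_criteria(
        customer_id=customer_id, operations=[operation]
    )
    return response.results[0].resource_name

# Usage (placeholder IDs; requires a configured google-ads.yaml):
# client = GoogleAdsClient.load_from_storage("google-ads.yaml")
# exclude_content_label(client, "1234567890", "9876543210")
```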

Measure 1.5

Relevant Signatories involved in the reporting of monetisation activities inclusive of media platforms, ad networks, and ad verification companies will take the necessary steps to give industry-recognised relevant independent third-party auditors commercially appropriate and fair access to their services and data in order to: - First, confirm the accuracy of first party reporting relative to monetisation and Disinformation, seeking alignment with regular audits performed under the DSA. - Second, accreditation services should assess the effectiveness of media platforms' policy enforcement, including Disinformation policies.

QRE 1.5.1

Signatories that produce first party reporting will report on the access provided to independent third-party auditors as outlined in Measure 1.5 and will link to public reports and results from such auditors, such as MRC Content Level Brand Safety Accreditation, TAG Brand Safety certifications, or other similarly recognised industry accepted certifications.

Google participates in audits conducted by independent accreditation organisations such as the Media Rating Council (MRC), and maintains its MRC accreditation through annual audit cycles.

The current MRC accreditation certifies that:

  • Google Ads display and search click measurement methodology and AdSense ad-serving technologies adhere to industry standards for click measurement.
  • Google Ads video impression and video viewability measurement, as reported in the Video Viewability Report, adheres to industry standards for video impression and viewability measurement.
  • The processes supporting these technologies are accurate. This applies to Google’s measurement technology across all device types (desktop, mobile, and tablet), in both browser and mobile app environments.

For more information about what this accreditation means, please see this help page.

QRE 1.5.2

Signatories that conduct independent accreditation via audits will disclose areas of their accreditation that have been updated to reflect needs in Measure 1.5.

See response to QRE 1.5.1.

Measure 1.6

Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals: - To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information e.g., on the sources of Disinformation campaigns to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies. - Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation. - Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar that they do not release commercially sensitive information or divulge trade secrets, and that they establish a mechanism for customer feedback and appeal.

QRE 1.6.1

Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.

Note: The QRE response below has been reproduced (in some instances truncated to meet the suggested character limit) from the previous report, as there is no new information to share at this time.

Google Ads provides its advertising partners with features that enable them to control where their ads appear, the formats in which their ads run, and the audiences they reach.

Since April 2021, advertisers have been able to use dynamic exclusion lists that can be updated seamlessly and continuously over time. These lists can be created by advertisers themselves or by a third party they trust, such as brand safety organisations and industry groups. Once advertisers upload a dynamic exclusion list to their Google Ads account, they can schedule automatic updates as new web pages or domains are added, ensuring that their exclusion lists remain effective and up to date.
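
As a hedged illustration of the underlying exclusion-list mechanics, the sketch below uses the Google Ads API Python client to create a negative placement list and attach it to a campaign; the dynamic, auto-updating behaviour described above is configured in the Google Ads interface and is not shown. All names and IDs are placeholders.

```python
# Illustrative sketch (assumption: google-ads Python client) of building a
# placement exclusion list and attaching it to a campaign.
from google.ads.googleads.client import GoogleAdsClient

def create_exclusion_list(client: GoogleAdsClient, customer_id: str,
                          campaign_id: str, domains: list[str]) -> None:
    # 1) Create a shared set to hold the excluded placements.
    ss_op = client.get_type("SharedSetOperation")
    ss_op.create.name = "Brand safety exclusions (illustrative)"
    ss_op.create.type_ = client.enums.SharedSetTypeEnum.NEGATIVE_PLACEMENTS
    shared_set = client.get_service("SharedSetService").mutate_shared_sets(
        customer_id=customer_id, operations=[ss_op]
    ).results[0].resource_name

    # 2) Add each domain to the list as a negative placement.
    crit_ops = []
    for domain in domains:
        op = client.get_type("SharedCriterionOperation")
        op.create.shared_set = shared_set
        op.create.placement.url = domain
        crit_ops.append(op)
    client.get_service("SharedCriterionService").mutate_shared_criteria(
        customer_id=customer_id, operations=crit_ops
    )

    # 3) Attach the list to a campaign so its ads avoid those placements.
    css_op = client.get_type("CampaignSharedSetOperation")
    css_op.create.campaign = client.get_service("CampaignService").campaign_path(
        customer_id, campaign_id
    )
    css_op.create.shared_set = shared_set
    client.get_service("CampaignSharedSetService").mutate_campaign_shared_sets(
        customer_id=customer_id, operations=[css_op]
    )
```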

QRE 1.6.2

Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.

Not relevant for Google Ads (intended for Signatories that purchase ads).

QRE 1.6.3

Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.

Not relevant for Google Ads (intended for Signatories that provide brand safety tools).

QRE 1.6.4

Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.

Not relevant for Google Ads (intended for Signatories that rate sources).

SLI 1.6.1

Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.

Not relevant for Google Ads (intended for Signatories that purchase ads).

Commitment 2

Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.

We signed up to the following measures of this commitment

Measure 2.1, Measure 2.2, Measure 2.3, Measure 2.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • In July 2024, Google updated the disclosure requirements for synthetic content under its Political Content Policy, requiring advertisers to disclose election ads that contain synthetic or digitally altered content that inauthentically depicts real or realistic-looking people or events, by selecting the checkbox in the ‘Altered or synthetic content’ section of their campaign settings. Google then generates an in-ad disclosure based on that checkbox for certain ad formats.
  • After joining the Coalition for Content Provenance and Authenticity (C2PA), a cross-industry effort to help provide more transparency and context for people on AI-generated content, in February 2024, Google announced that it had begun integrating C2PA metadata into its ads systems. Google aims to use C2PA signals to inform how it enforces key policies; an illustrative sketch of reading such metadata follows below.
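
For context, the following sketch illustrates what consuming C2PA provenance signals can involve: it walks a C2PA manifest store (already decoded to JSON, for example by an open-source C2PA SDK) and checks for a generative-AI provenance marker. The JSON shape follows the public C2PA specification; the helper itself is a hypothetical illustration, not Google's integration.

```python
# Illustrative sketch of inspecting a C2PA manifest store, already decoded to
# a JSON dict (e.g. via an open-source C2PA SDK). Field names follow the
# public C2PA specification; this is not Google's internal integration.

def asset_is_ai_generated(manifest_store: dict) -> bool:
    """Return True if the active manifest declares a generative-AI action."""
    active_id = manifest_store.get("active_manifest")
    manifest = manifest_store.get("manifests", {}).get(active_id, {})
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            # The spec marks generative-AI media with this digitalSourceType.
            if "trainedAlgorithmicMedia" in str(action.get("digitalSourceType", "")):
                return True
    return False
```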

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 2.2

Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.

QRE 2.2.1

Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.

Note: The QRE response below has been reproduced (in some instances truncated to meet the suggested character limit) from the previous report, as there is no new information to share at this time.

All newly created ads, and ads edited by users, are reviewed for policy violations. The review of new ads is performed by one, or a combination, of the following:
  • Automated mechanisms; and
  • Manual reviews performed by human reviewers.

For more information on how the ad review process works, please see the ‘About the ad review process’ page.
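
As a purely generic illustration of how automated and manual review can be combined (not a description of Google's actual systems), the sketch below auto-approves clearly compliant ads, auto-rejects clear violations, and holds ambiguous cases for human review. All names, terms, and thresholds are hypothetical.

```python
# Hypothetical two-stage ad review pipeline combining automated and manual
# review. All names, terms, and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class ReviewResult:
    approved: bool
    reason: str

def automated_score(ad_text: str) -> float:
    """Stand-in for an automated policy classifier (0 = clean, 1 = violating)."""
    flagged_terms = ("guaranteed cure", "miracle results")  # illustrative only
    return 1.0 if any(t in ad_text.lower() for t in flagged_terms) else 0.1

def review_ad(ad_text: str, approve_below: float = 0.3,
              reject_above: float = 0.9) -> ReviewResult:
    score = automated_score(ad_text)
    if score < approve_below:
        return ReviewResult(True, "auto-approved")
    if score > reject_above:
        return ReviewResult(False, "auto-rejected: policy violation")
    # Ambiguous cases are queued for human review; the ad cannot serve
    # until a trained reviewer decides.
    return ReviewResult(False, "pending manual review")

print(review_ad("Miracle results overnight!"))  # auto-rejected
```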

Measure 2.3

Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.

QRE 2.3.1

Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.

See response to QRE 2.2.1. 

SLI 2.3.1

Signatories will report quantitatively, at the Member State level, on the ads removed or prohibited from their services using procedures outlined in Measure 2.3. In the event of ads successfully removed, parties should report on the reach of violatory content and advertising.

Number of own-initiative actions taken on advertisements that affect the availability, visibility, and accessibility of information provided by recipients of Google Ads services, by EEA Member State billing country and policy, in H2 2024 (1 July 2024 to 31 December 2024). These actions include enforcement against ads and ad assets that violate any of the policy topics in scope for reporting.

Content moderation actions taken at Google’s ‘own initiative’ are considered to be actions taken because the content violates Google Ads policies, or where the content is illegal but action is not taken in response to an Article 9 order or Article 16 notice, as defined by the Digital Services Act (DSA). These can encompass both proactive and reactive enforcement actions. Proactive enforcement takes place when Google employees, algorithms, or contractors flag potentially policy-violating content. Reactive enforcement takes place in response to external notifications, such as user policy flags or legal complaints.
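
To make the definition concrete, the hypothetical helper below encodes the decision rule described above; the field names are illustrative and do not reflect any actual Google data model.

```python
# Hypothetical encoding of the DSA 'own initiative' definition above.
# All parameter names are illustrative.

def is_own_initiative(violates_ads_policy: bool,
                      is_illegal: bool,
                      under_article_9_order: bool,
                      under_article_16_notice: bool) -> bool:
    """An action counts as 'own initiative' if the content violates Google Ads
    policies, or is illegal but was not actioned under an Article 9 order or
    an Article 16 notice (DSA)."""
    if violates_ads_policy:
        return True
    return is_illegal and not (under_article_9_order or under_article_16_notice)
```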

To ensure a safe and positive experience for users, Google requires that advertisers comply with all applicable laws and regulations in addition to the Google Ads policies. Ads, assets, destinations, and other content that violate Google Ads policies can be blocked on the Google Ads platform and associated networks.

Ad or asset disapproval
Ads and assets that do not follow Google Ads policies will be disapproved. A disapproved ad will not be able to run until the policy violation is fixed and the ad is reviewed.

Account suspension
Google Ads Accounts may be suspended if Google finds violations of its policies or the Terms and Conditions.

For more information on what happens when an ad or account violates Google Ads policies, please see the 'What happens if you violate our policies' page.

Policies in scope: 
  • Destination Requirements (Insufficient Original Content); 
  • Inappropriate Content (Dangerous or Derogatory Content, Shocking Content, Sensitive Events);
  • Misrepresentation (Unacceptable Business Practices, Coordinated Deceptive Practices, Misleading Representation, Manipulated Media, Unreliable Claims, Misleading Ad Design, Clickbait Ads, Unclear Relevance, Unavailable Offers, Dishonest Pricing Practices).

Country | Number of actions taken: Destination Requirements | Number of actions taken: Inappropriate Content | Number of actions taken: Misrepresentation
Austria 7,422,101 60,174 66,717
Belgium 12,660,562 59,045 116,586
Bulgaria 6,971,115 88,399 155,851
Croatia 2,727,827 34,436 37,895
Cyprus 52,668,089 113,444 963,259
Czech Republic 22,154,687 309,514 219,848
Denmark 156,943,475 136,645 395,612
Estonia 2,021,982 16,377 108,880
Finland 2,956,655 43,135 60,524
France 196,126,998 540,361 2,367,010
Germany 131,475,890 955,572 2,443,336
Greece 2,720,410 30,688 135,403
Hungary 4,030,059 87,838 138,459
Ireland 40,613,267 1,040,422 25,643,951
Italy 55,368,074 328,135 2,220,113
Latvia 1,961,748 49,753 127,796
Lithuania 7,357,129 149,638 198,308
Luxembourg 1,904,111 48,285 639,716
Malta 2,342,282 3,807 153,093
Netherlands 75,660,484 540,200 1,733,070
Poland 19,165,056 714,955 2,112,907
Portugal 2,438,751 44,576 183,139
Romania 5,415,231 118,864 343,813
Slovakia 3,671,184 32,633 101,007
Slovenia 5,550,505 28,316 53,231
Spain 107,768,933 380,582 5,457,434
Sweden 19,021,742 343,419 248,193
Iceland 90,480 1,296 25,059
Liechtenstein 1,220,132 322 1,442
Norway 3,432,920 18,154 128,489
Total EU 949,118,347 6,299,213 46,425,151
Total EEA 953,861,879 6,318,985 46,580,141
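
As a consistency check on the table above, the EEA totals equal the EU totals plus the three reported EFTA countries (Iceland, Liechtenstein, Norway) in every policy column:

```python
# Arithmetic check on the reported rows: Total EEA = Total EU + EFTA countries.
eu = (949_118_347, 6_299_213, 46_425_151)
efta = {
    "Iceland": (90_480, 1_296, 25_059),
    "Liechtenstein": (1_220_132, 322, 1_442),
    "Norway": (3_432_920, 18_154, 128_489),
}
eea = tuple(sum(col) for col in zip(eu, *efta.values()))
assert eea == (953_861_879, 6_318_985, 46_580_141)  # matches the EEA row
```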

Measure 2.4

Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.

QRE 2.4.1

Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.

Note: The QRE response below has been reproduced (in some instances truncated to meet the suggested character limit) from the previous report, as there is no new information to share at this time.

Notification
Ads that do not follow Google Ads policies will be disapproved or (if appropriate) limited in where and when they can show. This is shown in the ‘Status’ column as ‘Disapproved’ or ‘Eligible (limited)’, and the ad may not be able to run until the policy violation is fixed and the ad is re-reviewed. Hovering the cursor over the ad’s status reveals additional information, including the policy violation affecting the ad. For more information on how to fix a disapproved ad, see the external Help Centre page.

Appeal process
Advertisers have multiple options and pathways to appeal a policy decision directly from their Google Ads account, for instance via the 'ads and assets' table, the Policy Manager, or the Disapproved Ads and Policy Questions form. For more information about the appeal process, see the Help Centre page. For account suspensions, advertisers can also appeal by following the 'submit an appeal' process.

SLI 2.4.1

Signatories will report quantitatively, at the Member State level, on the number of appeals per their standard procedures they received from advertisers on the application of their policies and on the proportion of these appeals that led to a change of the initial policy decision.

Number of content moderation complaints received from advertisers located in EEA Member States during H2 2024 (1 July 2024 to 31 December 2024), broken down by EEA Member State and by complaint outcome. Advertiser complaints were received via Google Ads' standardised path for appealing policy decisions.

Complaint outcomes include ‘initial decision upheld’ and ‘initial decision reversed’. An ‘initial decision’ refers to the first enforcement of Google’s terms of service or product policies. These decisions may be reversed in light of additional information provided by the appellant as part of an appeal, or upon additional automated or manual review of the content.

Policies in scope:
  • Destination Requirements (Insufficient Original Content);
  • Inappropriate Content (Dangerous or Derogatory Content, Shocking Content, Sensitive Events);
  • Misrepresentation (Unacceptable Business Practices, Coordinated Deceptive Practices, Misleading Representation, Manipulated Media, Unreliable Claims, Misleading Ad Design, Clickbait Ads, Unclear Relevance, Unavailable Offers, Dishonest Pricing Practices).

Country | Number of Ads Appeals | Number of Successful Appeals | Number of Failed Appeals
Austria 14,234 4,207 10,027
Belgium 18,261 10,279 7,982
Bulgaria 15,513 4,350 11,163
Croatia 5,071 2,472 2,599
Cyprus 113,836 39,665 74,171
Czech Republic 46,001 9,706 36,295
Denmark 48,601 32,199 16,402
Estonia 14,257 6,882 7,375
Finland 4,739 2,199 2,540
France 66,428 20,094 46,334
Germany 200,343 47,937 152,406
Greece 3,758 1,407 2,351
Hungary 15,212 5,850 9,362
Ireland 23,656 13,854 9,802
Italy 93,382 32,128 61,254
Latvia 5,108 1,555 3,553
Lithuania 55,362 23,029 32,333
Luxembourg 1,215 440 775
Malta 31,292 8,573 22,719
Netherlands 323,775 137,084 186,691
Poland 141,849 37,149 104,700
Portugal 14,029 5,704 8,325
Romania 34,736 13,454 21,282
Slovakia 8,169 6,016 2,153
Slovenia 32,944 9,524 23,420
Spain 130,730 36,745 93,985
Sweden 39,057 12,096 26,961
Iceland 68 22 46
Liechtenstein 1,748 248 1,500
Norway 3,172 1,127 2,045
Total EU 1,501,558 524,598 976,960
Total EEA 1,506,546 525,995 980,551
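
From the totals above, the overall share of appeals that reversed the initial decision can be derived, roughly 34.9% across both the EU and the EEA:

```python
# Derived from the table above: share of appeals that reversed the decision.
total_eu, successful_eu = 1_501_558, 524_598
total_eea, successful_eea = 1_506_546, 525_995

print(f"EU reversal rate:  {successful_eu / total_eu:.1%}")    # ~34.9%
print(f"EEA reversal rate: {successful_eea / total_eea:.1%}")  # ~34.9%
assert total_eu == successful_eu + 976_960  # successes + failures add up
```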

Commitment 3

Relevant Signatories involved in buying, selling and placing digital advertising commit to exchange best practices and strengthen cooperation with relevant players, expanding to organisations active in the online monetisation value chain, such as online e-payment services, e-commerce platforms and relevant crowd-funding/donation systems, with the aim to increase the effectiveness of scrutiny of ad placements on their own services.

We signed up to the following measures of this commitment

Measure 3.1, Measure 3.2, Measure 3.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 3.1

Relevant Signatories will cooperate with platforms, advertising supply chain players, source-rating services, services that provide indicators of trustworthiness, fact-checking organisations, advertisers and any other actors active in the online monetisation value chain, to facilitate the integration and flow of information, in particular information relevant for tackling purveyors of harmful Disinformation, in full respect of all relevant data protection rules and confidentiality agreements.

QRE 3.1.1

Signatories will outline how they work with others across industry and civil society to facilitate the flow of information that may be relevant for tackling purveyors of harmful Disinformation.

Google Advertising works across industry and with civil society to facilitate the flow of information relevant to tackling disinformation. For example, Google participates in the dedicated Working Groups of the EU Code of Practice on Disinformation Permanent Task-force. These Working Groups, which focus on Integrity of Services, Crisis Response, and Advertising, bring together civil society and Industry Signatories to discuss relevant trends and technological developments.

Measure 3.2

Relevant Signatories will exchange among themselves information on Disinformation trends and TTPs (Tactics, Techniques, and Procedures), via the Code Task-force, GARM, IAB Europe, or other relevant fora. This will include sharing insights on new techniques or threats observed by Relevant Signatories, discussing case studies, and other means of improving capabilities and steps to help remove Disinformation across the advertising supply chain - potentially including real-time technical capabilities.

QRE 3.2.1

Signatories will report on their discussions within fora mentioned in Measure 3.2, being mindful of not disclosing information that is confidential and/or that may be used by malicious actors to circumvent the defences set by Signatories and others across the advertising supply chain. This could include, for instance, information about the fora Signatories engaged in; about the kinds of information they shared; and about the learnings they derived from these exchanges.

Note: The QRE response below has been reproduced (in some instances truncated to meet the suggested character limit) from the previous report, as there is no new information to share at this time.

Google takes part in the EU Code of Practice on Disinformation Permanent Task-force’s Working Groups on Crisis Response, Integrity of Services, and Advertising, as mentioned in response to QRE 3.1.1. In addition, as Google has publicly communicated, Google’s Threat Analysis Group (TAG) continues to engage with other Industry Signatories to the Code in order to stay abreast of cross-platform deceptive practices, such as operations leveraging fake or impersonated accounts.

Measure 3.3

Relevant Signatories will integrate the work of or collaborate with relevant third-party organisations, such as independent source-rating services, services that provide indicators of trustworthiness, fact-checkers, researchers, or open-source investigators, in order to reduce monetisation of Disinformation and avoid the dissemination of advertising containing Disinformation.

QRE 3.3.1

Signatories will report on the collaborations and integrations relevant to their work with organisations mentioned.

Google Advertising frequently engages with third-party organisations to explain, collect feedback on, and improve Google Advertising policies. Google Advertising has also exchanged views with experts at numerous policy roundtables, conferences, and workshops, both in Brussels and in EU national capitals.

Please also see QRE 3.1.1 for additional information on collaboration with third-party organisations and government entities.

Crisis and Elections Response

Elections 2024

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated


Overview
In elections and other democratic processes, people want access to high-quality information and a broad range of perspectives. High-quality information helps people make informed decisions when voting and counteracts abuse by bad actors. Consistent with its broader approach to elections around the world, during the various elections across the EU in H2 2024, Google was committed to supporting this democratic process by surfacing high-quality information to voters, safeguarding its platforms from abuse, and equipping campaigns with best-in-class security tools and training.

To do so, Google will continue its efforts in 2025 to: 
  • Safeguard its platforms;
  • Inform voters by surfacing high-quality information;
  • Equip campaigns and candidates with best-in-class security features and training; and
  • Help people navigate AI-generated content.

Mitigations in place


Across Google, various teams support democratic processes by connecting people to election information, such as practical tips on how to register to vote, and by providing high-quality information about candidates. In 2024, a number of key elections took place around the world. In June 2024, voters across the 27 Member States of the European Union took to the polls to elect Members of European Parliament (MEPs). In H2 2024, voters also cast their ballots in the Romanian presidential election and in the second round of the French legislative election. Google was committed to supporting these democratic processes by surfacing high-quality information to voters, safeguarding its platforms from abuse, and equipping campaigns with best-in-class security tools and training. Across these efforts, Google also has an increased focus on the role of artificial intelligence (AI) and the part it can play in the misinformation landscape, while leveraging AI models to augment Google’s abuse-fighting efforts.

Safeguarding Google platforms and disrupting the spread of misinformation
To better secure its products and prevent abuse, Google continues to enhance its enforcement systems and to invest in Trust & Safety operations — including at its Google Safety Engineering Centre (GSEC) for Content Responsibility in Dublin, dedicated to online safety in Europe and around the world. Google also continues to partner with the wider ecosystem to combat misinformation. 
  • Enforcing Google policies and using AI models to fight abuse at scale: Google has long-standing policies that inform how it approaches areas like manipulated media, hate and harassment, and incitement to violence — along with policies around demonstrably false claims that could undermine democratic processes, for example in YouTube’s Community Guidelines and its Political Content Policies for advertisers. To help enforce Google policies, Google’s AI models are enhancing its abuse-fighting efforts. With recent advances in Google’s Large Language Models (LLMs), Google is building faster and more adaptable enforcement systems that enable it to remain nimble and take action even more quickly when new threats emerge.
  • Working with the wider ecosystem: Since Google’s inaugural contribution of €25 million to help launch the European Media & Information Fund, an effort designed to strengthen media literacy and information quality across Europe, 70 projects have been funded across 24 countries so far. Google also supports numerous civil society, research and media literacy efforts from partners, including the Civic Resilience Initiative, Baltic Centre for Media Excellence, CEDMO and more.

Helping people navigate AI-generated content
Like any emerging technology, AI presents new opportunities as well as challenges. For example, generative AI makes it easier than ever to create new content, but it can also raise questions about the trustworthiness of information. Google put in place a number of policies and other measures that helped people navigate AI-generated content. Overall, harmful altered or synthetic political content did not appear to be widespread on Google’s platforms. Measures that helped mitigate that risk include:
  • Ads disclosures: Google expanded its Political Content Policies to require advertisers to disclose when their election ads include synthetic content that inauthentically depicts real or realistic-looking people or events. Google’s ads policies already prohibit the use of manipulated media to mislead people, like deep fakes or doctored content.
  • Content labels on YouTube: YouTube’s Misinformation Policies prohibit technically manipulated content that misleads users and could pose a serious risk of egregious harm — and YouTube requires creators to disclose when they have created realistic altered or synthetic content, and will display a label that indicates for people when the content they are watching is synthetic. For sensitive content, including election related content, that contains realistic altered or synthetic material, the label appears on the video itself and in the video description.
  • A responsible approach to Generative AI products: In line with its principled and responsible approach to its Generative AI products like Gemini, Google has prioritised testing across safety risks ranging from cybersecurity vulnerabilities to misinformation and fairness. Out of an abundance of caution on such an important topic, Google is restricting the types of election-related queries for which Gemini will return responses.
  • Providing users with additional context: 'About This Image' in Search helps people assess the credibility and context of images found online.
  • Digital watermarking: SynthID, a tool in beta from Google DeepMind, directly embeds a digital watermark into AI-generated images, audio, text, or video. Google recently expanded SynthID’s capabilities to watermark AI-generated text in the Gemini app and web experience, as well as video in Veo, Google’s recently announced and most capable generative video model.
  • Industry collaboration: Google joined the C2PA coalition and standard, a cross-industry effort to help provide more transparency and context for people on AI-generated content. Alongside other leading tech companies, Google also pledged to help prevent deceptive AI-generated imagery, audio or video content from interfering with this year’s global elections. The ‘Tech Accord to Combat Deceptive Use of AI in 2024 Elections’ is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters.

Informing voters by surfacing high-quality information
In the build-up to elections, people need useful, relevant and timely information to help them navigate the electoral process. Here are some of the ways Google makes it easy for people to find what they need, and which were deployed during elections that took place across the EU in 2024: 
  • Voting details and Election Results on Google Search: Google put in place ‘How to Vote’ and ‘How to Register’ features for the national parliamentary elections in France, surfacing aggregated voting information from the French Electoral Commission on Google Search.
  • High-quality Information on YouTube: For news and information related to elections, YouTube’s systems prominently surface high-quality content on the YouTube homepage, in search results, and in the ‘Up Next’ panel. YouTube also displays information panels at the top of search results and below videos to provide additional context. For example, YouTube may surface various election information panels above search results or on videos related to election candidates, parties, or voting.
  • Ongoing transparency on Election Ads: All advertisers who wish to run election ads in the EU on Google’s platforms are required to go through a verification process and include an in-ad disclosure that clearly shows who paid for the ad. These ads are published in Google’s Political Ads Transparency Report, where anyone can look up information such as how much was spent and where it was shown (an illustrative query of this public data follows below). Google also limits how advertisers can target election ads.
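
As referenced in the list above, the sketch below shows how anyone might query the public transparency data programmatically. It assumes the publicly documented BigQuery dataset bigquery-public-data.google_political_ads and its advertiser_stats schema, which may change over time.

```python
# Illustrative sketch: querying Google's public political-ads transparency
# data in BigQuery (assumption: the bigquery-public-data.google_political_ads
# dataset and its advertiser_stats columns remain as publicly documented).
from google.cloud import bigquery

client = bigquery.Client()  # requires a GCP project with BigQuery enabled
query = """
    SELECT advertiser_name, spend_usd
    FROM `bigquery-public-data.google_political_ads.advertiser_stats`
    ORDER BY spend_usd DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(f"{row.advertiser_name}: ${row.spend_usd:,}")
```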

Equipping campaigns and candidates with best-in-class security features and training
As elections come with increased cybersecurity risks, Google works hard to help high-risk users, such as campaigns and election officials, civil society and news sources, improve their security in light of existing and emerging threats, and to educate them on how to use Google’s products and services. 
  • Security tools for campaign and election teams: Google offers free services like its Advanced Protection Program — Google’s strongest set of cyber protections — and Project Shield, which provides unlimited protection against Distributed Denial of Service (DDoS) attacks. Google also partners with Possible, The International Foundation for Electoral Systems (IFES) and Deutschland sicher im Netz (DSIN) to scale account security training and to provide security tools including Titan Security Keys, which defend against phishing attacks and prevent bad actors from accessing users’ Google Accounts.
  • Tackling coordinated influence operations: Google’s Threat Intelligence Group helps identify, monitor and tackle emerging threats, ranging from coordinated influence operations to cyber espionage campaigns against high-risk entities. Google reports on actions taken in its quarterly bulletin, and meets regularly with government officials and others in the industry to share threat information and suspected election interference. Mandiant also helps organisations build holistic election security programs and harden their defences with comprehensive solutions, services and tools, including proactive exposure management, proactive intelligence threat hunts, cyber crisis communication services and threat intelligence tracking of information operations. A recent publication from the team gives an overview of the global election cybersecurity landscape, designed to help election organisations tackle a range of potential threats.
  • Helpful resources at euelections.withgoogle: Google launched an EU-specific hub at euelections.withgoogle with resources and trainings to help campaigns connect with voters and manage their security and digital presence. In advance of the European Parliamentary elections in 2019, Google conducted in-person and online security training for more than 2,500 campaign and election officials, and, for the 2024 EU Parliamentary elections, Google built on these numbers by directly reaching 3,500 campaigners through in-person trainings and briefings on election integrity and tackling misinformation across the region.

Google is committed to working with government, industry and civil society to protect the integrity of elections in the European Union — building on its commitments made in the EU Code of Practice on Disinformation. 

Crisis 2024

[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].

Threats observed or anticipated


War in Ukraine

Overview
The ongoing war in Ukraine has continued throughout 2024, and Google continues to help by providing cybersecurity and humanitarian assistance, and providing high-quality information to people in the region. The following list outlines the main threats observed by Google during this conflict:

  1. Continued online services manipulation and coordinated influence operations;
  2. Advertising and monetisation linked to state-backed disinformation concerning Russia and Ukraine;
  3. Threats to security and protection of digital infrastructure.

Israel-Gaza conflict

Overview
Following the outbreak of the Israel-Gaza conflict, Google has actively worked to support humanitarian and relief efforts, ensure its platforms and partnerships are responsive to the crisis, and counter the threat of disinformation. Google identified several areas of focus for addressing the ongoing crisis:

  • Humanitarian and relief efforts;
  • Supporting Israeli tech firms and Palestinian businesses; and
  • Platforms and partnerships to protect Google’s services from coordinated influence operations, hate speech, and graphic and terrorist content.

Mitigations in place


War in Ukraine

The following sections summarise Google’s main strategies and actions taken to mitigate the identified threats and react to the war in Ukraine.

1. Online services manipulation and malign influence operations
Google’s Threat Analysis Group (TAG) is helping Ukraine by monitoring the threat landscape in Eastern Europe and disrupting coordinated influence operations from Russian threat actors. Google has also announced new long-term partnerships across Central and Eastern Europe.

In the Baltics, Google entered into long-term partnerships with the Civic Resilience Initiative and the Baltic Centre for Media Excellence. These two organisations have received €1.3 million in funding from Google to build on their impactful work towards increasing media literacy, building further resilience and actively tackling disinformation in Lithuania, Latvia and Estonia. Furthermore, Google is partnering with the Charles University in Prague, the main research centre of the Central European Digital Media Observatory (CEDMO) project, and providing €1 million in funding for CEDMO to further expand its research into information disorders, and work to increase the level of media and digital literacy in Poland, Czechia and Slovakia.

2. Advertising and monetisation linked to Russia and Ukraine disinformation
By H2 2024, Google had paused the majority of commercial activities in Russia, including ads serving in Russia; ads on Google’s properties and networks globally for all Russian-based advertisers; new Cloud sign-ups; the payments functionality for most of Google’s services; AdSense ads on state-funded media sites; and monetisation features for YouTube viewers in Russia. Due to the war in Ukraine, Google paused ads containing content that exploits, dismisses, or condones the war. In addition, Google paused the ability of Russia-based publishers to monetise with AdSense, AdMob, and Ad Manager in August 2024. Free Google services such as Search, Gmail and YouTube are still operating in Russia. Google will continue to closely monitor developments.

3. Threats to security and protection of digital infrastructure
Google expanded eligibility for Project Shield, Google’s free protection against Distributed Denial of Service (DDoS) attacks, shortly after the war in Ukraine broke out. The expansion aimed to allow Ukrainian government websites and embassies worldwide to stay online and continue to offer their critical services. Since then, Google has continued to implement protections for users and track and disrupt cyber threats. 

TAG has been tracking threat actors, both before and during the war, and sharing their findings publicly and with law enforcement. TAG’s findings have shown that government-backed actors from Russia, Belarus, China, Iran, and North Korea have been targeting Ukrainian and Eastern European government and defence officials, military organisations, politicians, NGOs, and journalists, while financially motivated bad actors have also used the war as a lure for malicious campaigns. 

Google is continuing to provide critical cybersecurity and technical infrastructure support by donating 50,000 new Google Workspace licences to the Ukrainian government. By providing these licences and a year of free access to Google Workspace solutions, including Google’s cloud-first, zero-trust security model, Google can help provide Ukrainian public institutions with the security and protection they need to deal with constant threats to their digital systems. In February 2023, Google also announced an extension of the free access to premium Google Workspace for Education features for 250 universities and colleges until the end of August 2023.

Google aims to continue to follow the following approach when responding to future crisis situations: 
  • Elevate access to high-quality information across Google services;
  • Protect Google users from harmful disinformation;
  • Continue to monitor and disrupt cyber threats;
  • Explore ways to provide assistance to support the affected areas more broadly.

Future measures
Google is continually making investments in products, programs and partnerships to help fight disinformation, both in Ukraine and globally. Google will continue to monitor the situation and take additional action as needed.


Israel-Gaza conflict

Humanitarian and relief efforts
Google.org provided $6 million in funding: $3 million to Israeli organisations focused on mental health support, and $3 million to organisations in Gaza focused on humanitarian aid and relief, including $1 million to Save the Children, $1 million to the Palestinian Red Crescent, and $1 million to International Medical Corps (IMC). Specifically, Google’s humanitarian and relief efforts with these organisations include:
  • Natal - Israel Trauma and Resiliency Centre: In the early days of the war, calls to Natal’s support hotline went from around 300 a day to 8,000 a day. With Google’s funding, Natal was able to scale its support to patients by 450%, providing multidisciplinary treatment and mental and psychosocial support to direct and indirect victims of trauma due to terror and war in Israel.
  • International Medical Corps (IMC): As of October 2024, Google’s support helped fund the delivery of two mobile operating theatres, doubling the surgical capacity of IMC’s field hospital and enabling it to provide over 210,000 health consultations and well over 7,000 (often lifesaving) surgeries, as well as other support such as access to safe drinking water for nearly 200,000 people.

In addition, Google employees directed more than $11 million in funding, including employee donations and matching funds from Google.org, to organisations providing aid and support in Israel and Gaza.

Supporting Israeli tech firms and Palestinian businesses
Across Europe and Israel, Google is committed to supporting startups as they work at the forefront of innovation, striving to solve some of the most critical issues facing the world. These pioneering startups and businesses often struggle to access the support, expertise and tools they need to scale. In light of the Israel-Gaza conflict, Google is investing $8 million to support Israeli tech firms and Palestinian businesses. Of that investment, Google is providing $4 million to support Israeli AI startups, with access to Google's knowledge, expertise (e.g. Cloud support) and mentorship opportunities in Israel, and $4 million to support Palestinian startups and businesses. In addition, Google has announced that it will provide loans and grants to 1,000 Palestinian small businesses in partnership with local and global non-profit organisations, and will also provide seed grants to 50 Palestinian tech startups, with the aim of preserving 4,500 jobs and creating additional job opportunities.

Platforms and partnerships
As the conflict continues, Google is committed to tackling misinformation, hate speech, graphic content and terrorist content by continuing to find ways to provide support through its products. For example, Google has deployed language capabilities to support emergency efforts, including emergency translations and localising Google content to help users, businesses and NGOs. Google has also pledged to help its partners in these extraordinary circumstances: when schools closed in October 2023, the Ministry of Education in Israel used Meet as its core teach-from-home platform, and Google provided support. Google has been in touch with Gaza-based partners and participants in its Palestine Launchpad program, its digital skills and entrepreneurship program for Palestinians, to try to support those who have been significantly impacted by this crisis.