Microsoft

Report March 2025

Microsoft welcomes the opportunity to file this fifth report on our compliance with the commitments of the strengthened 2022 EU Code of Practice on Disinformation, covering the second half of 2024. At Microsoft, we are committed to instilling trust and security across our products and services, and across the broader web. We recognize that information integrity is a key element in empowering users to access the information they need and freely express themselves. We also recognize that there is no one-size-fits-all approach to this work. Instead, a whole-of-society strategy is needed, one that recognizes that not all services or platforms are the same and that a variety of efforts can be effective in improving the information environment and empowering the public.

One opportunity is to continue employing AI as a resource to assist and streamline the important work of detecting and assessing cyber-enabled foreign influence operations. At the same time, the harmful use of AI poses challenges in the information integrity space, as malicious threat actors continue to build their capacity to create highly deceptive images and videos efficiently. This requires continuous improvement and response to changing tactics. Microsoft is fully committed to utilising best-in-class tools and technology to help mitigate the risk of its services being misused.

Microsoft is taking a cross-product, whole-of-company approach to ensure the responsible implementation of AI. This starts with our Responsible AI Principles. Building on those principles in June of 2022, Microsoft released our Responsible AI Standard v.2 and Information Integrity Principles to help set baseline standards and guidance across product teams. Recognizing that there is an important role for government, academia and civil society to play in the responsible deployment of AI, we also created a roadmap for the governance of AI across the world as well as creating a vision for the responsible advancement of AI, both inside Microsoft and throughout the world, including specifically in Europe. For more information on Microsoft’s commitment to Responsible AI and ongoing internal and external efforts, we encourage you to review our Responsible AI hub, which offers a range of information, tools, and resources related to the ethical and responsible use of AI technologies. It includes detailed information about Microsoft’s internal Responsible AI processes and tools which can be used to responsibly develop and deploy AI products, including our first annual Responsible AI Transparency Report. In addition, Microsoft recently released a white paper focused on policy steps that can be taken to reduce the harms of abusive AI-generated content.

Serving as a leader in AI research, we are committed to proactively publicizing our threat detection efforts for the benefit of the AI community, regulators, and broader society. As such, we have adopted six focus areas to combat the harmful use of deceptive AI:
  1. A strong safety architecture
  2. Durable media provenance and watermarking
  3. Safeguarding our services from abusive content and conduct
  4. Robust collaboration across industry and with governments and civil society
  5. Modernized legislation to protect people from the abuse of technology
  6. Public awareness and education

Additionally, we will continue to build upon these approaches to Responsible AI. For example, recognizing both the enormous potential for generative and other forms of AI to transform the world of work in positive ways and the potential risks AI presents in that context, LinkedIn published its framework of Responsible AI Principles, which is inspired by and aligned with Microsoft’s Responsible AI Principles. LinkedIn provides more details on these principles in our response to Commitment 15.

Since our last report, Microsoft has continued to work with EU Member States and EU institutions to protect elections from cyber enabled influence operations by malicious threat actors. As part of that work, Microsoft and LinkedIn, along with 25 other companies, continued efforts to meet the commitments of the Tech Accord to Combat Deceptive Use of AI in 2024 Elections (Tech Accord). We believe the success of the Tech Accord and our work together have contributed to the limited impact of deceptive AI-generated election content throughout the elections across the European Union in 2024.

Meeting the Tech Accord’s commitments made it more difficult for malicious threat actors to use legitimate tools to create deceptive AI-generated election content, while simultaneously simplifying the process for users to identify authentic content. To meet its Tech Accord commitments, Microsoft moved forward with several important initiatives that are detailed further in this report. For example:
  • Microsoft is harnessing the data science and technical capabilities of our AI for Good Lab and Microsoft Threat Analysis Center (MTAC) teams to better assess whether abusive content—including that created and disseminated by malicious threat actors—is synthetic or not. Microsoft AI for Good has been improving our detection model (image, video) to assess whether media was generated by AI. The model is trained on approximately 200,000 examples of AI and real content. AI for Good continues to invest in creating sample datasets representing the latest generative AI technology. When appropriate, the team calls on the expertise of Microsoft’s Digital Crimes Unit to invest in and operationalize the early detection of AI-powered criminal activity and respond appropriately, through the filing of affirmative civil actions to disrupt and deter that activity and through threat intelligence programs and data sharing with customers and government.
  • As part of our commitments related to public awareness and engagement, Microsoft ran a campaign titled Check. Recheck. Vote. containing a series of public messages and stood up an AI and Elections website focused on engaging voters about the risks of deceptive AI and where to find authoritative election information. This campaign ran across the EU, UK, and the US in the lead up to major elections. Globally, the campaign reached hundreds of millions of people, with millions interacting with the content, connecting them with official election information.
  • We developed a dedicated web portal – Microsoft-2024 Elections – where political candidates and election authorities can report a concern about a deepfake of themselves or the election process that would violate our policy on deceptive AI-generated content.
  • In advance of elections across the EU, we kicked off a global effort to engage campaigns and elections authorities to deepen understanding of the possible risks of deceptive AI in elections and empower those campaigns and election officials to speak directly to their voters about the steps they can take to build resilience and increase confidence in the election. In 2024, we delivered nearly 200 training sessions for political stakeholders in 25 countries, reaching over 4,300 participants. This includes almost fifty separate training events with over 500 participants across the EEA, including in France prior to the parliamentary elections.
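To illustrate the kind of binary "AI-generated vs. authentic" classification described in the first bullet above, the following is a minimal, self-contained sketch. It trains a simple logistic-regression classifier on synthetic two-dimensional feature vectors; the features, data, and scale are invented for illustration and bear no relation to the actual AI for Good detection model or its 200,000-example training set.

```python
import math
import random

def make_dataset(n=200, seed=0):
    # Hypothetical two-dimensional "image features" (stand-ins for, e.g.,
    # frequency-domain statistics); labels: 1 = AI-generated, 0 = authentic.
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        label = rng.randint(0, 1)
        centre = (0.8, 0.7) if label == 1 else (0.2, 0.3)
        features = [c + rng.gauss(0, 0.05) for c in centre]
        data.append((features, label))
    return data

def train(data, lr=0.5, epochs=300):
    # Plain stochastic gradient descent on a logistic-regression objective.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(AI-generated)
            g = p - y                        # gradient of the log-loss
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

def predict(model, x):
    w, b = model
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0

model = train(make_dataset())
holdout = make_dataset(n=50, seed=1)
accuracy = sum(predict(model, x) == y for x, y in holdout) / len(holdout)
```

On this cleanly separable toy data the sketch reaches high holdout accuracy; production detectors instead rely on deep networks trained on large labelled media corpora and must contend with adversarial evasion.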

Microsoft is committed to advancing information integrity and believes that including content credentials is an important driver for this. We were a founding member of the Coalition for Content Provenance and Authenticity (C2PA). To achieve transparency, support information integrity, and empower our users, we are leveraging C2PA’s “content credentials” open standard across several products. For example, since 15 May 2024, content containing the “Content Integrity” technology has been automatically labelled on LinkedIn, with users beginning to see the “Cr” icon on images and videos that contain C2PA metadata.
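As an illustration of how a service might decide to surface the "Cr" icon, the sketch below checks an uploaded asset record for C2PA provenance metadata. The field names ("c2pa_manifest", "claims", "claim_generator") are hypothetical and do not reflect LinkedIn's actual data model or the C2PA wire format.

```python
# Illustrative only: the asset schema ("c2pa_manifest", "claims") is
# hypothetical, not LinkedIn's real data model or the C2PA binary format.
def should_show_cr_icon(asset: dict) -> bool:
    """Return True when an uploaded image or video carries C2PA
    provenance metadata with at least one claim, i.e. when the
    "Cr" Content Credentials icon should be displayed."""
    manifest = asset.get("c2pa_manifest")
    return bool(manifest) and bool(manifest.get("claims"))

labelled = should_show_cr_icon(
    {"c2pa_manifest": {"claims": [{"claim_generator": "Example Camera 1.0"}]}}
)
unlabelled = should_show_cr_icon({"c2pa_manifest": None})
```

In practice a verifier must also validate the manifest's cryptographic signatures before trusting its claims; presence of metadata alone is not proof of provenance.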

During the reporting period, Microsoft continued piloting Content Integrity Tools, which allowed users to add content credentials to their own authentic content. Designed as a pilot program primarily to support the 2024 election cycle and gather feedback about Content Credentials-enabled tools, the tools were available during the reporting period to political campaigns in the EU, as well as to elections authorities and select news media organizations in the EU and globally. These tools included a partnership and collaboration with fellow Tech Accord signatory TruePic. Announced in April 2024, this collaboration leveraged TruePic’s mobile camera SDK, enabling campaign, election, and media participants to capture authentic images, videos, and audio directly from a vetted and secure device. The resulting “Content Integrity Capture App” (an app that makes it easy to capture images with C2PA-enabled signing) launched on both Android and Apple platforms and can be used by participants in the Content Integrity Tools pilot program.

Beyond our commitment to combat deceptive use of AI during the electoral process, we implemented additional actions safeguarding candidates, election campaigns, election authorities, and voters:
  • Microsoft’s Campaign Success Team supported political parties and campaigns around the world to navigate the world of AI, combat the spread of cyber influence campaigns, and protect the authenticity of their own content and images.
  • Microsoft’s Election Communications Hub continued to support democratic governments around the world as they build secure and resilient election processes.
  • Microsoft established a Virtual Situation Room, bringing together resources across the company to monitor, support, and protect elections in France and the UK.
  • Bing Search implemented a multifaceted approach to election integrity and integrated specialised answers and information panels for the elections across the European Union, with a link to official sources of information, which included voting information relevant to each EU Member State.

Microsoft continued its work with other trusted third parties as part of a larger effort to empower Microsoft users to access the trusted information they are seeking. Microsoft also announced $2M in societal resilience grants with OpenAI, and several organizations benefited from the grants during this reporting period. Additionally, WITNESS received a grant to improve journalists’ ability to counter AI threats to elections. Training sessions were conducted ahead of the 2024 elections in Ghana, Georgia, and Venezuela, reaching 250 global participants. Microsoft's collaboration with WITNESS also includes co-leading the Deepfakes Rapid Response Force.

  • Microsoft continues to provide pro-bono advertising space across Microsoft surfaces to disseminate media literacy campaigns, averaging 50 million impressions per month. Beginning in March 2024 and continuing through Fall 2024, Microsoft ran a new “Be Informed, Not Misled” campaign from the News Literacy Project. Microsoft also continues its partnership with the Trust Project, amplifying its campaign to build audience literacy in evaluating the credibility of the content they encounter.
  • In May 2024, Microsoft, in collaboration with OpenAI, launched the Societal Resilience Grants to support various organizations in promoting AI literacy, ethical AI use, and societal resilience against AI-related challenges. The grants were awarded to the Older Adults Technology Services from AARP, International IDEA, Partnership on AI, Coalition for Content Provenance and Authenticity (C2PA), and WITNESS. These initiatives have reached national election bodies in 26 countries, 500,000 older adults, and 250 global journalists, demonstrating a comprehensive approach to addressing AI threats and fostering responsible AI practices.

These initiatives underscore Microsoft's commitment to fostering a resilient and informed society in the age of AI. These grants build on an existing effort by Microsoft to support media, AI, and information literacy globally. We have continued our work with leading news and media literacy nonprofits, including the News Literacy Project (NLP), a collaboration led by The Trust Project on the Trust Indicators, and Verified, to develop campaigns built on industry research and best practices. Microsoft provided funding for the research and development of public awareness and education campaigns and supported partners with threat intelligence insights, technical expertise, and increased visibility through in-kind ad space on Microsoft platforms. Microsoft also worked to reach young learners with dynamic and entertaining content that builds knowledge and skills.

Microsoft has subscribed to the Code of Practice with the following services:
  • Bing Search is an online search engine with the primary objective of connecting users to the most relevant search results from the web. Users come to Bing with a specific research topic in mind and expect Bing to provide links to the most relevant and authoritative third-party websites on the Internet that are responsive to their search terms. Therefore, addressing misinformation or disinformation in organic search results often requires a different approach than may be appropriate for other types of online services, as over-moderation of content in search could have a significant negative impact on the right to access information, freedom of expression, and media plurality. Bing must therefore carefully balance these fundamental rights and interests as it works to ensure that its algorithms return the highest-quality content available that is relevant to the user’s queries, working to avoid causing harm to users without unduly limiting their ability to access answers to the questions they seek. In some cases, different features may require different interventions based on functionality and user expectations. While Bing’s remediation efforts may on occasion involve removal of content from search results (where legal or policy considerations warrant removal), in many cases, Bing has found that actions such as targeted ranking interventions, or additional digital literacy features such as Answers pointing to high authority sources and content provenance indicators, are more effective. Bing regularly reviews the efficacy of its measures to identify additional areas for improvement and works with internal and external subject matter experts in key policy areas to identify new threat vectors or improved mechanisms to help prevent users from being unexpectedly exposed to harmful content in search results that they did not expressly seek to find. During the Reporting Period, the nature of Bing generative AI experiences evolved.
In October 2024, Microsoft launched a separate, standalone consumer service known as Microsoft Copilot at copilot.microsoft.com, which offers conversational experiences powered by generative AI, and the Copilot in Bing (formerly known as Bing Chat) generative AI experience was phased out. Bing continues to offer generative AI experiences, such as Bing Image Creator and Bing Generative Search, which was launched this Reporting Period. Bing Generative Search utilizes AI to deliver a unique experience by not only optimizing search results but presenting information in a user-friendly, cohesive layout. Results also include citations and links that enable users to explore further and evaluate websites for themselves. For both of these AI-powered experiences, Bing has partnered closely with Microsoft’s Responsible AI team to proactively address AI-related risks and continues to evolve these features based on user and external stakeholder feedback.
  • LinkedIn is a real identity online social networking service for professionals to connect and interact with other professionals, grow their professional network and brand, and seek career development opportunities. LinkedIn is part of its members’ professional identity and has a specific purpose. Activity on the platform and content members share can be seen by current and future employers, colleagues, potential business partners and recruitment firms, among others. Given this audience, members by and large tend to limit their activity to professional areas of interest and expect the content they see to be professional in nature. LinkedIn is committed to keeping its platform safe, trusted, and professional and respects the laws that apply to its services. On joining LinkedIn, members agree to abide by LinkedIn’s User Agreement and its Professional Community Policies, which expressly forbid members from posting information that is false or misleading.
  • Microsoft Advertising is our proprietary advertising platform, which serves the vast majority of ads displayed on Bing Search and provides advertising to most other Microsoft services that display ads, as well as many third-party services. Microsoft Advertising works both with advertisers, who provide it with advertising content, and publishers, such as Bing Search, who display these advertisements on their services. Microsoft Advertising employs a distinct set of policies and enforcement measures with respect to each of these two categories of business partners to prevent the spread of disinformation, including through discouraging and reducing the dissemination and monetization of disinformation through advertising.

As a company, we continued our efforts during the reporting period to empower users to better understand the information they consume across our platforms and products. For example, Bing compiled a specialized dataset of European Parliament election-related queries in different EU languages for use by the research community and to support transparency; researchers can apply for access via a dedicated request form. Over the course of the next reporting period, we will continue to make this information transparent and public. Specifically, we will continue to focus on the following areas:
  • Further de-funding the mechanisms malicious threat actors are using to push their narratives and propaganda and regularly evaluating and improving user and advertiser policies as needed.
  • Ensuring Microsoft and LinkedIn AI products are developed consistent with Microsoft’s Responsible AI Standards and LinkedIn's Responsible AI Principles, as relevant, and that risks associated with AI systems are mitigated to provide safe, trustworthy, and ethical experiences for users and, further, ensuring that our information integrity principles are integrated into AI systems included in Microsoft products.
  • Continuing to monitor foreign information influence operations and actioning such intelligence appropriately through defensive search and other techniques. This includes working with trusted third parties Microsoft uses to inform its work detecting and disrupting these influence operations. This also includes adding trusted third parties in additional languages, ensuring global coverage for our information integrity work.
  • Strengthening our efforts and expanding our funding in the areas of media literacy and critical thinking, aiming to include vulnerable groups and provide greater language access. As part of our focus areas and commitments under the Tech Accord, we will expand our partnerships to increase AI literacy efforts and build greater understanding of provenance and other trustworthiness indicators.
  • Supporting good faith research into disinformation and broader disinformation trends and tactics.
  • Continuing to share learnings pertaining to generative AI and Responsible AI practices as products and services evolve and new threats emerge. In addition, Microsoft will continue to regularly evaluate, implement, and share best practices for addressing disinformation trends as we navigate the technological changes posed by the malicious use of AI.
  • Developing new partnerships to support EU-specific risks and continuing to explore further ways to help users evaluate content on our services.
  • Enhancing existing research tooling to provide richer data reporting and continuing to deliver relevant data and research to support research into the spread of disinformation.
  • Educating users on generative AI features, including their risks and limitations, and providing the broader public and research community with information on our approach to Responsible AI.
  • Implementing and regularly evaluating measures to support safe and democratic elections in the EU and to direct users to high authority sources of information about elections.

Unless stated otherwise, data provided under this report covers a reporting period of 1 July 2024 to 31 December 2024 (“Reporting Period”).


Elections 2024
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
2024 FRENCH PARLIAMENTARY ELECTIONS

LinkedIn
is an online professional networking site with a real identity requirement, which means that content posted by our members is visible to that member’s professional network, including colleagues, managers, and potential future employers. As a result of LinkedIn’s professional context, our members come to LinkedIn for economic opportunity and, as such, do not tend to post misinformation, nor does misinformation content gain traction on LinkedIn. Nonetheless, certain members may inadvertently post misinformation during elections.

Bing Search anticipated instances of information manipulation, with possible actor intent to manipulate search algorithms and lead users to data voids and low-authority content related to elections. As part of its regular information integrity operations, Bing detected information manipulation themes related to the 2024 French Parliamentary Election, which have been ingested to inform defensive search interventions, along with a special How to Vote answer pointing to authoritative sources.


2024 ROMANIAN PRESIDENTIAL ELECTIONS

LinkedIn
is an online professional networking site with a real identity requirement, which means that content posted by our members is visible to that member’s professional network, including colleagues, managers, and potential future employers. As a result of LinkedIn’s professional context, our members come to LinkedIn for economic opportunity and, as such, do not tend to post misinformation, nor does misinformation content gain traction on LinkedIn. Nonetheless, certain members may inadvertently post misinformation during elections.

Bing Search anticipated instances of information manipulation with possible actor intent to manipulate search algorithms and lead users to data voids and low-authority content related to elections. As part of its regular information integrity operations, Bing detected information manipulation themes related to the 2024 Romanian Presidential Election, which have been ingested to inform defensive search interventions. 
Mitigations in place
2024 FRENCH PARLIAMENTARY ELECTIONS

LinkedIn
’s Professional Community Policies expressly prohibit false and misleading content, including misinformation and disinformation, and its in-house Editorial team provides members with trustworthy content regarding global events, including French elections. LinkedIn had approximately 1,443 content moderators globally (for 24/7 coverage), with 180 content moderators located in the EU as at 31 December 2024, including specialists in a number of languages, such as French. These reviewers use policies and guidance developed by a dedicated content policy team and experienced lawyers, and work with external fact checkers as needed. When LinkedIn sees content or behaviour that violates its Professional Community Policies, it takes action, including the removal of content or the restriction of an account for repeated abusive behaviour.

Political ads are banned on LinkedIn, which includes prohibitions on ads that exploit a sensitive political issue, including European Elections. LinkedIn also does not provide a mechanism for content creators to monetise the content they post on LinkedIn.   

LinkedIn continues to mature its crisis response processes. In addition to the increase in resource allocation and process improvements, best practices include: 1) quickly coordinating with industry peers regarding the exchange of threat indicators; 2) engaging with external stakeholders regarding trends and TTPs; 3) continuously providing updated policy guidance to internal teams to assist with the removal of misinformation; and 4) continuing to proactively provide localised trustworthy information to our members. 

LinkedIn has continued to mature its crisis response playbook by continually monitoring crisis situations globally, expanding internal teams that work on crisis response, and maturing our processes to respond more efficiently and effectively to crisis situations. LinkedIn will continue to follow its processes related to the removal of misinformation, and continually increase investments in resource allocation and process improvements where necessary to respond to the demands of the crisis.  

LinkedIn also implemented a specialized intake and operations process under the Elections Working Group Rapid Response System for the French Parliamentary elections.

Bing Search takes a multifaceted approach to protecting election integrity and regularly updates its processes, policies, and practices to adapt to evolving risks, trends, and technological innovations. This approach includes: (1) defensive search interventions; (2) regular direction of users to high authority, high quality sources; (3) removal of auto suggest and related search terms considered likely to lead users to low authority content; (4) partnerships with independent organisations for threat intelligence on information manipulation, civic integrity, and nation state affiliated actors to inform potential algorithmic interventions and contribute to the broader research community; (5) special information panels and answers to direct users to high authority sources concerning elections and voting; (6) internal working groups dedicated to addressing company-wide election initiatives; (7) establishing special election-focused product feature teams; (8) conducting internal research on content provenance and elections; (9) evaluating and undertaking red-team testing for generative AI features with respect to elections; (10) ensuring Responsible AI reviews for all AI features; (11) undertaking comprehensive risk assessments related to elections and electoral processes; (12) developing and continuing to improve targeted monitoring both for web search and Bing generative AI experiences; (13) restricting generative responses for certain types of election-related content; (14) leveraging blocklists and classifiers in Bing generative AI experiences to restrict generation of images or certain types of content concerning political candidates and certain election-related topics; (15) integrating information on political parties, candidates, and elections from local election authorities (including in the EU) or high authority third party sources to inform defensive interventions and election-related product mitigations; and (16) regularly evaluating whether additional measures, metrics, or mitigations should be implemented. These measures are integrated into Bing Search and Bing generative AI experiences, along with the additional safeguards discussed at QRE 14.1.1 and QRE 14.1.2 and other measures discussed throughout this report.
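The blocklist and classifier restrictions for generative experiences described above can be illustrated with a minimal prompt-routing sketch that combines a blocklist with a keyword-based topic check. The term lists and decision labels here are invented for illustration; Bing's actual blocklists and classifiers are not public.

```python
# Hedged sketch: these term lists and decision labels are invented for
# illustration; Bing's actual blocklists and classifiers are not public.
BLOCKED_IMAGE_SUBJECTS = {"candidate a", "candidate b"}            # hypothetical
ELECTION_TOPIC_TERMS = {"election", "ballot", "vote", "polling place"}

def route_generative_prompt(prompt: str) -> str:
    """Return a handling decision for a generative AI prompt:
    'block' for disallowed subjects, 'restricted' for election-related
    topics (answered only from high-authority sources), else 'allow'."""
    text = prompt.lower()
    if any(subject in text for subject in BLOCKED_IMAGE_SUBJECTS):
        return "block"
    if any(term in text for term in ELECTION_TOPIC_TERMS):
        return "restricted"
    return "allow"

decisions = [route_generative_prompt(p) for p in (
    "draw candidate a at a rally",
    "where is my polling place",
    "draw a cat",
)]
```

A production system would layer trained classifiers over such static lists, since keyword matching alone is easy to evade with paraphrase.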

Bing also maintains an incident response process for cross-functional teams to prioritize high-risk incidents and track the investigation, fixes, and post-incident analysis. Internal escalation processes are set up to ensure urgent cases, including sensitive issues related to elections or election-related content, are addressed expediently with high priority. Bing also implemented a specialized intake and operations process under the Elections Working Group Rapid Response System and coordinates with Democracy Forward Election Hubs on incidents.

Bing also undertakes internal post-election reviews, as appropriate, to evaluate product and mitigation performance, reflect on challenges and learnings, and identify potential areas for improvement. These reviews occur both in product review settings and in broader cross-functional teams dedicated to elections at Microsoft.  

Microsoft’s Democracy Forward team continues to expand its collaborations with organizations that provide information on authoritative sources, ensuring that queries about global events will surface reputable sites. 

While not announced during the current reporting period, it is worth mentioning that in February 2024, Microsoft and LinkedIn came together with the tech sector at the Munich Security Conference to take a vital step forward against AI deepfakes, which will make it more difficult for malicious threat actors to use legitimate tools to create deepfakes. This focuses on the work of companies that create content generation tools and calls on them to strengthen the safety architecture in AI services by assessing risks and strengthening controls to help prevent abuse. This includes aspects such as ongoing red team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system. The accord brings the tech sector together to detect and respond to deepfakes in elections and will help advance transparency and build societal resilience to deepfakes in elections.

We combined this work with the launch of an expanded Digital Safety Unit. This will extend the work of our existing digital safety team, which has long addressed abusive online content and conduct that impacts children or that promotes extremist violence, among other categories. This team has special expertise in responding on a 24/7 basis to weaponized content, such as footage from mass shootings, which we act immediately to remove from our services. The accord’s commitments oblige Microsoft and the tech sector to continue to engage with a diverse set of global civil society organizations, academics, and other subject matter experts. These groups and individuals play an indispensable role in the promotion and protection of the world’s democracies.
In advance of the EU elections this summer, we kicked off a global effort to engage campaigns and elections authorities to deepen understanding of the possible risks of deceptive AI in elections and empower those campaigns and election officials to speak directly to their voters about these risks and the steps they can take to build resilience and increase confidence in the election. In 2024, we conducted nearly 200 training sessions for political stakeholders in 25 countries, reaching over 4,300 participants. This includes almost 50 separate training events with over 500 participants across the EEA, including in France prior to the parliamentary elections.

As part of Microsoft’s commitments related to public awareness and engagement, Microsoft ran a campaign titled Check. Recheck. Vote. containing a series of public messages and stood up an AI and Elections website focused on engaging voters about the risks of deceptive AI and where to find authoritative election information. This campaign ran across the EU, France, the UK, and the US in the lead-up to major elections. Globally, the campaign reached hundreds of millions of people, with millions interacting with the content, connecting them with official election information.

In addition, Microsoft is harnessing the data science and technical capabilities of our AI for Good Lab and MTAC teams to better assess whether abusive content, including content created and disseminated by foreign actors, is synthetic. The AI for Good Lab has been developing image and video detection models to assess whether media was generated or manipulated by AI. The models are trained on approximately 200,000 examples of AI-generated and real content, and the Lab continues to invest in creating sample datasets representing the latest generative AI technology. When appropriate, we call on the expertise of Microsoft’s Digital Crimes Unit to invest in and operationalize the early detection of AI-powered criminal activity and to respond appropriately, both through the filing of affirmative civil actions to disrupt and deter that activity and through threat intelligence programs and data sharing with customers and governments.

We are also empowering candidates, campaigns, and election authorities to help us detect and respond to deceptive AI targeting elections. In February 2024, we launched the Microsoft-2024 Elections site, where candidates in a national or federal election can directly report deceptive AI election content found on Microsoft consumer services. This tool allows for 24/7 reporting by impacted election entities targeted by deceptive AI on Microsoft platforms.


2024 ROMANIAN PRESIDENTIAL ELECTIONS

LinkedIn
LinkedIn’s Professional Community Policies expressly prohibit false and misleading content, including misinformation and disinformation, and its in-house Editorial team provides members with trustworthy content regarding global events, including European elections. As at 31 December 2024, LinkedIn had approximately 1,443 content moderators globally (for 24/7 coverage), with 180 content moderators located in the EU. These reviewers use policies and guidance developed by a dedicated content policy team and experienced lawyers, and work with external fact-checkers as needed. When LinkedIn sees content or behaviour that violates its Professional Community Policies, it takes action, including removing content or restricting an account for repeated abusive behaviour.

Political ads are banned on LinkedIn; this includes prohibitions on ads that exploit a sensitive political issue, including European elections. LinkedIn also does not provide a mechanism for content creators to monetise the content they post on the platform.

LinkedIn continues to mature its crisis response processes, including: 1) quickly coordinating with industry peers regarding the exchange of threat indicators; 2) engaging with external stakeholders regarding trends and TTPs; 3) continuously providing updated policy guidance to internal teams to assist with the removal of misinformation; and 4) continuing to proactively provide localised, trustworthy information to its members.

LinkedIn has continued to mature its crisis response playbook by continually monitoring crisis situations globally, expanding the internal teams that work on crisis response, and maturing its processes to respond more efficiently and effectively to crisis situations. LinkedIn will continue to follow its processes related to the removal of misinformation, and to increase investments in resource allocation and process improvements where necessary to respond to the demands of a crisis.

LinkedIn also implemented a specialized intake and operations process under the Elections Working Group Rapid Response System for the Romanian Presidential elections.

Bing Search
Bing takes a multifaceted approach to protecting election integrity and regularly updates its processes, policies, and practices to adapt to evolving risks, trends, and technological innovations. This approach includes:
(1) defensive search interventions;
(2) regular direction of users to high-authority, high-quality sources as part of the search algorithm;
(3) removal of autosuggest and related search terms considered likely to lead users to low-authority content;
(4) partnerships with independent organisations for threat intelligence on information manipulation, civic integrity, and nation-state-affiliated actors, both to inform potential algorithmic interventions and to contribute to the broader research community;
(5) special information panels and answers to direct users to high-authority sources concerning elections and voting;
(6) internal working groups dedicated to addressing company-wide election initiatives;
(7) establishing special election-focused product feature teams;
(8) conducting internal research on content provenance and elections;
(9) evaluating and undertaking red-team testing of generative AI features with respect to elections and political content;
(10) ensuring Responsible AI reviews for all AI features;
(11) undertaking comprehensive risk assessments related to elections and electoral processes;
(12) developing and continuing to improve targeted monitoring for both web search and Bing generative AI experiences;
(13) restricting generative AI responses for certain types of election-related content;
(14) leveraging blocklists and classifiers in generative AI experiences to restrict the generation of images or certain types of content concerning political candidates and certain election-related topics;
(15) integrating information on political parties, candidates, and elections from local election authorities (including in the EU) or high-authority third-party sources to inform defensive interventions and election-related product mitigations; and
(16) regularly evaluating whether additional measures, metrics, or mitigations should be implemented.
These measures are integrated into Bing Search and Bing generative AI experiences, along with the additional safeguards discussed at QRE 14.1.1 and QRE 14.1.2 and other measures discussed throughout this report.

Bing also participated in the Election Rapid Response System and in a roundtable discussion in November 2024 with EU member state authorities and the European Commission to discuss election-related learnings and general election response. Bing also undertakes internal post-election reviews, as appropriate, to evaluate product and mitigation performance, reflect on challenges and learnings, and identify potential areas for improvement. These reviews occur both in product review settings and in broader cross-functional teams dedicated to elections at Microsoft.

Bing also maintains an incident response process for cross-functional teams to prioritize high-risk incidents and to track investigations, fixes, and post-incident analysis. Internal escalation processes ensure that urgent cases, including sensitive issues related to elections or election-related content, are addressed expeditiously and with high priority. Bing also implemented a specialized intake and operations process under the Elections Working Group Rapid Response System and coordinates with Democracy Forward Election Hubs on incidents.

Throughout the reporting period, and in line with our commitments under the Tech Accord, Microsoft and LinkedIn continued to take vital steps against AI deepfakes, making it more difficult for malicious actors to use legitimate tools to create deepfakes targeting candidates, campaigns, and election authorities. This work focused on content generation tools, strengthening the safety architecture in AI services by assessing risks and strengthening controls to help prevent abuse. This includes ongoing red-team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who generate deceptive AI targeting elections.
