Transparency Centre
Commitment 34
To ensure transparency and accountability around the implementation of this Code, Relevant Signatories commit to set up and maintain a publicly available common Transparency Centre website.
We signed up to the following measures of this commitment
Measure 34.1 Measure 34.2 Measure 34.3 Measure 34.4 Measure 34.5
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Microsoft is committed to the proper functioning of the Transparency Centre website and will therefore continue its engagement in the Transparency Centre subgroup in order to assess the necessity of technical adjustments and new actions to improve the website. Microsoft will thereby contribute, where necessary, to making the website more user-friendly and easily accessible for users ahead of the next reporting period.
Commitment 35
Signatories commit to ensure that the Transparency Centre contains all the relevant information related to the implementation of the Code's Commitments and Measures and that this information is presented in an easy-to-understand manner, per service, and is easily searchable.
We signed up to the following measures of this commitment
Measure 35.1 Measure 35.2 Measure 35.3 Measure 35.4 Measure 35.5 Measure 35.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
Microsoft will upload its March 2025 Report to the Transparency Centre website in a timely manner. The report includes clear and simple information on the new or existing policies and actions that each service has implemented, based on the Subscription document applicable to this reporting period.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Within the context of the work of the Transparency Centre subgroup, Microsoft will assess the necessity of technical adjustments and contribute, where necessary, to actions aimed at making the website more user-friendly and easily accessible for users ahead of the next reporting period.
Commitment 36
Signatories commit to updating the relevant information contained in the Transparency Centre in a timely and complete manner.
We signed up to the following measures of this commitment
Measure 36.1 Measure 36.2 Measure 36.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
By uploading this report, Microsoft updated the Transparency Centre with relevant information related to its new policies and implementation actions.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Microsoft is committed to providing regular updates as set out under Measures 36.1, 36.2 and 36.3.
Measure 36.3
Signatories will update the Transparency Centre to reflect the latest decisions of the Permanent Task-force, regarding the Code and the monitoring framework.
QRE 36.1.1
With their initial implementation report, Signatories will outline the state of development of the Transparency Centre, its functionalities, the information it contains, and any other relevant information about its functioning or operations. This information can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.
The Transparency Centre was successfully launched in February 2023. We continue to upload our reports according to the approved deadlines.
QRE 36.1.2
Signatories will outline changes to the Transparency Centre's content, operations, or functioning in their reports over time. Such updates can be drafted jointly by Signatories involved in operating or adding content to the Transparency Centre.
The administration of the Transparency Centre website has been transferred fully to the community of the Code’s signatories, with VOST Europe taking the role of developer.
SLI 36.1.1
Signatories will provide meaningful quantitative information on the usage of the Transparency Centre, such as the average monthly visits of the webpage.
Website metrics
Between 1 July 2024 and 31 December 2024, the common Transparency Centre was visited by 20,255 unique visitors. The Signatories’ reports were downloaded 5,626 times by 1,275 unique visitors. More specifically, Microsoft’s previous Code of Practice report was downloaded 153 times by 82 unique users.
Permanent Task-Force
Commitment 37
Signatories commit to participate in the permanent Task-force. The Task-force includes the Signatories of the Code and representatives from EDMO and ERGA. It is chaired by the European Commission, and includes representatives of the European External Action Service (EEAS). The Task-force can also invite relevant experts as observers to support its work. Decisions of the Task-force are made by consensus.
We signed up to the following measures of this commitment
Measure 37.1 Measure 37.2 Measure 37.3 Measure 37.4 Measure 37.5 Measure 37.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
Microsoft has actively engaged in and contributed to the work of the Task-force and relevant Subgroups and Working Groups that were active during the reporting period.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Microsoft is committed to continuing its active engagement in and contribution to the Task-force and relevant Subgroups and Working Groups in the upcoming six-month period.
Measure 37.6
Signatories agree to notify the rest of the Task-force when a Commitment or Measure would benefit from changes over time as their practices and approaches evolve, in view of technological, societal, market, and legislative developments. Having discussed the changes required, the Relevant Signatories will update their subscription document accordingly and report on the changes in their next report.
QRE 37.6.1
Signatories will describe how they engage in the work of the Task-force in the reporting period, including the sub-groups they engaged with.
Microsoft has actively engaged in and contributed to all the Task-force Plenary meetings as well as to the meetings of all Subgroups and Working Groups active in the current reporting cycle under the Task-force.
As part of each Subgroup and Working Group that has taken place during the reporting period, Microsoft has actively contributed to the development of the deliverables that were collectively agreed.
Microsoft has continuously engaged with all Signatories of the Code, offering its perspectives on issues unique to its subscribed services and responding to ad-hoc inquiries related to various actions taken by its subscribed services. Microsoft appreciates the added value and insights that the Task-force has created for each Signatory individually as well as for the collective community of Signatories. Microsoft looks forward to continuing its constructive cooperation within the Code of Practice’s governance framework as relevant.
Monitoring of the Code
Commitment 38
The Signatories commit to dedicate adequate financial and human resources and put in place appropriate internal processes to ensure the implementation of their commitments under the Code.
We signed up to the following measures of this commitment
Measure 38.1
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
A dedicated cross-company team continues to ensure proper tracking of and compliance with the Code of Practice across all applicable geographical areas, consisting of relevant product team members from all subscribed services, attorneys, and members of the European Government Affairs team and the Democracy Forward Team. Budget items from across Microsoft teams have been used to ensure compliance, including ongoing investment in trusted third parties.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Not applicable
Measure 38.1
Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.
QRE 38.1.1
Relevant Signatories will outline the teams and internal processes they have in place, per service, to comply with the Code in order to achieve full coverage across the Member States and the languages of the EU.
Microsoft has a dedicated cross-company team to ensure proper tracking of and compliance with the Code of Practice across all applicable geographical areas, consisting of relevant product team members from all subscribed services as well as relevant lawyers, members of the European Government Affairs team, and the Democracy Forward Team. In addition, we implemented an internal tracking process that captures all relevant commitments, the responsible entities, and the persons responsible for compliance with the Code of Practice. Moreover, regular reviews of new product features take place to assess potential impacts and compliance under the Code.
Commitment 39
Signatories commit to provide to the European Commission, within 1 month after the end of the implementation period (6 months after this Code’s signature) the baseline reports as set out in the Preamble.
We signed up to the following measures of this commitment
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
This Commitment is only relevant for the Baseline Reports, which were provided to the European Commission in January 2023.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Not applicable
Commitment 40
Signatories commit to provide regular reporting on Service Level Indicators (SLIs) and Qualitative Reporting Elements (QREs). The reports and data provided should allow for a thorough assessment of the extent of the implementation of the Code’s Commitments and Measures by each Signatory, service and at Member State level.
We signed up to the following measures of this commitment
Measure 40.1 Measure 40.2 Measure 40.3 Measure 40.4 Measure 40.5 Measure 40.6
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Not applicable
Commitment 41
Signatories commit to work within the Task-force towards developing Structural Indicators, and publish a first set of them within 9 months from the signature of this Code; and to publish an initial measurement alongside their first full report.
We signed up to the following measures of this commitment
Measure 41.1 Measure 41.2 Measure 41.3
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
The second report on Structural Indicators by TrustLab was published in September 2024.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Not applicable
Commitment 42
Relevant Signatories commit to provide, in special situations like elections or crisis, upon request of the European Commission, proportionate and appropriate information and data, including ad-hoc specific reports and specific chapters within the regular monitoring, in accordance with the rapid response system established by the Task-force.
We signed up to the following measures of this commitment
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
During the reporting period, Microsoft has been an active participant in and contributor to the Task-force’s Elections Working Group, in particular in view of elections that took place in the EU as well as in the context of ongoing discussions towards an Elections Rapid Response System.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Microsoft will continue its participation in the Task-force’s Crisis Response Subgroup and Elections Working Group, as relevant.
Commitment 43
Relevant Signatories commit to provide, in special situations like elections or crisis, upon request of the European Commission, proportionate and appropriate information and data, including ad-hoc specific reports and specific chapters within the regular monitoring, in accordance with the rapid response system established by the Taskforce.
We signed up to the following measures of this commitment
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
Yes
If yes, list these implementation measures here
Microsoft has provided its March 2025 Report in accordance with the revised Harmonised Reporting Template and underlying methodologies as jointly developed by Signatories in the Monitoring and Reporting Subgroup under the Code’s Task-force.
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
Yes
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Microsoft will continue its active engagement in the respective Task-force Subgroups to keep the Harmonised Reporting Template and underlying methodologies up to date, where necessary in view of its experience with reporting.
Commitment 44
Relevant Signatories that are providers of Very Large Online Platforms commit, seeking alignment with the DSA, to be audited at their own expense, for their compliance with the commitments undertaken pursuant to this Code. Audits should be performed by organisations, independent from, and without conflict of interest with, the provider of the Very Large Online Platform concerned. Such organisations shall have proven expertise in the area of disinformation, appropriate technical competence and capabilities and have proven objectivity and professional ethics, based in particular on adherence to auditing standards and guidelines.
We signed up to the following measures of this commitment
In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?
No
If yes, list these implementation measures here
Not applicable
Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?
No
If yes, which further implementation measures do you plan to put in place in the next 6 months?
Not applicable
Crisis and Elections Response
Elections 2024
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
2024 FRENCH PARLIAMENTARY ELECTIONS
LinkedIn is an online professional networking site with a real identity requirement, which means that content posted by our members is visible to that member’s professional network, including colleagues, managers, and potential future employers. As a result of LinkedIn’s professional context, our members come to LinkedIn for economic opportunity, and as such, do not tend to post misinformation, nor does misinformation content gain traction on LinkedIn. Nonetheless, LinkedIn may be subject to certain members inadvertently posting misinformation during elections.
Bing Search anticipated instances of information manipulation with possible actor intent to manipulate search algorithms and lead users to data voids and low-authority content related to elections. As part of its regular information integrity operations, Bing detected information manipulation themes related to the 2024 French Parliamentary Election, which were ingested to inform defensive search interventions, along with a special How to Vote answer pointing to authoritative sources.
2024 ROMANIAN PRESIDENTIAL ELECTIONS
LinkedIn is an online professional networking site with a real identity requirement, which means that content posted by our members is visible to that member’s professional network, including colleagues, managers, and potential future employers. As a result of LinkedIn’s professional context, our members come to LinkedIn for economic opportunity, and as such, do not tend to post misinformation, nor does misinformation content gain traction on LinkedIn. Nonetheless, LinkedIn may be subject to certain members inadvertently posting misinformation during elections.
Bing Search anticipated instances of information manipulation with possible actor intent to manipulate search algorithms and lead users to data voids and low-authority content related to elections. As part of its regular information integrity operations, Bing detected information manipulation themes related to the 2024 Romanian Presidential Election, which have been ingested to inform defensive search interventions.
Mitigations in place
2024 FRENCH PARLIAMENTARY ELECTIONS
LinkedIn’s Professional Community Policies expressly prohibit false and misleading content, including
misinformation and disinformation, and its in-house Editorial team provides members with trustworthy content regarding global events, including French elections. LinkedIn had approximately 1,443 content moderators globally (for 24/7 coverage), with 180 content moderators located in the EU as of 31 December 2024, including specialists in a number of languages such as French. These reviewers use policies and guidance developed by a dedicated content policy team and experienced lawyers, and work with external fact checkers as needed. When LinkedIn sees content or behaviour that violates its Professional Community Policies, it takes action, including the removal of content or the restriction of an account for repeated abusive behaviour.
Political ads are banned on LinkedIn, which includes prohibitions on ads that exploit a sensitive political issue, including European Elections. LinkedIn also does not provide a mechanism for content creators to monetise the content they post on LinkedIn.
LinkedIn continues to mature its crisis response processes. In addition to the increase in resource allocation and process improvements, best practices include: 1) quickly coordinating with industry peers regarding the exchange of threat indicators; 2) engaging with external stakeholders regarding trends and TTPs; 3) continuously providing updated policy guidance to internal teams to assist with the removal of misinformation; and 4) continuing to proactively provide localised trustworthy information to our members.
LinkedIn has continued to mature its crisis response playbook by continually monitoring crisis situations globally, expanding internal teams that work on crisis response, and maturing our processes to respond more efficiently and effectively to crisis situations. LinkedIn will continue to follow its processes related to the removal of misinformation, and continually increase investments in resource allocation and process improvements where necessary to respond to the demands of the crisis.
LinkedIn also implemented a specialized intake and operations process under the Elections Working Group Rapid Response System for the French Parliamentary elections.
Bing Search takes a multifaceted approach to protecting election integrity and regularly updates its processes, policies, and practices to adapt to evolving risks, trends, and technological innovations. This approach includes: (1) defensive search interventions; (2) regular direction of users to high authority, high quality sources; (3) removal of auto suggest and related search terms considered likely to lead users to low authority content; (4) partnerships with independent organisations for threat intelligence on information manipulation, civic integrity and nation state affiliated actors to inform potential algorithmic interventions and contribute to the broader research community; (5) special information panels and answers to direct users to high authority sources concerning elections and voting; (6) internal working groups dedicated to addressing company-wide election initiatives; (7) establishing special election-focused product feature teams; (8) conducting internal research on content provenance and elections; (9) evaluating and undertaking red-team testing for generative AI features with respect to elections; (10) ensuring Responsible AI reviews for all AI features; (11) undertaking comprehensive risk assessments related to elections and electoral processes; (12) developing and continuing to improve targeted monitoring both for web search and Bing generative AI experiences; (13) restricting generative responses for certain types of election-related content; (14) leveraging blocklists and classifiers in Bing generative AI experiences to restrict generation of images or certain types of content concerning political candidates and certain election-related topics; (15) integrating information on political parties, candidates, and elections from local election authorities (including in the EU) or high authority third party sources to inform defensive interventions and election-related product mitigations; and (16) regularly evaluating whether additional measures, metrics, or mitigations should be implemented. These measures are integrated into Bing Search and Bing generative AI experiences, along with the additional safeguards discussed at QRE 14.1.1 and QRE 14.1.2 and other measures discussed throughout this report.
Bing also maintains an incident response process for cross-functional teams to prioritize high-risk incidents and track the investigation, fixes, and post-incident analysis. Internal escalation processes are set up to ensure urgent cases, including sensitive issues related to elections or election-related content, are addressed expeditiously with high priority. Bing also implemented a specialized intake and operations process under the Elections Working Group Rapid Response System and coordinates with Democracy Forward Election Hubs on incidents.
Bing also undertakes internal post-election reviews, as appropriate, to evaluate product and mitigation performance, reflect on challenges and learnings, and identify potential areas for improvement. These reviews occur both in product review settings and in broader cross-functional teams dedicated to elections at Microsoft.
Microsoft’s Democracy Forward team continues to expand its collaborations with organizations that provide information on authoritative sources, ensuring that queries about global events will surface reputable sites.
While not announced during the current reporting period, it is worth mentioning that in February 2024, Microsoft and LinkedIn came together with the tech sector at the Munich Security Conference to take a vital step forward against AI deepfakes, making it more difficult for malicious threat actors to use legitimate tools to create deepfakes. The accord focuses on the work of companies that create content generation tools and calls on them to strengthen the safety architecture in AI services by assessing risks and strengthening controls to help prevent abuse. This includes aspects such as ongoing red team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system. The accord brings the tech sector together to detect and respond to deepfakes in elections and will help advance transparency and build societal resilience to deepfakes in elections.
We combined this work with the launch of an expanded Digital Safety Unit. This extends the work of our existing digital safety team, which has long addressed abusive online content and conduct that impacts children or that promotes extremist violence, among other categories. This team has particular expertise in responding on a 24/7 basis to weaponized content from mass shootings, which we act immediately to remove from our services. The accord’s commitments oblige Microsoft and the tech sector to continue to engage with a diverse set of global civil society organizations, academics, and other subject matter experts. These groups and individuals play an indispensable role in the promotion and protection of the world’s democracies.
In advance of the EU elections this summer, we kicked off a global effort to engage campaigns and election authorities to deepen understanding of the possible risks of deceptive AI in elections and to empower those campaigns and election officials to speak directly to their voters about these risks and the steps they can take to build resilience and increase confidence in the election. In 2024, we conducted nearly 200 training sessions for political stakeholders in 25 countries, reaching over 4,300 participants. This includes almost 50 separate training events with over 500 participants across the EEA, including in France prior to the parliamentary elections.
As part of Microsoft’s commitments related to public awareness and engagement, Microsoft ran a campaign titled Check. Recheck. Vote., containing a series of public messages, and stood up an AI and Elections website focused on engaging voters about the risks of deceptive AI and where to find authoritative election information. This campaign ran across the EU, France, the UK, and the US in the lead-up to major elections. Globally, the campaign reached hundreds of millions of people, with millions interacting with the content, connecting them with official election information.
In addition, Microsoft is harnessing the data science and technical capabilities of our AI for Good Lab and MTAC teams to better assess whether abusive content, including that created and disseminated by foreign actors, is synthetic or not. Microsoft’s AI for Good Lab has been developing detection models (image, video) to assess whether media was generated or manipulated by AI. The model is trained on approximately 200,000 examples of AI-generated and real content. AI for Good continues to invest in creating sample datasets representing the latest generative AI technology. When appropriate, we call on the expertise of Microsoft’s Digital Crimes Unit to invest in and operationalize the early detection of AI-powered criminal activity and respond appropriately, through the filing of affirmative civil actions to disrupt and deter that activity and through threat intelligence programs and data sharing with customers and governments.
We are also empowering candidates, campaigns and election authorities to help us detect and respond to deceptive AI targeting elections. In February 2024, we launched the Microsoft-2024 Elections site where candidates in a national or federal election can directly report deceptive AI election content on Microsoft consumer services. This reporting tool allows for 24/7 reporting by impacted election entities who have been targeted by deceptive AI found on Microsoft platforms.
2024 ROMANIAN PRESIDENTIAL ELECTIONS
LinkedIn’s Professional Community Policies expressly prohibit false and misleading content, including
misinformation and disinformation, and its in-house Editorial team provides members with trustworthy content regarding global events, including European Elections. LinkedIn had approximately 1,443 content moderators globally (for 24/7 coverage), with 180 content moderators located in the EU as of 31 December 2024. These reviewers use policies and guidance developed by a dedicated content policy team and experienced lawyers, and work with external fact checkers as needed. When LinkedIn sees content or behaviour that violates its Professional Community Policies, it takes action, including the removal of content or the restriction of an account for repeated abusive behaviour.
Political ads are banned on LinkedIn, which includes prohibitions on ads that exploit a sensitive political issue, including European Elections. LinkedIn also does not provide a mechanism for content creators to monetise the content they post on LinkedIn.
LinkedIn continues to mature its crisis response processes, including 1) quickly coordinating with industry peers regarding the exchange of threat indicators; 2) engaging with external stakeholders regarding trends and TTPs; 3) continuously providing updated policy guidance to internal teams to assist with the removal of misinformation; and 4) continuing to proactively provide localised trustworthy information to our members.
LinkedIn has continued to mature its crisis response playbook by continually monitoring crisis situations globally, expanding internal teams that work on crisis response, and maturing our processes to respond more efficiently and effectively to crisis situations. LinkedIn will continue to follow its processes related to the removal of misinformation, and continually increase investments in resource allocation and process improvements where necessary to respond to the demands of the crisis.
LinkedIn also implemented a specialized intake and operations process under the Elections Working Group Rapid Response System for the Romanian Presidential elections.
Bing Search takes a multifaceted approach to protecting election integrity and regularly updates its processes, policies, and practices to adapt to evolving risks, trends, and technological innovations. This approach includes: (1) defensive search interventions; (2) regular direction of users to high authority, high quality sources as part of the search algorithm; (3) removal of auto suggest and related search terms considered likely to lead users to low authority content; (4) partnerships with independent organisations for threat intelligence on information manipulation, civic integrity and nation state affiliated actors to inform potential algorithmic interventions and contribute to broader research community; (5) special information panels and answers to direct users to high authority sources concerning elections and voting; (6) internal working groups dedicated to addressing company-wide election initiatives; (7) establishing special election-focused product feature teams; (8) conducting internal research on content provenance and elections; (9) evaluating and undertaking red-team testing for generative AI features with respect to elections and political content; (10) ensuring Responsible AI reviews for all AI features; (11) undertaking comprehensive risk assessments related to elections and electoral processes; (12) developing and continuing to improve targeted monitoring both for web search and Bing generative AI experiences; (13) restricting generative AI responses for certain types of election-related content; (14) leveraging blocklists and classifiers in generative AI experiences to restrict generation of images or certain types of content concerning political candidates and certain election-related topics; (15) integrating information on political parties, candidates, and elections from local election authorities (including in the EU) or high authority third party sources to inform defensive interventions and election-related product mitigations; and (16) regularly evaluating whether additional measures, metrics, or mitigations should be implemented. These measures are integrated into Bing Search and Bing generative AI experiences, along with the additional safeguards discussed at QRE 14.1.1 and QRE 14.1.2 and other measures discussed throughout this report.
Bing also participated in the Election Rapid Response System and roundtable discussion in November 2024 with EU member state authorities and the European Commission to discuss election-related learnings and general election response. Bing also undertakes internal post-election reviews, as appropriate, to evaluate product and mitigation performance, reflect on challenges and learnings, and identify potential areas for improvement. These reviews occur both in product review settings and in broader cross-functional teams dedicated to elections at Microsoft.
Bing also maintains an incident response process for cross-functional teams to prioritize high-risk incidents and track the investigation, fixes, and post-incident analysis. Internal escalation processes are set up to ensure urgent cases, including sensitive issues related to elections or election-related content, are addressed expeditiously with high priority. Bing also implemented a specialized intake and operations process under the Elections Working Group Rapid Response System and coordinates with Democracy Forward Election Hubs on incidents.
Throughout the reporting period, and in line with their commitments under the Tech Accord, Microsoft and LinkedIn continued to take vital steps forward against AI deepfakes, making it more difficult for malicious actors to use legitimate tools to create deepfakes targeting candidates, campaigns and election authorities. This work focused on content generation tools, strengthening the safety architecture in AI services by assessing risks and strengthening controls to help prevent abuse. This includes aspects such as ongoing red team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who generate deceptive AI targeting elections.
We combined this work with the launch of an expanded Digital Safety Unit. This extends the work of our existing digital safety team, which has long addressed abusive online content and conduct that impacts children or that promotes extremist violence, among other categories. This team has particular expertise in responding on a 24/7 basis to weaponized content from mass shootings, which we act immediately to remove from our services. The accord’s commitments oblige Microsoft and the tech sector to continue to engage with a diverse set of global civil society organizations, academics, and other subject matter experts. These groups and individuals play an indispensable role in the promotion and protection of the world’s democracies.
In advance of the EU elections this summer, we kicked off a global effort to engage campaigns and election authorities to deepen understanding of the possible risks of deceptive AI in elections and to empower those campaigns and election officials to speak directly to their voters about these risks and the steps they can take to build resilience and increase confidence in the election. This year we have conducted almost 200 training sessions for political stakeholders in 25 countries, reaching over 4,300 participants. This includes almost fifty separate training events with nearly 500 participants...
As part of our commitments related to public awareness and engagement, Microsoft ran a campaign titled Check. Recheck. Vote., containing a series of public messages, and stood up an AI and Elections website focused on engaging voters about the risks of deceptive AI and where to find authoritative election information. This campaign ran across the EU, the UK, and the US in the lead-up to major elections. Globally, the campaign reached hundreds of millions of people, with millions interacting with the content, connecting them with official election information.
In addition, Microsoft is harnessing the data science and technical capabilities of our AI for Good Lab and MTAC teams to better assess whether abusive content, including that created and disseminated by foreign actors, is synthetic or not. Microsoft’s AI for Good Lab has been developing detection models (image, video) to assess whether media was generated or manipulated by AI. The model is trained on approximately 200,000 examples of AI-generated and real content. AI for Good continues to invest in creating sample datasets representing the latest generative AI technology. When appropriate, we call on the expertise of Microsoft’s Digital Crimes Unit to invest in and operationalize the early detection of AI-powered criminal activity and respond appropriately, through the filing of affirmative civil actions to disrupt and deter that activity and through threat intelligence programs and data sharing with customers and governments.
We are also empowering candidates, campaigns and election authorities to help us detect and respond to deceptive AI targeting elections. We launched the Microsoft-2024 Elections site where candidates in a national or federal election can directly report deceptive AI election content on Microsoft consumer services. This reporting tool allows for 24/7 reporting by impacted election entities who have been targeted by deceptive AI found on Microsoft platforms.
Crisis 2024
[Note: Signatories are requested to provide information relevant to their particular response to the threats and challenges they observed on their service(s). They ensure that the information below provides an accurate and complete report of their relevant actions. As operational responses to crisis/election situations can vary from service to service, an absence of information should not be considered a priori a shortfall in the way a particular service has responded. Impact metrics are accurate to the best of signatories’ abilities to measure them].
Threats observed or anticipated
WAR OF AGGRESSION BY RUSSIA ON UKRAINE
LinkedIn is an online professional networking site with a real identity requirement, which means that content posted by our members is visible to that member’s professional network, including colleagues, managers, and potential future employers. As a result of LinkedIn’s professional context, our members do not tend to post misinformation, nor does misinformation content gain traction on LinkedIn. Nonetheless, LinkedIn may be subject to certain members inadvertently posting misinformation during crisis situations.
Microsoft Advertising, in its role as an online advertising network, may be subject to the malicious use of its advertising services through either the spreading of misleading or deceptive advertising content or the funneling of advertising revenue to sites spreading disinformation.
Bing Search has observed instances of information manipulation with possible actor intent to manipulate search algorithms and lead users to data voids and low-authority content related to the Russia-Ukraine war. Themes included narratives involving Ukrainian immigrants in different countries, specific countries’ support for Ukraine (often in the context of local elections), etc.
ISRAEL-HAMAS CONFLICT
LinkedIn is an online professional networking site with a real identity requirement, which means that content posted by our members is visible to that member’s professional network, including colleagues, managers, and potential future employers. As a result of LinkedIn’s professional context, our members come to LinkedIn for economic opportunity, and as such, do not tend to post misinformation, nor does misinformation content gain traction on LinkedIn. Nonetheless, LinkedIn may be subject to certain members inadvertently posting misinformation during crisis situations.
Bing Search has observed instances of data void manipulation to show low-authority content to unsuspecting users related to the Israel-Hamas conflict. This type of search algorithm manipulation could potentially be used as a tactic to spread disinformation. Other themes observed have included foreign influence operations speculating on the evolution of conflict in the area; alleged relations between Ukraine and Hamas; and information manipulation on military operations or the impact of the conflict.
Mitigations in place
WAR OF AGGRESSION BY RUSSIA ON UKRAINE
Microsoft has been actively involved in identifying and helping counter Russia’s cyber and influence operations aimed against Ukraine. In addition to supporting nonprofits, journalists, and academics within Ukraine, Microsoft’s Threat Analysis Center (MTAC) team closely tracks cyber-enabled influence operations. MTAC analysts focused on Europe/Eurasia report on a wide range of Russian influence tactics used to malign or diminish support for Ukraine: propaganda and disinformation published across different languages; people-to-people and party-to-party engagement; real-world provocations; and those that blend cyber and influence activity, like hack-and-leak campaigns. MTAC’s work includes analysing the ways these methods are leveraged to target audiences in Central and Eastern Europe.
In June 2022, Microsoft issued its "Defending Ukraine" report, followed by a further report in December 2022, both of which detailed the relentless and destructive Russian cyberattacks and influence operations that we have directly observed in the hybrid war Russia is waging against Ukraine. Microsoft followed those reports with a report in March 2023 outlining how Russia was regrouping for additional offensive measures against Ukraine, including cyber and influence operations, and a report in December 2023 assessing Russian influence and cyber operations, including Russia’s anti-Ukraine messaging to Israel and elsewhere. In February 2024, Microsoft and OpenAI issued a threat report on activity by adversaries utilizing AI capabilities. This report identified recent Russian activity, including activities targeting Ukraine.
LinkedIn’s Professional Community Policies expressly prohibit false and misleading content, including
misinformation and disinformation, and its in-house Editorial team provides members with trustworthy content regarding global events, including the war in Ukraine. LinkedIn had approximately 1,443 content moderators globally (for 24/7 coverage), with 180 content moderators located in the EU as of 31 December 2024, including specialists in a number of languages such as English, German, French, Russian, and Ukrainian. These reviewers use policies and guidance developed by a dedicated content policy team and experienced lawyers, and work with external fact checkers as needed. When LinkedIn sees content or behaviour that violates its Professional Community Policies, it takes action, including the removal of content or the restriction of an account for repeated abusive behaviour. LinkedIn has been banned in Russia since 2016 and has implemented the European bans on Russian state media. In addition to not operating in Russia, political ads are banned on LinkedIn, which includes prohibitions on ads that exploit a sensitive political issue, including the current Russia-Ukraine war. LinkedIn also does not provide a mechanism for content creators to monetise the content they post on LinkedIn.
LinkedIn continues to mature its crisis response processes including 1) quickly coordinating with industry peers regarding the exchange of threat indicators; 2) engaging with external stakeholders regarding trends and TTPs; 3) continuously providing updated policy guidance to internal teams to assist with the removal of misinformation; and 4) continuing to proactively provide localised trustworthy information to our members.
LinkedIn has continued to mature its crisis response playbook by continually monitoring crisis situations globally, expanding internal teams that work on crisis response, and maturing our processes to respond more efficiently and effectively to crisis situations. LinkedIn will continue to follow its processes related to the removal of misinformation, and continually increase investments in resource allocation and process improvements where necessary to respond to the demands of the crisis.
Bing Search has implemented the following measures: (1) defensive search interventions; (2) regular direction of users to high authority, high quality sources as part of search algorithms; (3) removal of auto suggest and related search terms considered likely to lead users to low authority content as part of moderation; (4) authority demotion of identified nation state affiliated information manipulation actor domains; and (5) partnerships with independent organizations to maintain threat intelligence and inform potential algorithmic interventions. These measures are also integrated into Bing generative AI experiences, along with the additional safeguards discussed at QRE 14.1.1 and QRE 14.1.2 and other measures discussed throughout this report.
ISRAEL-HAMAS CONFLICT
LinkedIn’s Professional Community Policies expressly prohibit false and misleading content, including misinformation and disinformation, and its in-house Editorial team provides members with trustworthy content regarding global events, including the Israel-Hamas conflict. LinkedIn had approximately 1,443 content moderators globally (for 24/7 coverage), with 180 content moderators located in the EU as of 31 December 2024, including specialists in languages supported on LinkedIn. These reviewers use policies and guidance developed by a dedicated content policy team and experienced lawyers, and work with external fact checkers as needed. When LinkedIn sees content or behaviour that violates its Professional Community Policies, it takes action, including the removal of content or the restriction of an account for repeated abusive behaviour. Political ads are banned on LinkedIn, which includes prohibitions on ads that exploit a sensitive political issue, including the current Israel-Hamas conflict. LinkedIn also does not provide a mechanism for content creators to monetise the content they post on LinkedIn.
LinkedIn continues to mature its crisis response processes including 1) quickly coordinating with industry peers regarding the exchange of threat indicators; 2) engaging with external stakeholders regarding trends and TTPs; 3) continuously providing updated policy guidance to internal teams to assist with the removal of misinformation; and 4) continuing to proactively provide localised trustworthy information to our members.
LinkedIn has continued to mature its crisis response playbook by continually monitoring crisis situations globally, expanding internal teams that work on crisis response, and maturing our processes to respond more efficiently and effectively to crisis situations. LinkedIn will continue to follow its processes related to the removal of misinformation, and continually increase investments in resource allocation and process improvements where necessary to respond to the demands of the crisis.
Bing Search: As part of its regular practices, Bing Search employs (1) defensive search interventions; (2) regular direction of users to high authority, high quality sources as part of search algorithms; (3) removal of auto suggest and related search terms considered likely to lead users to low authority content as part of moderation; (4) authority demotion of identified nation state affiliated information manipulation actor domains; and (5) partnerships with independent organizations for threat intelligence to inform potential algorithmic interventions. These measures are also integrated into Bing generative AI features, along with the additional safeguards discussed at QRE 14.1.1 and QRE 14.1.2 and other measures discussed throughout this report.