Monday, December 23, 2024

AI is creating unparalleled opportunities for businesses of every size and across every industry. We are seeing our customers embrace AI services to drive innovation, increase productivity and solve critical problems for humanity, such as the development of breakthrough medical cures and new ways to meet the challenges of climate change.

At the same time, there are legitimate concerns about the power of the technology and the potential for it to be used to cause harm rather than benefit. It’s not surprising, in this context, that governments around the world are looking at how existing laws and regulations can be applied to AI and are considering what new legal frameworks may be needed. Ensuring the right guardrails for the responsible use of AI will not be a task for technology companies and governments alone: every organization that creates or uses AI systems will need to develop and implement its own governance systems. That’s why today we are announcing three AI Customer Commitments to assist our customers on their responsible AI journey.

[Graphic: AI Customer Commitments]

First, we will share what we are learning about developing and deploying AI responsibly and assist you in learning how to do the same. Microsoft has been on a responsible AI journey since 2017, harnessing the skills of nearly 350 engineers, lawyers and policy experts dedicated to implementing a robust governance process that guides the design, development and deployment of AI in safe, secure and transparent ways. More specifically, we are:

  • Sharing expertise: We are committed to sharing this knowledge and expertise with you by publishing the key documents we developed during this process so that you can learn from our experiences. These include our Responsible AI Standard, AI Impact Assessment Template, AI Impact Assessment Guide, Transparency Notes, and detailed primers on the implementation of our responsible AI by design approach.
  • Providing a training curriculum: We will also share the work we are doing to build a practice and culture of responsible AI at Microsoft, including key parts of the curriculum that we use to train Microsoft employees.
  • Creating dedicated resources: We will invest in dedicated resources and expertise in regions around the world to respond to your questions about deploying and using AI responsibly.

Second, we are creating an AI Assurance Program to help you ensure that the AI applications you deploy on our platforms meet the legal and regulatory requirements for responsible AI. This program will include the following elements:

  • Regulator engagement support: We have extensive experience helping customers in the public sector and highly regulated industries manage the spectrum of regulatory issues that arise when deploying information technology. In the global financial services industry, for example, we worked closely for years with both customers and regulators so that the industry could pursue digital transformation on the cloud while meeting its regulatory obligations. One lesson from that experience is the industry’s “know your customer” requirement, under which financial institutions must verify customer identities, establish risk profiles and monitor transactions to help detect suspicious activity. We believe a similar approach can apply to AI in what we are calling “KY3C,” which creates certain obligations to know one’s cloud, one’s customers and one’s content. We want to work with you to apply KY3C as part of our AI Assurance Program.

[Graphic: Know Your Customer]

  • Risk framework implementation: We will attest to how we are implementing the AI Risk Management Framework recently published by the U.S. National Institute of Standards and Technology (NIST) and will share our experience engaging with NIST’s important ongoing work in this area.
  • Customer councils: We will bring customers together in customer councils to hear their views on how we can deliver the most relevant and compliant AI technology and tools.
  • Regulatory advocacy: Finally, we’ll play an active role in engaging with governments to promote effective and interoperable AI regulation. The recently launched Microsoft blueprint for AI governance presents our proposals to governments and other stakeholders for appropriate regulatory frameworks for AI. We have made available a presentation of this blueprint by Microsoft Vice Chair and President Brad Smith and a white paper discussing it in detail.

Third, we will support you as you implement your own AI systems responsibly, and we will develop responsible AI programs for our partner ecosystem.

  • Dedicated resources: We will create a dedicated team of AI legal and regulatory experts in regions around the world as a resource for you to support your implementation of responsible AI governance systems in your businesses.
  • Partner support: Many of our partners have already created comprehensive practices to help customers evaluate, test, adopt and commercialize AI solutions, including creating their own responsible AI systems. We are launching a program with selected partners to leverage this expertise to assist our mutual customers in deploying their own responsible AI systems. Today we can announce that PwC and EY are our launch partners for this exciting program.

Ultimately, we know that these commitments are only the start, and we will have to build on them as both the technology and regulatory conditions evolve. But we are also excited by this opportunity to partner more closely with our customers as we continue on the responsible AI journey together.

Tags: AI, Responsible AI
