As 2023 begins, it’s clear that AI is playing a larger role in society, as people look to it to address global issues ranging from disease detection to natural disaster prediction. It’s also playing an important role in our company: advances in AI have made it possible to improve search, enhance photos, and translate more languages in real time than ever before.
These benefits reflect a powerful and helpful new technology, one that is core to Google products and one that must be developed thoughtfully and responsibly. That’s why, in 2018, we became one of the first companies to issue AI Principles, including explicit guardrails naming applications we will not pursue. The AI Principles offer a framework to guide our decisions on research, product design, and development, and a way to think through the many design, engineering, and operational challenges that accompany any emerging technology.
But, as we know, issuing principles is one thing; applying them is another. Recently, we published our fourth annual AI Principles Progress Update, our review of our commitment to responsibly developing emerging technologies like artificial intelligence. This report is our most comprehensive look yet at how we put the AI Principles into practice. We believe that doing so requires a formalized governance structure to support their implementation, backed by rigorous testing and ethics reviews.
Without a strong governance structure, applying principles to an emerging technology would be impossible. Because AI is still nascent and many of its risks are yet to be discovered and defined, strong governance establishes the processes to identify and mitigate those risks before AI-enabled products launch.
Our report assesses our progress on this front in 2022. By following these principles in our work, we’ve seen clear evidence that building AI with fairness, safety, privacy and accountability leads to applications that are better at their concrete goal of helping people navigate the world around them. In short, responsibly and ethically developed products become successful products.
We also believe that defining and minimizing AI risks is especially urgent in 2023. As AI plays an increasingly important role in the economy and society, it is important that we continue to advance responsible practices in this space, and engage with regulators, civil society, and impacted communities to understand and manage AI’s risks and maximize its benefits.
People will see the most benefit from the development and deployment of AI if, and only if, those developing it discover, share and follow practices for doing so ethically. In our report, we share details of our thinking, including our three pillars of AI Principles governance, to help others build a structured approach across research, operations and product teams.
Three pillars of AI Principles governance
Google’s approach to AI Principles governance rests on a corporate-wide end-to-end commitment to three pillars:
- AI Principles serve as our ethical charter and inform our product policies. In this year’s report, we discuss products announced in 2022 that align with the AI Principles, along with three in-depth case studies covering how we make tough decisions about what to launch and what not to, and how we efficiently address responsible AI issues such as fairness across multiple products.
- Education and resources provide ethics training and technical tools to test, evaluate and monitor the application of the AI Principles to all of Google’s products and services. We’re sharing for the first time details of a new company-wide tool for monitoring products’ responsible AI maturity, and updates on technical approaches to fairness, data transparency and more.
- Structures and processes include risk assessment frameworks, ethics reviews, and executive accountability. In the report, we dive deep into how we identify and measure risk in our AI Principles reviews, and offer a behind-the-scenes look at how we assess new AI applications for surveillance concerns, as an example of how we define and evaluate applications we will not pursue.
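To make the second pillar’s “technical tools to test, evaluate and monitor” concrete, here is a minimal sketch of one common fairness check, a demographic parity gap. This is a hypothetical illustration of the kind of metric a responsible AI review might compute, not Google’s internal tooling; the function name and data are invented for the example.

```python
# Hypothetical illustration of a simple fairness metric (demographic
# parity gap) -- an assumed example, not Google's internal tooling.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate
    between any two demographic groups."""
    totals = defaultdict(int)     # examples seen per group
    positives = defaultdict(int)  # positive predictions per group
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy data: group "a" receives a positive prediction 75% of the time,
# group "b" only 25% of the time, so the gap is 0.5.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

A gap near zero means the model treats the groups similarly on this one axis; in practice, reviews of the kind the report describes would combine several such metrics with qualitative assessment rather than rely on any single number.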
As we look forward to Google’s 25th year in 2023, we believe our more formalized AI Principles governance approach is key to our company’s overall innovation strategy for the next 25 years and beyond. Google’s mission remains to organize the world’s information and make it universally accessible and useful, and developing AI responsibly is in service of that mission.
By Marian Croak, VP, Responsible AI and Human-Centered Technology, and Jen Gennai, Director of Responsible Innovation
Source: Google Cloud