There’s widespread consensus that we’re in the throes of the fourth industrial revolution: artificial intelligence and its sister technologies are transforming virtually every business. Yet with AI’s enormous potential comes great responsibility. The majority (77%) of CEOs say that AI threatens to increase vulnerability and disruption in the ways they do business. Unfortunately, the call for responsible AI has taken a backseat for many companies. Only 25% of companies say that they definitely prioritize considering the ethical implications of an AI solution before investing in it, according to research by PwC.

In 2016, Amazon, Facebook, Google, DeepMind, Microsoft, and IBM came together to found the Partnership on Artificial Intelligence to Benefit People and Society (Partnership on AI). Since it was founded, the nonprofit coalition has amassed more than 100 partners, including members from industry, academia, and civil society. The partnership marks an important shift towards prioritizing responsible AI. But it’s merely a small step in the right direction. 

Here are three important steps that companies need to embrace in order to progress towards responsible AI. 

1. Prioritize explainability and interpretability  

An important prerequisite for responsible AI is explainability and interpretability. According to PwC’s research, 84% of CEOs agree that AI-based decisions need to be explainable in order to be trusted. The proverbial “black box” of AI needs to be opened. Black box models should be supplanted with models that are interpretable. As PwC warns, a lack of interpretability can “expose an organization to operational, reputational, and financial risks. To instill trust in AI systems, people must be enabled to look ‘under the hood’ at their underlying models, explore the data used to train them, expose the reasoning behind each decision, and provide coherent explanations to all stakeholders in a timely manner.” As IBM’s AI ethics policy states, “Allow for questions. A user should be able to ask why an AI is doing what it’s doing on an ongoing basis. This should be clear and up front in the user interface at all times.”

Explainability goes hand in hand with documentation throughout the entire AI lifecycle, from model design to implementation and use. Design and decision-making processes should be documented, and it should be clear when and why AI systems make mistakes. AI will inevitably fail at times. By making all aspects of AI development transparent, we empower human judgment to kick in and avert much of the negative fallout.
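To make the idea of looking “under the hood” concrete, here is a minimal sketch of one common interpretability technique, permutation importance: train a model, then measure how much shuffling each feature degrades accuracy. The features, data, and lending scenario below are entirely hypothetical, and scikit-learn is assumed to be available; this is an illustration, not a prescribed method.

```python
# Hypothetical sketch: explaining which features actually drive a model's
# decisions, using permutation importance from scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
income = rng.normal(50_000, 15_000, n)   # invented feature
age = rng.integers(18, 70, n)            # invented feature
noise = rng.normal(0, 1, n)              # irrelevant feature
X = np.column_stack([income, age, noise])
# In this toy setup, approval depends only on income.
y = (income > 50_000).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
# Shuffle each feature in turn and measure the drop in accuracy:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["income", "age", "noise"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Here the importance score for `income` dwarfs the others, which is exactly the kind of evidence a stakeholder can inspect when asking why a system decides as it does.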

2. Make responsibility concrete and pervasive

It’s easy to pay lip service to the importance of responsible AI. This tendency is so pervasive that we now have a dedicated term to encompass the practice—“ethics washing”, also known as “ethics theater”. Ethics washing involves falsifying or exaggerating a company’s promotion of “AI for good” initiatives. As a recent report from research institute AI Now highlights, “While we have seen a rush to adopt such codes, in many instances offered as a means to address the growing controversy surrounding the design and implementation of AI systems, we have not seen strong oversight and accountability to backstop these ethical commitments.” 

While corporate governance bodies are becoming increasingly common, the public is often afforded little insight into their operations and, especially, the interventions they enact. We don’t know how they are holding themselves accountable. It’s not enough to simply set up an advisory committee. Companies need to be clear about how they evaluate and debate ethical issues, embrace a more open and encompassing discussion of those issues, and communicate revelations and insights to the broader community.

The companies that truly prioritize responsible AI will make it an organization-wide imperative. The Wharton School of the University of Pennsylvania has emphasized that responsible AI requires engagement from the entire C-suite. Ideally, that engagement goes deeper still and pervades the entire organization. Kush Saxena, Chief Technology Officer at Mastercard, for example, has called for mandatory ethics training for all AI programmers. Only with organization-wide accountability can we truly move the needle towards responsible AI.

3. Minimize bias

Bias pervades AI tools. Take, for example, recidivism-prediction software found to be biased against Black defendants, or a recruiting tool used by Amazon that was riddled with bias against women. Companies must commit to mitigating bias, and that mitigation shouldn’t be an afterthought: it should guide AI development from the start. Before developing algorithms, AI designers and developers should come together to identify potential biases in their data, assess the ramifications of those biases, and proactively take steps to minimize them.
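As an illustration of what one such proactive check might look like, here is a hypothetical sketch that measures demographic parity, the gap in positive-outcome rates between groups. The group labels, predictions, and helper function are invented for illustration; real audits would draw on a richer set of fairness metrics.

```python
# Hypothetical sketch of a pre-deployment bias check: compare a model's
# rate of positive predictions across demographic groups.
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Invented predictions (1 = favorable outcome) and group membership.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

gap = demographic_parity_difference(y_pred, groups)
# Group "a" receives favorable outcomes 80% of the time, group "b" only 20%,
# so the gap is 0.60 -- a red flag worth investigating before deployment.
print(f"demographic parity gap: {gap:.2f}")
```

A large gap does not by itself prove the model is unfair, but it is precisely the kind of concrete signal that should trigger the discussion between designers and developers described above.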

Minimizing bias also involves constructive dissent, a phrase embraced by Rumman Chowdhury, the Responsible AI lead for Accenture. Chowdhury has explained, “Successful governance of AI systems need to allow ‘constructive dissent’ — that is, a culture where individuals, from the bottom up, are empowered to speak and protected if they do so. It is self-defeating to create rules of ethical use without the institutional incentives and protections for workers engaged in these projects to speak up.” If people feel empowered with high levels of psychological safety to voice concerns, this will fuel more productive conversations pertaining to responsible AI and, especially, mitigating bias. 

Finally, minimizing bias requires diversity. A recent report from the AI Now Institute revealed that 80% of AI professors, 85% of AI research staff at Facebook, and 90% of AI employees at Google are male. Racial diversity among AI researchers and industry leaders is minimal. Only by bringing diverse perspectives and backgrounds to the table will we be able to mitigate bias and build responsible AI tools. 

Elon Musk has called AI more dangerous than nuclear weapons. Safiya Umoja Noble, an associate professor at UCLA, has described AI as one of the major human rights issues of the 21st century. The discussion of AI belongs at the forefront of our public and corporate discourse. By prioritizing explainability and interpretability, making responsibility concrete and pervasive, and minimizing bias, we can ensure that AI’s potentially disastrous consequences don’t overshadow its benefits and promise.