How to spot human bias in the tech your company uses
Throughout the pandemic, technology decision-makers quickly adopted new solutions to streamline remote and hybrid business processes. But this rapid onboarding shone a light on a long-standing problem: the biases humans build into tech products.
Although many tech solutions are powered by AI, the tech industry has historically lacked a rigorous vetting process for potential biases in the data these products' algorithms are trained on. The result is often exclusionary practices, scaled across the companies that rely on AI-enabled tech to run their businesses.
Tech giants like Google and Apple have committed to sweeping changes, offering hope for a future where inclusion and equity are standard in industry processes—especially in hiring. However, real change also requires an examination of the past and present biases humans have brought to the tech solutions companies use. And leaders must start raising awareness of inherent biases in the technology they use to make change happen.
A pervasive and overlooked issue
The tech industry struggles with transparency, accountability, and awareness, which hampers efforts to stop bias in products before it takes root. Additionally, tech's historic lack of diversity can manifest itself in the products people create.
Consider Amazon, which came under fire in 2018 for using an AI recruiting tool that introduced bias against women. The company's machine-learning team built programs to surface top talent from job applications, using AI to score applicants on a scale of one star (poor) to five stars (great). The idea worked in theory, but in practice the system did not rate candidates for the company's software developer openings inclusively.
Why? Because the tool's AI was trained to vet candidates using patterns in resumes submitted to the company over the previous 10 years, most of which came from men. As a result, the system favored male applicants and downgraded the resumes of female candidates. And because the process was automated, the exclusionary hiring pattern scaled across the organization. This is just one example of how weaknesses in data can amplify bias.
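To see the mechanism concretely, here is a toy sketch of how a scorer "trained" on a historically skewed corpus reproduces that skew. Every resume, word, and weight below is invented for illustration; this is not Amazon's actual system, just a minimal frequency-based scorer that shows why terms absent from the historical data (because few women applied) drag a candidate's score down.

```python
from collections import Counter

# Hypothetical resumes of past hires -- a historically male-dominated pool,
# so the vocabulary of that pool dominates the learned weights.
past_hires = [
    "chess club captain executed backend migration",
    "executed trading algorithms chess club member",
    "soccer team captain built data pipelines",
]

# "Training": count how often each word appears among past hires.
# Counter returns 0 for words never seen in the historical corpus.
word_weights = Counter(word for resume in past_hires for word in resume.split())

def score(resume: str) -> int:
    """Score a resume by summing the learned frequency of each word."""
    return sum(word_weights[word] for word in resume.split())

# Two comparable candidates: the one whose resume echoes the historical
# corpus scores higher; the other's distinctive terms carry zero weight.
print(score("executed backend migration chess club member"))   # 9
print(score("women's rugby team captain built data pipelines"))  # 6
```

The bias here is entirely in the training data, not in the scoring code, which is exactly why automating such a system scales the exclusion rather than removing it.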
3 ways to evaluate bias in new tech
We’re only in the beginning stages of rooting out bias in tech, so awareness is key. But while it’s easy to blame tech for bias, bias ultimately comes from poor data provided by a human. Some factors may be out of your control when it comes to human-placed bias, but there are steps you can take to identify bias in the tech products your organization uses and hold vendors accountable for anti-bias tech offerings.
Prioritize equitable and anti-bias decision-making
To prevent bias from seeping into tech decisions, every employee should be educated on equitable and anti-bias practices—especially if your company writes algorithms for any of its products. Make this training standard to ensure it’s part of your organization’s operational fabric. Also, apply anti-bias data checks if you leverage automated processes. You don’t want automation to scale potential biases like it did with Amazon’s AI recruiting tool.
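One concrete anti-bias data check worth knowing is the "four-fifths rule" used in US employment analysis: an automated selection process is flagged when any group's selection rate falls below 80% of the highest group's rate. The sketch below is illustrative only; the group names, counts, and the 0.8 threshold are assumptions, and a real audit should involve your legal and DEI teams.

```python
def selection_rates(outcomes):
    """outcomes maps group -> (selected, total); returns group -> rate."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def four_fifths_violations(outcomes, threshold=0.8):
    """Return groups whose selection rate is below threshold * the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

# Hypothetical screening results from an automated resume filter.
results = {"group_a": (90, 200), "group_b": (30, 100)}
print(four_fifths_violations(results))  # group_b: 0.30 < 0.8 * 0.45
```

Running a check like this on each batch of automated decisions gives you an early warning before a skewed filter scales across your hiring pipeline.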
Additionally, consider creating a tech purchasing committee with a cross-section of leaders, such as HR professionals, data scientists, and DEI experts, to evaluate all tech purchases for potential biases. And make sure to weigh purchasing decisions on ethical considerations as well as business considerations. Ultimately, biased tech products create exclusionary side effects, which can negatively impact your culture and operations.
Push vendors for transparency
Make the transparent evaluation of data a priority for the partner companies and vendors you work with. Look for organizations with formal, proven processes that evaluate data for bias and that regularly publish these findings. Public companies like Amazon, Apple, and Microsoft routinely publish annual environmental, social, and governance (ESG) reports that document their impact across these three areas, but private companies face no comparable expectation.
Without widespread transparency on biased data, you won't have the baseline information from tech vendors to make ethical decisions. So, push for documented confirmation that their data has been analyzed for bias. Also, voice concerns about companies (public or private) that don't share information about the anti-bias efforts in their business processes, and don't be afraid to walk away if you're not convinced.
Prioritize inclusive features
Consider differences in culture, language, and disability when deciding on a tech solution because human bias exists in these areas as well. For example, Zoom is addressing language and disability by adding closed captioning with live translation to its calls, allowing non-English speakers and those who are hard of hearing to feel included.
In adopting new tech, consider whether the user experience is accessible to multiple generations of workers, not just younger employees. Ageism is rampant in tech products, and it's up to you to offer training and guidance for all members of your organization.
Take the first step
As you engage in your own DEI efforts, it’s essential to root out human biases in your organization’s internal datasets and those you acquire from third-party vendors. By pushing for more transparency and accountability, adopting a more detailed purchase decision process and making inclusive products a priority, you can raise awareness and help lead the charge in eliminating human bias in tech.
Rachel Brennan is the VP of Product Marketing at Bizagi.