AI ethics start with the data, says Appen CEO
Scientists, industry and governments in Australia agree the potential of artificial intelligence is profound and profitable, estimated to be worth AU$22.17 trillion to the global economy by 2030. But the enthusiasm carries with it caution about AI misuse or abuse.
There are fears the technology could heighten and automate biases, leading to poorer outcomes for certain groups.
In a bid to curb irresponsible development and misuse, the federal government has released broad AI ethics principles, currently being trialled by some of the country’s biggest businesses. The trials will be used to form official guidance to help organisations apply the principles in their work.
But many organisations are already moving ahead without the official guidance and experts are warning them to build AI ethics into their applications from day one.
According to Mark Brayan, CEO of ASX-listed AI data company Appen, most Australian organisations are still in a test and learn phase of AI. It’s critical, Brayan says, that those organisations ensure AI applications make decisions that align with human “values and intents” now, before they start making more important decisions on our behalf.
“Your favourite assistant not being able to answer a question because it still lacks the knowledge or doesn’t understand your accent is still ok, but companies have had to shut down their AI systems because they were biased, discriminating against women or people of colour, for example.
“When we read articles about autonomous vehicles crashing, that’s another example of malfunctioning AI not properly reading its environment, because the dataset it was fed with isn’t entirely accurate or didn’t include enough training examples of the use case the car encountered in the real world.”
Appen supplies much of the data used by the world’s biggest AI and machine learning companies to train their models, crowdsourcing the work through more than a million people across the globe who annotate and curate data.
“This heavy groundwork is absolutely essential, and yet, it is the untold chapter of the AI story,” Brayan tells Which-50.
“The industry depends on these people. In order to create high calibre, trustworthy, and scalable AI and ML, it’s imperative that we create a good environment for the crowd to fulfil their important role.”
Australian companies are learning fast about AI and the importance of the data it is built on, Brayan says, albeit from a relatively low base compared to global leaders.
“Most [Australian] organisations are still very much in a test and learn phase with limited data volumes, which is why we still see use cases where AI systems are biased or displaying inappropriate behaviours.
“We also have the case where old companies that have been around for decades think they are geared to build good AI systems because they have gathered data over such a long period of time, but they don’t realise this data can also be biased by the type of customers they have had over that period.”
For example, an organisation which has had a traditional customer base of white, middle aged males will have amassed data which reflects this, creating the potential for AI models to discriminate against people outside that group.
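The mechanism is easy to demonstrate. A hypothetical sketch (the dataset, groups and approval rates below are invented for illustration): a naive model that simply learns historical approval rates from skewed data will reproduce whatever bias those past decisions contain.

```python
import random

random.seed(0)

# Hypothetical "historical" dataset: 95% of records come from group A,
# and past decisions approved group A far more often than group B --
# reflecting who the customers were, not actual merit.
def make_record():
    group = "A" if random.random() < 0.95 else "B"
    approved = random.random() < (0.7 if group == "A" else 0.3)
    return group, approved

data = [make_record() for _ in range(10_000)]

# A naive "model" that just memorises the historical approval rate
# per group inherits the bias baked into the data.
def approval_rate(group):
    outcomes = [approved for g, approved in data if g == group]
    return sum(outcomes) / len(outcomes)

print(f"learned approval rate, group A: {approval_rate('A'):.2f}")
print(f"learned approval rate, group B: {approval_rate('B'):.2f}")
```

Any model fitted to such data, however sophisticated, starts from the same skewed signal; correcting it requires rebalancing or auditing the training set, not just swapping algorithms.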
But the problem afflicts tech giants too. In 2018 it emerged that Amazon had scrapped a multiyear AI project it had hoped would automate its hiring process by analysing resumes with machine learning. The problem, Amazon eventually realised, was that the models, having been trained on years of Amazon employee data, skewed in favour of male candidates – a reflection of the gender imbalance in technical roles at the company.
Amazon eventually disbanded the program, first revealed by Reuters, and insists recruiters never relied on it.
According to Brayan, companies are catching on to the fundamental problems of poor or misrepresentative data.
“I believe companies are becoming more AI-aware and beginning to recognise the need for quality data, which is evidenced by the continuous increase in the volume of businesses from various industries coming to us with requests for well-trained and unbiased data sets.”
While Appen’s corporate headquarters remain in Sydney, most of its business happens outside of Australia. Large US tech companies have more resources and have been developing AI applications for longer, Brayan says.
But Australia still has an opportunity to produce “world-class niche applications”, in keeping with the broader pattern of technology development in the country, Brayan says.
“We have some great FinTech companies, for example, and others in mining and medical technology due to deep knowledge and experience in these sectors. It’s reasonable to see Australia exporting some very valuable focused AI products that fill gaps in the products developed in other countries.”
Indeed, Australia’s official AI roadmap, developed by the CSIRO’s Data61 unit, recommends Australia adopt a specialised approach to AI, then export its solutions to the rest of the world.
According to the roadmap, Australia would benefit by focusing on three key areas of AI applications – natural resources and environment; health, ageing and disability; and cities, towns and infrastructure.
However, for now at least, the local industry will have to do it without much direct government funding. According to the Australian Computer Society, while governments of other advanced economies are pouring billions into AI investment, Australia’s contribution is a “modest” $60 million.
Brayan says while direct government funding is always welcome, a better use of resources is in the enablement of AI development through education and research.
Mechanisms like favourable tax treatments for start-ups on R&D spend and employee equity participation, ecosystems for funding and incubators could promote a more sustainable local industry, according to Brayan.
“The world has learned that it’s better to teach someone to fish than feed them, so it’s better to enable businesses to grow and survive on their own rather than carry them too much.
“The world of AI needs innovation, and that may take a nudge from governments, but it also needs healthy, sustainable businesses that can support customers and develop new products.”