Why the EU Must Further Interrogate Trust, Excellence, and Bias in AI
Source: https://towardsdatascience.com/why-the-eu-must-further-interrogate-trust-excellence-and-bias-in-ai-715a4265e6b4?source=rss------artificial_intelligence-5

The following is an open response to the “White Paper On Artificial Intelligence — A European approach to excellence and trust” published by the European Commission on February 19, 2020. The document can be found here: https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf
This white paper represents an important shift in European AI policy: a move from the language of individual rights to the language of social relationships. I commend the authors for ensuring that the future of European AI is grounded in clearly defined communal values and shared fundamental rights such as human dignity and privacy protection. Similarly, the relational language present throughout the document admirably demonstrates that the Commission is taking seriously the invitations of the Sustainable Development Goals, the European Green Deal, and similar multilateral agreements in submitting its recommendations. The two building blocks of this policy document, a policy framework centered on an ‘ecosystem of excellence’ and an ‘ecosystem of trust,’ are well grounded in a relational vision that shows promise for an economic policy framework representing the wishes of diverse populations across the socio-economic spectrum.
However, several elements of this white paper could use greater nuance and more intentional engagement. Most notably, the term ‘trust’ is never defined in the document. Though the argument that trust is important going forward is soundly made, the questions of “trust in what?” and “trust according to whom?” are not adequately answered. Similarly, there is a lack of critical socio-political analysis of diversity, of equal representation of identities in the further creation and enforcement of these recommendations, and of the downstream effects on the many communities that may be affected. The theory behind the human-centered approach the document recommends is well defined, but the question of which humans will be looped in and catered to in the creation and application of this approach is never explicitly answered. There is reasonable concern that, without adequate attention to representation and diversity, instituting this human-centered approach may only further marginalize those who already find themselves powerless in the current technological landscape.
I would also invite the authors to be more explicit when defining ‘excellence.’ The document argues that the EU can foster an ecosystem of excellence by working more effectively with member states, focusing the efforts of the research and innovation community, providing resources to address the skills shortfalls in the current economic market, and more. Unfortunately, the document assumes a definition of excellence that is never fully explored. The closest it comes to a definition is in describing an ecosystem that accelerates the adoption of AI-based solutions through a strategy of stimulating research and innovation along the entire value chain. I critique this partial definition of an ‘ecosystem of excellence’ because I am concerned about the downstream impacts of unbridled growth without clear intention. This partial definition leaves room for the continuation of economic systems that lift up institutions and individuals already in power at the expense of the already marginalized, of smaller and newer companies, and of the earth itself.
Finally, I would like to comment on the Commission’s analysis of diversity and bias. The document correctly identifies both the difficulty of addressing bias within AI technology and the immense harm that biased AI can cause as the technology becomes more ubiquitous across the world. However, the authors stop just short of answering the pivotal questions of who should be accountable for harmful bias present in AI technology and how EU law should enforce the mitigation of bias in technological systems. I appreciate the well-argued nuance about the difficulty of addressing bias in AI; however, I would highly recommend that the Commission specify several concrete mechanisms for working through those difficulties.