Change, Instability and Disruption

By Dirk Knemeyer and Jonathan Follett

How do we navigate a possible future where AI and emerging technologies remake the landscape of large-scale systems such as science, technology, society, and policy? We spoke with Sam Arbesman, Scientist-in-Residence at Lux Capital and the author of two award-winning books, most recently “Overcomplicated: Technology at the Limits of Comprehension”, to better understand the larger context and implications of this emerging world — a future of change, instability, and complexity.

Figure 01: A future of change, instability and complexity.
[Photo: Book cover of “Overcomplicated: Technology at the Limits of Comprehension” by Samuel Arbesman.]

Computational Creativity and the Question of Authorship

If the future of human work and creativity is computational, then who gets the credit? Computational creativity — using AI to augment human creativity — is already emerging, but the results, so far, are uneven. Computational music composition, for example, is in a remarkably advanced state, while computational engineering is relatively nascent by comparison. “There is a lot happening obviously in art and music and design,” says Arbesman. “And there’s been a lot of computational creativity in science, in terms of actually computationally generating hypotheses or computationally testing. Being able to do science at scale, in a way that we might not have been able to do before, has a lot of very interesting implications for how we think about creativity in general.”

Figure 02: Computational science at scale has interesting implications for how we think about creativity.
[Photo: “Closeup of microscope” by Michael Longmire on Unsplash]

Another important, related area of computational creativity is computationally generated computer code or program synthesis. “Let’s say you have a computer function you want to write. So, instead of writing it, you specify the desired output, given certain inputs. And, the program will actually write the code for you in that function,” says Arbesman. “Now a lot of this has not been done in a way where a non-programmer could very easily write large, entire computer programs. We are not anywhere near that, yet. But, I do think … computational creativity has a lot of really interesting potential there.”
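
To make the specification style Arbesman describes more concrete, here is a minimal, purely illustrative sketch of programming by example. The synthesizer, the primitive operations, and the examples below are all hypothetical inventions for this article; real program-synthesis systems search vastly larger spaces of programs, but the idea is the same: examples in, code out.

```python
from itertools import product

# Hypothetical primitive operations the toy synthesizer is allowed to compose.
PRIMITIVES = {
    "double": lambda x: x * 2,
    "increment": lambda x: x + 1,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
}

def synthesize(examples, max_depth=3):
    """Return the first pipeline of primitives consistent with every (input, output) example."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def candidate(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            if all(candidate(i) == o for i, o in examples):
                return " -> ".join(names)  # a readable description of the synthesized program
    return None

# Specify the behavior by example rather than writing the function: f(2) == 5, f(3) == 7, f(5) == 11.
print(synthesize([(2, 5), (3, 7), (5, 11)]))  # prints "double -> increment"
```

The user never writes the function body; they only supply input-output pairs, and the search does the rest, which is exactly why questions of authorship get interesting.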

In a world where AI augments our work and software produces human-machine co-creations, credit and ownership become legal as well as social issues. “So when machines and AI are creating art and music, how do we think about credit? How do we think about copyright? How does all this stuff work?” says Arbesman. “I think there’s still a large degree of open questions of how we think about this.” If an AI system, developed by an artist or musician, generates something creative and novel that a person couldn’t have generated on their own, or the person is not really even sure how it was created, is the AI the co-author? “I think there’s some interesting implications there for how we think about what is credit, what is ownership? There are a lot of legal scholars speaking about this, and there’s also a body of legal work already that is beginning to be used for how to think about this. It’s going to open a lot of really interesting conversations.”

Understanding the Risks Inherent in Complex Systems

Given the complexity of artificial intelligence and emerging technologies, as well as the speed of both their adoption and the change that their very presence initiates, there are a variety of risks about which we should be vigilant.

“In the world of technology and engineering, we think that because we have designed a system, it should be logical and rational and amenable to human understanding. [We think] that, if we can apply our brains to these systems, we should be able to understand them,” says Arbesman.

This assumption makes sense on its face. But, as he further describes, it is not necessarily true. “When you look at … just the amount of computer code within a car, for example, these things are so much larger than anything else that we might be comfortable with, as a single individual reading through and understanding.” Software systems that have evolved over time carry huge amounts of legacy code, with many interacting parts that no one fully understands. “In many ways they do have the hint of the biological,” says Arbesman, indicating not only the extent of their complexity but also how they begin to emulate other, seemingly dissimilar, systems.

“In technology, we have to move a little bit away from this traditional engineering mindset and move towards a biological mindset — take some of the ideas of how biologists might query complex biological systems and use them for our own technological systems,” Arbesman explains.

Assessing these technologies as a scientist would assess a biological system gives us a far better chance of understanding, and potentially controlling, the complex systems we’re dealing with. “I think in many cases, as technologies become more and more complex and we use AI — which might have millions and millions of parameters being set by a complex relationship of some algorithm and a huge amount of data being poured into a system — when a system goes wrong, you almost don’t know why the system goes wrong.”

Figure 03: A biological mindset gives us a far better chance of understanding complex systems.
[Photo: “Painted Desert, Arizona” by USGS on Unsplash]

To illustrate this point about systemic complexity, Arbesman cites an example of catastrophic defects in a slightly earlier set of technologies — the case study of unintended acceleration in Toyota’s vehicles, which resulted in substantial recalls during 2009–2011. “About 10 years ago, a number of cars manufactured by Toyota would occasionally speed up without people knowing why. In some cases, these cars would crash, and actually, in some cases, people died. It was a very serious problem, even though it was referred to by this euphemistic term of ‘unintended acceleration’.”

The US Department of Justice conducted a four-year investigation into the causes of unintended acceleration at Toyota. It resulted in a $1.2 billion fine for the company for concealing safety defects, which included floor-mat entrapment and sticky accelerator pedals. Notably, the Toyota Electronic Throttle Control System (ETCS) and its software were not included among the defects. However, this is not the end of the story. In a separate trial, Bookout/Schwarz v. Toyota Motor Corporation, the jury awarded $3 million in compensation to the plaintiffs. A key point in the trial was whether defects in the Electronic Throttle Control System caused the fatal crash.

This fascinating description of the analysis of Toyota’s software comes from Safety Research and Strategies, Inc.: “Michael Barr, a well-respected embedded software specialist, spent more than 20 months reviewing Toyota’s source code at one of five cubicles in a hotel-sized room, supervised by security guards, who ensured that entrants brought no paper in or out, and wore no belts or watches. Barr testified about the specifics of Toyota’s source code, based on his 800-page report.”

Barr’s testimony included this statement: “There are a large number of functions that are overly complex. By the standard industry metrics some of them are untestable, meaning that it is so complicated a recipe that there is no way to develop a reliable test suite or test methodology to test all the possible things that can happen in it. Some of them are even so complex that they are what is called unmaintainable, which means that if you go in to fix a bug or to make a change, you’re likely to create a new bug in the process. Just because your car has the latest version of the firmware — that is what we call embedded software — doesn’t mean it is safer necessarily than the older one…. And that conclusion is that the failsafes are inadequate. The failsafes that they have contain defects or gaps. But on the whole, the safety architecture is a house of cards. It is possible for a large percentage of the failsafes to be disabled at the same time that the throttle control is lost.” Despite the jury decision in Bookout/Schwarz v. Toyota Motor Corporation, Toyota continues to dispute that its ETCS is flawed.
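
For readers curious what the “standard industry metrics” Barr references look like in practice, the sketch below computes a rough, McCabe-style cyclomatic-complexity proxy for a Python function by counting its decision points. It is an illustrative approximation written for this article, not the tooling or the C code involved in the actual case.

```python
import ast

# Node types that introduce a branch in control flow (a rough stand-in for McCabe's decision points).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler, ast.BoolOp, ast.comprehension)

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity: 1 plus the number of branching constructs in the code."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

# A hypothetical snippet standing in for a far larger embedded-control function.
sample = '''
def throttle_check(pedal, mode, speed_readings):
    if pedal > 0.9 and mode == "cruise":
        return "override"
    for reading in speed_readings:
        if reading > 120:
            return "limit"
    return "ok"
'''

# Industry guidance often flags functions scoring above roughly 10 to 15 as hard to test exhaustively.
print(cyclomatic_complexity(sample))  # prints 5 for this small example
```

The point is not the exact number but the principle: as branches multiply, the number of paths that must be tested grows far faster than the code itself, which is how a function crosses from merely long into “untestable.”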

“I think in this case, [Toyota] ended up making their systems more complex than they needed to be, which as a result made them less understandable, and therefore more likely to actually fail. But, there are many situations where, as we look at the overall complexity of systems, and sometimes the lack of explainability of these technologies, it’s going to have implications for how we think about blame and liability,” says Arbesman. This is particularly true of complex AI systems.

Liability around emerging technology is always confusing initially, before laws get on the books and regulations are established. Historically, though, there is at least the pretense of understanding — that a judge or a jury or some other arbiter has a robust understanding of the mechanics of a situation and levies a decision in a mindful way. Already, we can see with complex software, as in the Toyota example, that this understanding is not always possible. The problem becomes even more difficult with deep learning systems that teach themselves outside of human purview during training, and that may be incomprehensible, and impossible to reverse-engineer, once deployed.

“There should be a way — when a decision by an enormously predictive and powerful AI is reached — that we at least have some way of knowing how that decision was made or how that prediction was made,” says Arbesman. We need to be able to audit AI decision making. “There is this trend — and I think we’re going to see this more and more — of trying to create explainable artificial intelligence and machine learning systems,” he says. “I think that’s going to be very important.” To mitigate risks like those illustrated by the Toyota case, AI systems need to be explainable. Arbesman’s optimism around explainable artificial intelligence is heartening, but there is a real question of whether it will even prove possible, at least for software built on machine learning. This will be an area to keep an eye on.
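
As one concrete example of what auditing a model’s decisions can look like today, the sketch below uses permutation feature importance from scikit-learn: an otherwise opaque model is probed by shuffling each input feature and measuring how much its held-out accuracy drops. The dataset and model are synthetic stand-ins, and this is just one of many explainability techniques, not a method Arbesman specifically prescribes.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real decision-making dataset.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque-enough model: hundreds of trees, no single rule a person can read off.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much held-out accuracy degrades;
# large drops indicate the inputs the model's decisions actually depend on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature_{i}: mean accuracy drop when shuffled = {drop:.3f}")
```

Techniques like this do not explain an individual decision in full, but they give an auditor a foothold, which is precisely the kind of accountability the Toyota case shows we need.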

Addressing the Societal Risks of Automation

Aside from the risks posed by the complexity of AI systems themselves, there are potential problems caused by the impact of this software and automation on society as a whole. These extend to crucial topics such as the future of work and employment, as well as existential concerns about human life. Increased automation has the potential to “create a certain amount of loss of jobs or even the loss of nearly all jobs,” says Arbesman. “I think we need to have more of a conversation around how we think about the future of meaning and purpose in everyone’s lives.” What do we want the future to hold, as a society?

“Even in the best case ‘Star Trek’ post-scarcity scenario of the world, where everyone’s needs are met, everyone’s able to live really nicely, no one needs to work and [can] indulge in all their hobbies and do whatever they want, the question still becomes: ‘How are people still living the lives they truly want to live?’” says Arbesman. How do people view their lives with meaning, in this scenario? “I think we just need to be having these conversations now, rather than later. If there is going to be a huge fraction of the population that does not need to work any longer, how do we make sure that this population feels like it’s contributing to society and doing creative work, and fulfilling a certain amount of potential and purpose, if they no longer need to get a paycheck?”

But first, we need to get to this post-scarcity world in a way that’s not hugely destructive to our existing society. “I can see we have the world now on the edge of a huge amount of automation and loss of jobs. Maybe, in several hundred years, we’re going to be in this wonderful post-scarcity utopia. But, between now and then there could be a huge amount of disruption. I think, the earlier we begin having these philosophical conversations as a society, the better we’re going to be, because when people are already losing their jobs … a huge swath of the population, then it’s already too late. So, we need to really be having these conversations right now,” says Arbesman.

Creative Next is a podcast exploring the impact of AI-driven automation on the lives of creative workers, people like writers, researchers, artists, designers, engineers, and entrepreneurs. This article accompanies Season 2, Episode 12 — Our Complex Future.

