Why AI needs humans as much as humans need AI, especially during a novel event like a global pandemic


At some point, everyone’s seen some kind of clickbait article about how some new AI technology or robot will eventually replace us humans, or at least make most of our labor obsolete. Of course, as clickbait articles tend to do, they greatly exaggerate current technologies and how close they are to replacing human labor. For example, watch a video like this Wired documentary and you’d think self-driving cars are about to take over the roads any day now. In fact, I think if you took someone not involved in the AI/machine learning/data science community and asked them what kind of technology they thought of when you said AI, one of the first things they’d come up with would probably be self-driving cars. However, this Vox article quotes Daniel Sperling, the founding director of the Institute of Transportation Studies at the University of California, Davis:

…fully driverless cars — which don’t require anyone in the car at all and can go anywhere — are “not going to happen for many, many decades, maybe never.”

Just as self-driving cars are not about to replace human drivers, there are many areas where humans are not as close to being replaced as some of these articles would have you believe. In other words, AI is not as infallible as it’s made out to be, and that’s becoming even more clear with the current pandemic.

The current COVID-19 pandemic has been a novel experience in the modern age. The last time we experienced a pandemic of this magnitude and social impact was the 1918 flu pandemic, and as you’d imagine, data from that pandemic is nowhere near as detailed or complete as what we have now. Additionally, the world of 1918 was vastly different from the world we live in today, for many reasons.

All predictive models work under one basic assumption: that the future will behave the same way as the past. Data will always behave in the same way, and all that will change is how different variables interact with each other. Models only know what the data tells them, and nothing else. They don’t have general world knowledge, or even broader contextual knowledge (unless, of course, it’s reflected somewhere in the data they’re given).
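That assumption is easy to see break in a toy sketch. The numbers below are entirely hypothetical (invented store-visit counts, not real data): a simple trend model fit on years of stable history confidently extrapolates that trend, and a sudden shock like a lockdown makes the forecast wildly wrong.

```python
import numpy as np

# Hypothetical data: 3 years of steadily growing monthly store visits.
months = np.arange(36)
visits = 1000 + 5 * months  # a clean pre-pandemic trend

# Fit a simple linear trend. The model "knows" only this history,
# so it assumes the future behaves like the past.
slope, intercept = np.polyfit(months, visits, 1)

forecast_month = 40
forecast = slope * forecast_month + intercept  # extrapolates the old trend

# A catastrophic event the model has never seen breaks the assumption:
actual = 150  # visits collapse overnight (hypothetical)
error = forecast - actual
print(f"forecast: {forecast:.0f}, actual: {actual}, error: {error:.0f}")
```

Nothing here is a flaw in the fitting procedure; the model did exactly what the data told it to do. The data simply contained no hint that a pandemic was possible.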

All of this means many predictive models are going to be way off, as this article highlights. Models don’t account for catastrophic events on this level. And even if features were added to account for pandemic-specific issues, the truth is we really don’t know what the future looks like. We don’t know when we’ll stop social distancing, beyond the fact that we probably won’t be doing it forever, and we don’t know when people will even feel safe doing the things they used to. Moreover, as many people are saying, we may never go back to the way things used to be. If that’s the case, predictive models, especially the more data-hungry ones, are going to be notably off for a good amount of time.

This is where humans come in. We’re able to take new events and make educated predictions using very little data, because we have the much broader internal database of empathy and an understanding of how we humans behave. An algorithm may not predict where a person would go shopping now, but a human could. We might predict that someone will go to a smaller store because it’s less crowded, or because it’s closer to home. We can see changes in data and say, “Oh, that makes sense because of [insert reasons here].” That same change would make no sense at all to an algorithm that doesn’t understand how a pandemic, and the threats and anxiety associated with it, impacts human behavior.

All of this isn’t to say that we’re up the creek without a paddle and all algorithms are now useless. Algorithms can still contain valuable insight, insight that we can combine with our own knowledge. We could look at the output of an algorithm and present it as, “This is what would normally happen according to our models, and this is what is currently happening, so if these events happen in the future, this is where we could expect our model to be wrong and where we could expect it to be right.” The truth is, humans and AI have always needed each other. Beyond the fact that humans created AI, AI still needs data from us, and humans can provide domain and general knowledge that an algorithm cannot. Data science has proven to be an invaluable tool in today’s data-driven world. Models can sift through mountains of data that humans cannot, but humans are much better at adapting to novel situations. This is why it should never be about one replacing the other; it’s about capitalizing on each other’s strengths and working together as one.
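One lightweight way to put that partnership into practice is to keep the model’s “normal” baseline and report the gap against what’s actually happening, leaving the interpretation to a person. Again, the numbers below are hypothetical placeholders, not real figures:

```python
import numpy as np

# Hypothetical example: a model's pre-pandemic baseline forecast
# versus what was actually observed during the shock.
baseline = np.array([1200, 1210, 1220, 1230])  # "what would normally happen"
actual = np.array([300, 280, 350, 400])        # "what is currently happening"

gap = actual - baseline
pct = gap / baseline * 100

for month, (b, a, p) in enumerate(zip(baseline, actual, pct), start=1):
    # The model quantifies the deviation; a human supplies the "why"
    # (lockdowns, anxiety, changed habits) and decides how to adjust.
    print(f"month {month}: expected {b}, saw {a} ({p:+.0f}%)")
```

The algorithm is still doing what it’s good at, sifting numbers and quantifying deviation, while the human does what the algorithm can’t: explaining the gap and judging how long it will persist.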


Humans and AI: The Dream Team was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.