We are living in the age of algorithms. These formulas govern decisions across all parts of life and have allowed companies to become more customer-oriented and profitable — think Netflix’s subscriber-friendly personalization, or the daily purchases attributed to Amazon’s vast recommendation engine. But what happens when algorithms are used to evaluate customers?

Companies are increasingly adopting algorithms to evaluate information provided by customers and make favorable or unfavorable decisions about them. For example, Raya, the private dating and networking app, uses algorithms to decide which applicants to admit as new members, while Zendrive evaluates customers’ driving skills to determine their car insurance premiums, and the global financial institution ING uses algorithms to make decisions about loan applications.

The prevalence of algorithms in customer-facing decisions raises an interesting set of questions about how customers react to personally relevant decisions made by algorithms versus humans. For instance, would customers evaluate a bank differently depending on whether their loan application was accepted by a loan algorithm versus a loan officer? And what if their request was instead rejected? Understanding these customer reactions can help managers make better decisions about when and how to deploy algorithms in customer-facing functions. We conducted a series of studies showing that managers' intuitions about this issue are frequently wrong.

Customer Reactions to Decisions by Algorithms Versus Humans

To learn more about how managers think about the effects of algorithms in customer-facing decisions, we first conducted a series of in-depth interviews and a survey with managers from different industries. We asked managers how they thought customers would react to being accepted or rejected by algorithms or humans. Most managers expected that customers would react less positively to being rejected by algorithms but react similarly to being accepted by algorithms versus humans. However, the data we gathered on customer reactions told a very different story.

In fact, our research revealed a pattern of results that is the exact opposite of the intuition of the managers we surveyed. We investigated customer reactions to favorable and unfavorable decisions made by algorithms versus humans across diverse contexts, such as loan and membership applications. In the experimental design used in most of our studies, we informed customers that their application was accepted or rejected either by an algorithmic or human decision maker and then asked them to evaluate the company to which they’d submitted their application.

Here is what we found: When their applications are accepted, customers react more positively when the decision is made by a human rather than by an algorithm. For instance, customers evaluate a company more positively and are more likely to recommend it to others when their applications are accepted by a human rather than by an algorithm. This difference emerges because customers find it easier to take credit for a favorable outcome (“My request was accepted because I am special and I deserve it”) when the decision is made by a human rather than by an algorithm. Whereas algorithms are perceived as reducing each customer to just numbers, humans are perceived as more likely to acknowledge customers for who they are and for their own merits.

What happens when customers’ applications are rejected? In our studies, customers evaluated the company similarly regardless of who (or what) made the decision to reject their requests. To protect their self-worth, people are motivated to blame others for failures. When their requests are rejected, customers display a tendency to blame both algorithms and humans to justify the unfavorable outcome. For instance, they blame humans for making subjective and biased decisions and blame algorithms for ignoring their unique attributes. In other words, when decisions don’t go our way, we are just as likely to blame humans for being human as we are algorithms for being algorithmic.

Ways to Improve Customers’ Reactions

Our research offers several insights for managers on how they can best design their customer evaluation processes and communicate how algorithmic decisions are made. In practice, employees often oversee algorithmic decisions to make sure the algorithm is working properly. For example, in the context of loans, loan officers may monitor the algorithm as it evaluates customer applications. One might therefore expect that communicating the role of human oversight to customers would be enough to overcome the less positive reactions to favorable algorithmic decisions that we found in our studies. However, our findings revealed that this expectation is incorrect: As long as the algorithm makes the acceptance decision, (passive) human oversight does little to improve responses, and customers still react less positively to favorable decisions in these cases as well.

Our research suggests a viable solution to mitigate the risk of less positive customer reactions: Make the algorithm more humanlike. In one of our studies, customers evaluated a company more positively when their request was accepted by a more humanlike algorithm (“Sam”). Humanizing algorithms is a common practice in customer interactions, including those involving customer service or product recommendations. To make their algorithms more approachable, companies can present algorithms using humanlike descriptors. For example, algorithms are often given a human name (like Amazon’s Alexa or ING’s Bill), a human-looking avatar, or both (like Ikea’s virtual assistant, Anna).

Algorithmic Transparency

What happens when companies don’t tell their customers who made the decision? Do customers react as if the decision were made by a human, by an algorithm, or somewhere in between? Our data indicates that customers respond as if the decision were made by a human. Of course, this may change as more and more customer applications are evaluated by algorithms, but for now, the tension between keeping customers happy and being transparent about algorithmic decision-making presents companies with an ethical dilemma.

There are many ongoing discussions about algorithmic transparency and about what governments should do to ensure that companies disclose how they use algorithms to make decisions that have potentially important repercussions for people. For example, the Organisation for Economic Co-operation and Development, along with the U.S. and U.K. governments, has recently issued a list of principles on algorithmic decision-making and transparency around the use of algorithms in decision processes.

Our research offers insights for companies on how the use of algorithms in customer-facing decisions can affect people’s reactions to positive and negative decision outcomes. The results contradict managerial intuitions and highlight the importance of paying attention to customer reactions to the automation of customer-facing decisions. Before introducing an algorithm in customer-facing tasks, executives should assess the potential impact on customers’ attitudes toward their organization. Although managers tend to worry more about customers’ negative reactions to algorithms in the case of bad news, our research shows that they should instead be more concerned about customers’ muted reactions to algorithms in the case of good news. Thus, despite the cost efficiency and predictive accuracy of algorithms, there is a significant upside to having humans continue to make customer-facing decisions.