Creating alignment and actionable results

The barriers to organizational transformation

Confidence, accuracy and speed in decision-making are the holy grail of organizational transformation. Despite all the work taking place in decision science, AI, and data science, we still find ourselves weary from long meetings spent seeking the collective confidence that we are making the right decision. The pandemic has only made the situation worse.

I recently spoke with the Chief Investment Officer of a top-tier endowment fund. In his position, he oversees investment committees, each of which allocates billions of dollars of funds through collaborative decisions. He knew that an improvement in their process of making investment decisions could have a significant positive impact on returns, and he was sure that the path to improved accuracy in decisions must be through technology. Specifically, he wanted to capture as many different perspectives as possible, make the discussions more transparent, and document the decision process to back-test it against results so they could learn how to improve.

Traditionally, collaborative decisions can be tricky in all of these areas and are often fraught with unintentional bias toward senior leaders or the ‘loudest voice in the room’. In addition, they are difficult to manage at scale, and even more challenging when participants weigh in from multiple locations or different cultures. It is no accident that most decisions are made with a less-than-ideal number of ‘voices’, as anyone who has tried to reach consensus across a large group can attest. The ‘echo chamber’ effect results in billions of dollars of lost opportunity across the global investment community every year.

Fortunately, recent developments in collective intelligence, artificial intelligence, and natural language processing make it possible to solve many of these challenges to efficient, accurate decision-making. This article lays out a method to increase confidence, speed and accuracy in organizational decisions by harnessing these technological advances. By bringing together the insight and reasoning skills of people with the optimizing and refining clarity of artificial intelligence, we can begin to improve one of the most human and error-prone aspects of business: the ‘art’ of well-informed and accurate decisions.

Before we begin, it is helpful to understand a few of the key challenges we need to address. These are the “big three” sources of dysfunction that stand in the way of effective decisions:

  1. Collective alignment on the reasons for a decision: Poor alignment often results in an impasse, a frustrating end to a meeting, or a desire for leadership to override an otherwise open process. Too often the process of finding collective alignment on critical decisions invokes organizational politics, which often lead to lingering participant dissatisfaction, detachment, poor follow-up and implementation, and long term erosion of confidence in leadership.
  2. Systemic bias: Unconscious systemic bias keeps organizations in a rut. Innovation is too often stymied by normative patterns of decision making and organizational structure rooted in years of “experience”. Valuable diverse thoughts fall to the floor while entrenched patterns and loud voices rule the day. Hidden bias relating to educational background, culture, age, gender, and seniority all conspire to silently kill potentially compelling and innovative ideas.
  3. Evidence-based accuracy tracking: Insufficient recording of the reasons and influences underpinning a decision creates opportunities for revisionist history, where comments surface such as “I think I was on leave when that decision was made,” or the always-helpful “I had my doubts all along”. Healthy organizations learn to shift the focus from personal accountability to collective organizational learning taken directly from an audit trail of evidence.

Advances in collective intelligence and artificial intelligence offer a breakthrough opportunity to overcome these three barriers and provide a transformative new approach to making well-informed and predictively accurate organizational decisions.

Collective Reasoning: A process that results in learning a group’s reasons or beliefs about a decision or prediction

Creating alignment with collective reasoning

The solution to all three barriers, and the path to confident, fast and accurate decision-making, is to rapidly learn a group’s shared beliefs and priorities about the evidence supporting a decision while working to eliminate bias. Without technology, this is usually done in meetings with an objective facilitator focused on creating shared priorities and alignment. That manual process does not scale well across locations and time zones, nor is it viable in a time of crisis such as a pandemic. In addition, the meeting process often fails when ‘the loudest voice in the room’, status in the organization, passion or powers of persuasion rule over evidence, logic and diverse perspectives. Clearly, this is a massive opportunity for a collaborative technology solution.

Work in collective intelligence has demonstrated that collectively we are smarter. Approaching a decision from multiple perspectives is a known best practice. As we enter a new age of collaboration, with a generation that regards participation as a fundamental aspect of employment, we can find inspiration in a series of excellent books on the power of collective intelligence. Scott Page’s “The Difference” lays out how diversity in thinking increases predictive accuracy. Philip Tetlock and Dan Gardner document and analyze the demonstrated accuracy of collective intelligence in “Superforecasting”. Finally, Scott Page and Tom Malone contributed their substantial expertise in the well-received books “The Model Thinker” and “Superminds”, respectively. Together, these books show how the science of collective intelligence promises new ways to think about predictive accuracy.

Collective Intelligence Summary: Multiple perspectives and diverse backgrounds increase prediction accuracy and reduce systematic bias.

Collective reasoning goes a step beyond collective intelligence. In collective intelligence, predictions are made completely independently: a diverse group is given the task of predicting an outcome, each individual works independently, and the results are analyzed. The reasons, the “why” behind each perspective on a decision, are likewise analyzed independently.

In collective reasoning, we separate the prediction or rating of a decision from the reasons for that rating. Collective reasoning captures the dynamic potential often observed in great brainstorming sessions, where we learn from each other’s thinking and collectively create new approaches to a decision. It moves beyond the mathematical calculus of simply aggregating a series of independent thoughts to capture the power of the collective mind in creativity, problem solving and alignment.

An illustrative example

Let’s illustrate this with a simple example of making a decision to invest or not invest in an early-stage company — a startup. We can all relate to the need for confidence, speed and accuracy in betting on a startup that may have the next big idea.

Imagine you have been invited to be part of a select group of people evaluating whether or not to invest in a new stealth startup. Collectively, you have the task of predicting whether the company will be successful in attracting major investors. You have access to all of the company’s materials, and you participate in a Q&A with the founding team. You have everything you need to make a considered decision and prediction. The company you are evaluating has highly technical components to its business. While you don’t know the other evaluators’ identities, you know that some have strong technical backgrounds appropriate for the task.

You are asked to rate the company on a scale of 1 to 10 on the following attributes:

  1. Compelling business opportunity, considering market, competition, etc.
  2. Team — is this the right team to lead the company in the target market?
  3. Network effects of early advisors and investors — can they help propel the company?
  4. Investment conviction — is this an investment you would make or recommend?
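The four attributes above form a simple decision “scorecard”. As a minimal sketch (the data structure and names are my own illustration, not the platform’s actual schema), each contributor’s input can be modeled as a feature, a 1–10 score, and the free-text reasons behind it:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one contributor's scorecard entry for the startup
# evaluation described above. Names and fields are illustrative.
@dataclass
class ScorecardEntry:
    contributor_id: str
    feature: str          # e.g. "business", "team", "network", "conviction"
    score: int            # 1-10 rating
    reasons: list = field(default_factory=list)  # free-text reasons for the score

    def __post_init__(self):
        if not 1 <= self.score <= 10:
            raise ValueError("score must be on the 1-10 scale")

entries = [
    ScorecardEntry("anon-07", "team", 8, ["Founders shipped in this market before"]),
    ScorecardEntry("anon-12", "team", 4, ["No technical co-founder on staff"]),
]

# Mean score per feature is the easy part; learning shared reasons is the hard part.
team_scores = [e.score for e in entries if e.feature == "team"]
print(sum(team_scores) / len(team_scores))  # 6.0
```

Keeping the score and the reasons in the same record is what lets the later steps link reasoning to ratings.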

After reviewing the materials, you provide your score and the reasons behind it. Once you submit your thinking, you are shown a sample of other contributors’ reasons for their scores. You do not see the other contributors’ scores, nor who is behind the reasoning. You are asked to prioritize the reasons that align with your thinking. This focuses attention on the evidence driving each perspective on the decision and reduces the bias of organizational position or team politics. Reading others’ reasons may cause you to go back and rethink your rating or other thoughts you have about the decision. This is an important part of the group process of moving toward points of alignment.

Figure 1

Learning relevance with scientific accuracy

Let’s freeze-frame the example to note the opportunity for a technological and scientific solution to collective reasoning. In fact, the roots of learning group preferences go back nearly 100 years, to the law of comparative judgment first stated by L.L. Thurstone in 1927. The only reliable way to learn ‘predictive’ preferences is an A/B test, an economic tradeoff. Many of us have been through the process at the optometrist’s office of answering “A or B?” … “B or A?” over and over again. Marketers are well versed in A/B tests as a standard means of learning product preferences. But does this work for groups at scale, in a dynamic discovery environment where the number of items to compare could require massive numbers of A/B tests? AI technology offers a solution.
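To make the comparative-judgment idea concrete, here is a sketch in the spirit of Thurstone’s approach using the Bradley-Terry model (my choice of model for illustration; the article does not specify the platform’s algorithm). It recovers a latent preference scale from pairwise “A or B?” choices:

```python
# Illustrative sketch: recover a preference scale from pairwise choices using
# the Bradley-Terry model fit by the standard MM iteration. The data are invented.
def bradley_terry(wins, items, iters=200):
    """wins[(a, b)] = number of times a was preferred over b."""
    strength = {i: 1.0 for i in items}
    for _ in range(iters):
        new = {}
        for i in items:
            num = sum(wins.get((i, j), 0) for j in items if j != i)
            den = sum(
                (wins.get((i, j), 0) + wins.get((j, i), 0))
                / (strength[i] + strength[j])
                for j in items if j != i
            )
            new[i] = num / den if den else strength[i]
        total = sum(new.values())            # renormalize to fix the scale
        strength = {i: v * len(items) / total for i, v in new.items()}
    return strength

# Three candidate "reasons"; C beats B, B beats A in head-to-head choices.
wins = {("C", "B"): 8, ("B", "C"): 2, ("B", "A"): 7, ("A", "B"): 3,
        ("C", "A"): 9, ("A", "C"): 1}
s = bradley_terry(wins, ["A", "B", "C"])
print(sorted(s, key=s.get, reverse=True))  # ['C', 'B', 'A']
```

The catch, as noted above, is that exhaustive pairwise testing scales quadratically with the number of items, which is exactly why a smarter sampling strategy is needed for groups at scale.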

Each time an individual on the evaluation team described above ranks a list of preferences, we can create a “web” of who prioritized whose reasons. The web from a session like the one outlined above is shown below. The nodes are individual contributors; the links are generated by participants prioritizing each other’s ideas:

Web Representation of Group Interactions
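This endorsement web can be represented as a directed graph, with an edge from u to v whenever contributor u prioritized a reason written by contributor v. A PageRank-style iteration (my assumption for illustration; the article does not name the exact algorithm) then yields an influence score per contributor:

```python
# Illustrative sketch: influence ranking over the endorsement "web".
def influence(edges, damping=0.85, iters=100):
    nodes = {u for u, _ in edges} | {v for _, v in edges}
    out_links = {u: [v for a, v in edges if a == u] for u in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for u in nodes:
            targets = out_links[u] or list(nodes)  # dangling nodes spread evenly
            share = damping * rank[u] / len(targets)
            for v in targets:
                new[v] += share
        rank = new
    return rank

# Everyone prioritized at least one of Bo's reasons, so Bo ranks highest.
edges = [("ann", "bo"), ("cy", "bo"), ("dee", "bo"), ("bo", "ann"), ("dee", "ann")]
r = influence(edges)
print(max(r, key=r.get))  # 'bo'
```

Note that influence here is earned by having one’s reasons prioritized by peers, not by title or volume, which is the point of anonymizing the process.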

Relevance is learned using a hidden Markov process. The sample list shown in the right panel of Figure 1 is dynamically generated based on prior rating events. The “hidden” part of the Markov process is a function that gradually shifts from “discovery mode”, gathering inputs on new ideas, to “prioritization mode”, estimating the list of reasons that best represents the group’s rank-ordered reasons. Note that the context of the sample list is determined by the “feature” under consideration; team assessment is one example “feature” of the overall model. The figure below shows the learning curve of the algorithm. Its rate of convergence is driven by the number of rating events, but it converges rather quickly for the top-rated items (near the origin of the plot).

Learning Relevance
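The shift from discovery to prioritization can be sketched as a sampling schedule that decays with the number of rating events. This is a minimal sketch of the idea only; the decay function, half-life, and list size are my assumptions, not published parameters of the platform:

```python
import random

random.seed(7)

# Hypothetical sketch of the "hidden" schedule: early rating events favour
# discovery (sampling rarely-seen reasons); later events favour prioritization
# (re-testing the current top-ranked reasons).
def discovery_weight(n_events, half_life=50):
    return 0.5 ** (n_events / half_life)  # 1.0 at the start, decays toward 0

def sample_list(reasons, relevance, seen_counts, n_events, k=5):
    w = discovery_weight(n_events)
    n_discover = round(w * k)
    fresh = sorted(reasons, key=lambda r: seen_counts.get(r, 0))[:n_discover]
    top = [r for r in sorted(reasons, key=relevance.get, reverse=True)
           if r not in fresh][: k - n_discover]
    out = fresh + top
    random.shuffle(out)  # hide which items are discovery vs. prioritization
    return out

reasons = [f"reason-{i}" for i in range(20)]
relevance = {r: i / 20 for i, r in enumerate(reasons)}
seen = {r: random.randint(0, 5) for r in reasons}

print(len(sample_list(reasons, relevance, seen, n_events=0)))  # 5
print(discovery_weight(0), discovery_weight(200) < 0.1)        # 1.0 True
```

Early on the list is all fresh material; after many rating events it converges to re-testing the current top candidates, which is what drives the fast convergence for top-rated items seen in the learning curve.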

From that web of connections we learn an influence ranking of the various members, and we use natural language processing to learn the reasons and topics that represent the collective reasoning associated with the assessment of the business opportunity, the team, and so on. Learning relevance alongside sentiment provides a meaningful signal of which items are highly relevant and how the group feels about them. An example for an investment is shown in the figure below.

Relevance vs. Sentiment

In this particular case, the evaluating group sees revenue as highly important but sentiment indicates that discussion about revenue is mixed. Detail on why this is true is learned by looking into the relevant reasons assigned to this topic.

This is the essence of a collective reasoning system. It automates the acquisition of knowledge applied to the decision at hand.

Integrating human and artificial intelligence

Today we face unequaled challenges in how investors and corporations deploy capital and chart a path for corporate transformation and growth. Substantial investments in AI have paid off in automating certain business processes, but the promises of AI have fallen short when it comes to planning, strategic thinking, investment and corporate transformation. A myopic focus on statistical learning, with its heavy reliance on historical data, has limited the vision of what AI can do to catalyze innovative solutions to our biggest challenges. The need for creative collective action goes beyond corporate strategy; it extends to responses to climate change and public policy. In this article, I present a platform, developed over a number of years, that uses collective human intelligence and AI to catalyze and empower collective creative prediction and action.

The limits of second-generation, statistical-learning AI are clearly addressed in Judea Pearl’s recent book, “The Book of Why”. He states:

“If I could sum up the message of this book in one pithy phrase, it would be that you are smarter than your data. Data do not understand causes and effects; humans do.”

He goes on to explain:

“While probabilities encode our beliefs about a static world, causality tells us whether and how probabilities change when the world changes, be it by intervention or by act of imagination.”

In an MIT Technology Review interview, Yoshua Bengio, one of the fathers of deep learning, stated:

“I think we need to consider the hard challenges of AI and not be satisfied with short-term, incremental advances. I’m not saying I want to forget deep learning. On the contrary, I want to build on it. But we need to be able to extend it to do things like reasoning, learning causality, and exploring the world in order to learn and acquire information.”

There are strong arguments for the integration of human and machine intelligence being fundamental to next generation AI systems. Humans remain superior in learning from extremely small data sets and in imaginative discovery and predictions. Trust and transparency are critical to AI applications in areas such as financial investments and corporate transformation decisions. Human-empowered AI and collective intelligence suggest ways to guide the course of AI’s future.

Collective reasoning: From data acquisition to cognitive models

If you want to get input on a decision using the tool sets available today, you typically turn to survey tools, polls, or open-ended input via email and messaging tools. Today’s tools are geared to collecting individual perspectives on a topic or decision. For example, say you have a group of people from whom you would like feedback on an investment decision, and you want input on three of the questions posited above: business, team, and network effects. Each participant is asked to score each question from 1 to 10 (10 being the highest) and to give the reasons or thinking behind their answer. The figure shows a graphical model of that information-collection exercise:

Individual comments without relevance ranking

The blue dots represent the people inputting their views (23 participants). The pink dots represent their scores and reasons (113 reasons submitted). Some participants are far more verbose than others, but all provide scores for each question. This graph shows the results obtained with any of the methods mentioned above: survey, poll, or email. Calculating the group’s mean scores is easy, and survey tools do it automatically. Summarizing the 113 reasons is hard unless you use the human brain and read each one. Unless you bring everyone together and facilitate a group discussion, you will never know the top three or four reasons that represent the group’s thinking about the investment decision. Sentiment analysis is only mildly predictive. What is sorely missing is the ability to ask:

How is the group aligned around the reasoning for this investment decision?

Suppose we introduce a mechanism that allows participants to rank each other’s reasons from a sample, as described above. Using a relevance-learning algorithm, we can filter out the reasons with lower relevance to the group and radically simplify the analysis. We are now seeing the early stages of how this group reasons together. This is the first step in collective reasoning: filtering for relevance through an AI-augmented peer review process:

Peer Reviewed Relevant Comments

Note that the person on the lower left has many reasons that aren’t particularly relevant to his or her peers. (Have you ever felt like that person in a meeting?) If we look only at high-relevance reasons (e.g., scores > 50%), we reduce the complexity of the problem: we now need to read only ~50 reasons if we want a deeper dive into what the group thinks.
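The filtering step itself is simple once relevance has been learned. In this toy illustration the relevance values are deterministic placeholders standing in for the learning algorithm’s output; the counts are chosen to mirror the example (113 reasons in, roughly 50 out):

```python
# Toy illustration of the filtering step: 113 submitted reasons, keep only
# those whose learned relevance exceeds 50%. Relevance values here are a
# deterministic placeholder for the output of the learning algorithm.
reasons = [(f"reason-{i}", (i * 37 % 100) / 100) for i in range(113)]
relevant = [r for r in reasons if r[1] > 0.5]
print(len(reasons), "->", len(relevant))  # 113 -> 54
```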

Using some of the latest NLP technologies, as we dynamically learn the relevance of reasons we can also learn topics (shown in green) of critical importance to the decision. Topics like good market or poor go-to-market make it easier to summarize the group’s reasoning. Since topics or themes are just collections of reasons, each theme carries a relevance score of its own, so it is possible to put the themes in priority order based on the group’s collective judgment. In this particular case, it turns out that there are five key themes, each with positive, negative and neutral sentiments attached, further simplifying the results. And since each reason is attached to a quantitative score, we can combine that score with sentiment analysis to get a much more predictive and precise read on the true intention behind the person’s thinking on the decision.

Comments Organized into Themes
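The mechanics of the theme step can be sketched as follows. A real system would use sentence embeddings from a modern language model; here a bag-of-words cosine similarity with an invented threshold is enough to show how reasons group into themes and how each theme inherits a relevance score from its member reasons:

```python
from collections import Counter
from math import sqrt

# Toy sketch: group reasons into themes by text similarity, then score each
# theme by the mean relevance of its member reasons. Data and threshold invented.
def vec(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)   # Counter returns 0 for missing words
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def themes(reasons, threshold=0.3):
    groups = []  # each group: list of (text, relevance)
    for text, rel in reasons:
        for g in groups:
            if cosine(vec(text), vec(g[0][0])) >= threshold:
                g.append((text, rel))
                break
        else:
            groups.append([(text, rel)])
    # theme relevance = mean relevance of the reasons inside the theme
    return [(g, sum(r for _, r in g) / len(g)) for g in groups]

reasons = [
    ("strong market growth in the target segment", 0.9),
    ("market growth looks strong", 0.8),
    ("go-to-market plan is weak", 0.6),
    ("weak go-to-market plan and no sales team", 0.5),
]
for group, rel in sorted(themes(reasons), key=lambda t: -t[1]):
    print(round(rel, 2), [t for t, _ in group])
```

Because theme relevance is just an aggregation of reason relevance, the themes come out already rank-ordered by the group’s collective judgment.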

However, recall that the initial purpose of this exercise was to act on what the group recommends with regard to this investment decision. By using a structured process like the one depicted in Figure 1, we can link all the collective reasoning to a prediction. The themes (shown in green) and their included relevant reasons (shown in pink) ultimately influence the ratings on the features, and thus the predictive score. In this case, the group gives the investment a 79% score. Such scoring methods can be trained on ground-truth data, integrating human and artificial intelligence, to provide a framework for a whole new approach to collective decision making in organizations.

Predictive Model with Relevant Reasons and Themes
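The roll-up from feature ratings to a single score can be sketched as a weighted average scaled to a percentage. The weights below are invented for illustration; as noted above, in practice they could be trained against ground-truth outcomes:

```python
# Minimal sketch of the scoring step: feature ratings (1-10) roll up into a
# single percentage via feature weights. Weights are invented, not trained.
def predictive_score(feature_means, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    raw = sum(weights[f] * feature_means[f] for f in weights)  # 1-10 scale
    return round(raw * 10)  # as a percentage

feature_means = {"business": 8.0, "team": 7.5, "network": 7.8, "conviction": 8.2}
weights = {"business": 0.35, "team": 0.30, "network": 0.15, "conviction": 0.20}
print(predictive_score(feature_means, weights), "%")  # 79 %
```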

Each decision process produces a collective reasoning model of the decision: a collective cognitive model of what the group believes the outcome of a decision, such as a decision to invest or pass, will be. The model can be inspected from different perspectives. For example, we can start with the predictive outcome and ask “why”: what is the reasoning behind the decision? The figure below shows a perspective on the business-quality aspect of the decision.

Reasons and themes behind scores and predictions

The feature relating to quality of the business received a mean score of 8.

However, as you can see, there is diversity of opinion: some thought there was a good business model while others did not. A critically important point: collective reasoning is not about “herd mentality”. It explores the ramifications of diverse opinions, reasoned by diverse individuals, and how they blend into a collective prediction or decision.

Rating Distribution

Collective reasoning leverages findings from collective intelligence, specifically the diversity prediction theorem:

Collective error = average individual error − prediction diversity

For the collective reasoning system described here, we calculate both a quantitative diversity score and a language diversity score. The latter is made possible by turning language into geometry using modern embedding techniques: each reason becomes a point in language space with a calculable distance from every other reason. We have shown that:

the greater the cognitive diversity of the team the greater the predictive accuracy
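The diversity prediction theorem above is an exact identity for squared error, and it is easy to verify numerically. The scores and realized outcome below are invented for illustration:

```python
# Numerical check of the diversity prediction theorem (for squared error):
# crowd error = average individual error - prediction diversity.
def diversity_identity(scores, truth):
    n = len(scores)
    crowd = sum(scores) / n
    crowd_error = (crowd - truth) ** 2
    avg_error = sum((s - truth) ** 2 for s in scores) / n
    diversity = sum((s - crowd) ** 2 for s in scores) / n
    return crowd_error, avg_error, diversity

scores = [6, 9, 7, 10, 8]   # five contributors' ratings
truth = 9                   # realized outcome
c, a, d = diversity_identity(scores, truth)
print(c, a, d)              # 1.0 3.0 2.0 -- and c == a - d exactly
```

The language diversity score applies the same idea with reasons embedded as points in language space, replacing numeric spread with mean pairwise embedding distance.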

By linking reasons (causes) to a score, we establish a causal relationship between the reasoning and the score. The net effect is a causal network for a prediction: a causal cognitive model of a decision. Over time this can be generalized into a foundation for building computational models of the collective reasoning and intelligence of teams of individuals across broad decision domains.
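The resulting structure can be sketched as a plain directed acyclic graph: reasons point at the feature rating they justify, and features point at the decision node. Node names are illustrative. Asking “why?” is then a walk backwards from the decision node:

```python
# Sketch of the causal structure: decision <- features <- reasons.
graph = {
    "invest_decision": ["business", "team"],
    "business": ["strong market growth", "weak go-to-market"],
    "team": ["founders shipped before"],
    "strong market growth": [], "weak go-to-market": [], "founders shipped before": [],
}

def why(node, depth=0):
    """Return an indented outline of the causes behind a node."""
    lines = ["  " * depth + node]
    for cause in graph[node]:
        lines.extend(why(cause, depth + 1))
    return lines

print("\n".join(why("invest_decision")))
```

In a full system each edge would also carry a strength, turning this outline into the belief network the decision process produces.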

In summary, the collective reasoning system described here has the potential to radically transform the way we capture and improve the intelligence of decision makers in an organization. Those with an interest in AI will recognize the result as a Bayesian belief network, or causal network. In effect, we have automated the process of acquiring knowledge from a group of experts or team members in an organization.

In this short piece I have presented a solution path to faster, more accurate organizational decisions. Each collaborative process is driven by a decision “scorecard” that captures the factors you need to consider to make a decision. Each process results in a score and a complete record of the knowledge and thinking that went into the decision. The model can be archived and used as an independent resource for organizational learning. It simulates the thinking of the group and, as such, is a causal model, a knowledge model, a mini-intelligence of the expertise and thinking that produced the decision.

If you have a “scorecard” of the factors you need to consider, use collective reasoning as described above to automate and streamline your decision-making process. It can be done asynchronously and remotely, and you don’t have to go to a meeting. You can meet once you are aligned, and it will be a much more pleasant experience. You will also have a permanent record of the complete reasoning process your team went through to make the decision.

If you do not have a “scorecard” for your decision, create one. That leads to another piece to follow later.

Transforming Organizational Decision Making with Collective Reasoning was originally published in Towards Data Science on Medium.