Human biases, stereotypes and social dynamics hinder critical decision-making processes in organizations. Hiring, promoting and firing people is a political game that too often has little to do with the real value generated by individuals. It's well known that digital solutions can help collect and analyze information to make informed decisions, but a more radical solution is to delegate the critical decisions to machines.
It’s no news that human beings are biased, and that our decisions are influenced by multiple factors that distort our perception and misguide our choices. According to the psychologists Banaji and Greenwald, our perception of the social world is led by stereotypes, which allow us to feel more familiar with the unknown. In a rapid glance we naturally detect multiple features of an individual and, combining assumptions about each feature, we generate a profile accurate enough to interact with an acceptable level of predictability. If we need directions to the train station, would we prefer to approach the middle-aged white woman in a black coat holding a suitcase, or the young, strongly built, well-dressed black man seated at the coffee table drinking a beer? We combine multiple stereotypes, generate a unique profile, and make our choice. It’s efficient, rapid, and minimizes an otherwise overwhelming uncertainty. It’s also a biased, subjective process that is highly influenced by the social context, holding a high potential for rational mistakes.
What happens when this process is applied to critical decisions like hiring an external expert, promoting an executive, or selecting individuals to be laid off in a delayering process? What is an acceptable margin of error for the process to be considered fair, meritocratic and objective? And what level of inconsistency between multiple raters can we afford? Is it fine if George from HR selects me because I hold a Harvard degree (I must be smart!) and I’ve been working in Africa for the last three years (cosmopolitan, open-minded and adventurous), while the day Joanna replaces him she prefers an older (must be wiser), French (elegant and sophisticated) woman (more emotionally intelligent) holding a Stanford degree (more innovative)?
Suddenly the importance of what value we can add to the company is diluted in the noise of stereotypes that try to capture who we are and minimize uncertainty.
Ambiguity in organizational decisions is also fertile ground for politics, nepotism and corruption. When decision-making criteria are unclear and processes are not objective, managers can easily enter the market of favours, exchanges and transactions to build their careers and remove barriers. The error margin generated by our cognitive limits is leveraged by the Machiavellian mind, which can hide its agenda behind bureaucracy, confusing committees and empty processes.
Historically, when we perceive human limits that hinder the effectiveness of our work, we desperately look for answers from the world of machines. Today these machines come in the form of digital, cloud-based solutions that leverage advanced people analytics, predictive algorithms and artificial intelligence.
To understand how digital solutions may improve decision-making processes, let’s explore how they can impact three fundamental steps of our organizational decisions: collecting and analyzing relevant information, making decisions based on the analysis, and communicating the decisions.
Collecting and analyzing relevant information:
Once upon a time, HR managers piled up on their desks a mountain of CVs, to be read, highlighted, shredded, shortlisted, moved in different piles and, at times, lost.
An increasing number of companies today leverage digital solutions that allow automated parsing of CVs, extracting and codifying information from thousands of individuals. To promote managers, companies use digital, cloud-based performance management systems that provide all the necessary information about an individual (roles, experiences, ratings, tenure etc.) in synthetic dashboards, enabling informed decisions based on facts and data.
It’s easy to see how digital is improving the decision-making process by addressing a key human weakness: handling huge data sets in a systematic way. People interpret information based on their cognitive categories, seeing only what they want to see and ignoring critical data. As the dataset grows, this becomes even more true, incentivizing the use of heuristics to accelerate the process. No surprise, then, that digital can help here.
Things get more interesting when we need to transition from the analysis to the decision itself. It’s quite easy to accept the use of technology for the former, while the latter is still a taboo for most organizations. Imagine the effect of knowing that our next promotion is determined by a machine, that the final decision on our recruitment at the firm of our dreams is made by an algorithm, or, even worse, that we can be fired by a computer. Most of us would agree that it feels wrong and scary. Enter career committees, leadership meetings, and other long, boring boardroom conversations to inform, ponder, debate, rate and vote on decisions. They are everywhere, and they are terribly biased. These decisions are influenced by obvious factors like hierarchy, the power of the speaker and politics, and by less obvious factors like the recency of critical events, the similarity of the candidate to the individuals in the committee, or the success or failure of past candidates who were similar in certain respects, like nationality, gender, age, sexual orientation or personality. The selection of future leaders, rather than following the logic of scientific research, more often than not resembles choosing a football team to root for. Empowering a machine to make the decision for us, following a rigorous scientific approach, would turn the decision-making process upside down.
While decisions shape our organizations, the way they are communicated determines the amount of resistance from the impacted stakeholders. It’s much more acceptable to be passed over for a promotion if we learn it in a timely, clear manner, based on objective data and clear criteria, and communicated in person with empathy. Big changes have similarities with grief, and they need to be communicated with an analogue, tactful approach to be more acceptable.
Artificial intelligence is evolving fast, but the maturity of this technology is still far from enabling empathic relationships between humans and machines. Chatbots and digital assistants are more and more sophisticated, but well-prepared humans are still ahead in the game. We don’t like to deliver tough messages, but we still (have the potential to) do it better.
Machines can improve the fairness and objectivity of critical decisions, but we need caution before embracing the whole set of digitally enabled solutions as the holy grail that will fix all our problems. Whenever we empower machines to take control of what humans traditionally controlled, there are consequences that go beyond productivity and effectiveness.
Leveraging machine-led decisions will create organizations with absolute meritocracy. This implies three significant consequences:
- Personal features not related to the decision-making process will become irrelevant. Political preferences, sexual orientation, gender, age and ethnic group will all be ignored. Minorities will finally have the same chances to be represented in the leadership team, if they can demonstrate the right skills and results, regardless of their background
- All the energy employees spend today managing political processes, relationships of interest, upward branding and impression management will suddenly lose effectiveness. People will stop worrying about "what does the leadership team think about me?" and start asking "how can I add objective, measurable value to the organization?"
- Leaders will need to define exact criteria for selection, promotion and other critical talent management decisions. This will eliminate the ambiguity of people strategy. Being forced to express these priorities in quantitative terms (e.g. the weight of specific achievements, skills, performance scores and experiences) will help organizations clarify their strategies in a linear way that can be easily tested, communicated and understood
The whole concept of delegating critical decisions to machines is a revolution that few organizations would embrace easily. Imagine the reaction of unions and other workers' associations! Being a radical cultural revolution, accepting that machines decide for us would generate outraged resistance from the workforce, long before any analysis or appreciation of the potential benefits. As usual, the more revolutionary the approach, the stronger the resistance. However, if we ignore the issue and maintain the status quo, we will never increase diversity, minimize pay gaps, maximize performance, boost innovation or create workplaces that are fair and meritocratic.
Other interesting challenges arise at the philosophical level: is it good for humanity to lose control of critical decisions? How can ethics be embedded in machine-led decisions? Is making decisions based on a limited number of objective, quantitative factors better than traditional conversations, group thinking and human analysis?
My view is quite radical: our emotional and social complexity is amazing in so many areas of our lives, but when we need absolutely fair decisions, like choosing the leaders of our organizations, it often plays against us. Recognizing that other people may have different perspectives on meritocracy, a hybrid system where decisions are managed by humans and machines together is probably a more acceptable answer in this day and age.
Finally, there is the dystopian scenario where machines take over our organizations and make intentional, malicious decisions to harm humans and exclude them from society. An ongoing debate about how to audit the decision-making algorithms of machines is necessary to ensure positive technological progress, similar to what happens in other fields like self-driving cars, Industry 4.0 and healthcare robotics.
A first step that can be tested without major resistance is to use machine-led decisions as a starting point in existing decision-making processes. Leaders decide the criteria for appointing candidates to open positions, for promotions and for other critical processes. Each criterion needs to be linked to quantitative, documented information - like performance ratings, achievement of targets, number of people managed, sales results, upward feedback, client feedback, number of awards etc. - and weighted based on its strategic importance for each position. Intelligent algorithms can also calibrate scores based on recognized patterns - like an individual/functional/regional tendency to give high or low ratings, consistency across raters, or the statistical credibility of the rater. After learning what the rational analysis says about a promotion, selection or any other important career milestone, leaders will need to build a solid business case to justify any decision going in a different direction, creating an additional obstacle to internal politics.
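To make this first step concrete, here is a minimal sketch of a weighted-criteria score with a simple rater calibration. The criteria names, weights and the z-score calibration method are purely illustrative assumptions for this article, not a reference to any real HR system.

```python
from statistics import mean, stdev

# Hypothetical criteria and strategic weights, defined by the leaders
# for one specific position. All names and numbers are illustrative.
WEIGHTS = {
    "performance_rating": 0.4,
    "target_achievement": 0.3,
    "upward_feedback":    0.2,
    "client_feedback":    0.1,
}

def calibrate(raw_by_candidate):
    """Z-score one rater's ratings so that systematically harsh or
    generous raters become comparable (a simple calibration pattern)."""
    values = list(raw_by_candidate.values())
    mu, sigma = mean(values), stdev(values)
    return {name: (v - mu) / sigma if sigma else 0.0
            for name, v in raw_by_candidate.items()}

def weighted_score(candidate_scores):
    """Combine a candidate's criterion scores using the weights above."""
    return sum(WEIGHTS[c] * candidate_scores[c] for c in WEIGHTS)

def rank(candidates):
    """Return candidate names, strongest rational case first. This is
    the machine's 'starting point' that leaders must argue against."""
    return sorted(candidates,
                  key=lambda name: weighted_score(candidates[name]),
                  reverse=True)
```

A leader wanting to promote someone outside the top of this ranking would then have to document why, which is exactly the extra obstacle to internal politics described above.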
A second step toward full machine empowerment is the delegation of decisions to digital algorithms, with (limited) veto power for humans, who can accept or decline, but not make, the final decisions. This goes beyond a simple conversation starter, because certain names will not even be an option for certain positions, simply because the machine, finding no rational reason to support them, does not propose them.
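The veto mechanic of this second step can be sketched in a few lines. The function names and the shortlist size of three are hypothetical choices for illustration only.

```python
def propose_shortlist(scores, k=3):
    """Machine step: propose only the k highest-scoring candidates.
    Names outside this shortlist are simply not an option."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]

def finalize(shortlist, vetoed):
    """Human step: veto proposed names, but never add new ones.
    The final choice is constrained to what remains."""
    return [name for name in shortlist if name not in vetoed]
```

The asymmetry is the point: humans can remove options but cannot reintroduce a politically favoured candidate the machine never proposed.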
The final step is full empowerment, where the leadership team is informed of the decision directly by the algorithm. Although it would ensure maximum fairness, it is still somewhat utopian in the near future.
Each step is a cultural revolution, and can be embraced only when the previous step has been successfully implemented and fully accepted by the organization. Pilots in the most progressive teams are recommended, to start testing the practical implications of these changes.
The work of professors Anthony Greenwald and Mahzarin Banaji has strongly inspired my thinking about stereotypes, biases and human limits, as has the seminal behavioral economics work of Amos Tversky and Daniel Kahneman.