
The Glue That Binds: Explainability and Trust in Artificial Intelligence


The algorithms that dictate our online experiences are exceedingly complex, multilayered, and often unique to each person. Unwinding them is proving incredibly difficult, and undoing the damage they inflict on society will be a challenge facing generations to come. So how do we avoid the mistakes of the past as we create the foundational technologies of the future? It starts with explainability, and it matters as much to humans as it does to the machines that run the algorithms on which we increasingly depend.


Cognitive Dissonance and Trust


As humans, we have a natural inclination to seek understanding and make sense of our world. When something happens or a decision is made, most of us will want to know why it occurred or why a particular choice was made. The ability to explain the reasoning behind decisions provides a sense of comprehension and helps us build a mental model of cause and effect. In the absence of explainability, uncertainty and ambiguity prevail, which can lead to cognitive dissonance and discomfort. Developing an understanding of the reasons behind actions, decisions or outcomes reduces our cognitive dissonance and promotes our basic human need for a sense of stability and control over our environment.


People often rely on experts and authority figures to make decisions or provide guidance. The transparency and explainability of these decisions and guidance play a crucial role in establishing trust. When individuals understand the underlying reasoning, they are more likely to trust the process and accept the outcomes, even if they don't necessarily agree with them. Lack of explainability can lead to skepticism, suspicion, and reduced trust in individuals or systems.


Perceived Fairness, Empowerment and Autonomy


Humans have a strong sense of fairness and equity. When people are affected by decisions or actions, they want to ensure that the criteria and decision-making that led to those outcomes are fair and just. Explainability allows individuals to assess the fairness of a decision-making process, identify potential biases or errors, and evaluate the legitimacy of the outcomes. Perceived fairness contributes to feelings of justice as well as psychological well-being.


Providing explanations empowers individuals by giving them a deeper understanding of the factors that influence their lives. When people understand the reasons behind certain outcomes, they can adapt their behavior, make informed choices, or actively participate in decision-making processes. Explainability supports individuals' autonomy and agency, and makes them feel like they are part of the process, which is essential for their psychological well-being.


Error Detection and Learning


Explainability facilitates error detection and learning opportunities. When individuals can understand the reasons for an incorrect outcome or a mistake, they can identify potential areas of improvement. This process of learning from explanations helps individuals develop new knowledge, skills, and strategies to avoid similar errors in the future. By promoting learning and growth, explainability contributes to individuals' psychological development and self-efficacy.


Explainability Is The Glue That Binds


By understanding why an algorithm made a particular choice or prediction, we can trust its judgment and feel more confident in relying on it. When we can see the reasoning behind decisions, we feel more comfortable and have greater faith in the process. When algorithms are explainable, we can better understand their workings, identify their errors, and make them more reliable. By embracing explainability, we foster collaboration between humans and algorithms, both in terms of understanding and reliability. Where humans and algorithms entwine, explainability becomes the glue that binds. It is the bridge that connects our quest for comprehension with the digital minds that aid our decision-making.


Explainability, though, is not always in the interest of the companies that create and operate the algorithms. For example, the models that govern our social media experiences are designed specifically to maximize the amount of time users spend on their platforms. As key drivers of revenue, these algorithms are considered both proprietary and highly valuable, so striking the right balance between algorithmic transparency and maintaining a competitive edge remains a complex task.


Creating explainability for Large Language Models poses an altogether different challenge. The sheer scale of the models that make LLMs possible, with layers of interconnected artificial neural networks containing millions or even billions of parameters, the quantity of data they are trained on, and the fact that they continuously evolve and improve as they are exposed to more prompts and data, all make it difficult to analyze and unpack their internal workings. Efforts are being made by LLM developers and third parties to develop methodologies that shed light on LLMs' decision-making processes, provide interpretability for their outputs, and increase transparency regarding biases and limitations.
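As one concrete illustration of what "interpretability for their outputs" can look like, the sketch below computes a simple gradient-based saliency score for each input token of a small open-source causal language model. It is a minimal, generic technique rather than any particular vendor's methodology; the model name and prompt are purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # a small open model, used purely for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "The loan application was declined because"
inputs = tokenizer(prompt, return_tensors="pt")

# Embed the tokens ourselves so we can take gradients with respect to them.
embeds = model.get_input_embeddings()(inputs["input_ids"]).detach().requires_grad_(True)
outputs = model(inputs_embeds=embeds, attention_mask=inputs["attention_mask"])

# Take the score of the single most likely next token and backpropagate it
# to the input embeddings.
next_logits = outputs.logits[0, -1]
predicted_id = next_logits.argmax()
next_logits[predicted_id].backward()

# The gradient magnitude per input token is a rough measure of its influence
# on the prediction -- one simple, imperfect window into the model's decision.
saliency = embeds.grad[0].norm(dim=-1)
for token, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), saliency):
    print(f"{token:>12}  {score.item():.4f}")
print("predicted next token:", tokenizer.decode(predicted_id.item()))
```

Techniques like this only scratch the surface of a multi-billion-parameter network, which is precisely why LLM explainability remains an open problem.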


We are exploring ways to apply a psychological understanding of LLM language inputs (user prompts) to improve the quality and relevance of LLM outputs by making psychologically-informed inferences about users and their intentions from their prompts and prompt history. We are also exploring how to reduce the likelihood of socially undesirable outputs by applying psychological and behavioural science perspectives to filter and refine the outputs these models generate.
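Purely as a sketch of the filtering idea described above, and not a description of our actual models, the snippet below shows the general shape of a post-generation filter: a candidate output is scored on some psychologically relevant dimension and released only if it clears a threshold. The scoring function here is a deliberately naive stand-in.

```python
from typing import Callable, Optional

def filter_output(candidate: str,
                  score_fn: Callable[[str], float],
                  threshold: float = 0.5) -> Optional[str]:
    """Release the candidate output only if its score clears the threshold."""
    return candidate if score_fn(candidate) >= threshold else None

# Naive stand-in scorer: a real system would use validated language measures,
# not a keyword list.
def toy_score(text: str) -> float:
    hostile = {"stupid", "hate", "worthless"}
    return 0.0 if any(word in text.lower() for word in hostile) else 1.0

print(filter_output("You're doing great; here's a plan for next week.", toy_score))
print(filter_output("That idea is stupid.", toy_score))  # suppressed -> None
```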


Protection From Unintended Consequences


Thankfully, explainability is increasingly being recognized as table stakes in industries where artificial intelligence is used to inform decisions that can have outsized impacts on people’s lives, such as healthcare, insurance, and human resources. The algorithms that will define the future of these industries will affect us in ways we can’t yet anticipate. Explainability provides an added level of insurance against future Frankenstein technologies.


Receptiviti was early to embrace an open-science-based approach. Explainability is exceedingly important to our customers, and it has fostered deep trust in our science and technology, which is critical because our customers integrate both into their own technologies and processes. Our validated models are based on years of published research into how psychology and psychological change manifest in language, and how language provides tremendous insight into psychological states and processes, personality, motivations and human behaviour. We democratize our science by making our research public and our models available through web-based and containerized APIs. In this way, we enable our customers to understand the scientific foundations of our models, and ensure that their technologies provide tangible benefits to their users – and to society as a whole – in ways that are explainable, understandable, and logical.
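To make the "web-based and containerized APIs" point concrete, here is a hedged sketch of how a customer might call a hosted language-analysis endpoint. The URL, credentials, payload, and response fields are placeholders rather than our actual API contract; consult the official documentation for the real interface.

```python
import requests

API_URL = "https://api.example.com/v1/analyze"   # placeholder endpoint, not the real one
API_KEY, API_SECRET = "your-key", "your-secret"  # placeholder credentials

payload = {
    "request_id": "demo-1",
    "content": "I've been feeling far more confident about this project lately.",
}

# Basic-auth POST of a single text sample; a containerized deployment would
# expose the same interface on an internal host.
response = requests.post(API_URL, json=payload, auth=(API_KEY, API_SECRET), timeout=30)
response.raise_for_status()

# Explainability in practice: the response surfaces named, validated measures
# rather than a single opaque score, so downstream decisions can be justified.
print(response.json())
```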

We designed our platform from the ground up to foster trust, and in doing so we believe that we are demonstrating that algorithmic explainability and business viability need not be mutually exclusive.


