
AI in the enterprise is making headlines in a big way again, which makes it the perfect time to revisit the challenges associated with this highly disruptive technology. Is your organization ready for it, not just to use it, but to use it responsibly?

The Pros of AI Are “The Why.” The Cons Should Guide “The How.”

Chatbots are just one practical, everyday use of artificial intelligence (AI). This year, with unprecedented access to large language models (LLMs) for various business use cases, the topic of AI and its implications is under scrutiny once again.

Let’s start with the strengths. One of the greatest pros of AI is its ability to perform vast numbers of complex computations in a short period of time. It is orders of magnitude faster than a human, which has already made it a win for tedious, time-consuming tasks such as filtering log data for anomalies that may indicate a security breach.

These computations happen inside a model, which is designed by humans and trained on large amounts of data about a particular subject. The trained model can then be applied to pattern recognition, anomaly detection, or prediction.
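
To make the path from data to model concrete, here is a minimal sketch of the log-filtering use case mentioned above: a model trained on historical "normal" traffic that flags unusual windows for human review. It assumes scikit-learn is available, and the feature names and numbers are invented for illustration, not a production pipeline.

```python
# Minimal sketch: anomaly detection on log-derived features.
# Feature names, values, and thresholds are assumptions for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Pretend each row summarizes one minute of logs:
# [requests_per_min, error_rate, unique_ips, avg_payload_kb]
rng = np.random.default_rng(0)
normal_traffic = rng.normal(
    loc=[1200, 0.01, 40, 3.5], scale=[100, 0.005, 5, 0.4], size=(1000, 4)
)

# Train the model on historical ("normal") traffic.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score new log windows: -1 flags a potential anomaly worth a human look.
new_windows = np.array([
    [1180, 0.012, 38, 3.6],   # looks routine
    [4800, 0.300, 900, 0.2],  # spike in errors and unique IPs
])
print(model.predict(new_windows))  # e.g. [ 1 -1 ]
```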

So what about the cons? The potential flaw in this system comes from within the model itself when it is first developed. Bias can be introduced into the algorithm through the unconscious biases of the developer who creates it. Then, data bias can be introduced in the training phase if the training data is skewed. For example, Amazon built an AI to help with human resources recruitment. The data used to train it went back nearly 10 years for individuals in the technology industry, a period when most applicants were male. This led the AI to exclude women when choosing top candidates.
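
Skew of this kind is often visible before a model is ever trained. The sketch below, using hypothetical column names and made-up numbers, shows the sort of quick check that can surface it.

```python
# Minimal sketch: checking historical training data for representation skew
# before it feeds a hiring model (column names and values are hypothetical).
import pandas as pd

applicants = pd.DataFrame({
    "gender": ["M", "M", "M", "M", "M", "M", "M", "M", "F", "F"],
    "hired":  [1,   0,   1,   1,   0,   1,   0,   1,   0,   0],
})

# How balanced is the data, and how do historical outcomes differ by group?
print(applicants["gender"].value_counts(normalize=True))
print(applicants.groupby("gender")["hired"].mean())

# If one group dominates the data, or its historical hire rate is near zero,
# a model trained on it will simply learn to reproduce that skew.
```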

Overlooking this key factor in training the model led to unintended consequences, and it’s a lesson we should all keep in mind. Knowing the potential cons of AI can help inform a smarter framework for applying AI responsibly.

AI Is Inevitable. Bias Doesn’t Have to Be.

The flood of AI into our lives is becoming a force that we need to learn to understand. 

Bias is emerging as one of the most significant issues with AI, and it confronts us in the short term as we adapt to chatbots, administrative AI, and other emerging intelligences taking on our repetitive tasks.

According to Tamie Santiago of the United States Department of Defense, in her 2019 report on Combating AI Bias Through Responsible Leadership:

“This shift in leadership roles will have a direct effect as well on executive function, the part of the brain that regulates analytics, verbal reasoning, inhibition, discretion, mental flexibility, and complex decision-making among other traits.”  

The concern Santiago raises is the probability that leaders will surrender decisions to AI systems, regardless of the inherent bias we know them to have. Weighing the ethical ramifications of doing so is paramount. We need to determine how this separation of human and algorithm will affect the actions AIs take, and, ultimately, who bears the liability and responsibility for those actions and decisions.

It is the ultimate responsibility of leadership to remove as much bias as possible, creating trust, transparency, and fairness. As AI takes over leaders’ repetitive tasks, Santiago recommends compensating by deliberately exercising executive functions, both to help remove bias and to protect the other capacities at risk.

AI Should Be the Copilot, Not the Captain

Ultimately, Santiago recommends a joint effort between humans and AI, basing her research on a Harvard study of detecting cancer cells. In its findings, the AI alone was 92% accurate, conventional methods were 96% accurate, and humans and AI combined were 99.5% accurate.

The “better together” story is not only about accuracy; it is also about reviewing and reducing bias. As we train models, we must also train people to watch for bias and other issues within AI models.

This means that, as humans, we may need to extend our cognitive, emotional, and social intelligence to include machine intelligence, learning to recognize the aspects of models that show bias, a lack of trustworthiness, or signs of unfairness.
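
In practice, that recognition can begin with simple, concrete checks on a model’s output. The sketch below uses made-up predictions and a hypothetical grouping column to compute one common fairness signal: the gap in positive-prediction rates between groups.

```python
# Minimal sketch: one check a reviewer might run on a model's predictions,
# the gap in positive-outcome rates between groups (often called the
# demographic parity difference). All data here is invented.
import pandas as pd

results = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [1,   1,   0,   0,   0,   1,   0,   1],
})

rates = results.groupby("group")["predicted"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"Selection-rate gap: {gap:.2f}")  # large gaps warrant a closer look
```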

How Leaders Can Evolve with the AI Landscape

Adding machine intelligence to our leadership skill set is one way to start addressing the new challenges coming our way. The approach described in the previous article for chatbots, emphasizing moral foundations and ethical practices with AI to help organizations overcome perceived issues, is a starting point.

Leadership needs to stay involved to ensure they have influence on bias detection, trust, fairness, and politeness. It’s the only way to shape AI in the company so that it reinforces the organization’s values and culture. Whatever direction this journey takes us, we’ll need creative, out-of-the-box thinking to resolve these issues going forward.