In part one of our ethics in Artificial Intelligence (AI) blog, we discussed the impact of AI on privacy, data and human rights. This week we look at the risks and rewards for businesses. There’s no doubt that AI is a game changer that has become a go-to technology for businesses looking to gain a competitive advantage. AI represents an opportunity to turn unstructured data into insight, promising the ability to innovate faster, disrupt and outperform the market. Organisations of all sizes and from all sectors want to take advantage.
The benefits AI can bring are transformative: whether it’s accelerating healthcare research to find cures for diseases, helping a Formula 1 team shave milliseconds off lap times to become world champions, predicting crop shortages to ensure populations don’t go hungry, or simply improving business productivity.
However, as with any game changer, checks and balances need to be put in place to avoid accidental or intentional misuse. In AI’s case, these include issues such as biased data, deepfake videos, and electoral interference. Just as the first industrial revolution in the Victorian period sparked debate around working conditions, a similar conversation must be had about the responsible use of AI. This is one of many reasons we recently sponsored the AI Ethics Board panel debate at Digital Transformation EXPO Europe, where the key questions included how we get ethics in AI right, and who ultimately takes responsibility.
The responsible use of AI will not only be vital to a business’s reputation; failing to get it right risks halting the positive, and potentially lifesaving, innovations that AI has made possible.
Businesses and government bodies alike are starting to realise this, and are taking active steps to ensure AI is used and developed ethically. That said, it’s not without its challenges:
An interesting point raised by Galina Alperovich, Senior Machine Learning Researcher at Avast, relates to the assumption that machines can extrapolate what is ethical and what isn’t. “Just because we as individuals may consider ethics to be straightforward, it doesn’t mean it is easy for machines to interpret such a wide spectrum of opinions, and to act accordingly.”
Garry Kasparov, security ambassador and political and human rights activist, added: “If you feed the machine with data that contains bias or reflects societal problems then this will be amplified by the machines. We should be extremely cautious about trying to fix this and should instead focus on changing the way we view ourselves as a society.”
To review their AI strategy and deliver an ethical pledge to consumers, businesses will need to build a wider awareness of the data they hold and identify where biased data may lead to biased outcomes. How a business handles and anonymises data, and how it respects privacy, lays the foundation it can build on.
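What does identifying biased data look like in practice? A simple starting point is to compare outcome rates across the groups represented in a dataset. The sketch below is a minimal, illustrative example only: the column names and data are hypothetical, and a real audit would cover many more attributes and fairness measures.

```python
# Minimal sketch of a bias audit on tabular data using pandas.
# Column names ("gender", "approved") are hypothetical placeholders;
# substitute the sensitive attributes and outcomes in your own data.
import pandas as pd

def outcome_rates_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Rate of positive outcomes per group, e.g. approvals split by gender."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of lowest to highest group rate; values well below 1.0 flag possible bias."""
    return rates.min() / rates.max()

if __name__ == "__main__":
    # Toy data standing in for historical decisions a model might be trained on.
    df = pd.DataFrame({
        "gender":   ["F", "F", "M", "M", "M", "F", "M", "F"],
        "approved": [0,   1,   1,   1,   0,   0,   1,   1],
    })
    rates = outcome_rates_by_group(df, "gender", "approved")
    print(rates)
    print(f"Disparate impact ratio: {disparate_impact_ratio(rates):.2f}")
```

A check like this won’t make a dataset fair on its own, but it makes imbalances visible early, before they are amplified by a model trained on that data.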
Will it be easy? No. Is it risky? Yes. But not addressing the problems risks halting innovation and preventing the positives that AI brings.
To watch the full debate on AI ethics, please see below: