AI Ethics: The Impact on Privacy, Data, and Human Rights

AI is only as good as the data that feeds it, so while bias persists, how do we get to the promised land of truly ethical AI?

Much of the debate surrounding AI ethics has focused on the biased outcomes AI can deliver, and on the fact that there is no one-size-fits-all solution. As we all know, AI is only as good as the data that feeds it, so while bias persists, how do we get to the promised land of truly ethical AI?

Evaluating AI Ethics From a Technology Standpoint

A strong argument can be made that biased outcomes are an inescapable reflection of societal problems that vastly pre-date AI. At the recent DTX Europe 2019 conference, our Field CTO for EMEA, Patrick Smith, took part in a panel discussing AI ethics and the threats it poses to privacy, data, and humans in the future if we don’t get it right.

“AI looks at the proportions in society and it will build patterns and algorithms based on that which are not politically correct. If you don’t like what you see as an outcome it’s because of problems that are deeply built into society,” stated Garry Kasparov, security ambassador and political and human rights activist, during the debate. “People expect that AI is a magic wand that can solve all of these issues that have existed since the beginning,” he added, noting that this is not the case. 
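Kasparov’s point about AI learning “the proportions in society” is easy to demonstrate. The sketch below is purely illustrative, with an invented hiring scenario, made-up groups, and numbers chosen for clarity: a trivial model fitted to historically skewed decisions reproduces that skew faithfully, because the data is all it knows.

```python
# Minimal, hypothetical sketch: a "model" trained on skewed historical
# decisions learns and reproduces the skew. The groups, numbers, and
# hiring scenario are invented for illustration only.

# Past hiring decisions: group B was hired far less often,
# for reasons unrelated to merit.
history = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 30 + [("B", False)] * 70
)

# "Training": estimate the hire rate per group -- the pattern the data encodes.
rates = {}
for group in ("A", "B"):
    outcomes = [hired for g, hired in history if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

# "Prediction": recommend whichever outcome the learned rate favours.
def recommend(group: str) -> bool:
    return rates[group] >= 0.5

print(rates)                            # {'A': 0.8, 'B': 0.3}
print(recommend("A"), recommend("B"))   # True False -- bias in, bias out
```

Real systems are far more sophisticated, but the failure mode is the same: a model’s outputs can only reflect the proportions in its training data, which is exactly the point Kasparov was making.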

Should laws be put into place to benchmark AI?

What should these laws look like? Where do human rights come into play? While having laws in place sets a framework, it relies on there being one harmonised world with one shared set of values. What are the future risks if we don’t get AI ethics right? Much of the discussion centres on a) how much responsibility a nation state should take, and b) how far it should try to impose its own view of ethics on the rest of the world.

“You risk harking back to a British Empire scenario where you start to force your way upon people,” commented Nic Oliver, Founder of People.io. A strong opinion, but a complication worth taking into consideration.

Should we start viewing large technology companies as nation states in their own right? Treating the likes of Google, Amazon, and Facebook as states would open a healthy discussion about what constitutes human rights within this digital space.

“If you were to challenge Google with what it is doing in China, it fundamentally breaches human rights, and as a business within a part of global society, it shouldn’t be breaking this fundamental promise,” commented Oliver. “While everyone globally should respect human rights, as a sovereign state, there is a limit as to how much influence we should impart on a different country.”

“These decisions should be made by people,” added Garry Kasparov. “The moment you start feeding these decisions to AI, it could come up with very odd conclusions.”

With so many variables to consider, can we look past the immediate present to AI’s long-term effects and make AI ethics work effectively? This issue is so complex and ever-changing that we may not know until we reach a tipping point. Regardless, organisations and governments need to act now to ensure the right measures are put in place. Will it be easy? No, but this isn’t just a policy change: it extends far beyond that. So what should businesses that are concerned about AI ethics do next?

Galina Alperovich, Senior Machine Learning Researcher at Avast, added: “Twenty or thirty years ago, when people started to use computers, they took educational courses to learn how to use them. To help people become more responsible with their data, businesses should build AI courses into every program so that people better understand what AI is and the impact it can have.”

Oliver stated: “Whether businesses in different countries have different regulations or rules, or whether some individuals understand the use of their data more than others, I don’t think individuals will ever be able to know and give an informed version of consent for the use of their data and how it is then processed within AI. I don’t think the current regulatory environment even contemplates, in a true sense, the effect of intelligent systems. Therefore, ethics and morality clauses should be included in businesses’ terms of service and privacy policies.”

“The consumer image of AI has been shaped very much by Hollywood movies. It’s very important that we cut through the mythology and look at how we can use AI as a technology tool,” concluded Garry Kasparov. “Although it is called AI, at the end of the day, ultimate decisions will remain with humans. As long as we recognise that humans still play an important role, it may help us to recognise that imposing too much emphasis on machines will not clear humans from our responsibilities.”

Ultimately, many of the risks posed by ineffective AI ethics revolve around a loss of control. Human intervention will be key to maintaining ethical control over AI, as AI is just a tool, or a set of tools, that allows us to do our jobs better. If we put in the work to get it right now, we’ll reap the rewards for centuries to come as AI cements itself as a transformational technology for good.

Read Part 2!

Watch the full AI Ethics debate: 
