In ethical AI, it’s best to focus on the ethics, not the AI

Despite the potential to speed up business processes and drive efficiencies, artificial intelligence (AI) can cause serious, irreparable damage to our society if not used with caution.

Unethical AI embedded in the mathematical models organizations use may pose imminent, less conspicuous threats today, which raises the question: what does it mean to use AI responsibly, fairly, and ethically?

We sat down with Julien Theys, Managing Partner of Agilytic, to discuss the dangers of misusing AI and how to develop AI for good. Since 2015, Agilytic has helped companies make smarter use of their data, conceptualizing and developing AI projects across industries.

What is ethical AI?

AI completely or partially automates decision-making and underpins the everyday technologies we use, from Azure to Amazon, Facebook, and Google. By computerizing tasks that would otherwise require human intelligence, it helps us uncover new insights, find patterns, and solve problems.

AI spans various subdomains you may have heard of, including machine learning, deep learning, neural networks, big data, and natural language processing.

Largely due to Facebook’s infamous Cambridge Analytica scandal, the concept of ethical AI has been growing in popularity year after year, emerging in our social feeds and conference circles.

Ethics: “the study of what is morally right and wrong, or a set of beliefs about what is morally right and wrong.” (Cambridge Dictionary)

In a budding field, there’s little consensus over the definition of ethical and trustworthy AI. “We must involve ethics in AI by looking at the human side, the people developing the technology. Malicious intent, where people use data to do harm, captures the headlines and the imagination, but it is arguably a smaller part of the issue. More often, we see incompetence and misunderstanding behind unethical AI, where good people with good intentions use bad data, or use data poorly,” shares Julien.

“There’s this quote: ‘Never attribute to malice what can be attributed to stupidity.’ While a lot of the debate focuses on the malice, we should not forget our own limits and build checks into the process.”

The root causes of unethical AI

When AI “goes wrong,” it’s usually the product of a poorly thought-out development process: an unethical problem definition, too little quality data to fairly represent a group of people, a discriminatory feedback loop, misuse of the results over time, or a combination of those problems. If we look at the bigger picture behind AI systems, the root cause becomes clear:

“We need to minimize the inherent biases introduced by those who develop AI, because unethical AI often stems from human error. There is always someone behind the algorithm. Dehumanizing AI and blaming the tools is not the way forward,” says Julien.

“In this debate, the press also plays an important educational role. It’s very easy to grab attention by claiming, ‘An algorithm deprived 500,000 people of running water.’ It would be more correct to explain that ‘People hired to improve water flows obviously made a mistake that had dramatic consequences.’ You’re telling the same story, but the actor and the intent are the exact opposite.”

If many of us are unaware of our true motives and unconscious biases, the discussion of ethical AI shouldn’t be purely technical; it should also ask: what is fair and trustworthy? And how do we translate fairness into numbers to eliminate bias?

Where we see unethical AI

Many well-known AI cases have been exposed as unethical, adding fuel to the pervasive fires of discrimination, inequality, and misinformation.

AI has found its way into credit scoring, loan and job application processing, facial recognition, and other sensitive use cases. It’s also been used to spread hateful chatbot messaging, fake news stories, and fabricated voices, videos, and images, or ‘deepfakes.’

“What we often see is dual-use technologies: technologies that can be used for good and for bad outcomes. Here, it’s not the fault of a poorly designed technology, but of bad intentions and misuse. These technologies have vulnerabilities, and the developers may not know the beast they’re creating. We once turned down a project because of its unethical implications; the prospect didn’t realize it was unethical at first, because sometimes it’s not obvious,” remarks Julien.

Assessing the danger of AI models

Cathy O'Neil’s book “Weapons of Math Destruction” is frequently cited for the three main factors that make AI models dangerous and flawed:

1. Secrecy - Some secrecy is justified, e.g., to keep an algorithm from being exploited, which could lead to disastrous results. But without transparency, how do we know there is no manipulation for self-serving interests? How can we investigate the decision process behind automated systems? The European Commission’s ethical AI rules incentivize transparency.

2. Scale - How many people can the algorithm affect? Can the AI establish norms and amplify biases? Can it scale and grow exponentially?

3. Potential to harm - The social and societal impact should be carefully considered. Many models carry built-in assumptions, many of them biased and unjust, that can significantly harm large groups of people.

AI has huge potential for social and societal impact. Therefore, we must ensure that AI systems do not use biased data to reinforce a feedback loop that exacerbates today’s inequalities. Rather, we can use AI algorithms and systems to address inequalities and turn them into opportunities to do good.

How do we build AI we can trust?

“It can be pointless to regulate an AI algorithm or system, and we need to be careful that we don’t regulate to a fault where it hinders innovation. But that’s not to say that we can’t take steps towards developing more ethical AI,” notes Julien.

Faced with the smoke and mirrors of self-regulation, Facebook has asked for clearer government regulation of social media giants. After the Cambridge Analytica scandal, Facebook created a Responsible AI team to tackle AI bias and examine its algorithms’ impact on misinformation, political polarization, and extremism. Joaquin Quiñonero Candela, director of AI at Facebook, detailed the team’s struggles with Karen Hao of the MIT Technology Review.

Karen shared on LinkedIn, “It's not about corrupt people doing corrupt things. That would be simple. It's about good people genuinely trying to do the right thing. But they're trapped in a rotten system, trying their best to push the status quo that won't budge. Reporting this thoroughly convinced me that self-regulation does not, cannot work.”

So, what is the role of regulation?

In Europe, organizations have called for a comprehensive AI regulatory framework, and legislation is in the making. On April 21, 2021, the Commission put forth a horizontal regulatory proposal on AI to offer concrete implementation steps across all sectors. According to the Commission’s website, “This initiative will ensure that AI is safe, lawful and in line with EU fundamental rights. The overall goal is to stimulate the uptake of trustworthy AI in the EU economy.” However, a one-size-fits-all approach may not reflect the unique benefits and risks of applying AI to different sectors.

“Much of the regulation and many of the frameworks needed already exist, and have long been in practice, in the industries we work with. The regulation outlining ethical and fair practices mostly stays the same when you apply AI. For example, if you work for a bank and wouldn’t refuse a loan to someone simply because their property value is low according to your database, why would you have an AI system do it? That would be unethical, as it could exclude many people who may already be at a disadvantage. The ethics of the existing regulation always comes down to the uses, not the technology,” explains Julien.

Developing an ethical AI risk framework

As a starting place, organizations should promote ethical AI practices from the moment an algorithm is first conceived.

The next step is to assign intervention levels, which could depend on the three factors mentioned previously: secrecy, scale, and potential to harm. One example of intervention is to formulate and follow an ‘ethics checklist’ against implicit biases around gender, race, religion, class, zip code, and other ethical risk areas, and to monitor any deviation over time, as sketched below.

Julien shares, “To develop algorithms that make ethical decisions, we need to avoid perpetuating biases at all costs, plan for the potential of negative intent, and stress-test against unwanted consequences. A checklist can explicitly systematize the ethics behind developing algorithms. By embedding better values into our algorithms, there is a clearer interpretation of what the technology is being used for and how it could be misused.”
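To make the checklist idea concrete, here is a minimal sketch of what one automated checklist item could look like: a demographic parity check that compares positive-outcome rates across groups of a sensitive attribute and flags the model when the gap exceeds a chosen threshold. The function, column names, and 10% threshold are illustrative assumptions for this article, not Agilytic’s actual tooling or a complete fairness audit.

```python
# Illustrative sketch only: one automated item on a hypothetical ethics checklist.
# The threshold, column names, and structure are assumptions for demonstration.
from dataclasses import dataclass

@dataclass
class ParityReport:
    group_rates: dict   # positive-outcome rate per group
    max_gap: float      # largest gap between any two groups
    passed: bool        # True if the gap stays under the threshold

def demographic_parity_check(records, sensitive_key, outcome_key, threshold=0.1):
    """Compare positive-outcome rates (e.g., loan approvals) across groups.

    records: list of dicts, e.g. {"gender": "F", "approved": 1}
    threshold: maximum acceptable rate gap between groups (an assumed policy choice).
    """
    totals, positives = {}, {}
    for row in records:
        group = row[sensitive_key]
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + row[outcome_key]

    rates = {group: positives[group] / totals[group] for group in totals}
    max_gap = max(rates.values()) - min(rates.values())
    return ParityReport(group_rates=rates, max_gap=max_gap, passed=max_gap <= threshold)

# Usage: re-run the check on each new batch of decisions to monitor drift over time.
decisions = [
    {"gender": "F", "approved": 0}, {"gender": "F", "approved": 1},
    {"gender": "M", "approved": 1}, {"gender": "M", "approved": 1},
]
report = demographic_parity_check(decisions, "gender", "approved")
print(report.group_rates, report.max_gap, report.passed)  # {'F': 0.5, 'M': 1.0} 0.5 False
```

Re-running such a check on every new batch of decisions is one simple way to monitor deviation over time; in practice, a team would pair it with additional fairness metrics and human review rather than relying on a single number.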

More often than not, it’s not the technology itself that is disruptive, but how organizations use it to disrupt existing business models and enable new ones, and whether they commit to continual monitoring. In the case of AI, this will mean putting ethics ahead of profit.

Final thoughts

“We must remember that history may not repeat itself, but it rhymes. We’ve seen errors in machines for over forty years, ever since databases made their big start. Back then, the intentions behind database schemas were rarely challenged during development. So it’s good to see that ethical AI is a growing focus for many organizations, tech companies, and institutions,” adds Julien.

It is crucial to bring not just developers, but also lawyers, citizens, and other decision-makers to the table when implementing ethical AI frameworks. Today, too much grey area exists between doing good and causing harm with AI. We must remind ourselves that the ethics of AI boils down to the ethics of those behind it.
