Technology and gender parity: how do we keep discrimination out of AI?

As artificial intelligence gains momentum and takes root in our societies, the need to address the ethical issues involved is becoming more and more urgent. To mark International Women's Day 2022 and help to #BreakTheBias, two Thales experts talk about the specific problems of AI in terms of gender parity and the methods used to overcome them.

Do the algorithms we use every day have a problem with women? That's what you might think when you look at the recent setbacks suffered by several tech giants. A few years ago, for example, Amazon had to shut down an artificial intelligence system it had designed to automate the hiring process for its digital teams after it taught itself the annoying habit of under-rating applications from women.

The same kind of misfortune happened at LinkedIn, despite the company's specific efforts to avoid bias by ensuring that a person's gender, age or ethnicity was not taken into account by the algorithms used to recommend the best candidate for a given job. But the AI ended up finding a way to slip these criteria back in by analysing candidates' behaviour. (Men were more likely to apply for jobs requiring more work experience than their qualifications would typically have given them, while women tended only to apply for jobs that were an exact match for their level of expertise.) As a result, the platform was offering more senior jobs to men than women, even with the same qualifications, and LinkedIn had to develop a new programme to compensate for the mistakes being made by its first AI.
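
To see how a proxy can smuggle a protected attribute back in, here is a minimal sketch in Python with invented applicant data (the feature, the correlation and the scoring rule are hypothetical illustrations, not LinkedIn's actual model): gender never enters the rule, yet the outcome still splits along gender lines because the rule keys on a behaviour that correlates with it.

```python
import random

random.seed(0)

# Hypothetical applicant records. Gender is recorded for the audit only;
# the scoring rule below never sees it.
def make_applicant(gender):
    # Assumed correlation: men apply above their experience level more often.
    stretch = random.random() < (0.6 if gender == "M" else 0.2)
    return {"gender": gender, "stretch_application": stretch}

applicants = [make_applicant("M") for _ in range(500)] + \
             [make_applicant("F") for _ in range(500)]

# A naive learned rule that rewards "ambitious" behaviour, with no access to gender.
def recommend_for_senior_role(applicant):
    return applicant["stretch_application"]

# The protected attribute never enters the rule, yet the outcome is skewed.
for g in ("M", "F"):
    group = [a for a in applicants if a["gender"] == g]
    rate = sum(recommend_for_senior_role(a) for a in group) / len(group)
    print(f"Senior-role recommendation rate for {g}: {rate:.0%}")
```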

Another telling example comes from machine translation, which systematically renders the English word "nurse" – grammatically gender-neutral – as the feminine noun "infirmière" in French, while "doctor" becomes the masculine "docteur". Not to mention search engines, where a search for "schoolboy" returns pictures of innocent boys in the classroom, while "schoolgirl" produces young girls in sexy outfits.

So are our algorithms sexist? To mark International Women's Day, we posed the question to Catherine Roy, Design Leader at the Thales Design Centre in Quebec, and Christophe Muratet, Director, Strategy, Marketing & Emerging Products at Thales's North American AI research and innovation centre. They explain why artificial intelligence – depending on how it's used – can either transpose the gender divide into the digital space, or offer a brilliant way to promote gender parity in our societies.

How would you define algorithmic bias? 

Catherine Roy. In cognitive psychology, the term bias refers to a preconceived opinion, an unconscious error in thinking that takes precedence over other opinions, offering only a partial view of reality and preventing a person from seeing the complete picture.

Because algorithms are designed by humans, they will also display this type of blind spot in certain cases. For me, one of the best illustrations of this process is the Tay chatbot developed by Microsoft and released via Twitter in 2016. It chatted so much with trolls and abusive users that it started tweeting sexist and racist messages itself, forcing its creators to suspend the service the same day it was released.

Christophe Muratet. The notion of artificial intelligence tends to arouse people's passions because it piques our collective imagination, which has been shaped by science fiction and movies about machines with emotions or robots with moral dilemmas. But to take a dispassionate look at the question of algorithmic bias, we need to step outside of AI fantasy land and remember that artificial intelligence is no more than a mirror of our own society. AI is a predictive tool at best, and its only useful purpose is to improve our ability to make decisions. When an algorithm develops a bias, it doesn't do it of its own free will; it does it because various factors pushed it in that direction when it was programmed. But AI does not make decisions alone – the other key factor is human judgment.

So why do algorithms develop sexist behaviours?

CM. There are two main causes of the bias that can creep into artificial intelligence systems. The first is the quality and diversity of the training corpus. If the data in the corpus is incomplete, unrepresentative or itself biased, there is a strong probability that what the AI learns will be skewed too. The second source of error comes from the people designing the mathematical models or predictive algorithms, who can unconsciously have a biased view of the subject they are handling.

Under-representation of women and people of colour in technology, and particularly in data science, together with under-sampling of these groups in the training data, has produced technologies that are optimised for just a small segment of the world's population. So it's little wonder that facial recognition software, which is often trained on images of white males, can struggle to recognise dark-skinned women.
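
One concrete way to catch that first cause before any training happens is simply to measure who is represented in the corpus. The sketch below is a generic illustration with a hypothetical image dataset and an arbitrary threshold, not a description of any specific product's pipeline.

```python
from collections import Counter

# Hypothetical training records with demographic labels attached for auditing.
training_set = [
    {"file": "img_001.jpg", "skin_tone": "light", "gender": "M"},
    {"file": "img_002.jpg", "skin_tone": "light", "gender": "M"},
    {"file": "img_003.jpg", "skin_tone": "light", "gender": "F"},
    {"file": "img_004.jpg", "skin_tone": "dark",  "gender": "F"},
    # ... in practice, thousands of records
]

def representation_report(records, keys, floor=0.30):
    """Print each group's share of the corpus and flag those below a minimum share."""
    total = len(records)
    for key in keys:
        for group, count in Counter(r[key] for r in records).items():
            share = count / total
            flag = "  <-- under-represented" if share < floor else ""
            print(f"{key}={group}: {share:.0%}{flag}")

representation_report(training_set, keys=["skin_tone", "gender"])
```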

Another cause of bias is probably the relentless jostling for position by Big Tech to capture our attention and dominate the market for online advertising and e-commerce. It is conceivable that the performance of the algorithms used by these large-scale distributors of consumer AI hinges partly on mechanisms that work to women's disadvantage, and there is clearly no incentive for their designers to make them less opaque. That's why I think the best answer to the problem of bias in machine learning models is to build the most transparent models possible.

CR. It would be ridiculous to think that programmers are evil by nature and go out of their way to design misogynistic algorithms! In the vast majority of cases, they are simply not aware of the risk of bias in their calculations or in the way their algorithms are designed. We had a situation like this at Thales quite recently, when we were working on voice assistants for aircraft cockpits. We tested the voice assistants available on the market and found that most of them worked better for men than for women. As 95% of aircraft pilots are men, the data used to design the voice assistants was necessarily biased.

Do you think artificial intelligence is a threat to gender equality? 

CM. It all depends on how and where it's used. We tend to have a very black-and-white vision of AI. Either we see it as the bogeyman, the epitome of everything that's wrong with our societies, or we see it as the magic wand that can make all our problems go away. The truth lies somewhere in between. Take education in the digital age: there are as many causes for concern as there are reasons to be positive about the growing role of IT in schools. My own view is that algorithms have a hugely positive role to play in connecting students with teachers and providing teaching staff with additional ways to transmit knowledge. But I am much more wary about new tools that offer to optimise the learning paths of students, sometimes in their first few months at school. Imagine a girl who might have developed an interest in mechanical engineering over the course of her schooling, but went in another direction because an algorithm steered her that way, based on historical data indicating that women were under-represented in mechanical engineering and therefore had a lower chance of success. I think it's a very slippery slope.

CR. It's important to remember that AI is simply a digital reproduction of the failings we first found in the physical world. For example, if most chatbots have female voices, it's because they were based on a stereotype that women's voices are naturally more caring and reassuring. For algorithms to be less biased about women, society first needs to evolve towards gender parity and build more inclusive, more diverse workforces. On the other hand, it's very interesting how instances of algorithmic bias have resonated with the general public. Indignation at bias in hiring processes has forced the companies concerned to quickly change course. So in that sense, I think these "algorithm scandals" can serve a useful purpose in the fight for gender parity.

Thales drew up a digital ethics charter in 2019, and made it public in 2021, as part of efforts to ensure that artificial intelligence helps to build a more inclusive world. In practical terms, how can we make sure these efforts are not compromised by algorithmic bias?

CR. By making users the linchpin of product design! To achieve that goal, we have expanded our universal design programme to create solutions and services that are accessible to all, understandable by all and reflective of the diversity of the populations they are intended to serve. The key to success is for scientists, engineers and designers to work together. The idea behind this co-construction approach is not just to find a solution to a problem, but to make sure the solution works in a given context – and that means going out and checking what happens on the ground. It's a matter of feeding real-life knowledge and experience into the design loop, detecting potential instances of AI bias and correcting them before the product or service is released.

CM. I agree that the way we co-construct our machine learning models in the first place is extremely important. In terms of recruitment, we increasingly need to make sure we have a diverse population of engineers and scientists, and we need to balance out the mathematicians with people from other disciplines like social science or anthropology.

And in terms of our innovation practices, the growing use of artificial intelligence in sensitive areas (recruitment, criminal justice, healthcare, etc.) makes the social construction of technology an even more important issue. That means the people who are going to use and contribute to future machine learning solutions need to be involved very early in the development process, to provide critical analysis of any unfair biases that AI systems would otherwise amplify. Paradoxically, as machines get better at making predictions, the value of human judgment will become more and more critical.

Here's an example. We needed to develop a solution to optimise flows of passengers in a public transport system, and our people had already started to develop algorithms that would predict how people use the network. When we looked at the data and got our experienced AI designers to work alongside our data scientists, we realised that some of the behaviours seemed illogical. So we went and looked at what was happening on the ground and what we were failing to see in the data. It turned out that the seemingly erratic behaviour was caused by women who, rather than choosing the shortest travel times, were picking their routes based on the likelihood of finding a seat at rush hour. Based on that finding – which a machine could never have understood alone – we were able to tweak the algorithm and develop a recommendation system with fewer anomalies caused by human prejudice. We call it experience optimisation.
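
As a purely hypothetical sketch of the kind of tweak that field observation might translate into (the route data, weights and function below are invented, not Thales's actual system), the recommendation score can trade travel time off against the chance of getting a seat instead of assuming that shortest is always best:

```python
# Hypothetical route recommendation. seat_weight expresses how many extra minutes of
# travel a passenger will accept in exchange for a likely seat; a weight of 0
# reproduces the original "shortest time wins" logic that mis-read the field data.

def route_score(travel_minutes, seat_probability, seat_weight):
    # Lower is better: each missing unit of seat probability costs seat_weight minutes.
    return travel_minutes + seat_weight * (1.0 - seat_probability)

routes = {
    "direct_line":   {"travel_minutes": 28, "seat_probability": 0.15},
    "longer_branch": {"travel_minutes": 35, "seat_probability": 0.80},
}

for seat_weight in (0, 20):  # 0 = time only; 20 = values a rush-hour seat highly
    best = min(routes, key=lambda name: route_score(**routes[name], seat_weight=seat_weight))
    print(f"seat_weight={seat_weight}: recommend {best}")
```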

As well as bias, the other critical issue for building trust in AI is explainability. In Thales's TrUE approach, AI must be transparent: users must be able to see the data used to reach a conclusion and understand the results. And of course AI needs to be ethical and compliant with laws and human rights. Algorithms that follow these rules will not only avoid creating bias in artificial intelligence systems – they will make human decisions themselves fairer and more transparent.
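
As a generic illustration of what that transparency requirement can look like in code (a simplified sketch, not the TrUE implementation itself), an additive scoring model can return each input's contribution alongside the final result, so a user can see exactly what drove the conclusion:

```python
# A deliberately simple, explainable scorer: the result is a weighted sum, so every
# input's contribution can be shown to the user next to the decision itself.
# The features and weights are illustrative only.

WEIGHTS = {"years_experience": 2.0, "relevant_certifications": 5.0, "test_score": 0.5}

def score_with_explanation(candidate):
    contributions = {name: WEIGHTS[name] * candidate[name] for name in WEIGHTS}
    return sum(contributions.values()), contributions

total, parts = score_with_explanation(
    {"years_experience": 6, "relevant_certifications": 2, "test_score": 80}
)
print(f"Total score: {total}")
for feature, contribution in sorted(parts.items(), key=lambda kv: -kv[1]):
    print(f"  {feature}: +{contribution}")
```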

 

Find out more:

Thales's Chief Scientific Officer Marko Erman on algorithmic bias

Catherine Roy talks about the idea of Inclusive Design