We must listen to women’s voices to deal with gender bias in AI

Ada Lopez, Senior Manager, Product Diversity Office, and Marine Rabeyrin, EMEA Education Segment Director at Lenovo, write exclusively for NODE.

International Women’s Day offers a chance to reflect on a pressing issue in technology: gender bias in artificial intelligence (AI) systems. The tech industry has made progress on this front, with growing awareness of the issue and measures to address it, but much remains to be done. With AI systems becoming ubiquitous in everyday life, influencing decisions on everything from healthcare to credit, it has never been more urgent to root out bias. Doing so requires organisations to listen to the voices of women, and to create forums where women’s perspectives can be heard.

Generative AI such as ChatGPT has sparked a wave of interest in AI technology, but many such systems contain biases that risk rolling back decades of progress in how women are depicted. A recent University of Washington study showed, for example, that the AI art generator Stable Diffusion tends to generate images of light-skinned men when asked to depict a ‘person’. As similar tools become integrated into our lives, AI systems risk creating a whiter, more male future. To counter this, organisations must ensure women are at the forefront of dealing with gender bias, specifically in testing, overseeing and highlighting problems with AI systems.

Business leaders must not wait for government regulation, but act now to deliver strong governance, listening to women to mould a future in which AI furthers gender equality rather than hindering it. Every organisation needs a place, such as Lenovo’s Women in AI group, where women feel free to discuss such issues. This is the time for women to raise their voices and be heard.

Designing unbiased AI

To design truly unbiased systems, gender bias must be considered from the outset, and women must be involved at every level. Gender-diverse business units are essential, and organisations must scrutinise AI for bias as closely as they would for privacy and security issues. With high-risk technologies such as AI, it pays to broaden user testing so that the people testing the product are more diverse than those designing it. This alone can significantly reduce bias in AI systems and help organisations create better products. Such teams can also learn from developing technology for people with disabilities: examining one type of discrimination helps in understanding how to deal with others.

Organisations cannot afford to rest on their laurels. Historically, problems with gender bias in AI have tended to be highly visible, such as women being offered lower credit card limits. But newer kinds of discrimination may be less overt. As the use of AI expands into sectors such as education, biased systems could create new problems, such as failing to promote STEM education as an option for women. Meanwhile, with growing recognition of non-binary and transgender identities, gender itself is becoming a more complex topic, offering more opportunities for bias to creep in.

Giving employees a voice

Every organisation is still at the beginning of its AI journey, taking its first steps towards understanding what ‘good’ AI should look like. The starting point should be communication and awareness. Organisations must ensure that employees – and women specifically – feel empowered to speak out when they see an issue.

It can be helpful to establish a ‘central point’ where problems can be highlighted and dealt with, and a culture where women feel they can be heard. At Lenovo, employees raised concerns about a female avatar, sparking an important wider discussion about how women should be represented through AI. The key for organisations is to embed processes that allow employees at all levels to highlight problems. As with product testing, the broader and more diverse the discussion around potential gender bias, the better.

Working together

Collaboration within the tech industry, and with outside organisations, is key to dealing with this issue. Companies should share best practices and maintain an open dialogue, while collaborating with education organisations to offer training on how to identify and deal with bias in AI.

Organisations must take a long-term view and work with engineering schools to raise awareness of gender bias among those who will develop the AI systems of the future. Crucially, organisations (and female technologists) must also work to develop the interest of girls and young women in STEM subjects, bringing more women into the workplace and into the teams that design AI systems. Women still make up just 26% of the IT workforce, according to Gartner, and that needs to change if we are to deliver unbiased AI systems.

This is just the starting point. Over the longer term, organisations must move beyond gender bias in general to examine specific representations, considering how women see themselves in different markets. In time, AI tools may themselves evolve to support gender diversity, highlighting problems with other AI systems or identifying video deepfakes used to harass women.

An unbiased future

AI bias is not a simple problem, nor one that will be solved quickly. It is therefore important to aim for progress rather than perfection. Nor is this an issue that can be delegated to a committee: company leadership must be on board, and AI bias must be discussed from the beginning of any AI project. Crucially, organisations must ensure that women at all levels have a voice. Only by ensuring diversity of thought in everything, from AI oversight to testing, can we deal with this problem in the long term. By making tangible progress now, businesses can pave the way for a more equitable and diverse future.
