Can AI Create a Fairer World? By Kriti Sharma

In this thought-provoking keynote from #mtpcon London, Google Scholar and UN Advisor Kriti Sharma discusses the impact of artificial intelligence on decision making and what we, as product people, should be doing to ensure this decision making is ethical and fair.

Key Points

  • Machine learning and decision making are often based on biased historical patterns and data
  • Avoiding bias relies on a better understanding of the user
  • People can be more trusting of machines than of other humans
  • To create successful, useful products you must bring together people from different backgrounds

Kriti begins by talking about the robots she sees currently being built all around the world. These, she explains, are designed to increasingly make decisions for us – decisions that can be anything from the next YouTube videos we see and the jobs we should apply for to whether we should be given access to loans.

The algorithms that robots use to make decisions are based on a combination of what they know about us, historical patterns, and data. For example, there are well-documented cases of facial recognition software failing more frequently for dark-skinned people, and this happens because the software is typically trained on the faces of its own software engineers.

Kriti Sharma speaks at mtpcon London

However, it’s when AI-driven decisions have an adverse impact on our lives that things start to get serious and so, as product managers, we must make better decisions to avoid bias in these decision-making tools.

The Opportunity Before Us

Kriti believes that, with the product vision and frameworks we have as product managers, we are faced with an opportunity: to start embedding a more conscious mindset and to ensure more ethical decision making. This, she says, begins with a better understanding of the people our products serve, because the more perspectives we have, the better the decisions we will make.

Trust in Machines

She goes on to talk about Rainbow, a companion tool that enables people facing domestic abuse to ask for help. The tool was built to be human-centred and to address the problems faced by people who suffer abuse.


When the service launched it showed Kriti that a solution that is empathetic, non-judgemental, and available when people need it, can help to solve problems in a very different way. Data from Rainbow has also shown her that people are sometimes more trusting of machines than of people.

Gender Inequality in AI

We have an obsession with giving AI systems a personality. Kriti references some examples, including Alexa, Siri, and Cortana. These, she says, have some things in common: they are mostly female, or have distinctly female personalities, and help with tasks such as ordering your shopping or playing your favourite music. In contrast, male examples like IBM Watson and Salesforce Einstein make important business decisions for us. There is, then, considerable gender inequality in the way we design AI and machine learning personas, and Kriti believes we should do better.

To do this we need to:

  • Follow simple frameworks
  • Look at how we reflect real-world society in digital lives

Tackling Unintended Consequences

Kriti references a framework designed by Wellcome Trust Data Labs, a simple tool which she thinks could be very useful for product design. It looks at the different ways products can be used, from the expected use cases to the unintended abuse and misuse cases. Kriti thinks that by incorporating a framework like this at the point of design we can better manage our products and minimise the likelihood of the negative outcomes we see in some AI products.


Be More Forward-Thinking

Finally, Kriti talks through some of the principles she has found helpful when designing products.

She says that AI should reflect the diversity of the users it serves. AI should also be held to account: if a machine makes a recommendation or provides an insight, how do you make sure the user can trust it, and who is accountable?

AI should also be rewarded for showing its workings, as it can be very helpful to get a deeper understanding of its decisions. Areas like criminal justice, healthcare, social care, and financial services all require an understanding of how and why decisions are being made. These decisions can then be interrogated for more information to inform the next set of decisions, whether made by humans or by AI.

Finally, AI will replace but it should also create, says Kriti. It will undoubtedly replace certain jobs, but it also creates new opportunities. So when the machines do take over, at least they should be nice!

