Artificial intelligence (AI) is the simulation of human intelligence processes by machines, and it can offer businesses a range of benefits, from increasing productivity to solving complex problems beyond the ability of any human being.

From voice-activated systems such as the Amazon Echo to tools that enhance customer service teams, almost all businesses today employ some type of AI, and it has become part of our everyday lives.

AI can benefit everyone, but do you know what responsible AI is? It’s time to start thinking about how you use this technology in an ethical and fair way; otherwise, it can cause more harm than good.

AI must be responsible, and that means designing, developing, and deploying AI with good intentions: to empower employees and businesses and to impact customers and society fairly. Sadly, there have been some issues around this emerging technology. Biases that we fight against in society have been found baked into these new tools.

Bias has been found within AI, and these systems can’t always be trusted to be fair and neutral. AI systems are created by humans, who can be biased, even unconsciously. We’ve begun to see racism creeping into this technology and creating a whole host of social consequences.

Racist AI has been uncovered in a range of industries, from healthcare to the judicial system. One example occurred in the US court system, where an algorithm was used to predict the likelihood that a defendant would reoffend. Because of the data and the model that were chosen, the algorithm systematically rated black defendants as more likely to reoffend than white defendants.
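One way disparities like this come to light is by comparing error rates across groups, rather than looking only at overall accuracy. Below is a minimal sketch of such an audit in Python; the records, group labels and numbers are invented purely for illustration and are not the real court system’s data:

```python
from collections import defaultdict

# Hypothetical audit records: (group, predicted_high_risk, reoffended).
# These values are invented purely to illustrate the calculation.
records = [
    ("group_a", True, False), ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, False), ("group_b", True, True), ("group_b", False, False),
    ("group_b", False, False), ("group_b", True, False),
]

def false_positive_rate_by_group(records):
    """FPR = people wrongly flagged as high risk / all people who did not reoffend."""
    flagged = defaultdict(int)    # non-reoffenders flagged high risk, per group
    negatives = defaultdict(int)  # all non-reoffenders, per group
    for group, predicted_high_risk, reoffended in records:
        if not reoffended:
            negatives[group] += 1
            if predicted_high_risk:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

for group, fpr in false_positive_rate_by_group(records).items():
    print(f"{group}: false positive rate = {fpr:.0%}")
```

If the false positive rate differs sharply between groups, the model is wrongly flagging one group as high risk far more often than another, even when its overall accuracy looks respectable.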

The biases aren’t just around race but gender too. Amazon, unsurprisingly, is a fan of AI; speaking to Alexa has become part of our everyday lives! But back in 2015 the company used AI for hiring, and the tech was found to be biased against women. The algorithm was trained on CVs previously submitted for roles, and as these came largely from men, it learned to favour male candidates over female ones.
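To see how that failure mode works mechanically, consider the deliberately tiny, invented sketch below. This is not Amazon’s actual system or data; it simply shows how a model trained on skewed historical outcomes can attach negative weight to words that merely signal gender:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented historical CVs with hire/reject labels (1 = hired). Because past
# hires skewed male, the token "women" co-occurs with rejection.
cvs = [
    ("software engineer, chess club captain", 1),
    ("software engineer, rugby team", 1),
    ("software engineer, women's chess club captain", 0),
    ("software engineer, women's coding society", 0),
    ("data analyst, hiking club", 1),
    ("data analyst, women's hiking club", 0),
]
texts, hired = zip(*cvs)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the model penalises the token "women"
# purely because of the skew in the historical outcomes.
for token, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{token:>10}: {weight:+.3f}")
```

Nothing in the code mentions gender explicitly; the bias comes entirely from the training data, which is exactly why examining that data matters.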

Only yesterday, Facebook came under the spotlight when it was uncovered that equality laws had been broken in the way it handles job adverts. A campaign group, Global Witness, ran an experiment by submitting job ads to Facebook for approval. The adverts linked to real vacancies on Indeed for nursery nurses, pilots, mechanics and psychologists.

It was up to Facebook’s algorithm to decide who the ads were shown to. It was found that 96% of the mechanic ads were shown to men, 95% of the nursery nurse adverts to women, 75% of the airline pilot ads to men, and 77% of the psychologist adverts to women. Global Witness said this highlights how the tech amplifies biases already built into recruitment. Facebook said that ads are shown to the people most likely to be interested in them. Yet it raises questions about how these algorithms are created and whether the tech has baked-in biases.

In Scotland, the Scottish Centre for Crime and Justice Research (SCCJR) has found that online advertising is also playing a significant part in government and police efforts to influence the public on crime prevention and on health and social policy. It has produced a report highlighting how using online advertising in this way goes beyond marketing and needs to be subject to the same public debate, scrutiny and accountability as other policy tools.

These biases stop us from using tech and AI effectively: when AI does not adequately represent society, the technology is essentially flawed. To get the best from AI, it needs to be built responsibly around four principles: empathy, fairness, transparency, and accountability.

How to begin thinking about responsible AI

Here are some of the ways that Google suggests AI should be built:

  1. Use a human centred approach
  2. Identify multiple metrics to assess training and monitoring
  3. When possible, directly examine your raw data
  4. Understand the limitations of your data set and model
  5. Test, Test, Test
  6. Continue to monitor and update the system after deployment

You can discover more about each of these areas here.
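Point 2 on that list is worth making concrete: a single headline metric such as overall accuracy can hide very different behaviour across subgroups. The sketch below uses invented labels, predictions and group assignments to show why reporting several metrics per slice matters:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Invented labels and predictions for two subgroups, purely for illustration.
y_true = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1]
groups = ["a"] * 6 + ["b"] * 6

def metrics_by_group(y_true, y_pred, groups):
    """Report several metrics per subgroup, not just one aggregate number."""
    for g in sorted(set(groups)):
        yt = [t for t, grp in zip(y_true, groups) if grp == g]
        yp = [p for p, grp in zip(y_pred, groups) if grp == g]
        print(
            f"group {g}: accuracy={accuracy_score(yt, yp):.2f}, "
            f"precision={precision_score(yt, yp):.2f}, "
            f"recall={recall_score(yt, yp):.2f}"
        )

print(f"overall accuracy: {accuracy_score(y_true, y_pred):.2f}")
metrics_by_group(y_true, y_pred, groups)
```

In this toy example the overall accuracy looks middling, but group a is served perfectly while group b is served very poorly, a gap that a single aggregate number would never reveal.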

If you’d like to discover more about responsible AI and how to build it, direct from an employee at Google, you can hear Toju Duke, Responsible AI Programme Manager at Google, Manager for Women in AI Ireland and Head of Black and Brilliant AI solution, speak about responsible AI on day 1 of our Disruption and Innovation in Housing in the Devolved Nations event, The Future of the Workplace, taking place on 2 November. She’ll be joined by Patrick Connolly, Digital Research Manager at Accenture, along with other amazing speakers, to explore further how to use AI to transform your business and improve your services.

Day 2 of our event, on 16 November, is themed around sustainable housing and digital communities; you can preview the day here. Get your business ready for the future! Secure your tickets for our exciting two-day event here or contact monika.edwards@housemark.co.uk. Ticket bundles for two or more delegates are available at preferential rates.