Majority of public believe 'AI should not make any mistakes'

Research into public trust in artificial intelligence found that many people still want decisions made by the technology to be checked by humans.

Published: July 6, 2020 at 9:16 am

The public remains sceptical over the use of artificial intelligence (AI) to make decisions, research suggests, with nearly two-thirds wanting tighter regulation around its use.

A survey by AI innovation firm Fountech.ai revealed that 64 per cent want more regulation introduced to make AI safer.

Artificial intelligence is becoming more prominent in large-scale decision-making, with algorithms now used in areas such as healthcare to improve the speed and accuracy of decisions.

However, the research shows that the public does not yet have complete trust in the technology – 69 per cent say humans should monitor and check every decision made by AI software, while 61 per cent said they thought AI should not be making any mistakes in the first place.

The idea of a machine making a decision also appears to have an impact on trust in AI, with 45 per cent saying it would be harder to forgive errors made by technology compared with those made by a human.

As a result, many want AI to be held to a high standard of accountability, with nearly three-quarters of those asked (72 per cent) saying they believe companies behind the development of AI should be held responsible if mistakes are made.

Nikolas Kairinos, founder of Fountech.ai, said it was not surprising that some people were uneasy about the rise of technology which can operate outside of human control.

“We are increasingly relying on AI solutions to power decision-making, whether that is improving the speed and accuracy of medical diagnoses, or improving road safety through autonomous vehicles,” he said.

“As a non-living entity, people naturally expect AI to function faultlessly, and the results of this research speak for themselves: huge numbers of people want to see enhanced regulation and greater accountability from AI companies.

“It is reasonable for people to harbour concerns about systems that can operate entirely outside human control. AI, like any other modern technology, must be regulated to manage risks and ensure stringent safety standards.

“That said, the approach to regulation should be a delicate balancing act. AI must be allowed room to make mistakes and learn from them; it is the only way that this technology will reach new levels of perfection.

“While lawmakers may need to refine responsibility for AI’s actions as the technology advances, over-regulating AI risks impeding the potential for innovation with AI systems that promise to transform our lives for the better.”

In a report published earlier this year, the Committee on Standards in Public Life said greater transparency around AI and its potential use in the public sector was needed to gain the public's trust and reassure them over its use.

It called for the government and regulators to establish a set of ethical principles about the use of AI and make its guidance easier to use.

Reader Q&A: If we’re ever able to make robots as intelligent as us, won’t forcing them to work for us be as bad as slavery?

Asked by: Mahika Gautam, London

In short: yes. International law on slavery currently applies to humans only, but if robots become as clever as us, then politicians would need to start thinking about tweaking these laws to include robots, too.

We’d be foolish not to treat these robots with respect: history shows us that forcing labour from our equals never ends well. Fortunately, the EU is already piloting ethical guidelines for the use of AI software, so it’s likely that we would implement legislation to ensure the wellbeing of AIs with human intelligence. If we don’t, then they probably will!
