AI won’t live up to the hype without trust, CPI warns world leaders in Estonia

16.10.2018.

Centre for Public Impact (CPI) launches action plan at Tallinn Digital Summit to help governments make AI work for people.

The introduction of artificial intelligence by government has the potential to achieve great things. However, public trust in AI is low, and the Centre for Public Impact (CPI), a global not-for-profit foundation, today tells government leaders that the use of AI must be legitimate or it will fail to help public services do a better job for people.

AI is soon expected to help authorities around the world identify tax fraudsters more effectively and make public transport responsive to traveller needs in real time. Some people, however, predict more negative outcomes and fear job losses.

Authorities around the world are already using AI – from assessing which railway carriages are likely to need maintenance next to judging which convicted offenders are most likely to reoffend.

But many governments are not adequately prepared. They are not taking the right steps to engage and inform citizens about where and how AI is, or could be, used – steps needed to secure the levels of trust and understanding that give AI legitimacy and allow it to really help public servants do their jobs.

The Centre for Public Impact (CPI) has today published a new paper, How to make AI work in government and for people, which sets out realistic and practical recommendations to help governments implement AI successfully.

Presenting the paper today at the Tallinn Digital Summit to an audience of heads of state, digital leaders and AI experts, the CPI says that, to cut through the prevailing narrative of scaremongering about ‘machines taking over the world’, governments need to take measured steps to build legitimacy as they proceed.

Danny Buerkli, Programme Director at the Centre for Public Impact (CPI), said:

“When it comes to AI in government we either hear hype or horror, but never the reality. Artificial intelligence (AI) in public services will not become a reality if it doesn’t have legitimacy.

“As data collection becomes easier and computing power increases, now is the right time to improve policymaking and service delivery with the help of AI – but the process needs to be introduced responsibly. Our strong advice is to start using AI in government gradually, look at where it can really help and build trust as we learn.

“AI in government services – from diagnosing serious health conditions in people who might not currently have easy access to the necessary clinical specialist, and predicting epidemics, to chatbots on tax helplines – could create dramatic improvements in people’s lives. But AI also has the potential to drive a wedge between citizens and government, and ultimately fail, if not introduced with care.”

The CPI paper says that governments need to start with these basics:

Understand the real needs of your users – understand their actual problems and build systems around them, not around a contrived problem invented just to use AI.

Focus on specific and doable tasks – think of AI as replacing tasks, not jobs. If AI does replace a job, it is only because that job consisted of a single task. Think about how to shape roles that are more varied and citizen-facing.

Build AI literacy in the organisation and the public – educate people in your organisation and the general public. This will not only foster uptake of AI but also build trust as people come to understand the technology.

Keep maintaining and improving AI systems – and adapt them to changing circumstances.

Design for and embrace extended scrutiny – be resolutely open towards the public, your employees and other governments and organisations about what you are doing.

Alongside the paper, CPI’s founder, The Boston Consulting Group (BCG), has released the latest statistics on citizens’ trust in AI – worrying figures that make building legitimacy even more urgent.

They show that many people don’t feel ready to see AI in services where humans currently make important judgements, and that citizens’ support for governments’ use of AI is strongly correlated with their level of trust in government institutions.

A survey of over 14,000 internet users across 30 countries revealed that nearly a third (32%) of citizens are strongly concerned that the moral and ethical issues of AI have not been resolved.

While there was broad support for governments using AI for process-heavy administrative tasks, trust declined in areas where significant discretion currently rests with human decision-makers, such as health diagnoses or criminal justice penalties.

In addition, over half (54%) of respondents are very concerned about the potential impact of AI on jobs, and 58% strongly believe that governments should regulate the use of AI to protect employment.
