The Promise And The Peril: Where Next For Artificial Intelligence?

Last week [29 March 2023], the government launched a white paper on the regulation of artificial intelligence (AI) to guide its use in the UK, with a view to driving responsible innovation and maintaining public trust in this most revolutionary of technologies.

The white paper sets out five principles for the regulation of AI – ranging from safety, security and robustness to transparency and fairness. It also highlights the need for accountability and effective governance, allowing producers to be challenged where people believe the regulations have been breached.

AI covers the multitude of computer systems that can perform tasks which would normally require human intelligence. As a concept, and now very much a reality, it has long divided opinion.

Supporters point to its role in driving future economic growth, enhancing productivity, improving safety and fostering innovation. Many also cite examples of AI being used as a force for social good, especially in areas such as conservation, healthcare and fake news detection.

There are striking examples of AI being put to positive use: saving penguin populations in Antarctica, predicting mental health disorders in the wake of the COVID-19 pandemic and cracking down on human trafficking by identifying potential perpetrators and victims.

Meanwhile, detractors have raised concerns over robots replacing humans in the workplace and warned of computer-generated communications perpetuating bias, spreading misinformation and promoting hate across large sections of society.

The use of facial recognition software in the recruitment industry is one area that has attracted particular criticism. And in another major development last week, Elon Musk and Apple co-founder Steve Wozniak joined over a thousand signatories in an open letter calling for a six-month pause on “AI experiments”, which, they warn, could result in “profound risks to society and humanity”.

New AI platforms, like ChatGPT, which can write a student’s essay, create social media content and even dispense advice over the internet with just a little information from a user, have been accelerating the debate. Meanwhile, Google’s rival AI chatbot, Bard, which launched earlier this month, came complete with warnings from Google itself that the product could “share misinformation”.

The TUC captures this balancing act in its manifesto ‘Dignity at Work and the AI Revolution’. Writing in its introduction, the former TUC General Secretary, Frances O’Grady, says: “Artificial Intelligence is transforming the way we work and, alongside boosting productivity, offers an opportunity to improve working lives. But new technologies also pose risks: more inequality and discrimination, unsafe working conditions, and unhealthy blurring of the boundaries between home and work.”

The thorny issue of how governments should approach AI’s regulation has been brewing for a number of years. In the US, a hotbed of AI development, the debate has arguably been at its fiercest, with politicians struggling to balance ‘the promise with the peril’. The resulting inertia has seen providers’ speed to market far outstrip lawmakers’ readiness to regulate, which has only fuelled the concerns of AI’s opponents.

The AI industry contributed £3.7bn to the UK economy last year, employing over 50,000 people, and Britain is home to twice as many companies providing AI services as any other European country. It’s of little surprise, therefore, that the government is wary of doing anything to impede growth and wants to steer clear of “heavy-handed legislation”.

While the guiding principles of the UK approach have been broadly welcomed by the industry and stakeholders, it is in the practical delivery that the majority of questions have been raised.

The government wants responsibility for the regulation of AI to sit with existing regulators within their own sectors, rather than creating a new entity, arguing that these bodies already have the relevant expertise and governance structures in place.

Opponents have voiced concerns that this approach leaves significant capability gaps. Without additional investment and appropriate legal obligations in place, they argue, regulators will lack both the ability and the appetite to regulate AI. Without some serious beefing up, most regulators will be fighting a losing battle against a potential barrage of complaints demanding their attention.

While some regulators, such as the Information Commissioner’s Office and Ofcom, are already doing good work in this area, critics say the majority are struggling under the pressure of existing workloads. They also argue that AI is such a specialist and rapidly developing arena that regulators have no way of keeping pace with the sheer number of products being released onto the market, on top of doing the day job.

As growth and innovation in AI continue at pace, so too will the warnings about its true impact on jobs and prosperity, privacy and human rights, and society as a whole. Decisions we take now have the potential to shape the sector for years to come. The challenge for industry, governments and regulators is to work collaboratively to ensure AI serves to enhance people’s lives, not threaten their welfare.

This article was written by our chief executive, Angharad Neagle, and featured in the Western Mail on 3 April 2023.

