Schools are “bewildered” by advances in AI and do not trust the companies behind the tech to provide adequate regulation, headteachers have warned.
Leading figures from the UK's education sector said systems such as OpenAI's ChatGPT and Google's Bard were developing "far too quickly", and that guidance on how classrooms should adapt was not keeping up.
They said the government alone would not be able to provide the advice schools require, with ministers having previously admitted that any attempt to craft AI-related legislation would rapidly become out of date given the pace of change.
Rishi Sunak has said while “guardrails” are needed to minimise AI’s risks to society, the government wants to maximise the benefits in its bid to make the UK a “science and technology superpower”.
In a letter to The Times newspaper, with more than 60 signatures, education figures said ministers have not proved “capable or willing” to provide the “guidance and counsel” they need.
They wrote: “We have no confidence that the large digital companies will be capable of regulating themselves in the interests of students, staff and schools.
“Neither in the past has government shown itself capable or willing to do so.”
They added: “The truth is that AI is moving far too quickly for government or parliament alone to provide the real-time advice that schools need.”
The headteachers behind the letter, led by Epsom College’s Sir Anthony Seldon, said they plan to set up their own “cross-sector body” of teachers from their schools, guided by digital and AI experts, to provide advice on which AI developments could be beneficial or damaging.
The body would work to ensure that systems such as ChatGPT serve the interests of pupils rather than those of the technology companies behind them.
Some workplaces, schools, and universities in other countries have already banned generative AI tools such as ChatGPT.
While these systems have impressed with their ability to pass exams, fix computer bugs, and write speeches, they have also been shown to produce incorrect or offensive answers.
Elon Musk joined a group of AI experts in calling for a pause in the training of large language models, while Google’s chief executive, Sundar Pichai, admitted the potential dangers “keep me up at night”.
The letter in The Times comes after AI pioneer Professor Stuart Russell warned “the stakes couldn’t be higher” as governments grapple with how best to approach regulation.
He said: "How do you maintain power over entities more powerful than you – forever? If you don't have an answer, then stop doing the research. It's as simple as that."
Earlier this month, fellow British computer scientist Geoffrey Hinton, known as the "Godfather of AI", quit his job at Google with a warning about the technology's threat to humanity.