“He is considered one of the most important figures in the history of artificial intelligence – a visionary leader who has helped to shape the future of AI.”
That’s the glowing assessment of British computer scientist Geoffrey Hinton provided by Google’s Bard, the technology giant’s nascent chatbot powered by systems that he helped pioneer.
But less than three months after its launch, amid a dramatic upswing in the capability and accessibility of so-called large language models like Bard, mostly driven by the success of OpenAI’s ChatGPT, the man known as the “Godfather of AI” has quit Google with a warning about the tech’s threat to humanity.
“It is hard to see how you can prevent the bad actors from using it for bad things,” he told The New York Times, concerned both about the dangers of disinformation, fuelled by convincingly generated photos, videos and stories, and about the transformative impact of AI on the jobs market, which could make many roles redundant.
Dr Hinton’s worrying outlook comes some five decades after he earned a degree in experimental psychology at the University of Cambridge and a PhD in AI at Edinburgh, followed by postdoctoral work in computer science at other leading universities on both sides of the Atlantic.
Born in Wimbledon in 1947, he was perhaps always destined for such a path, having hailed from a family of scientists including great-great-grandfather George Boole, a mathematician whose invention of Boolean algebra laid the foundations for modern computers; cousin Joan Hinton, a nuclear physicist who worked on the Manhattan Project, which produced the world’s first nuclear weapons during the Second World War; and the physicist Geoffrey Taylor, another descendant of Boole who became a Fellow of the Royal Society, the world’s oldest scientific academy.
“Be an academic or be a failure,” Dr Hinton once recalled his mother having told him as a child – advice he certainly seemed to run with.
The ‘key breakthrough’
Dr Hinton himself was elected a Fellow of the Royal Society in 1998. By then, he had co-authored a landmark paper with David Rumelhart and Ronald Williams on the concept of backpropagation – a way of training artificial neural networks hailed as “the missing mathematical piece” needed to supercharge machine learning. It meant that rather than humans having to keep tinkering with a network’s inner workings by hand, the network could improve itself, adjusting its own connections according to the errors it made.
This technique is key to the chatbots now used by millions of people every day, each based on a neural network architecture trained on massive amounts of text data to interpret prompts and generate responses.
ChatGPT itself is well aware of how vital backpropagation is to its development, describing it as a “key breakthrough” that “helps ChatGPT adjust its parameters so that its predictions (responses) become more accurate over time”.
Asked how backpropagation helps ChatGPT function, it says: “In essence, backpropagation is a way for ChatGPT to learn from its mistakes and improve its performance. With each iteration of the training process, ChatGPT becomes better at predicting the correct output for a given input.”
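To make the idea concrete, here is a minimal sketch of backpropagation in plain Python and NumPy – nothing like the scale of ChatGPT itself, and with a network size, learning rate and training task chosen purely for illustration. A tiny two-layer network learns the XOR function by making predictions, measuring its error and propagating that error backwards to adjust its weights and biases:

```python
# A minimal sketch of backpropagation using plain NumPy, far removed from the
# scale of a system like ChatGPT. The network size, learning rate and number
# of training steps here are illustrative choices, not taken from any paper.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function, which a single-layer network cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights and biases for a 2 -> 8 -> 1 network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(20_000):
    # Forward pass: compute the network's current predictions.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # How far the predictions are from the correct answers.
    error = output - y

    # Backward pass: use the chain rule to work out how much each weight
    # and bias contributed to the error.
    grad_output = error * output * (1 - output)
    grad_W2 = hidden.T @ grad_output
    grad_b2 = grad_output.sum(axis=0)
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    grad_W1 = X.T @ grad_hidden
    grad_b1 = grad_hidden.sum(axis=0)

    # Nudge every parameter a little in the direction that reduces the error.
    W1 -= learning_rate * grad_W1
    b1 -= learning_rate * grad_b1
    W2 -= learning_rate * grad_W2
    b2 -= learning_rate * grad_b2

# After training, the predictions should be close to [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

Modern chatbots apply the same principle with billions of parameters and vast amounts of text in place of four training examples, but the loop is identical: predict, measure the error, propagate it backwards, adjust.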
From ‘nonsense’ to success
Dr Hinton’s pioneering research didn’t stop there. Instead, he would continue “popping up like Forrest Gump” at moments that would prove crucial to where we are now with AI in 2023, a period of drastic technological advancement he recently compared to “the Industrial Revolution, or electricity… or maybe the wheel”.
A year after the publication of the backpropagation paper in 1986, Dr Hinton started a programme dedicated to machine learning at the University of Toronto. He continued to collaborate with like-minded colleagues and students, fascinated by how computers could be trained to think, see, and understand.
Dr Hinton told CBS News it was work sceptics once dismissed as “nonsense”. But 2012 brought another milestone, when he and two other researchers – including future OpenAI co-founder Ilya Sutskever – won a major competition with a computer vision system that could recognise hundreds of categories of objects in pictures. Eleven years later, OpenAI’s latest version of its GPT software boasts the same ability to interpret images, on a scale once impossible to imagine.
Along with graduate students Alex Krizhevsky and Sutskever, Dr Hinton founded DNNresearch to concentrate their joint work on machine learning. The success of their image recognition system, dubbed AlexNet, attracted the interest of search giant Google, which acquired the company in 2013.
Following the acquisition, Dr Hinton began working part-time at Google, splitting his time with his university research in Toronto, and set up a Toronto branch of Google Brain, a research team dedicated to the development of AI. Last month, in a sign of the field’s growing importance to the company, Google Brain was merged with DeepMind, the formerly independent British research company Google bought in 2014.
DeepMind remains based in the UK and was even treated to a recent ministerial visit. Reports suggest the now merged DeepMind and Brain teams have been tasked with working on a Google Bard follow-up dubbed “Gemini”, another sign of the non-stop nature of AI development in a post-ChatGPT world.
In the few months since launching late last year, ChatGPT has amassed more than 100 million monthly active users, wowing experts and casual observers alike with its ability to pass some of the world’s toughest exams, fix computer bugs, and compose anything from political speeches to children’s homework and even job applications.
Its popularity has seen Microsoft invest heavily in the chatbot’s creator, San Francisco startup OpenAI, and incorporate the technology into its Bing search engine and Office apps. Google’s Bard was widely reported to have been fast-tracked as a result, with the firm having previously been cautious about rolling out a language model so advanced that a former engineer claimed it was “sentient”.
But for all the wide-eyed wonder these systems have provided, they have also been shown to be capable of generating answers that range from factually wrong to downright offensive. European law enforcement agency Europol has warned ChatGPT could be exploited by criminals and used to spread disinformation online, and Italy became the first country to block it outright while its data protection authority investigated user privacy concerns.
Image generation tools like DALL-E and Midjourney, responsible for a recent picture that had many convinced the Pope was an unlikely fashion icon, have attracted similar scrutiny.
Some workplaces, schools, and universities have banned generative AI like ChatGPT, while the White House has started a public consultation on how such AI should be regulated.
Elon Musk joined a group of AI experts in calling for a pause in the training of large language models.
Even Google’s chief executive, Sundar Pichai, admits the potential dangers “keep me up at night”. Dr Hinton has been keen to stress he believes Google is acting responsibly in its work with AI, his concerns directed at the field as a whole rather than a specific company.
‘I thought it was way off – I no longer think that’
On a web page dedicated to the now 75-year-old Dr Hinton, who won the Turing Award in 2019 alongside fellow scientists Yoshua Bengio and Yann LeCun for their work on AI, The Royal Society says his work on backpropagation “may well be the start of autonomous intelligent brain-like machines”.
“Brain-like” is one thing, but until now the idea that such technology could one day outsmart people was a concept most mainstream commentators had consigned to the realm of science fiction.
“Most people thought it was way off,” Dr Hinton told The New York Times. “And I thought it was way off. I thought it was 30 to 50 years, or even longer away. Obviously, I no longer think that.”
While Dr Hinton won’t be at Google to see the fruits of that reported “Gemini” project, his life’s work has already assured him a place in the history books.
Excitingly or worryingly, depending on your stance, the chapters that fully capture his impact are yet to be written.