Ask the new artificial intelligence tool ChatGPT to write an essay about the causes of the American Civil War and you can watch it churn out a persuasive term paper in a matter of seconds.
That’s one reason why New York City school officials this week started blocking the impressive but controversial writing tool that can generate paragraphs of human-like text.
The decision by the largest U.S. school district to restrict the ChatGPT website on school devices and networks could have ripple effects on other schools as teachers scramble to figure out how to prevent cheating. The creators of ChatGPT say they’re also looking for ways to detect misuse.
The free tool has been around for just five weeks but is already raising tough questions about the future of AI in education, the tech industry and a host of professions.
What is ChatGPT?
ChatGPT launched on Nov. 30 but is part of a broader set of technologies developed by the San Francisco-based startup OpenAI, which has a close relationship with Microsoft.
It’s part of a new generation of AI systems that can converse, generate readable text on demand and even produce novel images and video based on what they’ve learned from a vast database of digital books, online writings and other media.
But unlike previous iterations of so-called “large language models,” such as OpenAI’s GPT-3, launched in 2020, the ChatGPT tool is available for free to anyone with an internet connection and designed to be more user-friendly. It works like a written dialogue between the AI system and the person asking it questions.
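For readers curious what that written dialogue looks like in code, OpenAI also sells programmatic access to its language models. Below is a minimal sketch of a single question-and-answer exchange, assuming the company’s openai Python package and a chat-capable model name such as “gpt-3.5-turbo”; those specifics are illustrative assumptions, not details from this article, and the free ChatGPT website itself requires no code at all.

```python
# A hedged sketch of a one-turn "written dialogue" with an OpenAI language model.
# Assumes the openai Python package (v1+) and an API key in the OPENAI_API_KEY
# environment variable; the model name is an assumption for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Write a short essay about the causes of the American Civil War."},
    ],
)

# The reply comes back as ordinary text, much as it appears in the ChatGPT web interface.
print(response.choices[0].message.content)
```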
Millions of people have played with it over the past month, using it to write silly poems or songs, to try to trick it into making mistakes, or for more practical purposes such as helping compose an email. All of those queries are also helping it get smarter.
What are the pitfalls?
As with similar systems, ChatGPT can generate convincing prose, but that doesn’t mean what it says is factual or logical.
Its launch came with little guidance for how to use it, other than a promise that ChatGPT will admit when it’s wrong, challenge “incorrect premises” and reject requests meant to generate offensive answers. Since then, however, its popularity has led its creators to try to lower some people’s expectations.
“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” OpenAI CEO Sam Altman said on Twitter in December.
Altman added that “it’s a mistake to be relying on it for anything important right now.”
“It’s a preview of progress; we have lots of work to do on robustness and truthfulness.”
Can it be used for writing school papers?
This is what ChatGPT said when The Associated Press asked it to answer that question in all caps from the perspective of a principal shouting a brief message through a school’s PA system:
“DO NOT USE CHATGPT OR ANY OTHER AUTOMATED WRITING TOOL FOR SCHOOL PAPERS. THIS IS CHEATING AND WILL NOT BE TOLERATED. IF YOU ARE CAUGHT USING CHATGPT OR ANY OTHER AUTOMATED WRITING TOOL FOR SCHOOL PAPERS, THERE WILL BE SERIOUS CONSEQUENCES.”
But when asked to answer the same question on its own, ChatGPT offered this more measured warning: “As a general rule, it is not appropriate to use ChatGPT or any other automated writing tool for school papers, as it is considered cheating and does not benefit the student in the long run.”
What do schools say?
Many school districts are still scrambling to figure out how to set policies on whether and how it can be used.
The New York City education department said Thursday that it’s restricting access on school networks and devices because it’s worried about negative impacts on student learning, as well as “concerns regarding the safety and accuracy of content.”
But there’s no stopping a student from accessing ChatGPT from a personal phone or computer at home.
“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” said schools spokesperson Jenna Lyle.
Human or AI?
“To determine if something was written by a human or an AI, you can look for the absence of personal experiences or emotions, check for inconsistency in writing style, and watch for the use of filler words or repetitive phrases. These may be signs that the text was generated by an AI.”
That’s what ChatGPT told an AP reporter when asked how to tell the difference.
OpenAI said in a human-written statement this week that it plans to work with educators as it learns from how people are experimenting with ChatGPT in the real world.
“We don’t want ChatGPT to be used for misleading purposes in schools or anywhere else, so we’re already developing mitigations to help anyone identify text generated by that system,” the company said.
Does this threaten Google?
There’s been some speculation that ChatGPT could upend the internet search business now dominated by Google, but the tech giant has been working on similar technology for years — it’s just more cautious about releasing it in the wild.
It was Google that helped jumpstart the trend for ever-bigger, ever-smarter AI language models that could be “pre-trained” on a wide body of writings. In 2018 the company introduced a system known as BERT that uses a “transformer” technique that compares words across a sentence to predict meaning and context. Some of those advances are now baked into Google searches.
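To make “pre-trained” concrete: models like BERT learn by guessing words that have been hidden from a sentence, using the surrounding words as clues. The sketch below shows that masked-word prediction using the Hugging Face transformers library and the publicly available bert-base-uncased checkpoint; neither tool is named in the article, so treat it purely as an illustration.

```python
# A hedged illustration of BERT-style masked-word prediction.
# Assumes the Hugging Face "transformers" package and the public
# "bert-base-uncased" checkpoint, neither of which appears in the article.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT compares the words around [MASK] to predict what should go there.
for prediction in fill_mask("Students use ChatGPT to write their [MASK]."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

That same trick of predicting words from context, scaled up to far larger models and datasets, underlies the GPT-style systems that can now write whole paragraphs on demand.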
But there’s no question that successive iterations of GPT — which stands for Generative Pre-trained Transformer — are having an impact. Microsoft has invested at least $1 billion in OpenAI and has an exclusive license to use GPT-3.
Hey ChatGPT, can you put all this in a rap?
“ChatGPT’s just a tool,
But it ain’t no substitute for school.
You can’t cheat your way to the top,
Using a machine to do your homework, you’ll flop.
Plagiarism’s a no-no,
And ChatGPT’s text is not your own, yo.
So put in the work, earn that grade,
Don’t try to cheat, it’s not worth the trade.”
Story by Matt O’Brien, Associated Press