

Enter a prompt in ChatGPT, and it becomes your own virtual assistant.
OpenAI/Screenshot by NPR
Why do your homework when a chatbot can do it for you? A new artificial intelligence tool called ChatGPT has thrilled the internet with its superhuman abilities to solve math problems, write academic essays and draft research papers.
After developer OpenAI released the text-based system to the public last month, some educators sounded the alarm about the potential for these AI systems to transform academia, for better and for worse.
“AI basically ruined homework,” said Ethan Mollick, a professor at the Wharton School of Business at the University of Pennsylvania, on Twitter.
The tool was an instant hit with many of his students, he told NPR in an interview on Morning Edition; its most immediately obvious use, he said, is as a way to cheat by plagiarizing AI-written work.
Academic fraud aside, Mollick also sees its benefits as a study companion.
He has used it as his own teaching assistant, helping him develop a syllabus, a lecture, an assignment and a grading rubric for MBA students.
“You can paste in entire academic papers and ask it to summarize them. You can ask it to find an error in your code, fix it and tell you why you got it wrong,” he said. “It’s this ability multiplier, which I think we don’t quite grasp, that’s absolutely astounding.”
A convincing but unreliable bot
But the superhuman virtual assistant – like any emerging artificial intelligence technology – has its limits. ChatGPT was created by humans, after all. OpenAI trained the tool using a large dataset of real human conversations.
“The best way to think about it is that you’re chatting with an all-knowing, eager-to-please intern who sometimes lies to you,” Mollick said.
It lies with confidence, too. Despite its authoritative tone, there have been instances in which ChatGPT won’t tell you when it doesn’t have the answer.
That’s what Teresa Kubacka, a data scientist based in Zurich, Switzerland, discovered when she experimented with the language model. Kubacka, who studied physics for her doctorate, tested the tool by asking it about an invented physical phenomenon.
“I deliberately asked it about something that I knew didn’t exist, so that I could judge whether it also has a notion of what exists and what doesn’t,” she said.
ChatGPT produced an answer so specific and plausible, backed up with citations, she said, that she had to investigate whether the fake phenomenon, “a cycloidal inverted electromagnon,” was actually real.
When she looked closer, the alleged source material was also fake, she said. There were names of well-known physics experts on the list – the titles of the publications they were supposed to have authored, however, were non-existent, she said.
“This is where it gets a little dangerous,” Kubacka said. “The moment you can’t trust references, that kind of erodes trust in the science being cited as well,” she said.
Scientists call these false generations “hallucinations.”
“There are still many instances where you ask it a question and it will give you a very impressive-sounding answer that is just plain wrong,” said Oren Etzioni, the founding CEO of the Allen Institute for AI, who ran the research nonprofit until recently. “And, of course, that’s a problem if you don’t carefully check or corroborate its facts.”

Users who experiment with the chatbot are warned before testing the tool that ChatGPT “may occasionally generate incorrect or misleading information.”
OpenAI/Screenshot by NPR
An opportunity to scrutinize AI language tools
Users experimenting with the chatbot’s free preview are warned before testing the tool that ChatGPT “may occasionally generate incorrect or misleading information,” harmful instructions or biased content.
OpenAI CEO Sam Altman said earlier this month that it would be a mistake to rely on the tool for anything “important” in its current iteration. “It’s a glimpse of progress,” he tweeted.
The failures of another AI language model unveiled by Meta last month led to its discontinuation. The company pulled its demo for Galactica, a tool designed to help scientists, just three days after encouraging the public to test it, following criticism that it spewed out biased and nonsensical text.
Likewise, Etzioni says ChatGPT does not produce good science. For all its flaws, however, he sees ChatGPT’s public debut as a positive: a moment of peer review.
“ChatGPT is only a few days old, I like to say,” said Etzioni, who remains at the AI institute as a board member and advisor. It’s “giving us a chance to understand what it can and can’t do and to seriously start the conversation of ‘What are we going to do about it?’”
The alternative, which he describes as “security through obscurity,” will not help improve fallible AI, he said. “What if we hide the problems? Will that be a recipe for solving them? As a general rule – not in the software world – that hasn’t worked.”