Season 2 • Episode 4
In early 2023, ChatGPT blew up the internet. It’s an AI app that can create any piece of writing you ask for. Poems, homework, lyrics, essays, outlines, recipes, interview questions, even code. All indistinguishable from something written by a person, all instantaneous and free.
In schools, cheaters began cheating immediately. Educators were horrified, calling it the end of homework, college-entrance essays, and even writing skills. New York City schools banned it. Experts called it a potential factory for misinformation (ChatGPT routinely writes authoritative-sounding articles that are simply wrong).
Everyone agrees that ChatGPT is disruptive. But how do we keep the good—and prevent the terrifying?
Season 2, episode 4: ChatGPT and the End of Writing.
ChatGPT may have the clunkiest, least memorable name in software history. But once you’ve seen it…oh, it’s memorable, all right.
JANE: 11:25:47 Look, this machine can generate this piece of writing that’s as good as what someone that would write it themselves could generate. That seems exciting and rather astonishing in many ways.
ERIK: 00:36:55 / think of it as turbocharging your job as a — as a lawyer or a doctor or a writer, or writing songs or creating art, doing it better than you ever could do before.
TIMNIT: 11:40:00 I think that we should be really terrified of — of — of — of this whole thing.
I’m David Pogue, and this is “Unsung Science.”
Before we dive in, I’d just like to say—welcome back! This is the first regular episode of season 2—from here on, we’ll be unleashing a new “Unsung Science” episode every other Friday. And you know what else I’d like to say? Thank you. Your emails, tweets, reviews, and ratings have made this whole enterprise super exciting and super fun!
Today’s episode is about what may be the most amazing artificial-intelligence software you’ve ever seen—with what may be the dumbest name. It’s ChatGPT. Not only is that utterly un-catchy, but even if you know what it stands for, you still don’t get it. GPT is Generative Pre-trained Transformer. Well, that’s helpful.
Anyway, in AI circles, ChatGPT is what’s called a large-language model: software trained by feeding it massive amounts of existing written material, and after that, it can generate new text on command. It can write anything, to your specifications. It’s free. You can try it out at chat.openai.com.
Like, I can type in,
Add subtle typing sounds please
“Write a limerick about the effects of artificial intelligence on the future of humanity.” And we get this:
KEATON: There once was AI so grand,
It could help us with tasks at hand,
But it also might do harm,
If its actions lack charm,
So we must use it with care and command.
Add typing sounds please
Or let’s try this. “Tell the story of Goldilocks in the style of the King James Bible.”
KEATON: And it came to pass in those days, that a certain young damsel named Goldilocks did wander into the dwelling of three bears, who were away from their home. And she did behold a bowl of porridge on the table, and she did taste the porridge of the first bear, and it was too hot. And she did taste the porridge of the second bear, and it was too cold. But the porridge of the third bear was just right, and she did eat it all.
And it goes on. In case you’re wondering what happens to Goldilocks, the story ends like this:
Keaton: And thus it was written, that Goldilocks and the three bears were forever remembered for this story of trial and error, and the importance of respecting the possessions of others.
ChatGPT can also write letters. Poems. Homework. Essays. Outlines. Recipes. Interview questions. Even computer code. Even song lyrics! Here:
“Write song lyrics from the point of view of a former Broadway conductor who grew up to host a podcast about science and technology.”
Keaton: Verse 1: Once a conductor on Broadway,
Now he’s found a new way
To share his love of science and tech —
Oh hold on, Keaton—sorry. I got this one.
(DP sings at piano)
Once a conductor on Broadway,
Now he’s found a new way
To share his love of science and tech —
On a podcast, he’ll take you on a trek
I’m a different kind of conductor now,
Guiding you through science and technology,
Sharing tales of progress, and how
It shapes the world we live in, you’ll see.
Thank you, thank you. Tip your server!
And now, there’s one more demo thing I gotta do. Apparently, it’s a journalistic law: Any story about ChatGPT must let ChatGPT write the introduction for the story itself. I mean, at this point, it’s a tired, obvious gag, but look, I wouldn’t want to break one of the core tenets of journalism. So here:
“Write an introduction to an episode of Unsung Science about the promise and perils of ChatGPT.”
Keaton: Welcome to the latest episode of Unsung Science, where we explore the cutting-edge technology that’s shaping our world. Today, we’re talking about ChatGPT, a language model that’s—
But now I can type: “Make it funnier.” And now I get:
Keaton: Welcome to the latest episode of Unsung Science, where we explore the technology that’s shaping our world, or at least trying to. Today we’re talking about ChatGPT, the language model that’s taking over the world, one typo at a time. So put on your thinking caps, or just your headphones and join us as we explore the hilarious, and sometimes terrifying world of ChatGPT.
Dang. That’s actually not bad!
By the way… the voice of ChatGPT’s output in this episode belongs to Keaton Talmadge. She’s a professional voice actor—and Buster Keaton’s great-granddaughter! Kind of a special treat. Hey, Keaton!
A snippet of our Zoom greeting… something like:
DP: I really appreciate your doing this. ‘Cause it’s really hard for podcast listeners to read what’s on my screen.
Keaton: No problem!
Anyway. As you can imagine, ChatGPT is freaking people out. Within a week of its release, a million people were using it. Microsoft, which has a billion-dollar investment in OpenAI, the company behind ChatGPT, is considering building it into Word and PowerPoint. And OpenAI itself is in talks to sell shares at a price that would put the company’s value at 29 billion dollars.
Some people are super excited by the potential of this software.
ERIK: 00:26:15 I think this is huge. /00:26:43 / I wouldn’t be surprised 50 years from now, people looked back and say, wow, that was a really seminal set of inventions that happened in the early 2020s.
00:03:48 I think we’re going to have potentially the best decade of flourishing of creativity that we’ve ever had, because a whole bunch of people, lots more people than before, are going to be able to contribute to our collective art and science.
This is Erik Brynjolfsson. He’s a Stanford professor, and director of Stanford’s Digital Economy Lab. He’s got many great stories about how ChatGPT has been helpful.
ERIK: 00:05:47 / I happened to be the last speaker at this AI conference /. And for fun, I decided to have ChatGPT / help me with my comments. So, first I had to just do it straight up the way Erik Brynjolfsson would — would say. / But then to have more fun, I said, “now do it in the style of Taylor Swift.” Oh, my God. It was so beautiful. You know, the metaphors, the examples that came out of it — I shared it!
It can also be a godsend if you’re not great at writing, or not great at English.
ERIK: 00:20:55 / one of my colleagues at Stanford had a grad student, um, who gave a research proposal to him. And she was from another country, and it was kind of muddled and you couldn’t quite get like, what’s the point of your proposal? I don’t understand it. He / had GPT write it in a clearer way for him. And then he’s like, “aha, now I get what she was trying to say.” /
00:21:44 / these tools can help them express things, thoughts that they had, they weren’t quite able to put in the right words.
To him, ChatGPT represents the ideal use of AI: augmenting our own abilities.
ERIK: 00:22:53 / The greatest benefit comes from when humans and machines work together. /
00:36:55 And specifically with these tools, they work a lot better with humans in the loop than they do on their own/. So, think of it as turbocharging your job as a — as a lawyer or a doctor or a writer, or writing songs or creating art, doing it better than you ever could do before.
Then there’s the story of KokoCares.org, a free, crowdsourced mental-health hotline, primarily for adolescents who are having trouble. Its cofounder Rob Morris tried a little experiment. Here’s what he said in his video:
Rob: And so we have a post that just came in, someone saying, “I’m trying to become a better me and having a hard time.” So what we’ve done is, we’ve sent this post to GPT 3. GPT 3 says, “I hear you, you’re trying to become a better person and it’s not easy. It’s hard to make changes in our lives, especially when we’re trying to do it alone. But you’re not alone. There are people here who care about you and want to help you. I send you love and support as you go through this journey.”
The AI, on its own, in a matter of seconds, wrote a really nice, articulate response here. So we ran an experiment where we tested it on about 30,000 posts. Responses were rated more favorably when they were co-written by the AI, and they were done in well over half the time.
Now, if you were one of the people providing care, using the ChatGPT response was optional, and you could edit it if you liked. Still, this experiment really upset a lot of people, who called it unethical and maybe even illegal. I mean, how would these struggling teenagers react if they found out that those caring responses were AI autotext?
As it turns out, not well. Morris wrote a followup later on: “Once people learned the messages were co-created by a machine, it didn’t work. Simulated empathy feels weird, empty. A chatbot response that’s generated in 3 seconds, no matter how elegant, feels cheap somehow.”
Anyway. So yes—lots of potential. But also—SO many questions we have to work out. And really, there are six big ones. Six doozies that perhaps should have been addressed before AI writing bots got unleashed to the world.
After the ads, we’ll bring some experts on board to help us make sense of ‘em.
Second Ad Break
Welcome back. I promised that we’d work through the six massive problems that large-language models like ChatGPT present. Six real doozies.
Doozie #1 is this: Now that AI can write anything we want, in any style, is there any point in teaching anyone to write anymore?
I mean, school kids are already using ChatGPT to write their homework for them. Editorials are already flying with headlines like, “ChatGPT Will End High-School English” and “Will ChatGPT End the Student Essay?”
JANE: 11:25:47 / Look, this machine can generate this piece of writing that’s as good as what someone that would write it themselves could generate. And that seems exciting and rather astonishing in many ways.
Jane Rosenzweig is the Director of the Writing Center at Harvard.
JANE: 11:26:25 / I think writing is a way of figuring out what you think. And so if a machine is doing the writing, then we need to ask ourselves, where are we figuring out what we think? /
POGUE: 11:27:19 / you don’t need to know structure or syntax or vocabulary or grammar or spelling. /
JANE: 11:27:25 But even beyond the structure, the syntax or the grammar or the spelling, which are all important things to putting together a piece of writing that an audience could understand, I think I worry the most about just that thinking piece.
11:28:01 / I say to my students, what do you want to understand? What do you want to know about the world, about the topic that we’re thinking about? /
11:28:18 First, you read things that other people have said about this thing that you want to understand. You talk to people. We have class discussions. /And in the process of writing about that, you often start examining evidence and trying to make an argument about what you think.
11:28:38 /And so a very common piece of advice that I give my students / is, look in your conclusion, because you write yourself to that main idea. /
POGUE: 11:28:58 Huh!
JANE: 11:28:58 So what happens if instead of going through that process, / what if instead you just type the prompt into ChatGPT and got the answer there? What do we lose?
POGUE: 11:29:31 I mean, what do we lose? /
JANE: 11:29:38 / you would lose figuring out what you think.
At the moment, educators all over the world are struggling to figure out how to proceed. The New York City school system, the biggest in the country, has banned ChatGPT outright.
POGUE: 11:35:53 / what do you hear in the educational industry from people who are confronted with this?
JANE: 11:36:02 / there are these possible paths that — that people can take with ChatGPT, right? You can turn it off. You can embrace it fully and say, “we’re going to use this as — as a teaching tool in these different ways.” Or you can come out somewhere in between.
11:36:41 / for example, I think a lot of people have been talking / about this idea: “Well, perhaps what’s going to happen is that our students will stop being writers and they will become editors.” So we will say to them, let’s generate three or four different answers to this prompt using ChatGPT, and then let’s talk about that in class. /
11:37:22 My initial reaction to that was, are we doing this because ChatGPT exists? Or are we doing this because it’s better than other things that we’ve already done? /
11:42:17 / to what question is this the answer? 11:42:54 / when we’re talking about replacing human writing, human thinking, human teaching with a machine, what — why are we doing this? To what question is this the answer?
OpenAI declined an interview for this episode, and for my “CBS Sunday Morning” story about ChatGPT—which is interesting, because they did grant an interview only a few weeks earlier about its other big product, Dall-E 2, which generates art and photos. That’s when Aditya Ramesh, the creator of Dall-E, told me:
RAMESH: 10:19:39 The goal of the company is to develop artificial general intelligence. And by that, we mean an AI that can do all of the things a human can — / and to deploy it in a way that’s safe and maximizes the positive benefits to society.
Here’s what OpenAI’s own website gives as the reason it created ChatGPT: to help “make AI systems more natural to interact with.”
But that doesn’t really answer the question “why?” And educators are stuck with finding their way out of this mess.
Doozie #2 has to do with how ChatGPT was trained: by hoovering up hundreds of millions of articles on the internet. And—I know that this might come as a shock—but not everything on the internet turns out to be true. The problem is…
TIMNIT: 11:20:44 / it’ll just sort of parrot back / what was on the Internet. Right?/it wasn’t taught to understand what is fact, what is fiction or anything like that. What it has learned how to do is tell you the most probable sequences of things, because — as it’s seen on the Internet. /.
Timnit Gebru is an AI researcher—kind of a big deal AI researcher. She was the one who co-authored the 2018 study about face-recognition software from IBM, Microsoft, and Amazon. You know, how its error rate on white male faces was under 1%—but it got Black female faces wrong about 35% of the time.
Anyway, she points out that ChatGPT’s answers sound super confident and authoritative—and that makes them even more insidious.
TIMNIT: 11:21:47 / the fact that you can even get grammatically correct answers can deceive you even more, right? Because you think this is coming from a person. / you don’t even have the cues that generally would tell you that something might be wrong. / it can give you completely wrong answers for a whole bunch of things.
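Her “most probable sequences” point can be illustrated with a tiny sketch. This is my own toy example, nowhere near the scale or architecture of a real large-language model: a bigram model that only counts which word tends to follow which in its training text, then generates by always picking the likeliest continuation. Notice that nothing in it checks whether the output is true.

```python
# Toy illustration of "most probable sequences" (not how OpenAI's
# model actually works): a bigram model built from word-pair counts.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the cat sat on the rug . the dog ate ."
)

# Count which word follows which in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def most_probable_next(word):
    """Return the single most frequent successor of `word`."""
    return follows[word].most_common(1)[0][0]

# Generate by always taking the most probable continuation.
out = ["the"]
for _ in range(4):
    out.append(most_probable_next(out[-1]))
print(" ".join(out))  # → the cat sat on the
```

Scale that counting trick up to billions of parameters trained on much of the internet and you get fluent, confident text. What you don’t get, anywhere in the pipeline, is a check against reality.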
OK, like here: “Write the history of the first woman President of the United States.”
KEATON: The first woman President of the United States, Hillary Rodham Clinton, served as the 45th President of the United States from January 20, 2017 to January 20, 2025. Clinton was a former First Lady, U.S. Senator, and Secretary of State.
In the general election, Clinton faced off against Republican nominee Donald Trump. /
There’s a computer-science term for this, when AI churns out baloney statements: Hallucinations. Imagine how dangerous a hallucination would be if you were asking for medical advice.
Doozie #3 is related—and it’s really terrifying. Even Erik Brynjolfsson thinks so.
POGUE: 00:13:00 / The other thing that we hear is that these things will be misinformation factories.
POGUE: Now, a Russian bad actor can generate hundreds of thousands of extremely well-written, plausible articles filled with phony studies and research papers. /
ERIK: 00:13:34 It’s definitely something to worry about. / we could have an enormous flood. Each of us could have hundreds, thousands, millions of spam messages sent to us, all kind of plausibly looking like they were done by humans. This could be a catastrophe, and we need to take measures to address it.
POGUE: Are there any that you’ve heard of? Any proposals?
ERIK: Yeah. There are two kinds of approaches people have taken. One is to try to identify which content is generated by a machine with watermarks or other things, and say, “hey, that’s a problem. You know, is there sort of a signature?”
Oh right—the watermark thing. OpenAI is already working on this approach—where everything ChatGPT writes will someday be encoded with some kind of digital watermark, so we’ll be able to differentiate AI writing from human writing.
Obviously, we’re not talking about something visible, or even some hidden data attached to the text document.
No, this would be some kind of algorithmic watermark—a statistical pattern of words or punctuation, detectable by special software. It would work only on longish blobs of text, and of course you could still bypass it just by rewriting it in your own words.
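For the flavor of it, here’s a deliberately simplified sketch of one scheme researchers have proposed. This is a hypothetical “green-list” detector of my own, not OpenAI’s actual design: the generator secretly favors words from a “green” half of the vocabulary, chosen by hashing the previous word, and the detector checks whether green words show up far more often than the roughly 50 percent you’d expect from a human.

```python
# Simplified sketch of a statistical text watermark (assumed scheme,
# not OpenAI's real one). The hash stands in for a secret key.
import hashlib

def is_green(prev_word, word):
    """Deterministically assign `word` to the green or red half of
    the vocabulary, keyed on the previous word."""
    digest = hashlib.sha256((prev_word + ":" + word).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text):
    """Fraction of words that land in the green list given their
    predecessor. Unwatermarked prose should hover near 0.5; a
    watermarked generator would push this well above 0.5."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(p, w) for p, w in zip(words, words[1:]))
    return hits / (len(words) - 1)

score = green_fraction("the quick brown fox jumps over the lazy dog")
```

The appeal of the idea is that the detector needs only the key, not the original model. The weakness is exactly the cat-and-mouse problem: paraphrase the text and you scramble the word pairs, and the signal with them.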
But there are two bigger problems with the watermarking approach. First, ChatGPT isn’t the only large-language model. Google, for example, has one that they say is even better than OpenAI’s—but they haven’t released it to the public, precisely because of the potential problems we’re talking about.
Anyway, the point is, there will be lots of these things, and not all of their creators will install guardrails the way OpenAI has.
The second problem with watermarks is, as Timnit Gebru points out,
TIMNIT: 11:45:18 / But the thing is that this is now a game of cat and mouse. So, any time people know there is such a system, they’re going to try to game it. /
11:45:43 /And then, you know, people will update the watermarking system and they’re going to try to game it. And I think that’s what’s going to happen.
OK. Doozie #4 is job losses—or at least job changes.
ERIK: 00:35:10 / Most of the U.S. economy is knowledge and information work, and that’s who’s going to be most squarely affected by this. I would put people like lawyers right at the top of the list. Obviously, a lot of copywriters, screenwriters, um, fiction and then a lot of other people use writing and certainly nurses do a lot of writing, doctors do a lot of writing.
/Professors (LAUGH) will be affected. Um, but — but I like to use the word “affected,” not “replaced,” because I think if done right, it’s not going to be AI replacing lawyers; it’s going to be lawyers working with AI replacing lawyers who don’t work with AI.
There will be at least one new job category made possible by ChatGPT, by the way: Prompt engineer.
ERIK: 00:39:30 / There’s a new occupation that’s emerging; it’s called prompt engineering. And prompt engineering is, “how do you write a prompt that really evokes the most from GPT?”
It’s funny how slightly different changes in the prompt can lead to very different outcomes. And even the designers don’t fully understand how to write the prompts to get it — to get — to get the most from the tool.
/prompt engineering is, uh, emerging as a way of getting the most from these tools.
ERIK: 00:40:23 / that will be a new one, uh, in the — in the 2025 census of occupational codes. I think, uh, we’ll have — may be one of the entries.
Doozie #5 is, as Timnit calls it, automated plagiarism.
OpenAI trained ChatGPT on real people’s writing. Including Timnit’s. Including mine. Once OpenAI becomes a 29-billion-dollar company, do you really think they’re going to pay us for the writing they incorporated without consulting us?
Timnit: 11:36:55 / if let’s say you owned a restaurant, right. You wouldn’t go to a market and steal all the tomatoes and carrots and then make your creation and sell it back to the people who, you know, you took tomatoes and carrots from. Right? We understand that to be theft.
Why is this different? How do we understand this differently? You can’t steal raw ingredients, make something, and then sell it back to the people you stole raw ingredients from.
Finally, Doozie #6: That we’re mistaking writing for creativity. ChatGPT is not sentient. It’s not human. It can’t really think. All it does is a really good impersonation of things thousands of people have already written. It’s like … a parrot.
TIMNIT: Precisely. / 11:19:18 And so, parrots, we don’t — you know, we hear them repeating things that they’ve heard, right, and / we think it’s funny, we think it’s entertaining. But we’re not saying that parrots understand what’s going on and they, you know, can predict the future or anything like that, right? We just think it’s really interesting that they can parrot back what they’ve seen. And so, / these large language models are doing a version of that.
But Erik Brynjolfsson has a different take.
POGUE: 00:30:32 / we’ve heard it / called a Pastiche machine. You know, all it can do is mimic stuff that other people have already written, and therefore can never leapfrog to something new.
ERIK: 00:30:54 I think that’s the fundamental misunderstanding. I mean, a lot of innovation really is kind of in combinatorial business, an input-output business. Uh, either implicitly or explicitly, when I come up with new ideas, I’m combining different existing old ideas like Lego blocks and create something new. And that’s what/ these large language models do as well.
/when I see some of the content, I’m like, wow, “that’s a beautiful metaphor that — that it just made. / Never been said before.” /
By the way, this is probably a good time to mention the 8000-ton elephant in the room: ChatGPT 4.
OpenAI has always been transparent about the limitations of the current version, ChatGPT 3. Here’s what CEO Sam Altman tweeted about it: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now.”
But version 4, now in testing, is slated to come out later this year. OpenAI trained it on 500 times as much data as the current version. And people who’ve seen it say it’s like seeing the face of God.
ERIK: 00:16:55 A very senior person at OpenAI, he — he basically described it as a phase change. You know, it’s like — it’s like going from water to steam. It’s just a whole another level of — of ability.
That’s important, because already, there’s such a thing as a ChatGPT detector. A Princeton student named Edward Tian wrote one, for example. It’s a web-based app called GPTZero that got 7 million uses in a matter of days. You paste in some text, and it gives you a guess as to whether it’s human or synthetic writing.
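GPTZero’s internals aren’t public in detail, but Tian has described it as scoring “perplexity” (how predictable the text is to a language model) and “burstiness” (how much the rhythm of the sentences varies). The burstiness half can be sketched in a few lines. This toy version is my own illustration, not GPTZero’s code: it just measures how much sentence lengths vary.

```python
# Toy "burstiness" score (assumed, simplified signal — not GPTZero's
# actual method): human writing tends to mix long and short
# sentences, while model output is often more uniform.
import re
import statistics

def burstiness(text):
    """Standard deviation of sentence lengths, in words.
    Low values hint at machine-like uniformity."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

human = ("It rained. Then, out of nowhere, the whole street flooded "
         "in under an hour. We ran.")
robot = ("The weather was rainy today. The street flooded very "
         "quickly after. We left the area soon.")
print(burstiness(human) > burstiness(robot))  # → True
```

It’s a weak signal on its own, which is why detectors of this kind combine it with perplexity, and why the arms race tends to favor the generator.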
But when ChatGPT 4 comes out—I mean, will those things have a chance? It’s gonna be another arms race, at best.
OK—one last ad break, and then I’ll send you into the sunset contemplating the three possible coping approaches in this new world of AI writing.
Third ad break
So…I don’t know, man. The more I read about ChatGPT, the more I talk to people, the more I realize that there are three camps—three ways of looking at the invention of large-language models.
First, there’s what I call the Skynet Philosophy. Remember Skynet, in the “Terminator” movies?
SARAH: I need to know how Skynet gets built.
ARNOLD: The Skynet funding bill is passed. The system goes on-line on August 4, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29.
You could also call it the “I, Robot” Philosophy, or the “Matrix” Philosophy, or the “Avengers: Age of Ultron” Philosophy, because they all basically have the same plot.
Anyway, the point is—the Skynet Philosophy says that we’ve just unleashed something that might be marketed as a useful tool—but we haven’t really thought it through.
TIMNIT: 11:40:00 I think that we should be really terrified of — of — of — of this whole thing.
12:03:49 We should understand the harms before we proliferate something everywhere and mitigate those risks before we put something like this out there.
The second possible approach is the Coexistence Philosophy. You know: Well, this thing is here, and we’ll adapt. On the Great Spectrum of Panic, seems like that’s where Harvard’s Jane Rosenzweig falls.
JANE: 11:48: / I do think teaching writing will still go on. I can’t predict what the next round of technology will bring, but I think it’s really important for people to think about the fact that when we teach writing, we’re not just teaching a set of skills that a machine can take over.
/ A machine can do the part where it puts ideas on paper, but it can’t do the part where it puts your ideas on paper. So that’s — that’s why I’m somewhat optimistic.
Anyway. The third possible philosophy, you could call the Embrace Our New Overlords philosophy, perhaps best described by Stanford’s Erik Brynjolfsson.
ERIK: 00:02:00 / We’re going to have better essays, better stories, better art, better poems, better songs than we ever had in history. This is a tool that turbocharges what we could do compared to what we did before.
You know, always in history, when some new technology comes along, like books, you know, the Greeks were like, “oh, my God. Stop memorizing epic — epic poems?! That’s going to be terrible for our brains!”Or calculators, you know. “How is that — how — no one’s going to know how to like take the square root of a six digit number on but — but the pen and pencil or long division.”
ERIK: 00:06:47 / one of the things that I’m quite convinced is that for some time, we want to keep humans in the loop, that the human and machine working together. You know, these tools, anyone who’s worked with them know that they can do some amazing stuff. Sometimes they can do incredibly boneheaded stuff as well. They lie. They don’t understand the world entirely.
/and they do things that sound good on the surface, but we think more carefully aren’t exactly right. So, you want to keep the human in the loop. And that cooperation of human and machine is where most of the value comes.
POGUE: 00:09:05 / So are there other examples in history where there is some game-changing new technology that caused everybody to panic, that didn’t turn out to be such a big deal?
ERIK: 00:09:30 Well, I would say that’s the norm, not the exception. /
You know, he’s got a point. I mean, I remember—when I was in fourth grade, calculators had just been invented. And my school got a set of them. And I can remember all the parents freaking out— “Our kids won’t learn how to do long division anymore! They won’t know how to multiply big numbers anymore!” The same kind of hysteria.
And, as it turns out…they were kinda right! These days, lots of kids don’t know how to do those problems, but nobody’s setting their hair on fire about it. Maybe AI writing bots will be like that? They’ll do most of the writing, and we’ll turn our attention to other things?
Anyway—sorry to interrupt there, Erik. Go on.
ERIK: 00:18:16 you know, when cars were invented, there were accidents. You know, when fire was invented, and people got burned. / If you think every technology was completely 100 percent had no negative effects whatsoever, you know, progress would grind to a halt.
00:19:50 / So technology has always been destroying jobs, technology has always been creating jobs, and that churn is very natural. If you want to have higher living standards, the answer is not to try to freeze in place all the existing industries and jobs and companies. /
POGUE: 00:20:24 So in other words, I, as a writer, might feel momentarily like a horse and buggy driver (LAUGH) but society as a whole might wind up better off?
ERIK: 00:20:34 Society as a whole and you individually, if you embrace these technologies, I think writers like me and you will, um, benefit from these technologies and create new kinds of content that we never were able to create before. And that could be really fun.
But you know who I haven’t asked about ChatGPT’s effect on the future?
You guessed it. ChatGPT.
“I’m worried about ChatGPT’s effects on education, misinformation, and jobs.”
KEATON: ChatGPT, like any technology, has the potential to produce harmful effects if it is not used responsibly. / However, the technology is meant to assist, rather than replace human creativity, critical thinking, and writing skills. It is also important to ensure that the tool is used responsibly, with clear guidelines and regulations in place.
“Thank you.”
KEATON: You’re welcome! If you have any other questions, feel free to ask.