I’m trying to think of a word. Is it existential? Maybe it’s epistemic. I feel like it starts with an “e.” To sort things out, I can ctrl+click on this word in Word and see what synonyms Microsoft suggests. I could go to Google. I could even do it old school and pull out a thesaurus. What I’m trying to illustrate here is that writers have used a variety of tools for nearly two hundred years* to assist them in getting thoughts from their heads onto the page. Is generative AI any different? That’s what I’m going to spend the next six hundred words trying to figure out.

The underlying** problem is that we don’t know how AI impacts thought, especially in developing brains. One theory is that AI can get rid of the drudgery of menial tasks. If I don’t have to worry about dangling modifiers, subject/verb agreement, and double negatives, I can spend my time on more meaningful pursuits. AI could accelerate learning by allowing us to delve into deeper questions.
Another possibility is that AI will lead us to a certain, bland “average.” AI has taken the sum of our digital culture and condensed it into the likeliest outputs. Ask it, for example, “How do we stop gun violence in the United States?” ChatGPT says, “Stopping gun violence in the United States requires a multifaceted approach that includes implementing comprehensive gun safety laws, addressing root causes like poverty and mental health, improving community support systems, and fostering a culture of responsible gun ownership.” Let’s be honest, I didn’t need to ask ChatGPT to come up with that answer. It’s pretty obvious. But it’s also so non-specific that it’s basically useless. We know these things, and they haven’t worked yet. We need to figure out why, specifically, they haven’t worked and come up with actionable steps to address those failures. And this intellectual dead end takes us to the most dire possible impact AI could have on thought.
If students leapfrog over fundamental lessons, they may fail to develop the critical thinking skills necessary to tackle the world’s biggest problems. Let’s go back to that double negative. When someone’s speaking to us, we can usually tell what they’re trying to say, even if it’s not literally true. I won’t try to confuse you with none of that grammar unclarity. (You know what I mean, right?) But as problems and conversations get more complex, double negatives and dangling modifiers can create serious issues. Worse, if students have been relying on AI to solve their issues up to the point where they get to those more complex problems, they may not have the skills to solve them on their own.
The truth is, at this point, we just don’t know how AI affects developing brains. Some of the emerging research should certainly give us pause. This study looking at human creativity found that AI’s impact is—to steal a favorite LLM word—“multifaceted.” To sort out exactly what aspects of learning AI accelerates and what parts it damages will require extensive research, attention, and time. To implement these findings will require even more research, attention, and time. But I feel like this is a good place to pivot back to something I wrote in my first paragraph. “That’s what I’m going to spend the next six hundred words trying to figure out.”
[Out of curiosity, I copied the preceding paragraphs into ChatGPT and asked it to write a conclusion. Here’s what it came up with.] “In the end, the question of whether generative AI is fundamentally different from the tools writers have always used boils down to its impact on thought and learning. Like a thesaurus or spell-check, it can help us articulate ideas more efficiently, but unlike those tools, it operates on a scale and complexity that could reshape how we think altogether. Whether it accelerates creativity or dulls it into predictable averages, AI challenges us to reevaluate how we approach learning, problem-solving, and even the pursuit of originality. The stakes couldn’t be higher: in an era of unprecedented technological advancement, ensuring that AI enhances rather than erodes critical thinking will determine whether it becomes a powerful ally or a crutch that undermines us in the long run.”
While ChatGPT’s conclusion is true(ish), it’s not where I planned to take this essay because it’s missing a key consideration. Many people think of writing as a way to record and transmit information. That’s true. And AI is actually decent at doing that. But writing is much more than that. Writing is a form of expression. Writing allows us to connect with each other intimately (love notes) or impersonally (jury summons). Writing is performative. (Seriously, Hallmark, who is the “Best Dad Ever”?) But that performance serves a social function. (Aren’t those cards more meaningful with a handwritten note?) Writing signals who’s part of our group and who isn’t. And writing can help us find common ground with people who aren’t part of our group. Most relevant to this essay, writing is a way to sort out our thoughts. That’s what I’m doing here. That’s one of the reasons I started blogging. As various writers, including Joan Didion, have said, “I write entirely to find out what I’m thinking.” The more that students outsource their writing to a machine, the less time they will spend thinking about their words. We don’t yet know the consequences of this outsourcing, but we do know that writing can address many of the deeply human issues facing us today: a lack of critical thought, empathy, meaning, and human connection. Maybe we should spend some more time grappling with our words before we outsource too much of this process to the machines.
*According to Wikipedia, another writer’s tool, when Peter Mark Roget created Roget’s thesaurus, he “wished to help ‘those who are painfully groping their way and struggling with the difficulties of composition … this work professes to hold out a helping hand.’”
**Maybe the word I was looking for started with “u.”
