Writing to Think

I’m trying to think of a word. Is it existential? Maybe it’s epistemic. I feel like it starts with an “e.” To sort things out, I can ctrl+click on this word in Word and see what synonyms Microsoft suggests. I could go to Google. I could even do it old school and pull out a thesaurus. What I’m trying to illustrate here is that writers have used a variety of tools for nearly two hundred years* to assist them in getting thoughts from their heads onto the page. Is generative AI any different? That’s what I’m going to spend the next six hundred words trying to figure out. 

DALL-E's effort to illustrate this blog...

The underlying** problem is that we don’t know how AI impacts thought, especially in developing brains. One theory is that AI can get rid of the drudgery of menial tasks. If I don’t have to worry about dangling modifiers, subject/verb agreement, and double negatives, I can spend my time on more meaningful pursuits. AI could accelerate learning by allowing us to delve into deeper questions.

Another possibility is that AI will lead us to a certain, bland “average.” AI has taken the sum of our digital culture and condensed it into the likeliest outputs. For example, “How do we stop gun violence in the United States?” ChatGPT says, “Stopping gun violence in the United States requires a multifaceted approach that includes implementing comprehensive gun safety laws, addressing root causes like poverty and mental health, improving community support systems, and fostering a culture of responsible gun ownership.” Let’s be honest, I didn’t need to ask ChatGPT to come up with that answer. It’s pretty obvious. But it’s also so non-specific that it’s basically useless. We know these things, and they haven’t worked yet.  We need to figure out why, specifically, they haven’t worked and come up with actionable steps to address them. And this intellectual dead end takes us to the most dire possible impact AI could have on thought. 

If students leapfrog over fundamental lessons, they may fail to develop the critical thinking skills necessary to tackle the world’s biggest problems. Let’s go back to that double negative. When someone’s speaking to us, we can usually tell what they’re trying to say, even if it’s not literally true. I won’t try to confuse you with none of that grammar unclarity. (You know what I mean, right?) But as problems and conversations get more complex, double negatives and dangling modifiers can create serious issues. Worse, if students have been relying on AI to solve their issues up to the point where they get to those more complex problems, they may not have the skills to solve them on their own. 

The truth is, at this point, we just don’t know how AI affects developing brains. Some of the emerging research should certainly make us pause. This study looking at human creativity found that AI’s impact is—to steal a favorite LLM word—“multifaceted.” To sort out exactly what aspects of learning AI accelerates and what parts it damages will require extensive research, attention, and time. To implement these findings will require even more research, attention, and time. But I feel like this is a good place to pivot back to something I wrote in my first paragraph.  “That’s what I’m going to spend the next six hundred words trying to figure out.” 

[Out of curiosity, I copied the preceding paragraphs into ChatGPT and asked it to write a conclusion. Here’s what it came up with.] “In the end, the question of whether generative AI is fundamentally different from the tools writers have always used boils down to its impact on thought and learning. Like a thesaurus or spell-check, it can help us articulate ideas more efficiently, but unlike those tools, it operates on a scale and complexity that could reshape how we think altogether. Whether it accelerates creativity or dulls it into predictable averages, AI challenges us to reevaluate how we approach learning, problem-solving, and even the pursuit of originality. The stakes couldn’t be higher: in an era of unprecedented technological advancement, ensuring that AI enhances rather than erodes critical thinking will determine whether it becomes a powerful ally or a crutch that undermines us in the long run.”

While ChatGPT's conclusion is true(ish), it's not where I planned to take this essay because it's missing a key consideration. Many people think of writing as a way to record and transmit information. That's true. And AI is actually decent at doing that. But writing is much more than that. Writing is a form of expression. Writing allows us to connect with each other intimately (love notes) or impersonally (jury summons). Writing is performative. (Seriously, Hallmark, who is the "Best Dad Ever"?) But that performance serves a social function. (Aren't those cards more meaningful with a handwritten note?) Writing signals who's part of our group and who isn't. And writing can help us find common ground with people who aren't part of our group. Most relevant to this essay, writing is a way to sort out our thoughts. That's what I'm doing here. That's one of the reasons I started blogging. As various writers, including Joan Didion, have said, "I write entirely to find out what I'm thinking." The more that students outsource their writing to a machine, the less time they will spend thinking about their words. We don't yet know the consequences of this outsourcing, but we do know that writing can address many of the deeply human issues facing us today: a lack of critical thought, empathy, meaning, and human connection. Maybe we should spend some more time grappling with our words before we outsource too much of this process to the machines.

*According to Wikipedia, another writer's tool, when Peter Mark Roget created Roget's thesaurus, he "wished to help 'those who are painfully groping their way and struggling with the difficulties of composition … this work professes to hold out a helping hand.'"

**Maybe the word I was looking for started with “u.”

Stranger Than Fiction

Here's a screenplay pitch for you. Psychology professor by day, undercover cop by night. Not just an undercover cop, but an undercover cop posing as a hit man. At the very least, I hope you say, "Go on." On top of that basic premise, this psychology professor is the kind of forgettable, mild-mannered dweeb you first imagined, but his hit man persona is a mix between Dirty Harry and John Wick. And as his college classes delve into questions about id and identity, he starts to wonder who he really is. When we get to the crisis moment, however, he's no longer wondering whether his true calling had always been undercover work. He's wondering whether his true calling is to be a real hit man.

That's the premise of Richard Linklater's Hit Man on Netflix. Seems like a surefire blockbuster. It's even based on the true story of Gary Johnson, a psychology professor who had been moonlighting as an undercover cop for the Houston Police Department in the '80s and '90s. He'd been profiled by Skip Hollandsworth in Texas Monthly a few years ago. The problem, however, is that Linklater cleaved too closely to the source material, creating a decent biopic rather than a blockbuster movie. I want to briefly delve into what didn't work.

The big problem with this movie is that screenplays are complete fictions. While they may be based on real life, they are not real life. Real life is chaotic and absurd. Screenplays (hopefully) are not. Everything in a screenplay should be intentionally placed there to form a tight, cohesive narrative, a sort of logical argument. If A, then B; if B, then C; and so on until you end up, hopefully, with a logical and fulfilling conclusion.

I'll defer to Anton Chekhov here, who states what I'm trying to say much more directly and cleanly: "If in the first act you have hung a pistol on the wall, then in the following one it should be fired. Otherwise don't put it there." The screenplay for Hit Man, however, includes numerous unfired pistols. These come directly from Hollandsworth's source material, and while they are fascinating glimpses into the human psyche, they do not advance the narrative or Gary Johnson's growth as a character. (Keep in mind here that Gary Johnson the character is different from Gary Johnson the person.)

Here are three bizarre, true anecdotes from Hollandsworth's article that Linklater chose to keep in the movie (despite their irrelevance to Johnson's story). The first is an example of how a person starts looking for a hit man: a seedy gentleman asks a stripper(?!) if she could recommend anyone. Second, Hollandsworth writes about a teenager who wanted to off one of his classmates (changed to his mother in the movie). Third, he relates a family inheritance dispute that ends up with a brother forgiving his sister for trying to hire a hit man. The judge in the case grants her probation. All three of these anecdotes are crazy and fascinating, but I argue that you could cut them without losing anything. In fact, you'd have a better movie, and here's why.

Go back to that first paragraph I wrote. If this movie is exploring whether or not Gary Johnson wants to become a real hit man, how exactly does the high school student fit in? Why is it important for the audience to go to the strip club and eavesdrop on two characters (the seedy patron and stripper) who we’ve never met and will never see again? In short, it’s not. 

This is Gary Johnson's story. Those scenes are deviations from that story. While they do show the variety of clients Johnson must deal with and his ability to read them, we have another dozen characters in the "Undercover Gary Montage" who do the same thing. That makes those scenes unnecessary. Worse, it also makes them confusing.

At the point where we meet the teenager, this movie has established that Johnson is working with a variety of unsavory people and becoming increasingly comfortable in his role as a "hit man." If we, the audience, see a deviation from the norm the screenplay has established for unsavory people (in this case because of age), the teenager becomes Chekhov's gun. We expect the story to pivot at this point. There is a complication to the movie's "ordinary world." The protagonist must overcome this new obstacle and learn something about himself. But…he doesn't.

The other two scenes operate in a similar fashion. While they're both interesting anecdotes (How does someone find a hit man? What happens after you learn that someone hired a hit man to kill you?), they aren't central to the plot. They don't tell us anything about Johnson's new identity. Instead, they introduce irrelevant characters and locations. They are "pistols" that never get fired. In the logical argument that is Gary Johnson's story, they are red herrings.

It's important to point out that Gary does have a meeting with a unique client that causes him to change his entire operation. Madison Masters is an attractive young woman trapped in an abusive relationship. Rather than going through with the sting, he suggests she get out of the relationship. He also begins to wonder if some people are, perhaps, legitimate targets for a hit man. That's the crux of the movie. This is where things get interesting. This is, in fact, the inciting incident. Those other three anecdotes should have been left in Hollandsworth's profile piece.

Additionally, the movie suffers from an overused, uninteresting voiceover. And Gary Johnson's character fails to adequately differentiate between his "real" self and his hit man persona. What are we left with? An okay biopic that takes a handful of major liberties with the source material. As Hollandsworth's original article proves, the truth is often stranger than fiction. Good fiction, however, is far more focused than reality.

Literature Review of AI Detectors

About eighteen months ago, I started to notice machine-generated text cropping up in student work. As a composition teacher, I immediately wanted to ban it. Text generators have little role in the composition classroom. At the time, however, composition teachers had few options for accurately identifying machine-generated text. The basic concerns were that detectors were inaccurate and could produce false positives. In other words, they might flag human writing as machine generated, especially writing by non-native speakers. My colleagues and I put considerable effort into redesigning courses and disincentivizing students from using AI such as ChatGPT or Bard to complete assignments. I think these changes have improved our pedagogies. Having survived a school year with AI, however, I was curious how things had changed in the world of detecting machine-generated text. As of mid-July 2024, here is what I've found.

Neither humans nor AI-detection systems can identify machine-generated text flawlessly. However, detectors are reaching a high level of accuracy, and they are performing better than humans. Looking at research abstracts, J. Elliott Casal and Matt Kessler found that reviewers had "an overall positive identification rate of only 38.9%" (1). Oana Ignat and colleagues found that humans could only accurately identify 71.5% of machine-generated hotel reviews (7). Their AI detector, however, was able to correctly identify roughly 81% of machine-generated hotel reviews (8). Writing in 2023, Deborah Weber-Wulff et al. found similar results when testing twelve different AI-detection programs; the highest performers, Turnitin and Compilatio, approached 80% accuracy (15). Publishing this year, Mike Perkins and colleagues found Turnitin detected 91% of machine-generated texts (103-104), while human reviewers in the study only successfully identified 54.5% (1). Custom designing an AI detector to find machine-generated app reviews, Seung-Cheol Lee et al. were able to achieve 90% accuracy with their best model (20). For longer texts, the accuracy of both human reviewers and AI detectors increases. Comparing full-length medical articles, Jae Q. J. Liu et al. found that both professors and ZeroGPT correctly identified 96% of machine-generated texts (1). (Note that GPTZero, a different AI detector, performed considerably worse.) However, the professors also misclassified 12% of human-written content as having been rephrased by AI (8).

Notably, Weber-Wulff mentions that AI detectors tend to have few false positives. In other words, if the software is unsure whether a text was written by a human or a machine, it is more likely to classify it as human written (17). Turnitin, in fact, had zero false positives (26). Perkins, too, noted that Turnitin was reluctant to label text as machine generated: while it correctly identified 91% of papers as machine generated, it flagged only 54.8% of the content in those papers, even though the papers were entirely (100%) machine generated (103-104). While this means a certain percentage of machine-generated writing will evade detectors, it should give professors some confidence that something flagged as machine generated is, very likely, machine generated. In another encouraging finding, Liu found that "No human-written articles were misclassified by both AI-content detectors and the professorial reviewers simultaneously" (11).

There is one caveat, however. AI detectors may flag translated or proofread text as machine generated (Weber-Wulff 26). Once machines are introduced into the composition process, they likely leave artifacts that may be noticed by AI detectors. Strictly speaking, the AI detectors would not be wrong. Machines were introduced into the composition process. However, most professors would find the use of machines for translation or proofreading to be acceptable.

The studies I've mentioned to this point were attempting to consistently identify machine-generated content, but a team of researchers led by Mohammad Kutbi took a different approach. Their goal was to establish consistent human authorship of texts by looking for a "linguistic fingerprint." In addition to detecting the use of machine writing, this method would also detect contract plagiarism (i.e., someone hiring another person to write an essay for them). This system achieved 98% accuracy (1). While not mentioned in Kutbi's study, other scholars have found that certain linguistic markers maintain consistency across contexts (Litvinova et al.). For these and other reasons, I believe that linguistic fingerprinting holds the most promise in detecting the use of AI in the composition process.

It’s also worth mentioning that participants in Liu’s study took between four and nine minutes to make a determination about whether or not an article was written by a human (8). In this situation, AI may actually aid professors by reducing the time they need and increasing the confidence they have in determining whether or not a text was machine generated. 

To briefly summarize:

  • Both humans and AI detectors are prone to error
  • AI detectors are generally better, and in some cases significantly better, than humans at identifying machine-generated text
  • AI detectors are fairly conservative in their classification of text as machine generated

Considering these points, I believe that at the current time, instructors should use AI detectors as a tool to help them determine the authorship of a text. According to Liu and colleagues, Originality.ai is the best overall AI detector and ZeroGPT is the best free AI detector (10). While not as accurate as the preceding tools, Turnitin deserves mention because it did not have any false positives in multiple studies (Liu 6, Weber-Wulff 26). Of course, as with any tool, these detectors need to be used with discretion and with a consideration of the bigger context of a work. I plan to write another post considering some common flags of machine-generated text. 

Stop Saying “Fast Forward.”

Over the past few years, I’ve noticed people using the phrase “fast forward” to indicate a passage of time. While I acknowledge the dynamic nature of language and try not to ride a high horse about “proper” English, I do find this phrase particularly jarring and troublesome. I’d like to take a moment to explain my concerns and, I hope, encourage you to think twice before using the phrase.

Photo by Anthony on Pexels.com

Here’s a pretty typical example from Forbes. “I served as a translator for both language and culture over the years and gained a deep appreciation of the challenges of navigating caregiving, education and culture. Fast forward to graduate school: My interest in supporting child well-being led me to become interested in better understanding policy.”

For starters, there's an issue of "point of view" (POV). In case you've forgotten POV from English class: first person = I; second person = you; third person = he/she/it/they. "Fast forward" shifts the point of view of a story. Most stories are told in first or third person. So if you say "fast forward," who is doing the forwarding?

If no subject is identified, “fast forward” operates in the second person POV with “you” understood. In other words, when I say, “Call me later,” I’m really saying, “You call me later.” So in the example above, who’s fast forwarding? You aren’t telling the story. The phrase makes much more sense when the subject of the sentence takes clear ownership. “Let me fast forward to graduate school.” “Can you fast forward to graduate school …” But if I have control of a story, why are you the one fast forwarding?

Then, there’s an issue of redundancy. “Fast forward,” used to indicate a passage of time, is often used in conjunction with another phrase used to indicate a passage of time. For example, “But relevance wasn’t the point — this was all about toughness. Fast-forward to May 14, when 10 people were gunned down at a Tops supermarket in Buffalo, New York” (MSNBC). Or “…President Donald Trump took a few steps in to North Korea and spoke about his friendship with that country’s leader, Kim Jong Un. Fast-forward almost three years. President Biden is in Seoul, emphasizing his friendship with new South Korean President Yoon Suk Yeol” (NPR).


In both of these cases, "fast forward" is redundant. It is literally a waste of breath. The authors could just write "On May 14" or "Today." Both choices are shorter and convey the same information. Brevity is a skill. Why use up mental bandwidth on something you don't need? If you can delete something, do it!

That being said, many writers will sprinkle in phrases to help set a theme. If you're talking about movies, why not use "fast forward" as your time transition? And bits of jargon have been weaseling their way into our everyday language for centuries. In my introduction, I mention horses even though this essay really has nothing to do with horses. Below, I allude to plants. Considering the prevalence of video in today's world, we can't exactly prevent a phrase like "fast forward" from taking root. But there is a good reason I would caution against it.

Hollywood has done a great job convincing us that love at first sight is real, "smoking guns" exist, passionate speeches change people's minds, and there's always a parking spot directly in front of a courthouse. If you use the veracity of that last example to measure the other three, you can see how absurd some of these propositions really are. We don't live in movies. We can't fast forward and rewind at will, and we need to stop thinking that we can. "Sure," you might say. "That's a problem for tween influencers who want to star in reality shows. But I can tell the difference between reality and the movies." Respectfully, I disagree.


Think back, if you will, to the early aughts, when one of the most powerful countries in the world invaded a sovereign nation under false pretenses. At that time, a paranoid Bush administration justified its torture of detainees not through psychology and jurisprudence, but through the Fox television show 24. Slate noted the troubling argument way back in 2008. Politicians, judges, and intelligence operatives were basing their actions on a fictional television show with real-world ramifications. It's hard for me to believe that fourteen years later, with the ubiquitous use of smartphones and social media, our psychology has become less entwined with fiction, fantasy, and technology.

I need to reset here, briefly, because I am a fan of fiction. Fiction helps us explore questions and ideas that would not be accessible in a purely fact-driven world. Fiction helps us develop empathy. Understanding fiction helps us understand reality. But fiction is merely an analogy. Fiction and virtual worlds are not the same thing as flesh and blood, and I think it is incumbent on us to keep those lines distinct.

As we spend more time in the virtual world, manipulating images, audio, and video like gods, we need to keep the reality of our existence in mind. We can’t photoshop ourselves to be sexier, edit our conversations to remove faux pas, or fast forward our way through a traffic jam. I think acknowledging that fact, even in a small way, will lead us to accept the world we really live in and do our best to make this world a better place.

Pedagogy. Or is it pedagogy?

On #WorldEmojiDay, I'd like to talk about…pronunciation. Last year, I returned to school to study creative writing. Since most people with MFAs actually end up teaching English, Drexel University offered a writing instruction, or "pedagogy," class. And it made me wonder: how exactly do you say "pedagogy"?

I've heard everything from PED-a-GOG-y (rhymes with doggy) to PED-a-GO-gie (rhymes with hoagie) to PED-a-GOD-gy (rhymes with dodgy) to PED-a-GO-gee (rhymes with emoji, which is why I was thinking about this today). I heard that last pronunciation most frequently, but it kind of bothered me.

In my Merriam-Webster's Rhyming Dictionary, the only words that rhyme with emoji are Moji, shoji, anagoge, and Hachioji. Moji, shoji, and Hachioji are all Japanese and spelled with a "j." As a native English speaker, I would not have guessed "pedagoji." Anagoge is Greek, like pedagogy, but it is spelled differently. Also… I've never heard of it.

But looking at the spelling and word origin, the similar Greek-derived words synagogue, demagogue, and even pedagogue are pronounced with a hard "o" and end in "gue." Since pedagogy and pedagogue have a shared origin, "pedagoggy" does make some sense… although it sounds completely ridiculous. Seriously. Say it out loud.

That brings me to my initial guess. In the academic world, we study -ogies: biology, mythology, gerontology, psychology, geology, Egyptology, immunology, hydrology, chronology, neurology, archeology, et ceterology. Here the “o” has the schwa or “uh” sound, not the “oh” of emoji. It seemed to be the obvious choice to me. Psychology. Pedagogy. The study of teaching should rhyme with the study of the psyche or myths or gerons… or whatever. But that -ology comes from the Greek logia (study) or logos (story or word) while -agogy comes from the Greek agogos (guide). So maybe the unique pronunciation is an important distinction. 

(Sidebar: I don’t see why anyone would go for pedagoagie.)

But that brought me to one of the more important lessons of modern English pedagogy. Enforcing pronunciation, spelling, and grammar rules is actually a form of oppression. "Proper" English is a way of separating groups. It tells you where someone came from. It tells you if English was their first language. It tells you who had enough money to go to college and who had enough free time to study English. The reality is, most people in the world get along just fine with double negatives, dangling modifiers, and frequent switches in verb tense. A lot of the time, they don't even use words. 😉

How you deploy English has less to do with what's "correct" than with what group you want to be a part of. English "rules" are actually guidelines that are very audience-specific. Grant proposals, ad copy, emails, news reports, social media feeds, and blogs all follow slightly different rules. The most "well-written" research paper in the world will not sell more copies of The Hollywood Reporter.

That can make English pedagogy a little more nuanced than your sixth grade English teacher may have claimed. There are trends and best practices, but the rules are actually kind of fluid. As far as the pronunciation of pedagogy goes, if you want to fit in, pronounce it however the cool kids do. You can justify pretty much any version. But if you want to be a bastion of liberty, forge your own pronunciation. Maybe that’s the best argument for pedagoagie. 

The Danger of a Single Story

Stories are critical to our understanding of ourselves and of the world. But stories are not TRUE. The world is chaotic and indifferent. Humans are contradictory and ever changing. Singular events like the migration of a humpback whale or a dance at senior prom or a parking ticket have no real meaning. In order to make sense of ourselves—where we’ve come from and where we’re going—we tell stories. 

Seeing that humpback whale off the coast of Cape Cod may reignite your love of marine biology. "Wonderful Tonight" may perpetually remind you of your first true love. That parking ticket may be just one of the myriad ways the universe lets you know "the man" is out to get you. This is storytelling, putting the events of our lives into context and using them to shape our identities.

Some psychologists and anthropologists argue that storytelling is uniquely human, that it is, in fact, what makes us human. That theory may just be another story, but storytelling is certainly a strength. What is "Hamlet" or "the stock market" or "human rights"? You can't feed a monkey the S&P 500, and yet it's a critical part of our world.

Stories, however, can also be dangerous. Stories are not reality. They are not TRUE. Stories simplify things, omit details, take a certain point of view. As psychologist Jerome Bruner said, "To tell a story is inescapably to take a moral stance." In her TED Talk, "The Danger of a Single Story," novelist Chimamanda Adichie explains the hazards of the stories we tell and the stories we omit. It is worth a listen. I'll wait.

The danger of a single story is not merely that it limits our understanding of the world or that it limits what we think we are capable of. The biggest danger, I would argue, is that if we only hear one story, we start to think it is TRUE. 

In 2020, we are being asked to re-evaluate many of the stories we have been told for decades, in some cases centuries. These stories address race, gender, patriotism, service, loyalty, victimhood, history, bravery, citizenship, equality, essentiality, responsibility, heroism, and many more things. They address our very identities. Remember, stories, by their very nature, are critical to our understanding of ourselves and the world. This process won’t be comfortable. That’s okay. New stories bring us to a fuller, more colorful understanding of the world. New stories bring us closer to the TRUTH. 

I'd like to leave you with this anecdote. In the late nineteenth and early twentieth centuries, the United States was publicly, virulently white supremacist.

In part to thumb their noses at the White, Anglo-Saxon, Protestant (WASP) establishment, a Catholic fraternal organization took Christopher Columbus as their patron. By the late nineteenth century, Americans had been celebrating Columbus as a mythic hero for 100 years. The WASPs liked to tell a story of Christopher Columbus discovering America but conveniently ignored the fact that he was an Italian (Catholic) funded by Spaniards (Catholic). Italian, Irish, and other Catholic immigrants wanted to remind WASPs that Catholics played a major role in creating the United States.

A century after the Knights of Columbus were formed, we may question their choice of patron. Here is Kurt Vonnegut’s reflection on Columbus from Breakfast of Champions in 1973:  

“As children we were taught to memorize [1492] with pride and joy as the year people began living full and imaginative lives on the continent of North America. Actually, people had been living full and imaginative lives on the continent of North America for hundreds of years before that. 1492 was simply the year sea pirates began to rob, cheat, and kill them.”

What's the TRUTH? Well, all of it. The world is chaotic. It is not simple and neat. It's natural for us to associate these stories with our identities. To think that an attack on Columbus is an attack on ourselves. But it's not. It's just a new perspective. It moves us to a fuller and more interesting understanding of the world. It moves us away from the dangerous, myopic belief in a single story.

O.M.G.

I’ve been busy. You might think that being stuck at home for two months would give me free time to blog. Not so.

Actual photo of me working, yesterday.

In addition to teaching and learning and writing a novel, I helped organize the website launch for the literary journal of Drexel University's MFA in Creative Writing program.

www.drexelpaperdragon.com

Check it out! Submit some of your work!

You can also follow us on Twitter @drexpaperdragon

Between all of that and some pandemic everyone keeps talking about, I haven’t really had time to advance my presidential campaign. You may think this is the point where I officially withdraw, but if you read my previous post… I was never officially running. I guess my point is, you won’t hear my stump speech any time in the near future.

Hopefully, we return to normal (or better) in the near future. In the meantime, I’ll see you online.

In Praise of Naps

@wdavisliterary posted this on Twitter a few weeks ago: “The Four Horsemen of Procrastination”

From @tcviani on Instagram

It's a solid observation, and it was meant in good fun. I did, however, have one quibble. Naps are part of the writing process. I cheekily said as much in my reply. She responded as well, saying that napping is "totally" procrastination. And, to be fair, the act of napping is not the act of writing. But napping IS part of the writing process (and the creative process more generally), as are showers, bike rides, long walks, and getting lost on public transit.

I’m not much of a fighter, but this is one battle I’m not going to take lying down.

Things are going to get Biblical

There are a lot of themes in the Bible, but the importance of dreams seems to permeate all 60+ books. There's Old Testament Joseph and his famous technicolor dreamcoat; Jonah, who was fast asleep before being thrown overboard to a whale; Job, whom God scolds for not listening to his dreams; and New Testament Joseph, who decides to divorce his miraculously pregnant fiancée before settling down for some zzz's. After a good nap (and some meddling angels), he changes his mind.

It’s this last point I want to emphasize. Joseph had a problem he wasn’t sure how to handle. After he thought about it for a while, he took a nap. The Psalms also speak about this magical, problem-solving nap. “…Meditate in your heart upon your bed, and be still” (Psalm 4:4). “Hear my cry for help… In the morning, Lord, you hear my voice” (Psalm 5:2-3).

The general theme here is that your problem is too big for you to comprehend. Don’t even bother trying to solve it. In fact, you’re better off getting a good night’s sleep. Somehow, miraculously, the problem will solve itself.

Seriously, though…

Research into creativity has revealed that there is real wisdom in ancient, well, wisdom. Your brain needs time to ruminate on ideas. In his book Where Good Ideas Come From, Steven Johnson refers to this as the "slow hunch." Over time (minutes, days, years?), you slowly, subconsciously consider the same problem as you age, mature, learn, and change. You open yourself up to "serendipity" (another of Johnson's terms), as your brain forms new, unexpected connections. (It had never occurred to me that when I did a Bible study with my wife five years ago, I could use that information as ammunition in a Twitter argument about screenwriting.)

In his work The Art of Thought, Graham Wallas shares this reflection from the prolific German physicist Hermann von Helmholtz.

…happy ideas come unexpectedly without effort, like an inspiration. So far as I am concerned, they have never come to me when my mind was fatigued, or when I was at my working table… They came particularly readily during the slow ascent of wooded hills on a sunny day.*

Image via Wikipedia. Clearly von Helmholtz had something going for him other than looks…

Wallas goes on to describe Helmholtz's three-stage process: preparation, where you consider the problem at hand; incubation, where you do not think about the problem at all; and finally, illumination, where the solution simply pops into your head. Sound familiar?

Physical activity is a great way to keep your neurons on their toes, but don't worry if there aren't any "wooded hills" near your writing desk. Johnson thinks dreams may be just as beneficial as fresh air. In fact, he wonders if that most analytical of dreamers wasn't on to something.

Sigmund Freud, he says, had it backwards. Dreams aren’t repressed memories trying to come to the surface, but our brains searching for meaning through all of the clutter (sights, smells, conversations, thoughts, obstacles, emotions, etc.). Although the evidence is anecdotal, the sewing machine, the periodic table, and the theory of relativity were all conceived by people sleeping on the job. It’s not just science and technology. Artists dream up crazy stuff all of the time. Famously, The Beatles’ “Yesterday,” The Rolling Stones’ “Satisfaction,” Frankenstein, and Dr. Jekyll and Mr. Hyde all started in dreams.

Sometimes a Nap Isn’t Just a Nap

When it comes to creative processes, the best approach is not always head-on. Ideas are ephemeral things, and you may spook them. Instead, the research seems to suggest that one great approach is, well, napping. Now, to be clear, you can't use this as avoidance (which is what spawned the initial tweet). But what has actually proven to be very effective is gathering all of the pieces in your mind: the inputs and outputs, the technical hurdles, your fears, and, in regard to writing, your subject and audience, characters, stories, motivation, and criticisms.

Hold them there. Look at them objectively. Write them down if it helps. Then, forget about them.

Go for a walk. Do some painting. Cook dinner. Take a nap. As long as you’re going to hit your deadline, take a day or two to just let everything simmer. And if you do, you’ll be surprised to see how things miraculously work themselves out.

I hear your protests, but don’t say anything now. Just think it over. Sleep on it. Get back to me in the morning.

 

*Miller, Susan, ed. The Norton Book of Composition Studies. W. W. Norton & Co., 2009, p. 236.

Screenwriters, do yourselves a favor.

HBO recently released Craig Mazin's miniseries Chernobyl. While nothing in this life is perfect, Chernobyl comes pretty darn close. From acting to directing to art direction to sound design, Chernobyl is a masterclass in filmmaking.

But the biggest story is probably the story itself. In the television world, screenwriters hold the creative power and, as writer and executive producer, Mazin made a variety of bold and effective decisions. For example, the explosion takes place in the first few minutes of the mini-series. He doesn’t make the audience sit through a lengthy first act or ordinary world, and it’s spectacularly powerful. But his reasoning behind the decision is what will really make things click for filmmakers.

In addition to the show, Mazin recorded a companion podcast with NPR host Peter Sagal for each episode. In it, he explains his creative decisions. He shares insights about story structure, adapting true stories, portraying gore on screen, sound design, and even accents. It's entertaining, engaging, and informative. It's unfiltered information coming from a filmmaker at the top of his game.

Taken together, Chernobyl and the companion podcast are worth far more to aspiring filmmakers than anything you can find in a university catalogue. The podcast is free, and HBO Now has a 7-day free trial. You have no excuses. If you want to learn about the craft of filmmaking, Chernobyl is a must.

All Men (and Shows) Must Die

The last episode of Game of Thrones will go live in just two days, and the internet is still roiling about last week's episode. "What have the writers done?!" I can't be certain how D&D plan to resolve this mess, but I can guarantee that, no matter what happens, some people will hate it. Is all of the Sturm und Drang really merited?

I’ll start with a bit of a humble brag. I was a fan of the books. I was ecstatic to hear that HBO would be adapting them into a show. And for the most part, the series stayed true to the books, which is to say, it stayed true to human nature.

The thing that struck me about Game of Thrones was its realism. Sure, you had to get past the dragons and the army of the undead, but in many ways, George R.R. Martin's world felt more authentic and his characters felt more real than most things you read. Writers – screenwriters in particular – rely heavily on preconceived notions (also called clichés) to keep stories moving forward. When you're telling the story of Odysseus, you can't get hung up on what oarsman #3 is doing.

Martin didn't let that bother him. Oarsman #3, the red-headed prostitute, and the kennel master's daughter were just as likely to have starring roles as the king and the elite assassin. No one was purely good, and no one was purely evil. Everyone was just trying to get by in this nasty and brutish world Martin had created. It was enthralling.

It also came at a cost. Descriptions could be burdensome. Do we really care what all eleventeen courses were at the feast? Or whose bannermen wore what sigils? All of the descriptions, details, side quests, and characters made each book in the series a massive tome somewhere in the 1,000-page range. And then there was the killing of characters.

I started out rooting for Ned. Here was a man who was going to get things done. It's not a spoiler at this point to say things didn't pan out for him. Then I rooted for his son, Robb… and then Jon. But the last time Martin mentions Jon, he's, well, dead. Then Martin went off and wrote a book on an entirely different continent with other characters. (As a reader, I was none too happy about it and couldn't decide if I would finish the series. But I'd like to point out that, despite some angry fan mail, if anyone is winning the game of thrones, it's Martin.)

HBO took the same route, shocking audiences each season. There was Ned, the Red Wedding, the Purple Wedding, the Great Sept. How do you top that? Looking back on it, however, the question isn't about "topping" the previous season, but about treating Westeros with the same realism the books did.

Life is messy. The people who you want to win don’t. The people in charge are often war criminals. The people who should be in charge don’t want the job. Siblings fight and betray trust. Some people redeem themselves. Others don’t. Game of Thrones created a world that was big enough to be treated realistically rather than having to rely on the tropes that govern most stories. Last week’s episode was a case in point.

Was Daenerys a long con? Did D&D spend nine years building empathy for a character they knew would turn out to be an unhinged megalomaniac? Or maybe, like the gods whenever a Targaryen is born, they just flipped a coin in the writers' room. The point is, even though it frustrated a huge portion of the audience, it felt strangely inspired. It felt real. We don't get upset when our deadbeat friend does something stupid. We get upset when our heroes and mentors do something stupid. That's why this episode bothered us so much.

There are many theories about what will happen in the final episode, some of them more disappointing than others. But I can honestly say, I have no idea what will happen. That’s been the shocking fun of Game of Thrones since day one. Let’s be honest, for a show that killed most of its characters off, it would be completely “in character” for them to do something shocking, absurd, and brutal. I’m fully expecting that. The disappointment won’t be how it ends, but that it’s ending.

I’m hopeful though. Game of Thrones took chances with traditional storytelling, creating something new and complex and engaging. HBO adapted it – warts and all – into something we love, and love to hate. I hope that complexity affects television for years to come. In the meantime, I know what I’ll be reading.


7/14/19

While I stand by my comments regarding Game of Thrones, and my appreciation of George R. R. Martin's storytelling, I won't necessarily recommend Fire & Blood. It is a history book. It's an interesting, well-written history book, but it's a history book. I guess I'll just have to wait for The Winds of Winter.