Writing to Think

I’m trying to think of a word. Is it existential? Maybe it’s epistemic. I feel like it starts with an “e.” To sort things out, I can ctrl+click on this word in Word and see what synonyms Microsoft suggests. I could go to Google. I could even do it old school and pull out a thesaurus. What I’m trying to illustrate here is that writers have used a variety of tools for nearly two hundred years* to assist them in getting thoughts from their heads onto the page. Is generative AI any different? That’s what I’m going to spend the next six hundred words trying to figure out. 

DALL-E's effort to illustrate this blog...

The underlying** problem is that we don’t know how AI impacts thought, especially in developing brains. One theory is that AI can get rid of the drudgery of menial tasks. If I don’t have to worry about dangling modifiers, subject/verb agreement, and double negatives, I can spend my time on more meaningful pursuits. AI could accelerate learning by allowing us to delve into deeper questions.

Another possibility is that AI will lead us to a certain, bland “average.” AI has taken the sum of our digital culture and condensed it into the likeliest outputs. Ask it, for example, “How do we stop gun violence in the United States?” ChatGPT says, “Stopping gun violence in the United States requires a multifaceted approach that includes implementing comprehensive gun safety laws, addressing root causes like poverty and mental health, improving community support systems, and fostering a culture of responsible gun ownership.” Let’s be honest, I didn’t need to ask ChatGPT to come up with that answer. It’s pretty obvious. But it’s also so non-specific that it’s basically useless. We know these things, and they haven’t worked yet. We need to figure out why, specifically, they haven’t worked and come up with actionable steps to address them. And this intellectual dead end takes us to the most dire possible impact AI could have on thought. 

If students leapfrog over fundamental lessons, they may fail to develop the critical thinking skills necessary to tackle the world’s biggest problems. Let’s go back to that double negative. When someone’s speaking to us, we can usually tell what they’re trying to say, even if it’s not literally true. I won’t try to confuse you with none of that grammar unclarity. (You know what I mean, right?) But as problems and conversations get more complex, double negatives and dangling modifiers can create serious issues. Worse, if students have been relying on AI to solve their issues up to the point where they get to those more complex problems, they may not have the skills to solve them on their own. 

The truth is, at this point, we just don’t know how AI affects developing brains. Some of the emerging research should certainly give us pause. This study looking at human creativity found that AI’s impact is—to steal a favorite LLM word—“multifaceted.” Sorting out exactly which aspects of learning AI accelerates and which it damages will require extensive research, attention, and time. Implementing those findings will require even more. But I feel like this is a good place to pivot back to something I wrote in my first paragraph: “That’s what I’m going to spend the next six hundred words trying to figure out.” 

[Out of curiosity, I copied the preceding paragraphs into ChatGPT and asked it to write a conclusion. Here’s what it came up with.] “In the end, the question of whether generative AI is fundamentally different from the tools writers have always used boils down to its impact on thought and learning. Like a thesaurus or spell-check, it can help us articulate ideas more efficiently, but unlike those tools, it operates on a scale and complexity that could reshape how we think altogether. Whether it accelerates creativity or dulls it into predictable averages, AI challenges us to reevaluate how we approach learning, problem-solving, and even the pursuit of originality. The stakes couldn’t be higher: in an era of unprecedented technological advancement, ensuring that AI enhances rather than erodes critical thinking will determine whether it becomes a powerful ally or a crutch that undermines us in the long run.”

While ChatGPT’s conclusion is true(ish), it’s not where I planned to take this essay because it’s missing a key consideration. Many people think of writing as a way to record and transmit information. That’s true. And AI is actually decent at doing that. But writing is much more than that. Writing is a form of expression. Writing allows us to connect with each other intimately (love notes) or impersonally (jury summonses). Writing is performative. (Seriously, Hallmark, who is the “Best Dad Ever”?) But that performance serves a social function. (Aren’t those cards more meaningful with a handwritten note?) Writing signals who’s part of our group and who isn’t. And writing can help us find common ground with people who aren’t part of our group. Most relevant to this essay, writing is a way to sort out our thoughts. That’s what I’m doing here. That’s one of the reasons I started blogging. As various writers, including Joan Didion, have said, “I write entirely to find out what I’m thinking.” The more that students outsource their writing to a machine, the less time they will spend thinking about their words. We don’t yet know the consequences of this outsourcing, but we do know that writing can address many of the deeply human issues facing us today: a lack of critical thought, empathy, meaning, and human connection. Maybe we should spend some more time grappling with our words before we outsource too much of this process to the machines. 

*According to Wikipedia, another writer’s tool, when Peter Mark Roget created Roget’s thesaurus, he “wished to help ‘those who are painfully groping their way and struggling with the difficulties of composition … this work professes to hold out a helping hand.'”

**Maybe the word I was looking for started with “u.”

Why Theater Should Be a Core Class Requirement

As an undergraduate, my wife studied theater. Her coursework included reading dozens of plays, welding sets together, and procuring fake blood. Today, she is a special education supervisor. For those of you who are skeptical of college, this academic history may read as a perfect case study in the ridiculousness of college education. How does this coursework relate to supervising special education teachers? I’d argue that it’s more relevant than you think, and I have my own case study. 

Yesterday, I attended a webinar hosted by a major academic publisher. (I’ll keep them anonymous because it’s not polite to insult your host.) After introductions, the M.C. handed control over to the opening keynote speaker, who shared her computer screen—a slide deck made on Canva. The only problem was, she couldn’t read the notes she’d written for herself. Hundreds of people* from across the globe were compelled to watch ten minutes of the M.C. and speaker fumble through the technical difficulties of sharing a presentation screen from Canva through Zoom. And this was at a technology webinar… 

Let me get back to my wife. You see, she studied stage management. Not acting but all of the technical details that make a show happen. All of the things that—when they go well—nobody notices. When the show is running, the stage manager is the one running the show. They’re in charge of actors hitting their marks, light cues, and timetables. But it’s far more than that. They’re in charge before the show. They’re in charge of actors’ schedules and where rehearsal will take place; what time lunch will be, who caters it, and making sure your sound designer with a nut allergy doesn’t eat the baklava. And if they do eat the baklava, the stage manager has an epi-pen and knows where the closest hospital is. They know what time the building opens, who to call when the toilets overflow, how much time the lead actress needs for hair and makeup, and how to read a budget. They have dry cleaners, seamstresses, and rental houses on speed dial. They know how many prop guns are on set and who is allowed to hold them. And they know how to run a Zoom meeting.

Countless times, my wife and I have complained that the people running large events have no stage management skills. The coffee reception was too short, and the bathrooms were too far away. Or the projector screen was in front of an east-facing window during a morning presentation, forcing the audience to squint directly into the rising sun. Or an obvious technical hurdle—say, screen sharing between two commonly used applications—was not overcome. These are the kinds of things that a good stage manager will think through and resolve before show time. 

Stage management requires empathy. It makes you think about an experience from your audience’s point of view. It requires critical thought and imagination. “So you want live feedback from your Zoom audience of 5,000 people. How will you get that feedback? Do they know how to provide it? Who’s in charge of the mute button?” It requires foresight and real-world problem solving. When you learn at 8pm on a Friday night that your star Norwegian Blue parakeet has bird flu and can’t take the stage, what’s your backup plan? There’s no obvious answer here, but the intermission ends in fifteen minutes, and you need a solution. And that solution will require some buy-in from your team.

These skills are applicable in any job: empathy, critical thinking, imagination, foresight, problem solving, teamwork. But they’re also very hard to teach. The theater offers a perfect sandbox for learners to develop these skills in a low-stakes but also very tangible way. The show must go on, but if it’s not a very good show, the repercussions usually aren’t dire. And isn’t that the bigger goal of college education? Let’s give you access to the infrastructure and the mentors necessary to help you practice some of those soft skills in a safe environment. 

I feel like this is worth mentioning because everyone is tied in knots—myself included—trying to figure out what to do with AI in the classroom. While it will affect us all, and AI skills may be necessary in many jobs, they won’t replace many of those real-world human skills that are necessary in most jobs. Those are the skills we need to be focusing on in the classroom. And those are the skills that you can only develop through real-world practice. During the technical snafu in our webinar, one person quipped, “Maybe AI can fix it.” Maybe. But a good stage manager would have fixed it, and a great stage manager would have foreseen and avoided the issue altogether. So while administrators are trying to figure out how to incorporate AI into their core requirements, I’m going to ask them to take a step back and consider incorporating theater into their core requirements. No matter what new technological wonder dazzles us five or ten years down the road, you’re still going to need a stage manager to work through the gremlins. 

*Early on, the M.C. said that the webinar had 5,000 registrants. Later, a guest remarked that he saw 800 people logged on. Maybe those early technical snafus thinned the herd…

Review of James D. Kirylo’s The Catholic Teacher

The school year has started, and a major election is less than two months away. Personally, I’ve been wrestling with how to address politics in my class. There are numerous, highly polarized issues that directly affect my students: gun violence, student loans, immigration, even a proposal to eliminate the Department of Education. These contentious issues can seep into classroom discussions and erode the trust necessary to create fruitful discussions and a safe learning environment. Looking for actionable suggestions, I turned to James D. Kirylo’s recently published The Catholic Teacher: Teaching for Social Justice with Faith, Hope, and Love. With its encouraging title and cover art, and chapters on COVID discourse, guns, “the sacredness of life” (i.e., abortion), and climate change, the book looks like a good starting point for difficult conversations. Unfortunately, it aspires to more than it achieves. 

Kirylo’s main argument is that educators not only can use their faith to guide their teaching but have a moral obligation to do so. Doing so, however, ought to be dialogic, not didactic. Kirylo writes that his book is meant to be “ecumenical, interfaith, and interreligious in tone. In that way, perhaps the text will be appealing to Catholics and non-Catholics alike” (1). He uses the next few chapters to establish a Catholic tradition of ecumenicalism, but here things quickly unravel. Kirylo’s optimistic tone lacks awareness of the complexity of the issues he discusses and even, in some cases, the Catholic Church’s complicity in these issues. This lack of self-awareness is illustrated well when Kirylo quotes Nostra Aetate: “Since in the course of centuries not a few quarrels and hostilities have arisen between Christians and Moslems, this sacred synod urges all to forget the past” (21). While it’s a nice sentiment, it ignores the reality of teaching in a classroom today that likely includes Jewish, Muslim, and Christian students whose lives are being affected by antisemitism, Islamophobia, a war in Gaza, and white Christian nationalism. Taken in this light, Kirylo’s calls for ecumenicalism ring as hollow as Nostra Aetate’s urge to “forget the past.” Instead of a serious proposal to open a dialogue about contentious topics in a pluralistic classroom, Kirylo seems to have a much narrower focus. He’s targeting practicing Catholics who are either unsure if they should take a stance on contentious topics in the classroom or want to take a stance but do not feel as though they have a mandate to do so. While this is a laudable endeavor, it’s a significantly narrower audience and purpose than the ones Kirylo states in the introduction. 

Once you realize this narrower purpose, Kirylo’s calls for ecumenicalism feel strangely hypocritical. The first two sections (nearly half of the book) lay a foundation for his purpose in Catholic theology and tradition. While this history lesson may motivate practicing Catholics to imitate their forebears, it fails to invite other people into the conversation. In other words, if the reader is not swayed by the arguments of Catholic synods and encyclicals, Sections I and II lack any real merit. Section III of the book finally looks at the contentious issues in question. But as the shortest part of the book, Section III lacks the research and thoroughness of the first two sections, simply stating clichéd positions on tired issues. Little effort is made to share alternate views, and Kirylo offers no suggestions on how to discuss these issues with people who hold differing positions. And again, Kirylo either misses or ignores what actually makes these topics contentious. 

In the case of abortion, for example, Kirylo directly mentions Catholicism’s outgrowth from Jewish theology without acknowledging that the majority of Jewish branches do not consider life to begin at conception (NCJW, Genet). Even his tepid dismissal of contemporary Jewish theology, “There is not a monolithic Jewish point of view” (93), acknowledges a multiplicity of views. How, then, should the Catholic educator engage with students whose views disagree and who want to retain their civil rights of bodily autonomy during a political era that seeks to take them away (Guttmacher)?

Then there’s the very germane question of school funding, something that would directly affect Kirylo’s audience no matter how narrow it is. Here Kirylo critiques “neoliberalism” for its efforts to defund “public K-12 education” (69) while ignoring the fact that many Catholics and even the United States Conference of Catholic Bishops push for voucher programs that would remove money and students from the supremely inclusive and democratic public education system. 

It’s this blindness or outright disingenuousness that frustrates me the most about Kirylo’s work. While I do believe that his narrow purpose of empowering Catholic teachers is genuine, the lack of critical thought and introspection results in more of a propaganda piece than an insightful work for teaching in a pluralistic, multicultural classroom. Maybe that’s not Kirylo’s fault. Maybe my expectations for the book were too high. Whatever the case, if you’re looking for something to help you with these difficult conversations over the next two months, you’re going to have to look elsewhere. 

P.S. For someone who handles difficult conversations well, I recommend Emmanuel Acho’s work. 

Uncomfortable Conversations with a Black Man (book)

Uncomfortable Conversations with a Jew (book)

Uncomfortable Conversations with Emmanuel Acho (YouTube)

And for much more thoughtful exploration of gun violence, I recommend Season 8 of Malcolm Gladwell’s Revisionist History.

Stranger Than Fiction

Here’s a screenplay pitch for you. Psychology professor by day, undercover cop by night. Not just an undercover cop, but an undercover cop posing as a hit man. At the very least, I hope you say, “Go on.” On top of that basic premise, this psychology professor is the kind of forgettable, mild-mannered dweeb you first imagined, but his hit man persona is a cross between Dirty Harry and John Wick. And as his college classes delve into questions about id and identity, he starts to wonder who he really is. When we get to the crisis moment, however, he’s no longer wondering if his true calling had always been undercover cop. He’s wondering if his true calling had been a real hit man.

That’s the premise of Richard Linklater’s Hit Man on Netflix. Seems like a surefire blockbuster. It’s even based on the true story of Gary Johnson, a psychology professor who moonlighted as an undercover cop for the Houston Police Department in the ’80s and ’90s. He was profiled by Skip Hollandsworth in Texas Monthly a few years ago. The problem, however, is that Linklater cleaved too closely to the source material, creating a decent biopic rather than a blockbuster movie. I want to briefly delve into what didn’t work. 

The big problem with this movie is that screenplays are complete fictions. While they may be based on real life, they are not real life. Real life is chaotic and absurd. Screenplays (hopefully) are not. Everything in a screenplay should be intentionally placed there to form a tight, cohesive narrative—a sort of logical argument. If A, then B; if B, then C; and so on until you arrive at a logical and, hopefully, fulfilling conclusion.

I’ll defer to Anton Chekhov here, who makes the point much more directly and cleanly: “If in the first act you have hung a pistol on the wall, then in the following one it should be fired. Otherwise don’t put it there.” The screenplay for Hit Man, however, includes numerous unfired pistols. These come directly from Hollandsworth’s source material, and while they are fascinating glimpses into the human psyche, they do not advance the narrative or Gary Johnson’s growth as a character. (Keep in mind here that Gary Johnson the character is different from Gary Johnson the person.)

Here are three bizarre, true anecdotes from Hollandsworth’s article that Linklater chose to keep in the movie (despite their irrelevance to Johnson’s story). The first is an example of how a person starts looking for a hit man: a seedy gentleman asks a stripper(?!) if she can recommend anyone. Second, Hollandsworth writes about a teenager who wanted to off one of his classmates (changed to his mother in the movie). Third, he relates a family inheritance dispute that ends with a brother forgiving his sister for trying to hire a hit man. The judge in the case grants her probation. All three of these anecdotes are crazy and fascinating, but I argue that you could cut them without losing anything. In fact, you’d have a better movie, and here’s why. 

Go back to that first paragraph I wrote. If this movie is exploring whether or not Gary Johnson wants to become a real hit man, how exactly does the high school student fit in? Why is it important for the audience to go to the strip club and eavesdrop on two characters (the seedy patron and stripper) who we’ve never met and will never see again? In short, it’s not. 

This is Gary Johnson’s story. Those scenes are deviations from that story. While they do show the diversity of clients that Johnson must deal with and his ability to read a variety of clients, we have another dozen characters in the “Undercover Gary Montage” that do the same thing. That makes those scenes unnecessary. Worse, it also makes them confusing. 

At the point where we meet the teenager, this movie has established that Johnson is working with a variety of unsavory people and becoming increasingly comfortable in his role as a “hit man.” If we, the audience, see a deviation from the norm that the screenplay has established for unsavory people (in this case because of age), the teenager becomes Chekhov’s gun. We expect the story to pivot at this point. There is a complication to the movie’s “ordinary world.” The protagonist must overcome this new obstacle and learn something about himself. But…he doesn’t.

The other two scenes operate in a similar fashion. While they’re both interesting anecdotes (How does someone find a hit man? What happens after you learn that someone hired a hit man to kill you?), they aren’t central to the plot. They don’t tell us anything about Johnson’s new identity. Instead they introduce irrelevant characters and locations. They are “pistols” that never get fired. In the logical argument that is Gary Johnson’s story, they are red herrings. 

It’s important to point out that Gary does have a meeting with a unique client that causes him to change his entire operation. Madison Masters is an attractive young woman trapped in an abusive relationship. Rather than going through with the sting, he suggests she get out of the relationship. He also begins to wonder if some people are, perhaps, legitimate targets for a hit man. That’s the crux of the movie. This is where things get interesting. This is, in fact, the inciting incident. Those other three anecdotes should have been left in Hollandsworth’s profile piece.

Additionally, the movie suffers from an overused, uninteresting voiceover. And Gary Johnson’s character fails to adequately differentiate his “real” self from his hit man persona. What are we left with? An okay biopic that takes a handful of major liberties with the source material. As Hollandsworth’s original article proves, the truth is often stranger than fiction. Good fiction, however, is far more focused than reality. 

Literature Review of AI Detectors

About eighteen months ago, I started to notice machine-generated text cropping up in student work. As a composition teacher, my immediate reaction was to ban it. Text generators have little role in the composition classroom; however, composition teachers had few options for accurately identifying machine-generated text. The basic concerns were that detectors were inaccurate and could produce false positives. In other words, they might flag human writing as machine generated, especially writing by non-native speakers. My colleagues and I put considerable effort into redesigning courses and disincentivizing students from using AI such as ChatGPT or Bard to complete assignments. I think these changes have improved our pedagogies. Having survived a school year with AI, however, I was curious how things have changed in the world of detecting machine-generated text. As of mid-July 2024, here is what I’ve found. 

Neither humans nor AI-detection systems can identify machine-generated text flawlessly. However, it’s worth noting that detectors are reaching a high level of accuracy, and they are performing better than humans. Looking at research abstracts, J. Elliott Casal and Matt Kessler found that reviewers had “an overall positive identification rate of only 38.9%” (1). Oana Ignat and colleagues found that humans could only accurately identify 71.5% of machine-generated hotel reviews (7). Their AI detector, however, was able to correctly identify roughly 81% of machine-generated hotel reviews (8). Writing in 2023, Deborah Weber-Wulff et al. found similar results when testing twelve different AI-detection programs. The best performers, Turnitin and Compilatio, approached 80% accuracy (15). Publishing this year, Mike Perkins and colleagues found Turnitin detected 91% of machine-generated texts (103-104) while human reviewers in the study only successfully identified 54.5% (1). Custom-designing an AI detector to find machine-generated app reviews, Seung-Cheol Lee et al. were able to achieve 90% accuracy with their best model (20). For longer texts, the accuracy of both human reviewers and AI detectors increases. Comparing full-length medical articles, Jae Q. J. Liu et al. found that both professors and ZeroGPT correctly identified 96% of machine-generated texts (1). (Note that GPTZero, a different AI detector, performed considerably worse.) However, the professors also misclassified 12% of human-written content as having been rephrased by AI (8). 

Notably, Weber-Wulff mentions that AI detectors tend to have few false positives. In other words, if the software is unsure whether a text was written by a human or a machine, it is more likely to classify it as human written (17). Turnitin, in fact, had zero false positives (26). Perkins, too, noted that Turnitin was reluctant to label text as machine generated. While it did correctly identify 91% of papers as machine generated, it reported only 54.8% of the content in those papers as machine generated, even though each paper was entirely (100%) machine generated (103-104). While this means a certain percentage of machine-generated writing will evade detectors, it should give professors some confidence that something flagged as machine generated is, very likely, machine generated. In another encouraging finding, Liu found that “No human-written articles were misclassified by both AI-content detectors and the professorial reviewers simultaneously” (11). 
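
To make that trade-off concrete, here is a toy sketch (my own illustration, not code from any of the studies above) of how a conservative classification threshold produces few false positives at the cost of more false negatives. The documents and confidence scores are hypothetical.

```python
# Toy model of a "conservative" AI detector: it only labels a text as
# machine generated when its confidence score clears a high threshold.
# When it is unsure, it defaults to "human," which keeps false positives
# rare but lets some machine-generated text slip through.

def classify(score, threshold=0.9):
    """score: the detector's confidence (0 to 1) that a text is machine generated."""
    return "machine" if score >= threshold else "human"

# Hypothetical confidence scores for four submissions.
submissions = {
    "unedited ChatGPT essay":       0.97,  # flagged correctly
    "lightly edited ChatGPT essay": 0.80,  # slips through (false negative)
    "formulaic but human essay":    0.60,  # spared (no false positive)
    "distinctively human essay":    0.10,  # spared
}

for name, score in submissions.items():
    print(f"{name}: {classify(score)}")
```

Raising the threshold spares more human writers at the cost of missing more machine-generated text, which matches the pattern Weber-Wulff and Perkins describe.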

There is one caveat, however. AI detectors may flag translated or proofread text as machine generated (Weber-Wulff 26). Once machines are introduced into the composition process, they likely leave artifacts that may be noticed by AI detectors. Strictly speaking, the detectors would not be wrong: machines were introduced into the composition process. However, most professors would find the use of machines for translation or proofreading to be acceptable. 

The studies I mention to this point were attempting to consistently identify machine-generated content, but a team of researchers led by Mohammad Kutbi took a different approach. Their goal was to establish consistent, human authorship of texts by looking for a “linguistic fingerprint.” In addition to detecting the use of machine writing, this method would also detect contract plagiarism (i.e. someone hiring another person to write an essay for them). This system achieved 98% accuracy (1). While not mentioned in Kutbi’s study, other scholars have found that certain linguistic markers maintain consistency across contexts (Litvinova et al.). For these and other reasons, I believe that linguistic fingerprinting holds the most promise in detecting use of AI in the composition process. 
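
Out of curiosity, here is roughly what the simplest version of that idea looks like: a minimal sketch in Python (my own, assuming scikit-learn and NumPy are available; this is not the model from Kutbi’s study). It compares function-word frequencies and average sentence length, two classic stylometric markers, between a student’s known writing and a questioned submission.

```python
# A bare-bones stylometric "fingerprint": function words (the, of, and...)
# carry little topic content but a lot of personal style, so their relative
# frequencies, plus sentence-length habits, stay fairly stable per writer.

import re
from collections import Counter

import numpy as np
from sklearn.linear_model import LogisticRegression

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it",
                  "is", "was", "i", "for", "on", "you", "with", "as"]

def style_features(text):
    """Relative function-word frequencies plus average sentence length."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    freqs = [counts[w] / total for w in FUNCTION_WORDS]
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    avg_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    return freqs + [avg_len / 40.0]  # crude scaling for the length feature

# Hypothetical training data: the student's known in-class writing (label 1)
# versus samples from other writers (label 0).
known_student = ["First in-class writing sample by the student...",
                 "Second in-class writing sample by the student..."]
other_writers = ["An essay written by somebody else entirely...",
                 "Another essay from a different writer..."]

X = np.array([style_features(t) for t in known_student + other_writers])
y = np.array([1] * len(known_student) + [0] * len(other_writers))
model = LogisticRegression().fit(X, y)

questioned = "The take-home essay whose authorship is in question..."
p = model.predict_proba(np.array([style_features(questioned)]))[0, 1]
print(f"Probability the essay matches the student's fingerprint: {p:.2f}")
```

A real system would need far more text per writer and far richer features, but the appeal is the same: instead of asking “Does this look like a machine?”, it asks “Does this look like you?”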

It’s also worth mentioning that participants in Liu’s study took between four and nine minutes to make a determination about whether or not an article was written by a human (8). In this situation, AI may actually aid professors by reducing the time they need and increasing the confidence they have in determining whether or not a text was machine generated. 

To briefly summarize:

  • Both humans and AI detectors are prone to error
  • AI detectors are generally better, and in some cases significantly better, than humans at identifying machine-generated text
  • AI detectors are fairly conservative in their classification of text as machine generated

Considering these points, I believe that at the current time, instructors should use AI detectors as a tool to help them determine the authorship of a text. According to Liu and colleagues, Originality.ai is the best overall AI detector and ZeroGPT is the best free AI detector (10). While not as accurate as the preceding tools, Turnitin deserves mention because it did not have any false positives in multiple studies (Liu 6, Weber-Wulff 26). Of course, as with any tool, these detectors need to be used with discretion and with a consideration of the bigger context of a work. I plan to write another post considering some common flags of machine-generated text. 

Ceci Continue de ne pas Être une Pipe

Photo manipulation is nothing new, and for anyone growing up in the digital age, Photoshop has morphed from a proprietary digital editing program into a verb. Taking it a step further, Google Pixel’s Magic Editor puts Photoshopping right into the palm of your hand. The smartphone app allows you to move, resize, or even delete items from an image. Not happy with what your camera captures in the first place? Canva, Dall-E, Firefly, and other software let you conjure up any image you can verbalize. Don’t feel encumbered by your lack of experience or, for that matter, reality. We used to say, “It’s only true if there’s pictures.” That’s certainly not the case anymore. But was it ever?

In the commercial and theatrical world, professional filmmakers intentionally alter reality. They use all kinds of tricks to make things intimidating, pretty, ugly, or endearing. Makeup, costumes, and lighting turn a perfectly charming Emilia Clarke into the Mother of Dragons.

But these tricks are still in play even if you don’t use them intentionally. Here are two pictures of a friend of mine at a mud run.

[Two photos from the mud run: IMG_8313 and IMG_8322]

In the first, you can clearly see the falling rain. (No, those aren’t orbs…) In the second, you can see that it is still raining if you look at the water. But the change of angle and focal length means you can no longer see the falling rain. (This is the same reason why it sometimes looks like it’s barely sprinkling at rained-out sporting events.) I didn’t hide the rain intentionally, but that’s the reality of the situation.

The important thing to remember is, consciously or not, all media is an interpretation of reality. When you’re on vacation and take a picture, it captures some part of the moment, but it isn’t a recreation of the moment. You’re limited by the abilities of your camera. You choose to photograph the Grand Canyon not the parking lot next to the Grand Canyon. You crop out the guy wearing that ridiculous Hawaiian shirt. And what about the people in your photo? Are they like my nephew, who for two years refused to smile any time someone pointed a camera at him? Or do they ham it up for the camera in the hopes of becoming an internet star?

This is nothing new. Film has been an interpretive art since its inception. Below are two of the earliest war photographs ever taken. These are from the Crimean War in 1855, twenty-three years before the first movie was made.

[Two photographs of the cannonball-strewn road: canonball2 and canonball 1]

Both show a scene of desolation strewn with cannonballs. But it’s the second one that would make photographer Roger Fenton famous. Simply put, it’s a more striking photograph. Fenton wanted to show the horror and destruction of war, but he was restricted by his cumbersome photographic equipment. His solution? Move the cannonballs onto the road to take advantage of the high contrast. Although it was much more labor-intensive than Photoshop, it’s the same basic principle. He altered his photo for effect. He sought a deeper “Truth” that wasn’t reflected in “reality.”

In the late 1920s, surrealist painter René Magritte created this thoughtful painting called “The Treachery of Images.”

[Image: René Magritte’s The Treachery of Images]

If you don’t know French, the text reads, “This is not a pipe.” Of course it’s not a pipe. It’s a painting. It represents a pipe. Our brains conceive of it as a pipe, but it is not a pipe. The same holds true for all media.

A definition might be helpful here. Media is the plural of medium. The two definitions that come to mind are medium (size), as in the size between small and large, and medium (fortune teller), as in someone who communicates between the living and the dead. Both contemporary definitions share the same Latin root, medius, which simply means “middle.” It shouldn’t surprise you that our news sources are generally referred to as “the media.” They are the middlemen. They transport ideas from the source to us. But along the way, they must interpret them.

Media is also used in the art world to describe the material that an artist uses. You may see the phrase “mixed media on canvas.” The medium might be oil paint, latex paint, clay, canvas, silk, steel, analog audio recording, digital video recording, computer programs, or even food. The point is, the artist interprets the world through this medium. Film, as an artistic endeavor, is its own medium. But never forget that the six o’clock news, the news radio traffic report, and the Wall Street Journal all operate in artistic mediums. To explain in more detail, I’d recommend listening to an episode of Malcolm Gladwell’s Revisionist History podcast from 2017.

In it, he discusses this famous photograph from Birmingham in 1963.

[Photo: Bill Hudson’s 1963 Birmingham photograph]

It seems to show a police officer unleashing his dog on a black protester. But it doesn’t. The man in the photograph wasn’t part of the protest. He wasn’t a “foot soldier,” as the Civil Rights activists called themselves. And the police officer hadn’t unleashed his dog on him. If you look closely, both men seem surprised, and the police officer is leaning back, trying to pull the dog away. But that’s not what the nation saw.

This photo highlighted the brutality of the Jim Crow South. It represented the discrimination, the institutionalized hate, and the lynchings. It shifted public opinion to the side of the Civil Rights movement, and it was all done on purpose. Bill Hudson, the photographer, chose this picture over the hundreds of other photos he had taken that day. The editor of The New York Times chose to put this story above the fold rather than any other news of the day. This is what a medium does. It takes the raw data, curates it, interprets it, and disseminates a cohesive message. In doing so, a medium must disregard data that fails to support its message or obfuscates its position. In its quest for “Truth,” it must necessarily deviate from “reality.”

To further clarify, look at Ronald S. McDowell’s statue inspired by Bill Hudson’s photograph.

[Photo: Ronald S. McDowell’s statue inspired by Hudson’s photograph]

The figure representing Walter Gadsden, the student, is considerably younger and shorter than he was in reality. The police officer is emotionless and inhuman, reminiscent of the T-1000 from Terminator 2: Judgment Day, which came out four years before the sculpture was dedicated. The police dog’s mouth is wide open, baring vicious, anatomically improbable fangs. But remember, this is a piece of art. This is not a pipe.

The push and pull between film as art and film as documentation will never end. But as a filmmaker and consumer of media, it’s important to acknowledge that film is a medium. It is not reality. There is no magic bullet, no enforceable code of conduct, no ten commandments of filmmaking that will ever make film purely objective. Reality is reality. Film is film. The best thing you can do as both a filmmaker and a consumer is educate yourself.

It’s important to learn about technology, to learn what is possible and how to spot a fake. But it’s also important to learn about art. You know—art, that thing that gets cut when we want to tighten school budgets. Learn about artistic conventions. Learn to read the meaning behind how a frame is composed, how set decoration reinforces the theme, and how story arcs are constructed. Understand that models and movie stars are just people, too. Play with AI. Appreciate the complex and tragic story of Aaron Copland and Fanfare for the Common Man (examined here in another great podcast). See the allegory between Game of Thrones’ White Walkers and climate change. Learn how Proust’s understanding of memory preceded neuroscience. Discover the 1920 play that introduced the word robot and the idea that robots are out to kill us. And, above all, recognize that just because you see a video of something doesn’t mean it’s reality. In fact, it’s not reality. It’s just a video. And while there will be a degree of objective truth in it, there will always be a degree of the artist’s truth. After all, it is a video. It is not a pipe.

*NOTE: An earlier version of this blog appeared on this site in 2017. This version has been updated to reflect technological advances, specifically AI.

Stop Saying “Fast Forward.”

Over the past few years, I’ve noticed people using the phrase “fast forward” to indicate a passage of time. While I acknowledge the dynamic nature of language and try not to ride a high horse about “proper” English, I do find this phrase particularly jarring and troublesome. I’d like to take a moment to explain my concerns and, I hope, encourage you to think twice before using the phrase.


Here’s a pretty typical example from Forbes. “I served as a translator for both language and culture over the years and gained a deep appreciation of the challenges of navigating caregiving, education and culture. Fast forward to graduate school: My interest in supporting child well-being led me to become interested in better understanding policy.”

For starters, there’s an issue of point of view (POV). In case you forgot POV from English class: first person = I; second person = you; third person = he/she/it/they. “Fast forward” shifts the point of view of a story. Most stories are told in first or third person. So if you say “fast forward,” who is doing the forwarding? 

If no subject is identified, “fast forward” operates in the second person POV with “you” understood. In other words, when I say, “Call me later,” I’m really saying, “You call me later.” So in the example above, who’s fast forwarding? You aren’t telling the story. The phrase makes much more sense when the subject of the sentence takes clear ownership. “Let me fast forward to graduate school.” “Can you fast forward to graduate school …” But if I have control of a story, why are you the one fast forwarding?

Then, there’s an issue of redundancy. “Fast forward,” used to indicate a passage of time, is often used in conjunction with another phrase used to indicate a passage of time. For example, “But relevance wasn’t the point — this was all about toughness. Fast-forward to May 14, when 10 people were gunned down at a Tops supermarket in Buffalo, New York” (MSNBC). Or “…President Donald Trump took a few steps in to North Korea and spoke about his friendship with that country’s leader, Kim Jong Un. Fast-forward almost three years. President Biden is in Seoul, emphasizing his friendship with new South Korean President Yoon Suk Yeol” (NPR).


In both of these cases, fast forward is redundant. It is literally a waste of breath. The authors could just write “On May 14” or “Today.” Both choices are shorter and convey the same information. Brevity is a skill. Why use up your audience’s mental bandwidth on something they don’t need? If you can delete something, do it!

That being said, many writers will sprinkle in phrases to help set a theme. If you’re talking about movies, why not use “fast forward” as your time transition? And bits of jargon have been weaseling their way into our everyday language for centuries. In my introduction, I mention horses even though this essay really has nothing to do with horses. Below, I allude to plants. Considering the prevalence of video in today’s world, we can’t exactly prevent a phrase like “fast forward” from taking root. But there is a good reason I would caution against it. 

Hollywood has done a great job convincing us that love at first sight is real, “smoking guns” exist, passionate speeches change people’s minds, and there’s always a parking spot directly in front of a courthouse. If you use the veracity of that last example to measure the other three, you can see how absurd some of these propositions really are. We don’t live in movies. We can’t fast forward and rewind at will, and we need to stop thinking that we can. “Sure,” you might say. “That’s a problem for tween influencers who want to star in reality shows. But I can tell the difference between reality and the movies.” Respectfully, I disagree. 


Think back, if you will, to the early aughts, when one of the most powerful countries in the world invaded a sovereign nation under false pretenses. At that time, a paranoid Bush administration justified its torture of detainees not through psychology and jurisprudence but through the Fox television show 24. Slate noted the troubling argument way back in 2008. Politicians, judges, and intelligence operatives were basing their actions on a fictional television show with real-world ramifications. It’s hard for me to believe that fourteen years later, with the ubiquitous use of smartphones and social media, our psychology has become less entwined with fiction, fantasy, and technology.

I need to reset here, briefly, because I am a fan of fiction. Fiction helps us explore questions and ideas that would not be accessible in a purely fact-driven world. Fiction helps us develop empathy. Understanding fiction helps us understand reality. But fiction is merely an analogy. Fiction and virtual worlds are not the same thing as flesh and blood, and I think it is incumbent on us to keep those lines distinct.

As we spend more time in the virtual world, manipulating images, audio, and video like gods, we need to keep the reality of our existence in mind. We can’t photoshop ourselves to be sexier, edit our conversations to remove faux pas, or fast forward our way through a traffic jam. I think acknowledging that fact, even in a small way, will lead us to accept the world we really live in and do our best to make this world a better place.

Guns Don’t Kill People, Toxic Individualism Does.


After Alec Baldwin accidentally shot and killed cinematographer Halyna Hutchins on the set of his movie Rust, many people have been wondering why films even use real firearms. After all (spoiler alert), Star Wars doesn’t use real lightsabers and Jurassic Park doesn’t use real dinosaurs. Why should westerns and cop shows use real guns?

Largely, I agree with this argument. Firearms present an unnecessary risk on a film set. But if we ban firearms in films, we’re avoiding the bigger issue, and it’s not just a problem in Hollywood.

I’ve observed a pervasive attitude in the United States that rules are for suckers, regulations only exist to hinder progress, and anything is legal as long as you don’t get caught. This is a shortsighted, toxic attitude. It’s the kind of thinking that led to nine people dying at the Astroworld music festival last week and eleven people drowning in illegal basement apartments in New York during Hurricane Ida. It’s the same narrative that has led to injuries at Tesla’s Gigafactory 1 and, before that, its plant in Fremont. It’s the same kind of thinking that killed Sarah Jones on a movie set in 2014. (No firearms were involved in that incident.)

I think this general attitude can be appropriately described as “toxic individualism.” It’s a belief that I have a right to say and do whatever I want at any time I want without consequences. It’s a belief that rules don’t apply to me. It’s a belief that personal choices are not influenced by social constructs, nor do they affect the people around us. To be clear, individualism itself is not a negative concept. Some degree of personal independence is healthy and rewarding. But extreme individualism at the expense of everything else—individualism that tramples on other people’s rights—is downright deadly.

The issue on the set of Rust was not that the filmmakers were using firearms; the issue was that they were not following well-established guidelines for handling firearms. I’ve been on sets with explosives, guns, helicopters, boats, pyrotechnics, car crashes, fight scenes, and hundreds of extras wielding swords. I was perfectly safe on all of them. The most dangerous sets I’ve been on are the ones where production rushed the crew, ignored the safety recommendations of more experienced crew members, or flouted industry standards altogether. There is nothing clever, artistic, or thrifty about putting people’s lives at risk.

I’m not opposed to banning firearms on set, but that alone is not going to solve the problem. We need to disabuse ourselves of the idea that the rules only apply to everyone else. The rules only work when they apply to everyone. And we need to start calling out our colleagues and employers who think they can cut corners and take shortcuts. It takes guts. Reports from the set of Rust state that crew members walked off the job shortly before Hutchins was killed. It’s not easy to stand up to Alec Baldwin or Elon Musk. It’s going to take a sea change in American culture for worker safety to take priority over profits. Fortunately, we don’t have to do it alone.

Individualism may be American, but so are unions. Unions built this country. They led the fight for the weekend, overtime pay, minimum wage, health insurance, and banning child labor. If the crew of Rust had been following IATSE’s firearms regulations, Halyna Hutchins would still be alive.

Eliminating toxic individualism will not be quick or easy. Like anything that’s worth doing, it will take time. And it’s important to recognize that you aren’t alone in the fight. Educate yourself about your rights as a worker and a consumer, participate in the processes that negotiate these rights, and reach out to the unions and organizations that are trying to make America a better place for everyone.

Should we ban firearms on set? Sure. But while we’re talking about it, let’s talk about the root of the problem, as well. Let’s ban toxic individualism, too.

The Death of the Artist

In his book The Death of the Artist, William Deresiewicz laments the decline and fall of the blue-collar, professional artist. And while he unpacks a variety of legitimate and terrifying issues, such as the unravelling of historic institutions and the job-gobbling monster that is big tech (problems that affect everyone, not just artists), I feel like he misses a certain perspective about the motion picture industry. While the industry manages to sidestep many of the issues plaguing other artistic endeavors, it’s not avoiding them altogether. Because it’s a complex, multi-layered situation, I think it might be instructive to look at the motion picture industry through three specific lenses: technology, art, and business. 

Technology (What is film?)

Deresiewicz differentiates between television and film, but it’s an arbitrary distinction. With the exception of the live or live-to-tape multi-camera shoot, production crews make feature films, television shows, and used car commercials the exact same way. Only a decade ago, we filmed thirty-second lottery commercials on 35mm Kodak. The question of “what is a film” has less to do with being “filmed” than with how the content is delivered to an audience. 

In that regard, feature films suffer from a major constraint: they need to be long enough to justify the price of admission but short enough to satisfy an audience before their legs fall asleep. To do that, many films rely on tropes and cliches to keep a story moving forward. The boom in quality television over the last decade has allowed filmmakers to explore more interesting stories in more depth than they ever would have been able to on the silver screen. 

Are movie theaters dead? Well, not quite. Some nostalgic urge to hit the town and see a show will linger indefinitely. Movie theaters have yet to kill live theater, and a black and white silent movie won the Oscar for best picture in 2011. I can guarantee that ninety-minute visual storytelling will live on. It is true, however, that certain low-to-mid-budget genres are not currently profitable. Nevertheless, I’m not fully convinced that it’s a bad thing or that the trend won’t change.

Art (Are filmmakers artists?)

Deresiewicz defines four paradigms of artist: artisans (or craftsmen), bohemians, professionals, and producers. It’s the professionals—working artists who own houses and have dental plans—that Deresiewicz is most concerned about in his book. Chapter after chapter outlines how writers, painters, and visual artists fight for the crumbs of an ever-shrinking pie while struggling to find time to develop their art. And yet, in 2021, television and film are actually doing okay. 

One of the big things I need to point out here is that television and film straddle the worlds of art and commerce more than other industries do. True, you’ve probably heard of writers who cut their teeth in the newspaper industry (when that was a thing), but very few renowned painters started off whitewashing fences. 

On a film set, any given crewmember may have spent the previous day filming a television show or commercial. To the disappointment of most crewmembers, that often means they’re capable of delivering a much higher quality product than the used car company requires, but it also means that art and commerce move around freely in the same space. Similarly, scenic painters, carpenters, costumers, and camera operators are highly educated, incredibly talented artisans operating at the top of their game. Not only do they need the vision to offer their own artistic input, they need to be able to shift gears to cater to someone else’s vision or mimic a historical style. 

In that way, filmmakers really match Deresiewicz’s first paradigm—the artisan—and I think it’s a good model to follow. Although it really doesn’t matter to the IRS, Deresiewicz’s paradigm poses an interesting question: “Are filmmakers artists?” That’s hard to say. If Deresiewicz is looking for talented individuals who work in a creative discipline and can afford middle class lifestyles, then yes. We’ve found a winner. But if you define artists as individuals who create things that make you question and challenge the world… well maybe not. The two aren’t mutually exclusive, but the latter is far less economically viable.

It’s also worth pointing out that filmmakers have always been “gig” workers. A crewmember (even a director) may have multiple employers in a single week. By and large, the thing that enables filmmakers to buy houses and get dental plans is unions. As people try to cobble together livings working for Uber and DoorDash, I can’t stress enough how beneficial it would be to unionize. 

Business (Are moving pictures safe from a flood of amateurs?)

On page 220, Deresiewicz states, “Film and television have a final advantage over arts like music or writing. Amateurs do not pose any threat because no one is ever going to mistake what they do for the real thing.” I have to disagree with him there. As technology has decreased cost, it’s become easier and cheaper for people to produce video content. Whether their productions can be considered art or even “feature films” is another matter. Birdemic is a prime example. 

If you think I’m being dramatic, you haven’t noticed how much content Gen Z watches on YouTube. Poor production quality has become synonymous with verisimilitude, and young viewers have managed to lower their standards below even “reality TV” quality. True, in the world of fiction, no one’s going to mistake Tommy Wiseau for the next Spielberg, but it’s a troubling sign if you recognize the names Tommy Wiseau or Birdemic.**

The bigger problem here is that as audiences accept lower quality, they refuse to pay for higher quality. Consequently, production companies refuse to pay as well. Just earlier today, I was speaking with a coordinator who lamented that the latest money-saving trend is to not hire location managers. And after a year of looking at everyone’s terrible lighting skills on Zoom, I’m afraid the bar for quality has been irreparably lowered. 

Where does that leave us?

Although not artists in the Van Gogh or even the Andy Warhol sense, filmmakers do work in creative fields, and they can make a decent living. Artisans produce work that is beautiful and functional. Their work may be thought-provoking but is seldom a “think piece.” In other words, film—and all twenty-first century art—needs some utility or usefulness (see The Death of the Artist pages 272-273). Within this paradigm, artists are producers. I like this concept. It’s more democratic and egalitarian than the concept of elite geniuses sprinkling culture to the plebes. Artists are useful members of society who produce goods that can also be beautiful and thought-provoking. 

Consider the gorgeous pattern on this 1100-year-old Peruvian tunic. It is beautiful and useful, and I doubt that the person who made it had an MFA. Source: https://museum.gwu.edu/indigenous-american-textiles

That being said, our society is continuing to devalue labor and expertise. There’s no easy fix for this, but there is, perhaps, a silver lining. Art, throughout the ages, has always helped humanity cope with change and reframe tragedy into something that we can—if not understand—at least articulate. In the twenty-first century, art is not only doing this job metaphorically but literally instructing us on how to make a new economic paradigm. If you have a chance, check out Deresiewicz’s book. And if not, at least take a moment to check out some art. 

**In many ways, Canon’s 5D Mark II, the first DSLR camera to shoot full high-definition video, marked a depressing turning point. In 2009, every film school grad with $3,000 suddenly thought they were a director of photography. Today, the image quality and editing ability of a smartphone are more advanced than the professional digital equipment I used in the early aughts. But to reiterate a point Deresiewicz makes over and over again, just because someone can paint or film or sing does not mean they have the professional experience or artistic eye to be an artist.

Pedagogy. Or is it pedagogy?

On #WorldEmojiDay, I’d like to talk about…pronunciation. Last year, I returned to school to study creative writing. Since most people with MFAs actually end up teaching English, Drexel University offered a writing instruction or “pedagogy” class. And it made me wonder, how exactly do you say pedagogy?

I’ve heard everything from PED-a-GOG-y (rhymes with doggy) to PED-a-GO-gie (rhymes with hoagie) to PED-a-GOD-gy (rhymes with dodgy) to PED-a-GO-gee (rhymes with Emoji—which is why I was thinking about this today). I heard that last pronunciation most frequently, but it kind of bothered me. 

In my Merriam-Webster’s Rhyming Dictionary, the only words that rhyme with Emoji are Moji, shoji, anagoge, and Hachioje. Moji, shoji, and Hachioje are all Japanese and spelled with a “j.” As a native English speaker, “pedagoji” would not have been my first guess. Anagoge is Greek, like pedagogy, but it is spelled differently. Also… I’ve never heard of it. 

But looking at the spelling and word origin, the similar Greek words, synagogue, demagogue, and even pedagogue, are pronounced with a hard “o” and spelled “gue.” Since pedagogy and pedagogue have a shared origin, “pedagoggy” does make some sense… although it sounds completely ridiculous. Seriously. Say it out loud. 

That brings me to my initial guess. In the academic world, we study -ogies: biology, mythology, gerontology, psychology, geology, Egyptology, immunology, hydrology, chronology, neurology, archeology, et ceterology. Here the “o” has the schwa or “uh” sound, not the “oh” of emoji. It seemed to be the obvious choice to me. Psychology. Pedagogy. The study of teaching should rhyme with the study of the psyche or myths or gerons… or whatever. But that -ology comes from the Greek logia (study) or logos (story or word) while -agogy comes from the Greek agogos (guide). So maybe the unique pronunciation is an important distinction. 

(Sidebar: I don’t see why anyone would go for pedagoagie.)

But that brought me to one of the more important lessons of modern English pedagogy. Enforcing pronunciation, spelling, and grammar rules is actually a form of oppression. “Proper” English is a way of separating groups. It tells you where someone came from. It tells you if English was their first language. It tells you who had enough money to go to college and who had enough free time to study English. The reality is, most people in the world get along just fine with double negatives, dangling modifiers, and frequent switches in verb tense. A lot of the time, they don’t even use words. 😉

How you deploy English has less to do with what’s “correct” than with what group you want to be a part of. English “rules” are actually guidelines that are very audience specific. Grant proposals, ad copy, emails, news reports, social media feeds, and blogs all follow slightly different rules. The most “well written” research paper in the world will not sell more copies of The Hollywood Reporter.

That can make English pedagogy a little more nuanced than your sixth grade English teacher may have claimed. There are trends and best practices, but the rules are actually kind of fluid. As far as the pronunciation of pedagogy goes, if you want to fit in, pronounce it however the cool kids do. You can justify pretty much any version. But if you want to be a bastion of liberty, forge your own pronunciation. Maybe that’s the best argument for pedagoagie.