Quick reminder that we’re hosting our first mailbag in a few weeks! Got a question for me? Email it to bmoritz99 at gmail, or just reply here. It is an AMA situation so have fun with it!
Nobody is going to be satisfied with this post.
If you’re anti-AI, if you nodded as you read along with Sally Jenkins’ recent column, you’re going to be disappointed that I don’t share your disdain for it.
If you’re an AI evangelist, if you think this is truly transformative technology and is vital to the future of higher education and journalism, you’re going to think I’m an old man yelling at a cloud.
And I’m OK with that. Because thinking about AI as a journalism professor, a writer, a podcaster, a parent, an educator, is complicated and complex business. The answers shouldn’t be easy or obvious, and they shouldn’t confirm our existing biases. As you can probably tell, this is one of those posts that is less a definitive take and more me working through stuff, riffing about a topic that is everywhere in all of my world.
Let’s start here: I’m a writer who teaches writing and who also hosts a podcast. Of course I’m bringing a skeptical mind to any conversation about generative AI technology like ChatGPT. One of the end results of these technologies is that my industries go away, that I lose my job and all of my students lose their jobs. I’ve also seen students - in my classes and others - use ChatGPT as a shortcut to do work for them, a way of cheating the assignment and themselves.
And it’s hard not to read stories like the one in the New Yorker about how widespread ChatGPT use is among college students, or the one in the New York Times about how men using ChatGPT have been driven to the depths of mental illness and anguish, or stories like Amanda Guinzburg’s harrowing conversation with ChatGPT, and want to just fire all of these technologies into the sun.
I get it. I truly do.
But I can’t go all the way there.
For one thing, so much of the writing and the stories and the discourse around AI is starting to have real Moral Panic vibes about it. We talked this way about social media. We talked this way about Google. We talked this way about the internet. We talked this way about Cable TV and regular TV and radio and 45s and video games. Yes, there are differences, but any conversation about the evils of a piece of technology is always going to make me pause.
But there’s one question that keeps me from throwing generative AI technology into the sun.
It comes from one of the technology teachers in my local school district, who’s one of the leaders in figuring out how to best use these platforms in our schools.
Here’s the question:
If we don’t teach our kids how to use AI, who will?
That’s a powerful question.
It reframes this entire issue.
The students in that New Yorker article who are “cheating their way through college” by using ChatGPT, the students in my classes who have used it, weren’t taught how to use AI in a responsible, meaningful way that is focused on human connection and ethical behavior. They found it on their own and figured out how to use it to game the system.
Let’s assume for a second that these technologies and platforms are going to be here. In fact, as Bomani Jones often says on his podcast, this right here is the worst iteration of AI we’re ever going to see. So these tools exist and are going to continue existing and probably get more powerful. Throwing them into the sun is a pipe dream, the way we all dream about chucking our phones into the ocean but never do.
If we assume that AI, in one form or another, will be a part of our technological lives in the future, then we have a responsibility to teach our students how to use it in ethical, responsible ways.
This is not like the Metaverse, a truly dumb idea that no one really wanted and that had no practical applications. Generative AI has real-world applications. People can use it in their jobs and in the real world.
One way I think about it is by separating the tools from the industry.
What we hear a lot about in AI is really hype from the AI industry. And from what I have read, the current AI industry is flaming hot garbage, combining all of the worst Move Fast and Break Things, Scale Things 1000X impulses of the last 20 years into one series of programs that, as of right now, don’t feel ready for Prime Time. That’s the heart of the criticism Sally Jenkins and others have leveled: it’s less about the tools themselves, more about how we are being sold these tools.
But if you disregard the industry and look at the tools themselves, there is a lot of promise and potential there. They’re not a substitute or a replacement for the hard work involved in writing and thinking and creating, but there are genuinely useful ways to put them to work. For all the online hand-wringing this New York Times piece caused this week, it describes some sensible and useful ways skilled and expert researchers can use a program like ResearchLM in their work.
As always, I turn to . Last year, he wrote: “What if we quit treating AI expansion like an unstoppable force of nature and instead treated it like a new, untested set of tools that can be deployed faster or slower, better or worse?”
In the hands of a subject-area expert, a skilled journalist, AI tools can be extremely useful. The fear, though, is when they are not in the hands of a skilled journalist or subject-area expert. And the thing we can do to allay that fear is to teach our kids how to use these tools in an ethical, responsible way.
If we don’t, who will?
I think a big part of the issue is how AI is sold to us. The people in charge of it are telling us it's so much further along and so much more revolutionary than it actually is, at least in editorial and content-related fields. If they could be more honest about their applications, perhaps we could have a normal conversation about it. But when we're told it's this be-all-end-all technology and then we use it and it's full of errors and bad writing, we're left with a double bind of disappointment and frustration. (It also doesn't help that the people trying to sell AI to us don't have a great track record when it comes to the value-add of their previous efforts.)
If I don't use AI to think for me, how will I be able to generate pithy insights like this on Substacks and message boards:
Generative AI has the potential to enhance online discussions by providing thoughtful, well-researched, and contextually relevant contributions to Substack. By leveraging vast amounts of data and advanced algorithms, AI can assist users in articulating their thoughts more clearly and effectively, fostering richer conversations.
Moreover, generative AI can help moderate discussions by identifying and addressing misinformation, promoting respectful dialogue, and ensuring that diverse perspectives are represented. This technology can serve as a valuable tool for users seeking to engage in meaningful exchanges, ultimately enriching the online community experience.
As we navigate the complexities of digital communication, embracing generative AI can lead to more informed, constructive, and engaging interactions on internet message boards.