Using AI as a co-author?
How has generative AI changed your work? Here's what our community has to say.
We asked our community how their work has changed with AI co-author tools. The consensus: output from OpenAI's ChatGPT, Google's Bard, and other AI co-author tools shouldn't be used verbatim in the writing process. AI can suggest an outline or an approach to a topic, but that isn't very different from reading articles by other writers to get a feel for how to write about the topic on our own.
Katie Sanders says:
I've played around with ChatGPT for headline brainstorming and it is helpful. However, beyond asking for ideas to get my own brainstorming juices flowing, I wouldn't let it write an article for me. If I'm blocked and need a starting point, I'll use it as a tool.
Ultimately, the human perspective is what will set copy written by actual people apart from content created with generative AI. As with most things in life, moderation is key!
Lauren Pritchett writes:
I don't yet trust the popular generative AI tools for gathering information, but I have had fun experimenting with them for productivity hacks! For example, ChatGPT has helped me distill the information I feed it into key takeaways.
When writing articles for my personal blog, I record my thoughts with an audio app on my phone, then ask ChatGPT to turn the transcript into a blog post. The more of my own content I feed it, the better it gets at capturing my personality and tone.
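Lauren describes this workflow through the ChatGPT interface, but the same idea can be scripted. Here is a minimal sketch, assuming the official openai Python package, an OPENAI_API_KEY set in the environment, and a transcript already saved to a file; the model name, file path, and prompt wording are placeholder assumptions, not Lauren's actual setup:

```python
# Sketch: turn a spoken transcript into a blog post draft.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY are set.
from openai import OpenAI

client = OpenAI()

with open("transcript.txt") as f:  # text exported from the audio app
    transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system",
         "content": "Turn this spoken transcript into a blog post, "
                    "keeping the speaker's voice and tone."},
        {"role": "user", "content": transcript},
    ],
)

print(response.choices[0].message.content)
```

As Lauren notes, the more of your own writing you include alongside the transcript, the closer the draft lands to your voice.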
Robin Bland is planning to leverage AI where it makes sense:
I'm experimenting with it, but I am careful not to use any of the text it generates for me. I'm super cautious about that. I'll ask ChatGPT to describe something that I need to write about, and I'll use that as a starting point for my own article.
That's not very different from starting a writing project by reading what others have written about similar topics. Once I see how they structure their articles on a topic like mine, I can usually jump in and write my own version.
Jim Hall adds:
Like Robin, I'm very careful not to copy/paste from ChatGPT. I find that ChatGPT is still quite limited. The AI doesn't really understand what it's writing about; a system like ChatGPT is trained on a large body of text that real people have written, then generates new text based on probability. Using the context of the question and the text it has already generated, it decides (I'm oversimplifying) on the next word or phrase based on what is most likely to follow. (See the sketch below for a toy illustration.)
Technical communicators need to be very careful about using ChatGPT directly in their work. I wrote a book with ChatGPT, and I learned that it sometimes makes large, obvious errors, and sometimes small, subtle ones. If you copy/paste directly from ChatGPT and claim it as part of your writing, you need to fact-check everything yourself.
It's still early days for generative AI, but I'm encouraged by ChatGPT and other AI systems. Set the clock ahead five or ten years, and using an AI co-author will be second nature. By comparison, think back to when Microsoft Word first highlighted misspelled words with a red squiggle. At the time, everyone thought it was amazing; today, you don't think about it. We'll get there with ChatGPT too.
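Jim's oversimplification can be made concrete with a few lines of code. Here is a minimal sketch of probability-driven next-word generation: a toy bigram model in Python, trained on an invented miniature corpus. Real systems replace the lookup table with a neural network over far longer contexts and vastly more text, but the core move, picking the next token by likelihood, is the same:

```python
import random
from collections import defaultdict

# Invented miniature corpus, purely for illustration.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which words follow which: duplicates in each list
# act as frequency weights for random.choice below.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=8):
    """Repeatedly pick a next word at random, weighted by how
    often it followed the current word in the training text."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"
```

The output is fluent-looking but understands nothing, which is exactly Jim's point: it is text chosen because it is statistically likely, not because it is true.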
Seth Kenlon hasn't felt the impact yet:
Not at all.