An introduction to "AI as co-author"
A primer on using AI as a co-author in the technical writing process.
The concept of “AI” is not new. Computer science researchers have examined and developed AI systems for decades. In the 2010s, one common use of “AI” was machine learning: systems trained to categorize input (for example, identifying subjects in photographs or video) or to translate text from one language to another. As French and Shim (2024) observe, “AI is often used interchangeably with machine learning, neural networks, and deep learning, [but] there is a broader spectrum of tools that fall under the AI umbrella.”
Generative AI
The most notable AI breakthrough in recent years has been the advancement of Natural Language Processing through Large Language Models, which can produce output that mimics human-generated content quite well. This technology is also called Generative AI (“Gen AI”), and it is increasingly used as a “co-author” in technical and professional writing.
In a report published in September 2025, Microsoft researchers find that “information gathering, writing, and communicating with others are the most common user goals in [Gen AI] conversations” and that “information gathering and writing activities … are the most successfully completed tasks.” (Tomlinson et al. 2025, p 2) The report also highlights “40 occupations with the highest AI applicability score,” several of which relate directly to technical and professional writing: translating content, writing content, proofreading, and editing. (p 11)
The US Copyright Office released a report (USCO 2025) that summarizes its understanding of Gen AI and how it works, including its analysis of whether training a model on copyright-protected works is sufficiently “transformative” to qualify as fair use. This is a long report, but I ask that you focus on section II.B, “Generative Language Models” (pp 6-9), about tokenizing, and section II.C, “Training Data” (pp 9-16), about how AI is trained.
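To make the idea of tokenizing concrete before you read section II.B, here is a small sketch in Python. It is not how any production model actually tokenizes: the vocabulary below is invented for illustration, and real systems learn vocabularies of tens of thousands of subword pieces from data. The sketch only shows the core move the report describes, breaking text into pieces and mapping each piece to a numeric ID.

# A toy illustration of tokenizing. The vocabulary is hand-made for this example;
# real subword vocabularies are learned from data and are far larger.
VOCAB = {"The": 0, "writ": 1, "er": 2, "edit": 3, "s": 4, " ": 5, "the": 6, "draft": 7, ".": 8}

def tokenize(text: str) -> list[int]:
    """Greedily match the longest vocabulary entry at each position and return its ID."""
    ids = []
    i = 0
    while i < len(text):
        match = None
        # Try the longest pieces first so "draft" wins over shorter fragments.
        for piece, piece_id in sorted(VOCAB.items(), key=lambda kv: -len(kv[0])):
            if text.startswith(piece, i):
                match = (piece, piece_id)
                break
        if match is None:
            raise ValueError(f"No token for text starting at: {text[i:]!r}")
        ids.append(match[1])
        i += len(match[0])
    return ids

print(tokenize("The writer edits the draft."))
# [0, 5, 1, 2, 5, 3, 4, 5, 6, 5, 7, 8]

Notice that “writer” and “edits” are split into smaller pieces (“writ” + “er”, “edit” + “s”); this is roughly how subword tokenization keeps vocabularies manageable while still covering unfamiliar words.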
Make multiple passes
Generative AI is a relatively new tool that technical writers can use to generate content. As Card (2025) discusses in his chapter about Generative AI and the Production of Technical Content, “technical communicators will increasingly find generative AI tools integrated directly into their authoring environments. These AI integrations will be designed for domain-specific purposes, such as translating technical content, drafting Application Programming Interface (API) documentation from code, and rewriting content for adherence to brand tone and style.” (p 475)
Card continues: “It’s also essential for practitioners to play a proactive role in shaping the application of generative AI in creating technical content, as well as in broader organizational communications.” (p 478) In other words, you (the technical writer) need to understand it so you can talk about it in a work context and make recommendations that fit your organization.
This sets the context for an exploration of how to use Gen AI iteratively to generate content and improve on it. French and Shim (2024) also caution that “individuals who rely on AI to generate outcomes, make decisions, or perform tasks may exhibit high confidence to complete tasks but have very low competence and ability. This reliance will put human workers on the peak of mount stupid and in danger of being replaced due to their reliance on AI.” (p 15) In other words, you need to remain the “person in the loop” to ensure that AI is leveraged in an appropriate and ethical way.
An October 2024 video from Google, How to Boost AI Output Through Iterative Processes, demonstrates using Gemini as part of a web server, but you should adopt the same iterative approach when creating writing samples. The same Google channel has a related video, How To Boost Productivity Using Large Language Models, which more directly discusses how a technical writer can leverage Gen AI in the writing and editing process.
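As a rough sketch of what that iterative loop might look like in practice, here is a short Python outline. The call_model() function is a hypothetical placeholder for whatever Gen AI interface you actually use (a chat window, Gemini, an internal tool), and the prompts, pass count, and style-guide text are illustrative assumptions rather than a recommended recipe. The point is the shape of the workflow: draft, critique, revise, repeat.

# A hypothetical sketch of the draft-critique-revise loop; call_model() is a
# stand-in for whatever Gen AI tool or API you actually use.

def call_model(prompt: str) -> str:
    """Placeholder: send a prompt to your Gen AI tool and return its reply."""
    raise NotImplementedError("Connect this to the model or interface you actually use.")

def iterative_draft(topic: str, style_guide: str, passes: int = 3) -> str:
    """Generate a first draft, then improve it over several critique/revise passes."""
    draft = call_model(f"Write a first draft of a short how-to article about {topic}.")
    for _ in range(passes):
        critique = call_model(
            "Critique this draft for accuracy, clarity, and adherence to the style guide. "
            f"List specific problems.\n\nStyle guide:\n{style_guide}\n\nDraft:\n{draft}"
        )
        draft = call_model(
            f"Revise the draft to address this critique.\n\nCritique:\n{critique}\n\nDraft:\n{draft}"
        )
    # The model never gets the last word: a human reviews, fact-checks, and edits the result.
    return draft

However you implement the loop, the last pass should always be yours: read the final draft, check its claims, and edit it as you would any other contributor’s work, which is the subject of the next section.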
Fact-checking AI
Gen AI can provide high-quality output, but responsible technical writers must also closely examine that output for incorrect information, mistaken assumptions, and misattributions. As Adel and Alani (2025) found in a study of Gen AI literature synthesis, “the persistence of hallucination rates (Chelli et al. 2024) underscores a continuing need for rigorous human oversight.” (p 3) Even in areas where Gen AI performs well, such as summarizing and screening information, “hallucination remained a critical concern.” (p 10)
An important takeaway from this study is that technical writers need to be very careful about what Gen AI provides. Always review its output with a critical eye and verify any “facts” it claims.
This was demonstrated perhaps most dramatically in a recent legal filing in Case No. 1:22-cv-01129-NYW-SBP (USDC Colorado; Coomer v. Lindell, Frankspeech, My Pillow). As cited in an “order to show cause” issued by Judge Nina Y. Wang in response to an erroneous filing by the defense, the defense attorney, Mr. Kachouroff, “admitted that he failed to check the authority in the Opposition [in using Gen AI to write a brief] before filing it with the Court.” (p 4) The brief included “misquotes of cited cases; misrepresentations of principles of law associated with cited cases; … misstatements regarding whether case law originated from a binding authority; … misattributions of case law to this District; and most egregiously, citation of cases that do not exist.” (pp 2-3)
Ethics in AI use
One ethical concern around Gen AI is how its training data is sourced. For example, the US Copyright Office notes that “Creating and deploying a generative AI system using copyright-protected material involves multiple acts that, absent a license or other defense, may infringe one or more rights.” (p 26) Also interesting is its note that “Although some commenters noted that data may be discarded after the training process, that does not affect the infringement analysis.” (p 27)
Another concern in using Gen AI is infringing someone else’s copyright, even unintentionally, by using Gen AI output verbatim. For example, the US Copyright Office notes that “Generative AI models sometimes output material that replicates or closely resembles copyrighted works” and that “Such outputs likely infringe the reproduction right and, to the extent they adapt the originals, the right to prepare derivative works.” (p 31)
An interesting dilemma in using Gen AI in technical writing is reputational. According to Reif, Larrick, and Soll (2025), “people believe others will perceive them negatively if they are known to receive help from AI,” including viewing those who leverage Gen AI “as lazier, less competent, and less diligent than people who get help from other sources or who get no help at all.” (p 5) This is a “Catch-22”: organizations may ask you to leverage AI-powered tools, yet they may not value the output you produce with Gen AI. One way to navigate this might be not to “advertise” that you use Gen AI unless asked, and to always make sure your AI-co-authored output “sounds” like a real person.
References

Adel, A., & Alani, N. (2025). “Can generative AI reliably synthesise literature? Exploring hallucination issues in ChatGPT.” AI & Society. doi.org/10.1007/s00146-025-02406-7
Card, D. (2025). “Generative AI and the Production of Technical Content.” In D. G. Ross (Ed.), The Routledge handbook of ethics in technical and professional communication. Routledge.
Coomer v. Lindell, Frankspeech, My Pillow, Case No. 1:22-cv-01129-NYW-SBP, Document 309 (USDC Colorado, April 23, 2025).
French, A. M., & Shim, J. P. (2024). “From Artificial Intelligence to Augmented Intelligence: A Shift in Perspective, Application, and Conceptualization of AI.” Information Systems Frontiers. doi.org/10.1007/s10796-024-10562-2
Reif, J. A., Larrick, R. P., & Soll, J. B. (2025). “Evidence of a social evaluation penalty for using AI.” Proceedings of the National Academy of Sciences, 122(19), e2426766122. doi.org/10.1073/pnas.2426766122
Tomlinson, K., Jaffe, S., Wang, W., Counts, S., & Suri, S. (2025). “Working with AI: Measuring the Applicability of Generative AI to Occupations.” arXiv:2507.07935 [cs.AI]. arxiv.org/abs/2507.07935
USCO. (2025, May). Copyright and Artificial Intelligence, Part 3: Generative AI Training. United States Copyright Office.