Teaching Technical Writing with AI: A Study on ChatGPT’s Strengths and Limitations

The EDLI team recently completed an applied study examining ChatGPT’s role in technical writing education. The study investigates ChatGPT’s potential as both a model of technical writing and an object of critique. Students observed that ChatGPT-generated instructions were well-structured and easy to follow but often lacked the detail, audience awareness, and accuracy necessary for effective technical writing. While the AI-generated instructions demonstrated strong formatting and clarity, students found they needed greater specificity and visual aids to be truly user-friendly. The study also highlighted the importance of usability testing in refining both human- and AI-generated instructional content.

Study Design and Methodology

The project involved 73 students across five undergraduate courses in Professional and Public Writing and Experience Architecture. The study was conducted in two phases:

  • Phase 1: Students were divided into small groups and tasked with writing instructional documents for either Google Docs or Google Slides without external resources.
  • Phase 2: Students conducted usability tests on both their own and AI-generated instructions, evaluating effectiveness without knowing the authorship of each document.

Throughout this process, students assessed AI-generated instructions and compared them to human-authored versions, highlighting both strengths and weaknesses in clarity, formatting, and audience awareness.

Key Findings

The study revealed notable contrasts between AI-generated and student-authored instructional documents.

  • Success Rates:
    • Task-completion success rates were similar for student-authored and ChatGPT-generated instructions for Google Slides.
    • For Google Docs, student-authored instructions outperformed ChatGPT’s, with a 67% task-completion success rate compared to 57%.
  • Student Perceptions:
    • AI-generated instructions: Student testers cited simplicity, thoroughness, and formatting as strengths, but noted that the instructions lacked detail and audience awareness and contained some inaccuracies in terminology and step descriptions.
    • Student-authored instructions: Student testers valued the more detailed explanations and the inclusion of images, but noted a need for greater specificity and consideration of audience abilities.
  • Usability Testing Observations:
    • Formatting inconsistencies were more common in student-authored documents, while AI-generated instructions lacked precise steps for certain tasks.
  • Learning Outcomes:
    • Many students reported a deeper understanding of usability testing’s importance in technical writing.
    • Responses indicated that exposure to both human and AI-generated instructions enhanced students’ ability to identify effective writing strategies.
    • Several students noted that writing for an unfamiliar audience is more challenging than expected and emphasized the value of structured feedback.

Broader Implications

The study’s results suggest that ChatGPT can serve as an effective pedagogical tool in technical writing courses. While AI-generated instructions often model structured and formatted writing, they lack critical audience awareness and nuanced detail. By critiquing and comparing AI outputs, students gained hands-on experience in assessing usability, clarity, and instructional design principles.

This research demonstrates one possible method for using large language models like ChatGPT to support, rather than replace, human-authored technical writing. By positioning AI as both a tool and a subject of analysis, writing instructors can reinforce core writing competencies in technical communication courses.