
Figure: The IRB Process for Generative AI Research. This figure may be shared and used freely with proper attribution to the creator.
When I embarked on my research project analyzing student surveys using ChatGPT, I knew I needed to go through the Institutional Review Board (IRB) process. My goal was to explore whether ChatGPT—or any generative AI tool—could serve as a reliable, supplemental qualitative analysis tool (here’s my recent publication resulting from this exploration). What I didn’t anticipate was the level of additional review and revision required due to the involvement of generative AI.
I was also unsure how to word my consent forms and, more importantly, how to ensure that my research adhered to ethical standards. To address these concerns, I meticulously reviewed AI companies’ data policies, particularly regarding training and retention, and opted out of various data-sharing settings before submitting my IRB application.
Navigating this process required extensive back-and-forth with the MSU IRB, the IT Governance, Risk, and Compliance (GRC) team, and FERPA representatives, along with deep dives into AI companies' policies. Ultimately, I obtained approval, and in this post I'll outline the steps I took, along with key takeaways, to help others who, like me, may feel overwhelmed by the complexities of conducting similar studies.
Step 1: Determine if IRB Approval is Required
Not every study requires IRB approval, so the first step is assessing whether yours does. The key questions to ask are:
- Does my study involve human subjects?
- Am I collecting identifiable data?
- Does my study pose more than minimal risk to participants?
Since my project involved analyzing student survey responses and used AI for qualitative analysis, I needed IRB approval. Even though my study did not involve direct interaction with participants, the fact that I was working with student data—and using an AI tool—meant I had to demonstrate compliance with ethical research standards.
Step 2: Prepare the IRB Application
Once I confirmed that my study required IRB review, I developed my research protocol and gathered all necessary documentation, including:
- Survey instruments
- Consent forms
- Data collection and analysis procedures
One of the challenges was explaining how I would use ChatGPT while ensuring participant privacy and data security. My initial wording needed revision to clarify how I would de-identify data before inputting it into the AI model.
Original wording:
“Survey responses will be analyzed using ChatGPT to identify themes.”
Revised wording:
“Survey responses will be anonymized before being processed using ChatGPT, a third-party generative AI tool. No personally identifiable information (PII) will be input into the AI system, and only aggregated themes will be reported.”
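The post does not spell out the mechanics of de-identification, so here is one illustrative sketch of an automated first pass. The regex patterns, the `scrub` function, and the sample ID format are my own assumptions, not the study's actual procedure; automated scrubbing would still need to be followed by the human review described later in this post.

```python
import re

# Illustrative patterns only; a real study would tailor these to its data
# and pair them with a manual review pass before any AI processing.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "student_id": re.compile(r"\b[A-Z]?\d{8,9}\b"),  # hypothetical ID format
}

def scrub(response: str) -> str:
    """Replace likely PII in a survey response with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        response = pattern.sub(f"[{label.upper()} REMOVED]", response)
    return response

raw = "Great course! Email me at jdoe@msu.edu or call 517-555-0142."
print(scrub(raw))
# prints: Great course! Email me at [EMAIL REMOVED] or call [PHONE REMOVED].
```

Pattern-based scrubbing only catches identifiers with predictable shapes; names and course-specific references still require a human pass before any data leave the institution.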
Step 3: Complete the HRP 503 Form
As part of the IRB application, I had to submit the HRP 503 form. This form required a detailed study description, including participant eligibility, data handling procedures, and an explanation of how AI would be used in my research.
One of the key aspects was demonstrating that my study posed minimal risk to participants. Since AI tools can introduce concerns around data security, I had to justify that my research methods adhered to ethical standards.
Step 4: Submit the IRB Application
With all documents prepared, I submitted my application through my institution’s IRB submission portal. At this stage, I expected a straightforward review, but I soon learned that additional approvals were required due to the involvement of AI.
Step 5: Address AI-Specific Compliance Requirements
The IRB review team flagged two areas of concern:
- FERPA Compliance: Since my study involved student data, I needed to confirm whether my survey responses qualified as FERPA-protected education records.
- IT Approval: I needed approval from my university’s IT Governance, Risk, and Compliance (GRC) team to use an external AI tool.
For FERPA, I had to clarify whether my surveys contained identifiable data.
- Anonymous Surveys: If no personally identifiable information (PII) is collected, the data is not considered FERPA-protected.
- Identified Surveys: If responses are linked to student records, additional protections are required.
Since my study focused on anonymized responses, I provided a justification that my data did not fall under FERPA protections.
Step 6: Obtain IT GRC Approval
Using AI tools in research often requires an additional layer of institutional approval. To obtain IT GRC approval, I had to:
- Submit an IT help ticket requesting a review.
- Provide details on my project, including how the AI tool would be used and what safeguards I had in place.
- Await feedback and recommendations from the GRC team.
The GRC team’s feedback prompted me to revise my consent form language. My initial version was too vague about how AI would process the data.
Original consent form statement:
“Your responses may be analyzed using AI-based tools.”
Revised consent form statement:
“Your anonymized responses may be analyzed using ChatGPT, an AI-powered tool developed by OpenAI. No personally identifiable information will be input, and only aggregated themes will be reported.”
Step 7: Make Revisions and Resubmit
With feedback from both the IRB and IT GRC, I revised my application to align with their recommendations. Some of the key changes included:
- Clarifying data anonymization procedures
- Updating consent form language to ensure transparency about AI usage
- Explicitly stating that I would not input any PII into ChatGPT
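One way to operationalize the "no PII enters ChatGPT" commitment is a gatekeeping check that holds back any response still containing a likely identifier instead of releasing it for AI analysis. This is a minimal sketch of my own devising, not part of the approved protocol; the `release_for_analysis` function and the name roster are hypothetical.

```python
import re

# Hypothetical roster of first names to check against; in practice this
# would come from the enrollment list, which never leaves the institution.
KNOWN_NAMES = {"jordan", "priya", "daniel"}

PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),           # email addresses
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),  # US phone numbers
]

def release_for_analysis(responses: list[str]) -> list[str]:
    """Return only responses that pass every PII check; hold back the rest."""
    cleared, flagged = [], []
    for text in responses:
        tokens = text.lower().split()
        has_pii = any(p.search(text) for p in PII_PATTERNS) or any(
            name in tokens for name in KNOWN_NAMES
        )
        (flagged if has_pii else cleared).append(text)
    if flagged:
        print(f"{len(flagged)} response(s) held back for manual redaction")
    return cleared

cleared = release_for_analysis([
    "The group projects helped me learn.",
    "Priya explained it better than the textbook.",  # held back: contains a name
])
```

The design choice here is fail-closed: anything suspicious is withheld for manual redaction rather than silently cleaned, which matches the spirit of stating explicitly that no PII will be input.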
Once I incorporated these changes, I attached the approval letters from the IT GRC team and any necessary FERPA documentation, then resubmitted my application.
Final Consent Form for Students
Below is a snippet of the final student consent form, approved after multiple rounds of revision, highlighting the sections relevant to the use of generative AI (I also had to obtain similar consent from the instructors):
“Your participation in this survey is voluntary. You may decline to answer any particular question, and you may exit the survey at any time. You will not directly benefit from your participation in this study. However, your participation in this study / your responses will play an important role in understanding students’ perceptions about the course and your overall learning. All responses will be kept strictly confidential and securely stored. Your identity will not be disclosed. The responses collected may be used in future academic publications.
Additionally, in order to handle the high volume of responses and provide timely feedback to your professor, the researchers may use a “ChatGPT Team” subscription plan (not the regular ChatGPT 3.5 or 4.0; ChatGPT is a third-party generative AI tool developed by OpenAI) to assist in identifying themes. Prior to submitting data to ChatGPT, Dr. Sun (Researcher / PI) will ensure that all data are carefully reviewed and redacted to remove any identifying, personal, and/or private information relating to the course, instructor(s), or students. Thus, participating in this study poses minimal risk.
The anonymized data that may be inputted into the “Team Subscription” of ChatGPT does NOT use data for AI model training, as stated on the OpenAI’s policy: “We do not use your ChatGPT Team, ChatGPT Enterprise, or API data, inputs, and outputs for training our models”. Also, for the Team Subscription, “OpenAI encrypts all data at rest (AES-256) and in transit (TLS 1.2+), and uses strict access controls to limit who can access data.” Thus, participation implies consent to this use, governed by OpenAI’s policies, as follows: https://openai.com/enterprise-privacy
If you have questions at any time about the survey/study, please contact the EDLI (Evidence Driven Learning Innovation) team at sunhala@msu.edu. If you have questions or concerns about your role and rights as a research participant, would like to obtain information or offer input, or would like to register a complaint about this study, you may contact, anonymously if you wish, the Michigan State University’s Human Research Protection Program at 517-355-2180, Fax 517-432-4503, or e-mail irb@msu.edu or regular mail at 4000 Collins Rd, Suite 136, Lansing, MI 48910.
Again, we appreciate your honest feedback. Thank you!
ELECTRONIC CONSENT: Please select your choice below. You may request a copy of this consent form for your records by emailing sunhala@msu.edu. Clicking on the “Agree” button indicates that (1) You have read the above information; (2) You voluntarily agree to participate; and (3) You consent that your anonymized data may be inputted into ChatGPT, a third-party generative AI tool developed by OpenAI.
- Agree (1)
- Disagree (2)”
These revisions ensured that all participants understood how AI would be used in the research, that their data would remain anonymous, and that they had the choice to participate with full transparency.
Key Takeaways
Going through the IRB process for a generative AI research study was an eye-opening experience for me. Here are my biggest takeaways:
- Expect extra scrutiny when using AI tools. Be prepared to explain how you’ll handle data security, de-identification, and ethical AI usage.
- FERPA compliance is crucial for student data. Even if you think your data is de-identified, the IRB may require additional justification.
- Institutional IT approval may be required. AI tools like ChatGPT often fall under IT governance policies, so factor in extra time for approvals.
- Be clear in your consent forms. Participants need to understand exactly how their data will be used, especially when AI is involved.
By the end of the process, I had a much stronger understanding of what it takes to conduct AI research ethically and responsibly. While it took multiple rounds of revision, I’m glad I went through the extra steps to ensure my study met all compliance requirements.
If you’re planning a similar study, I highly recommend starting early and anticipating these additional review steps. It may take longer than a standard IRB approval, but the extra diligence is worth it to ensure ethical and responsible research practices. I’d also be more than happy to meet for a hands-on consultation or give a presentation on this process to help others navigate it more smoothly.
Contact: Dr. Hala Sun, Associate Director of Assessment & Evaluation, sunhala@msu.edu
Acknowledgment and Disclaimer
This blog post was initially drafted with the assistance of ChatGPT, using my original content wording and the IRB process figure I designed in Canva, which is included at the top of this post. I provided ChatGPT with the figure and prompted it to generate an initial draft based on my words. Afterward, I extensively revised the draft to reflect my voice and incorporated my own examples from my experience navigating the IRB process. The final version represents my insights, reflections, and refinements.
