Graduate School Bulletin
Spring 2025
Large language model (LLM) generative AI is an evolving field that requires responsible use and a firm understanding of academic expectations, including copyright, data privacy, and academic integrity. We believe that LLMs can offer insight and inspiration for graduate-level work, be integral to academic research and scholarly activities, facilitate data processing, and allow for new discoveries. However, LLMs should not be used without attribution or as a substitute for your own scholarly writing.
Graduate Programs will make available a statement on the appropriate use of LLMs that reflects the norms and expectations of individual disciplines. We also encourage Graduate Programs to include statements regarding the use of generative AI detection software, which can estimate the likelihood that content is AI-generated but shares the limitations of current LLMs and lags behind advances in the generation tools themselves. Graduate students must read and understand these program statements, and recognize that individual faculty may approve different levels of LLM use in their classes, ranging from no use at all to free and encouraged use. Non-adherence to syllabus and program-level statements on LLM use may be grounds for a report of a potential academic integrity violation.
All theses and dissertations must be written by the author. Exceptions allowing LLM-generated content will be considered only if that content is integral to the purpose of the study, and such content cannot make up the majority of the document attributed to the author. Theses and dissertations cannot list multiple authors, and this includes LLMs.
Guidelines to Programs about Potentially Appropriate Uses of LLMs (all with discipline-specific appropriate attribution)
- Initial brainstorming
- Suggesting improvements/edits to user-generated content
- Generation of reading lists
- Image generation
- Computer code testing
Guidelines to Programs about Inappropriate Uses of LLMs
- Complete text generation, including documents to satisfy major academic milestones
- Complete problem solution
It is critical for graduate students to understand that they are responsible for any content they produce, submit, or publish in any form. LLMs can and do generate content that is inaccurate or entirely false ("hallucination"). LLM responses can also be biased, reflecting constraints in their design and their training data sets. Because LLMs cannot serve as authors on publications (e.g., they cannot attest to the authenticity of the work submitted or independently verify results and conclusions), students are reminded of the integrity policies surrounding submitted work; Graduate School policies are described in the Probation, Conduct and Grievances section of the Graduate Bulletin. It is also important to understand that data submitted to LLMs may be used in future responses generated by those LLMs or may be extracted by other users through prompt-engineering techniques. Data security and privacy are therefore essential concepts to understand when using LLMs.
The Graduate School supports LLM literacy. Many academics and students use LLMs, but use alone does not guarantee an understanding of the limitations of generated content or the ethics of relying on it.