FSD 23-11 Guidance on use of AI in Teaching and Learning

June 7, 2024

Proposed: Guidance on use of AI in Teaching and Learning

Committee: Education Policy Committee

For Discussion: April 12, 2024

Please note that many of the policies, and much of the wording, in the following document are adapted from [1,2,3,4].

Rationale: Artificial Intelligence (AI) has many possible uses to improve and expand students’ learning experience. Knowledge of the proper use of AI and its limitations may provide employment advantages for students. Furthermore, AI tools could assist instructors in content generation, grading, and the detection of cheating. Purdue University Northwest supports the autonomy and choice of faculty and instructors to utilize instructional technology that best suits their teaching and learning environments. As such, there is no official university policy restricting or governing the use of Artificial Intelligence, Large Language Models (LLMs), or similar generative technologies. Faculty should determine the type and degree of AI use that is allowable in their course and require students to document AI production that is not their own.

In the PNW Code of Conduct [5], it is noted that the following behaviors are subject to disciplinary action:

  • Cheating: “Students are expected to adhere to the guidelines provided by instructors for academic work so that no student gains an unfair advantage. Using or attempting to use unauthorized materials, information, study aids, notes, or any other device in any academic exercise will not be tolerated. Unauthorized materials may include anything which or anyone who gives a student assistance that has not been approved by the instructor in advance.”
  • Plagiarism: “Intentionally or knowingly representing the words or ideas of another as one’s own in any academic exercise. The sole exception of the requirements of acknowledging sources is when the information is considered common knowledge.”

Thus, it is important for students to know the limits of acceptable AI use in the work that they produce and how they should cite it. Because appropriate usage of AI is determined at the course level, the instructor should determine and inform students when and to what extent the use of AI is allowed. Transparency and clear expectations are most helpful for students.

Proposal: Since AI is changing very quickly, it is recommended that each instructor have a syllabus statement addressing the current parameters and boundaries for the use of Artificial Intelligence in each course, whether for student content generation, instructor grading, or any other purposes.

If AI use by students is allowed in the course, the instructor should:

  • Become familiar and stay current with the uses and limitations of the specific AI tools before allowing students to use them in their courses.
  • Provide clear guidelines for when and how it is appropriate to use AI in the course. Adding guiding statements to each assignment and assessment description is suggested to reinforce the instructor’s position on student use of these tools to complete the assignments.
  • Emphasize how students should reference their usage of AI resources for generative content, such as assignments, computer code, or visual/musical media.
  • Ensure students understand that AI agents are known to acquire information and data from across the internet, which can lead to factual inaccuracies, implicit biases, fabricated information, and references and links that don’t exist. Thus, students need to consider this carefully when assessing AI-generated output.
  • Explain to students that they are responsible for their learning and will not develop the skills they need to be successful in their careers by relying entirely on AI to do the work for them.
  • Make clear to students that sharing copyrighted material with third-party AI tools is prohibited, since some AI-based tools encourage users to upload copyrighted material as training data for specific AI models, particularly those used to create content for ‘personalized learning.’
  • Remind students that instructor-generated materials should never be uploaded to any third-party site (whether AI-oriented or not).
  • Be aware that some students might obtain an unfair advantage, due to financial differences, by using a newer or more advanced version of an AI tool, such as a paid versus free-access account.

If AI tools are used for grading student work, the instructor should:

  • At the beginning of the course, inform students what part of the grading might be performed by AI.
  • Before putting any student work into an AI tool, including AI detectors, clearly communicate to students how their data or interactions with AI tools may be used or shared.
  • Create and apply a process by which any AI-based tools used for grading student work produce explainable and defendable grades. This process should:
      ◦ Ensure that a human remains in the loop to the maximum extent possible.
      ◦ Integrate a quality assurance procedure to ensure grading automation features (e.g., Gradescope’s grouping tool) work accurately and as intended.
      ◦ Avoid “black box” grading methods/tools that cannot be validated or explained.
  • Commit to never sharing personally identifiable information about students with any third-party AI tool, so as to ensure that the grading process does not violate the Family Educational Rights and Privacy Act (FERPA) or other privacy considerations.

If AI tools are used to detect the unauthorized use of AI, the instructor should:

  • Consider the appropriateness of using AI detection tools.
  • Know that AI detection algorithms produce many false positives:
      ◦ The algorithms necessarily reproduce the biases inherent in their training material and reflect specific cultural and societal norms.
      ◦ Purported detection tools have disproportionately returned false positives for non-native English speakers.
  • Describe the procedures they will follow if unauthorized use of AI is suspected. Given the high false-positive rates of current AI detection tools, it is especially important that a positive result from one of these tools not be the sole determining factor of an academic integrity violation and its consequences.
  • Explain the potential consequences faced by students who are determined to have used AI tools in an unauthorized way, including any impacts on grades and possible referral to the Office of Student Rights and Responsibilities.


As a guideline, sample syllabus statements can be found at:


and in the appendix of this document.


The Educational Policy Committee

Committee Members:

David Kozel (Chair)

Shreya Bhandari

Robert Kramer

Amanda Kratovil

Yun Liu

Pamela C Saylor

Donna L Whitten

Ed Policy Members Voting in favor of this resolution: 7

Ed Policy Members Voting against this resolution: 0

[1] https://www.purdue.edu/provost/teachinglearning/ai.html

[2] https://www.purdue.edu/innovativelearning/teaching/module-category/generative-ai/

[3] https://www.purdue.edu/innovativelearning/teaching/module/considerations-for-your-syllabus-and-course/

[4] https://www.purdue.edu/senate/documents/meetings/Senate-Document-23-17-Statement-about-the-Use-of-AI1.pdf

[5] https://www.pnw.edu/dean-of-students/policies/code-of-conduct/




Appendix A

Draft of Generative Artificial Intelligence (GenAI) Policy Statements

(Obtained from: Mark Mabrito)

Overview: Sample statements for syllabi regarding the use of generative AI (GenAI) in the classroom are presented below. Examples of GenAI tools include ChatGPT, CoPilot, Claude, Google Gemini, Perplexity, Dall-E, and many others. These statements reflect three tiers of engagement with GenAI that instructors can use depending on the class and the degree to which individual instructors may or may not want to incorporate GenAI. In all cases, the assumption is that instructors would explain and have a conversation about GenAI tools (their potential and limitations) and how the policy works in their classroom. In cases where GenAI is permitted, instructors would define how and in what contexts, and students would need to cite/document their use of GenAI (see “Student Resources” at the bottom of this document). Additionally, instructors still need to provide standard policy statements regarding “traditional” plagiarism and academic dishonesty.


Tier 1 GenAI Policy: Students may not use GenAI tools, or may use them only in a very limited way

“All assignments submitted for a grade are to be authored by the student. Developing strong competencies in the skills associated with this course, from student-based brainstorming to project development, will prepare you for success in your degree pathway and, ultimately, a competitive career. Therefore, the unauthorized use of generative AI (GenAI) tools to complete any aspect of assignments for this course is not permitted and will be treated as academic dishonesty. If you have questions about what constitutes a violation of this statement, please contact me.”

Tier 2 GenAI Policy: Students may use GenAI tools in certain contexts

“To ensure all students have an equal opportunity to succeed and to preserve the integrity of the course, students are not permitted to submit content that is generated by generative AI (GenAI) systems such as ChatGPT, CoPilot, Claude, Google Gemini, Perplexity, Dall-E, or any other automated assistance for any classwork or assessments. This includes using GenAI to generate answers to assignments, exams, or projects, or using GenAI to complete any other course-related tasks. Using GenAI in this way undermines your ability to develop critical thinking, writing, or research skills that are essential for this course and your academic success. Students may use GenAI as part of their research and preparation for assignments, or as a text editor, but the text that is submitted must be written by the student. For example, students may use GenAI to generate ideas, questions, or summaries that they then revise, expand, or cite properly. Students should also be aware of the potential benefits and limitations of using GenAI as a tool for learning and research. GenAI tools can provide helpful information or suggestions, but they are not always reliable or accurate. Students should critically evaluate the sources, methods, and outputs of GenAI tools. If you have any questions about this policy or if you are unsure whether a particular use of GenAI is acceptable, please do not hesitate to ask for clarification.”

Tier 1 and Tier 2 adapted from policy statements from The University of Texas at Austin


Tier 3 GenAI Policy: Students may use GenAI tools in broader contexts

“Development as a writer requires personal investment and practice. Generative AI (GenAI) tools such as ChatGPT, CoPilot, Claude, Google Gemini, Perplexity, and Dall-E are tools that good writers may rely on in some situations. Part of your development as a writer entails critically considering different occasions and developing a rationale for the appropriate use of AI writing tools. In this class, we ask that you keep an open line of communication with the instructor regarding the use of AI writing tools. It is important to consult your instructor BEFORE using them in an assignment. If, in consultation with your instructor, you do use GenAI tools, cite them in your Works Cited/Reference page (see “Student Resources” below) and be prepared to argue a rationale for the appropriateness of their use. GenAI tools can provide helpful information or suggestions, but they are not always reliable or accurate. Students should critically evaluate the sources, methods, and outputs of GenAI tools. If you have any questions about this policy or if you are unsure whether a particular use of GenAI is acceptable, please do not hesitate to ask for clarification.”

Adapted from Paul Slovin, Ohio University


Resources for students – Citing generative AI