Generative Artificial Intelligence in Learning and Teaching at TMU

Community Update and Guidance on the Use of Generative Artificial Intelligence in Learning and Teaching at TMU

The emergence of new generative artificial intelligence (GAI) technologies presents both opportunities and challenges to the way we learn and teach at Toronto Metropolitan University. As with any new technology, students and instructors will need to approach GAI technologies with openness, curiosity, and caution as we strive to adapt to the changing conditions in our classrooms and the world beyond TMU.

This update is intended to inform our community of our ongoing work on this matter, share available resources, and provide instructors with guidance as we respond to these technologies and their effects on learning and teaching at TMU.

Update on Activities to Date

Since 2022, the Centre for Excellence in Learning and Teaching (CELT) has been engaged in studying and advising on matters relating to GAI and the classroom. In March 2022, the Senate Learning and Teaching Committee undertook preliminary research into the implications of GAI for teaching and learning. The committee has remained engaged on this issue and will be taking up further discussions as it works toward the development of institutional guidelines.

In January 2023, the Academic Integrity Office (AIO) published an FAQ page with recommendations for faculty and students, as well as a curated list of resources that is maintained and updated on the CELT website.

Guidance for Instructors

Authorization for the use of GAI in coursework is always at the discretion of the instructor, and expectations should be clearly communicated to students in a syllabus or other course policy statement. Unless explicitly authorized by the course instructor, the use of GAI for coursework is not permitted.

When considering a course policy on the use of GAI, instructors may find the following sample syllabus statements useful:

  • “If you submit work that doesn’t reasonably reflect your knowledge of the material and/or the skills being assessed, that work will be considered to be in breach of Policy 60: Academic Integrity.”
  • “The unauthorized use of Generative AI (e.g., ChatGPT, Quillbot, Grammarly, Google Translate) is prohibited and will be considered to be in breach of Policy 60: Academic Integrity.”
  • “Generative AI may only be used for idea generation or as a study aid, but not for the creation of submitted work.”
  • “Falsified citations or misrepresentation of source material will be considered a breach of Policy 60. You are responsible for the accuracy of the work you submit.”

Supports and Resources

The pace at which GAI is evolving, alongside the proliferation of AI-driven tools, requires us to remain agile and open to the changing landscape of assessment. Instructors are encouraged to evaluate their assessments and define clear guidelines for students on what constitutes appropriate and inappropriate use of GAI. CELT staff, including educational developers and academic integrity specialists, are available for consultation to assist with these decisions. Instructors who choose to permit GAI in part or in full may wish to incorporate student education on the ethical concerns that permeate discussions of these technologies, including but not limited to algorithmic bias, dataset bias, the tendency to generate false information, the potential for mis/disinformation, and labour and ecological impacts.

In preparation for the Fall term, a GAI working group of staff from across the CELT developed resources, including a workshop series to prepare faculty for the challenges and opportunities of teaching in the era of GAI. The AIO will also soon offer an educational game, developed in partnership with Seneca College, to build students’ AI literacy skills.

Academic Integrity Considerations

We understand that these technologies pose a significant risk to academic integrity. Unauthorized use of GAI can constitute a breach of Policy 60: Academic Integrity in several instances:

  • Coursework that is generated in part or in full by GAI and presented as one’s own original work without appropriate referencing
  • The use of any such technology not expressly allowed by the instructor during an examination, test, quiz, or other evaluation

The Academic Integrity Office does not currently endorse the use of GAI detection tools. A number of such tools are available, but they are unreliable and raise potential privacy concerns. Instructors should consult with the Academic Integrity Office and consider attending an upcoming workshop on detecting unauthorized use of AI.

Instructors who suspect that a student has used GAI tools in a manner that constitutes academic misconduct should follow the formal process (Google Doc, external link) for academic misconduct concerns. At the heart of this process is a non-adversarial conversation between the student and the instructor, in which the instructor can assess the student’s understanding of their work and educate them on appropriate and inappropriate uses of GAI.

Policy 60 is currently under review as part of the normal review cycle, and the Policy Review Committee welcomes your perspective on this and any other Policy 60 matters. Please share your views using our feedback form (external link).

To learn how to detect unauthorized use of AI, attend one of our upcoming workshops.

If you’d like to participate in the ongoing discussion about the challenges and opportunities that GAI brings to teaching and learning in higher education, join our vibrant Artificial Intelligence Community of Practice (external link).

This page was last updated: September 2023