Submission by Alexa Matwychuk

While I do not make extensive use of AI tools, I occasionally employ ChatGPT to explain course concepts. In the provided photograph, for instance, I asked ChatGPT to explain linear regression, thus augmenting my existing knowledge of this concept. Furthermore, ChatGPT and similar generative AI tools are useful for recommending creative names for my academic papers and projects. When asked to generate titles for a research project on the relationship between social anxiety and academic performance, for example, ChatGPT offered multiple clever and original ideas, such as "Behind the Grades: Understanding the Impact of Social Anxiety on Academic Success."

AI tools may be used in accordance with academic integrity policies when they augment, rather than replace, students' work. Grammarly is an example of such a tool, aiding in assessing the clarity, grammar, and originality of one's work. However, Grammarly also offers a generative AI writing tool that will complete a student's essay based on a provided prompt. Such a use not only constitutes academic misconduct but also often results in a piece of poorer quality than if the student had completed it themselves. As such, these tools should only be used to make minor improvements to existing work.


AI is beneficial for augmenting one's existing work, such as by checking answers to practice questions and generating creative titles for a project. The former, however, also reflects a challenge of AI, as the generated answers are too often partially (or even completely) incorrect. In the corresponding photograph, for instance, ChatGPT was asked to explain how to complete a practice question from a statistics course; the provided steps and final answer are completely incorrect. While AI (ChatGPT, specifically) is useful for minor tasks with no right or wrong answer, it may struggle with more complex ones.