Dealing with Academic Integrity
Growing concerns about academic integrity have created an immediate need for educators to adapt their teaching practices. The aim of this change is to ensure that AI supports the learning process without reducing students’ cognitive abilities, while preserving their access to prerequisite skills and to the social aspects of teacher-student and peer learning relationships (UNESCO, 2022). In addition to concerns about cheating and fraud, there are also valid concerns about the reliability of AI-generated results and their potential to exhibit bias.
Integrity Problems with AI
ChatGPT has already been documented to fabricate information and to adamantly defend those fabrications (Knight, 2022). These cases are often referred to as hallucinations, because the chatbot presents its responses as though they were correct. It is estimated that ChatGPT produces approximately 4.5 billion words a day (Vincent, 2021). This flood of content has the potential to degrade the quality of information available on the internet. At this point, AI tools have no way to authenticate the content they retrieve, which suggests using caution when choosing an AI tool or trusting its output.
False Honesty of AI
Kidd and Birhane (2023) argue that repeated exposure to AI in daily life, through chatbots and search engines, conditions people to believe in the efficacy and “honesty” of AI. They contend that AI’s habit of issuing declarative statements without expression, nuance, or caveat further convinces people to “trust” it. The use of unmonitored AI tools may result in a decline in critical thinking and may negatively affect content-area learning, retention, writing development, creativity, and application (Miller, 2023). The burden of creativity and validity, therefore, lies with the humans who let AI do their thinking for them.
Psychological Impact of AI
Artificial intelligence is rich in potential but cannot be counted on to be accurate or representative. Both shortcomings are of concern for our students. We do not want students to believe or use misinformation, and we do not want the information presented to them to be based on misleading data. There is also a potential mental health concern that arises when dealing with chatbots. Chatbots have come ever closer to sounding human, which can have a psychological impact on students as they build relationships with bots that may not respond humanely or with the student’s best interest in mind (D’Agostino, 2023).
Interacting with generative AI tools may increase anxiety, addiction, social isolation, depression, and paranoia (Piscia et al., 2023). Studies of the effects of interacting with AI systems are still in progress and are shaping our understanding of the tools’ potential impacts on individuals and on society; a deeper, more complete understanding will develop in the coming years.