Opportunities and Threats in Higher Education

There are many opportunities and threats related to generative AI, and both must be weighed as we move forward with policy development. Given the magnitude and variable nature of AI, there is unlikely to be a one-size-fits-all solution to the application and adaptation of generative AI in higher education instruction (Piscia et al., 2023). However, there are still many important points to consider concerning generative AI.

It seems impossible, and inadvisable, not to consider the interplay of ethics and equity across domains of higher education (Currie, 2023; Hutson et al., 2022; Nguyen et al., 2023). The significance of privacy, security, safety, surveillance, and accountability cannot be overstated. The integration of AI into medicine and healthcare, financial systems, security systems, and smart city technologies represents real-world situations in which machine malfunction or bad actors can result in loss of life, loss of access to essential services, or loss of resources (Ayling & Chapman, 2022; Currie, 2023). Many of the challenges surrounding higher education, however, involve barriers to equitable access to educational services and resources. Therefore, any way in which AI may undermine equity should be treated as a significant ethical concern. It is worth noting, however, that AI also has the potential to improve or enhance accessibility and inclusivity (Çerasi & Balcioğlu, 2023). AI likewise has the potential to enhance teaching and learning (du Boulay, 2022; Perkins, 2023; Sabzalieva & Valentini, 2023; Sullivan et al., 2023) in ways that can improve or increase equity. This suggests that higher education may have an obligation to integrate AI into its operations as much from an equity and ethics perspective as from an experiential learning, workforce development, or industry perspective aimed at adequately preparing students for real-world work.

In considering the ethics of AI in higher education, it may be most useful to approach this situation through different stakeholder groups, namely students, instructors, and the institutions themselves (du Boulay, 2022; Holmes et al., 2023; Irfan et al., 2023; Miron et al., 2023; Ungerer & Slade, 2022), as well as through external groups such as industry collaborators and the communities in which those institutions operate. Within the institution, as noted above, AI has the potential to affect non-academic elements that cannot be ignored. Furthermore, the impact on the educational elements can vary across programs, disciplines, and modalities, such as in-person instruction versus distance-based education (Holmes et al., 2023). Some researchers have expressed concern about how AI may compromise the autonomy of both students and instructors (du Boulay, 2022).

What does this all mean for educators? If we are to believe the experts, as well as our own recent experiences, many issues need to be addressed. The current version of artificial intelligence seems to be just the beginning. The emergence of AI has been described as the dawn of a new era, a virtual big bang if you will. That is the world for which our students need to be prepared.

It is important to acknowledge and consider the positive aspects of the learner’s experience regarding the use of generative AI in higher education. In many cases, generative AI may improve the experiences of our students both in the classroom and in their assigned work by introducing new methods of teaching and assessment (Piscia et al., 2023). As learners use these tools in the classroom, they are building and strengthening skills for their future endeavors and new realities, both in the classroom and in the workforce.

The inclusion of current and emerging technology is as imperative in education as it is in driving progress and change in society. Fluency with generative AI tools will increase learners’ digital literacy and ability to apply technology (Piscia et al., 2023). Additionally, students may be drawn to the inclusion of these tools in instruction, increasing the perceived relevance of classwork and participation (Piscia et al., 2023).

The application of generative AI by instructors can also strengthen instruction, personalize learning opportunities, increase the adaptability of instruction and learning, and strengthen accessibility for all learners (Piscia et al., 2023; Shonubi, 2023). Together, these opportunities increase classroom inclusion for all learners. AI tools can also be applied to the creation and modification of instructional objectives, pedagogy, assignments, and assessments.

Further, generative AI can be used to automate administrative tasks to improve workflows, decrease human transcription errors, and decrease processing times in many areas. Applied in this way, generative AI also has the potential to decrease administrative costs and streamline administrative tasks (Parasuraman & Manzey, 2010; Piscia et al., 2023; Shonubi, 2023). Generative AI has significant potential across a variety of higher education settings, particularly instructional and learning environments.

Additionally, institutions may currently be preparing students for professions that generative AI will reduce or eliminate in the future workforce. And the human aspect of interacting with generative AI must not only be considered but also studied as we move forward with this new tool at our disposal (Piscia et al., 2023; Shonubi, 2023).

At the same time, it is imperative to consider the negative aspects of generative AI. The current lack of regulation and the inconsistent accuracy of its output are shortcomings that cannot be ignored. Generative AI is an evolving tool that must be carefully considered prior to its use.

Equity and Access in Higher Education

As indicated above, equity is crucial. Equity and access concerns for AI in higher education include fairness in outcomes across demographic groups, strategizing to identify and curb biases, and ensuring inclusivity and accessibility in the utilization of AI tools. Traditionally marginalized, underrepresented, and vulnerable groups must be consulted to ensure their experiences are represented in the datasets that drive AI and to prevent AI from amplifying historical inequities (Munn, 2022, p. 874). Institutions of higher education must safeguard data privacy and protect against reinforcement of existing inequities. According to Roshanaei et al. (2023), this involves protecting sensitive student data, transparency and consent, strong data protection measures, and new regulatory frameworks. They draw attention to three risks to equitable AI integration in educational settings: biases in AI algorithms, the digital divide, and undermining the role of teachers (Roshanaei et al., 2023). Systems of higher education have an obligation to close the digital divide for students from low-income backgrounds who may not have access to reliable Wi-Fi, let alone the more advanced AI educational and assistive technologies available only behind a paywall.

Furthermore, the teaching and learning relationship must retain a human touch (Holmes & Miao, 2023; Nguyen et al., 2023; Roshanaei et al., 2023). AI in education should function as an assistive tool and should not be used in place of teachers, tutors, mentors, counselors, or advisors. Holmes and Miao’s (2023) UNESCO publication Guidance for Generative AI in Education and Research underscores the danger that generative AI may undermine human agency. One of its recommendations is to “Prevent the use of GenAI where it would deprive learners of opportunities to develop cognitive abilities and social skills through observations of the real world, empirical practices such as experiments, discussions with other humans, and independent logical reasoning” (Holmes & Miao, 2023, p. 25). According to the same report, the use of these tools has unknown impacts on human connection, intellectual development, and psychological factors for learners. Maintaining humanistic principles and involving people in the decision-making process is necessary to ensure fair representation and equal access in both the development and use of AI technologies for education (Nguyen et al., 2023; Munn, 2022).

Although AI poses risks to equity and access, it also has the potential to enhance and increase equity by supporting student success through personalized learning support and analytics that aid student persistence and achievement. Roshanaei et al. (2023) describe how personalized learning systems can offer targeted learning experiences tailored to various learning styles and paces. For instance, personalized learning systems can foster educational equity by providing assistive technologies for visual and hearing impairments, engaging and adaptive learning environments, and analytics geared toward interventions for at-risk students. Of course, data privacy remains a concern, especially when using predictive analytics, but benefits of predictive analytics might include identifying students’ strengths and weaknesses, flagging risks of dropout or failure, and developing interventions uniquely designed to get individual students back on track (Roshanaei et al., 2023).

Any use of AI tools must foster inclusion, equity, and cultural and linguistic diversity, and must provide access regardless of gender, ethnicity, special educational needs, socio-economic status, geographic location, displacement status, or any other barrier to equitable opportunities for learning. Holmes and Miao’s (2023) UNESCO publication Guidance for Generative AI in Education and Research highlights three policy measures to reach this goal:

  • Identify those who do not have or cannot afford internet connectivity or data, and take action to promote universal connectivity and digital competencies in order to reduce the barriers to equitable and inclusive access to AI applications. Establish sustainable funding mechanisms for the development and provision of AI-enabled tools for learners who have disabilities or special needs. Promote the use of GenAI to support lifelong learners of all ages, locations, and backgrounds.
  • Develop criteria for the validation of GenAI systems to ensure that there is no gender bias, discrimination against marginalized groups, or hate speech embedded in data or algorithms.

  • Develop and implement inclusive specifications for GenAI systems and implement institutional measures to protect linguistic and cultural diversities when deploying GenAI in education and research at scale. Relevant specifications should require providers of GenAI to include data in multiple languages, especially local or indigenous languages, in the training of GPT models to improve GenAI’s ability to respond to and generate multilingual text. Specifications and institutional measures should strictly prevent AI providers from any intentional or unintentional removal of minority languages or discrimination against speakers of indigenous languages, and require providers to stop systems promoting dominant languages or cultural norms. (p. 24)

License


Optimizing AI in Higher Education: SUNY FACT² Guide, Second Edition Copyright © by Faculty Advisory Council On Teaching and Technology (FACT²) is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
