Societal Impact

When considering the current and potential social impacts of artificial intelligence, we should place its rise within the context of larger global shifts that are changing how humans live, communicate, work, and interact. These include climate change, global economic shifts, aging populations, migration, social and political changes, environmental degradation, and technological advances in areas such as AI, automation, biotechnology, and renewable energy. These shifts have been exacerbated, and in some cases accelerated, by the recent COVID-19 pandemic and the changes it brought to social, economic, health (including mental health), and workplace experiences and environments (British Academy, 2021; Alizadeh et al., 2023).

According to the IMF report titled Gen-AI: Artificial Intelligence and the Future of Work, “Artificial intelligence (AI) is set to profoundly change the global economy, with some commentators seeing it as akin to a new industrial revolution. Its consequences for economies and societies remain hard to foresee” (Cazzaniga et al., 2024). Global society has experienced at least four industrial revolutions since the late 18th century, from mechanization to electrification, automation, and digitization (Groumpos, 2021). Each of these revolutions had a transformative effect on society and the world economy. In some cases, this was for the benefit of most common citizens; in others, workers were exploited and labored under dangerous conditions until unions formed and regulations were developed. Specific changes included urbanization and demographic shifts, changes in labor practices, wide-scale economic growth (often uneven, with growing disparities), and social changes, including changes in family structures and social norms.

Revolutions in communications technologies also had a radical impact on society. These included the telegraph, the telephone, radio, cinema, television, personal computing, the internet, and cellular communications (Kovarik, 2016). In our lifetime, many of us have experienced the rapid changes, positive and negative, brought about by the adoption of these new technologies in our professional and personal lives. The rise of artificial intelligence is usually placed in the broader category of changes we have seen since the 2000s, a logical extension of automation and digitization in a world shaped by the Internet of Things.

Given its wide reach into all aspects of 21st-century work and social interactions, artificial intelligence has the potential to significantly influence communication and relationships, employment and the economy, inequity and social structure, urban and rural population shifts, and social and cultural norms, as well as to interfere with political discourse (Polyportis & Pahos, 2024). AI introduces complex ethical dilemmas, including privacy concerns, the potential for ubiquitous surveillance, the proliferation of biases, and problems arising from biased algorithmic decision-making. Existing concerns about data privacy, the digital divide, and growing disparities become even more pressing as we witness wide-scale adoption of artificial intelligence tools across different fields.

The role of artificial intelligence (AI) in aiding the dissemination of false information, fake news, and fabricated images and videos is a topic of growing concern and complexity. AI technologies have the potential to both positively and negatively impact the information ecosystem. They play a role in spreading misinformation through the creation of convincing false content, including deepfakes: videos and audio recordings manipulated to make it appear as if individuals are saying or doing things they never did. AI can produce photorealistic images or write convincing fake news articles, blurring the line between fact and fiction (Kapoor & Narayanan, 2023). It can also automate the generation and dissemination of false information on a scale that was previously unimaginable. For example, AI tools could be used to create a multitude of fake accounts on social media platforms to spread false information across networks almost instantly. AI algorithms can analyze vast amounts of data to identify the most effective ways to target false information at specific groups, amplifying the impact of disinformation campaigns.

Recent research reports by leading organizations such as the International Monetary Fund, Goldman Sachs, and the Organisation for Economic Co-operation and Development (OECD) indicate that AI’s impact on the labor market will be significant, with potential for both job displacement and job creation. Goldman Sachs reported that two in three occupations could be partially automated by AI (Briggs & Kodnani, 2023). According to the OECD Employment Outlook 2023, a key distinction of evolving artificial intelligence tools, including generative AI, is their capacity to automate non-routine cognitive tasks such as information ordering, memorization, perceptual speed, and deductive reasoning. This extends the scope and reach of automation into white-collar professions held by college graduates. The authors also note that while automation may replace routine jobs, new roles will arise as industry, education, and society transform in response to the accompanying affordances and challenges. McNeilly and Smith (2023) note that “Whether the changes are good or ill for individual workers will depend on their occupation, firm, individual capabilities and ability to adapt. Some will adjust better than others. There will be winners and losers.”

Artificial intelligence holds the promise of tremendous benefits to society, as long as its adoption is aligned with ethical frameworks and the mitigation of potential misuse, whether unintentional or by design. In health care fields, for example, artificial intelligence is already improving diagnostic accuracy, patient care, and treatment personalization, and it shows promise of providing more immediate access to health services. However, it also presents challenges related to data privacy, consent, and potential bias in treatment. While AI has the potential to significantly improve healthcare for the elderly, there is a risk of age-related biases in AI models if they are not trained on data from diverse age groups. There is also a risk of widening gender disparities in working with artificial intelligence and acquiring AI fluency. To date, early AI adopters are overwhelmingly male, with the median AI user being a male aged 35–49 (FII, 2024). Women are underrepresented in the field of AI development: in 2022, just one in four researchers who published academic papers on AI was female (Cairo, Lusso, & Aranda, 2023). This imbalance shapes the types of AI technologies that are developed and can lead to gender biases in AI algorithms.

Existing AI systems already exhibit gender biases, typically reflecting biases in the training data. This could perpetuate stereotypes and discrimination in areas like hiring, healthcare, and finance. Goldman Sachs data indicates that “6 in 10 men vs. 8 in 10 women in the US workforce are exposed to generative AI replacing their jobs” (McNeilly & Smith, 2023). The same report predicts that 300 million jobs could be replaced by AI. On the other hand, in a report from the Organisation for Economic Co-operation and Development, 63% of workers in finance and manufacturing agreed that AI improved job enjoyment, and 55% said that it improved mental health (OECD, 2023). It can be easier for people to talk to chatbots about mental health issues than to HR personnel, and chatbots can help direct them to the most appropriate on-demand resources for everyday stresses or, for more serious issues, to therapists who best fit their needs (Cohen, 2023). The IMF report states that women and highly educated workers are consistently more exposed to, but also more likely to benefit from, AI (Cazzaniga et al., 2024).

Other disparities include age and geographic distribution. Older generations may lag in their adoption of AI technologies, and the impact of AI on job automation may disproportionately affect older workers employed in fields replaced by new technologies. Older workers may be less adaptable and face additional barriers to mobility, as reflected in their lower likelihood of reemployment after termination; historically, they have demonstrated less adaptability to technological advances, and artificial intelligence may present a similar challenge (Cazzaniga et al., 2024). There is already a significant disparity in access to AI technology between developed and developing countries. Less developed regions may lack the educational resources and infrastructure necessary for AI development and implementation, while wealthier countries are more likely to benefit economically from AI, which would exacerbate global inequalities. The impact of AI is also likely to differ significantly across countries at different levels of development or with different economic structures (Cazzaniga et al., 2024; OECD, 2023).

We can learn from historical industrial and technological disruptions to navigate the challenges and opportunities presented by AI. Higher education will benefit from taking a balanced approach that leverages AI’s benefits while mitigating related risks and ensuring equitable access to its advantages.

AI in Society: Ethical and Legal Issues

AI began to affect society and the employment landscape prior to the introduction of ChatGPT. Customer service chatbots and self-checkout systems have become commonplace, and some jobs, particularly in the customer service and manufacturing sectors, have already been replaced by AI. Many resources, articles, and subject-matter experts assert that AI is a powerful tool and not a passing fad, and generative AI will continue to impact employment and the economy. Experts suggest that AI will continue to take on specific tasks but will not fully take the place of workers (Hawley, 2023). Although there are many concerns that AI will replace human work and interaction, AI is not perfect: while it can perform some tasks well, it cannot replace the human-to-human experience required by fields such as teaching and nursing.

Although AI may be used to replace task-based, entry-level jobs, new jobs will likely be created to design and manage it. Individuals and businesses must plan for the growth of AI and anticipate the ways in which they will be affected (Marr, 2023). From a higher education standpoint, this creates an opportunity to re-evaluate educational pathways to prepare students for a world in which AI is prevalent (Abdous, 2023). How can we prepare students to use AI effectively and to develop the skills required to work with and manage AI at a higher level?

Artificial intelligence has been used in one way or another in education for years: search engines, personal assistants on phones, assistive technology that increases accessibility, and other tools all use some form of applied artificial intelligence. However, the recent widespread availability of generative artificial intelligence tools across higher education gives rise to ethical concerns for teaching and learning, research, and instructional delivery. Implicit bias and representation (Chopra, 2023), equitable access to AI technologies (Zeide, 2019), AI literacy education (Calhoun, 2023), copyright and fair use issues (De Vynck, 2023), academic integrity, authenticity, and fraud (Weiser & Schweber, 2023; Knight, 2022), environmental concerns (Ludvigsen, 2023; DeGeurin, 2023), and ensuring the development of students’ cognitive abilities (UNESCO, 2022) all represent ethical challenges for higher education as AI integrates further into the curriculum, the classroom, and our work and personal lives.

Data sets play a critical role in machine learning: any AI built on an artificial neural network (including the models behind ChatGPT) must be trained on them, and their characteristics can critically mold the AI’s behavior. As such, it is vital to maintain transparency about these sets and to use sets that promote the ideals we value, for example by mitigating unwanted biases that may lead to a lack of representation or other harms (Gebru et al., 2022). Biases have already been identified in AI systems used in healthcare (Adamson & Avery, 2018; Estreich, 2019) as well as in auto-captioning (Tatman, 2017). The EEOC, DOJ, CFPB, and FTC issued a joint statement warning that the use of AI “has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes” (Chopra et al., 2023). The FTC is investigating OpenAI’s potential misuse of people’s private information in training its language models and possible violations of consumer protection laws (Zakrzewski, 2023). Sam Altman, CEO of OpenAI (the developer of ChatGPT), recently testified at a Senate hearing on artificial intelligence, expressing both concerns and hopes for AI. He warned about the need to be alert regarding the 2024 elections, and he suggested several categories for our attention, including privacy, child safety, accuracy, cybersecurity, disinformation, and economic impacts (Altman, 2023).
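To make the data set point concrete, the short sketch below illustrates one simple form of bias auditing: measuring how well demographic groups are represented in training data. It is a minimal, hypothetical example (invented column names and records, with the pandas library assumed), not a substitute for a full framework such as the datasheets approach of Gebru et al.

```python
# Minimal, hypothetical sketch of a demographic representation audit
# for a training data set. Column names and records are invented for
# illustration; a real audit would be far more thorough.
import pandas as pd

# Stand-in for a real training data set.
records = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-49", "35-49", "35-49", "50+"],
    "gender":    ["M", "M", "M", "F", "M", "M"],
})

def representation_report(df: pd.DataFrame, column: str,
                          threshold: float = 0.2) -> None:
    """Print each group's share of the data; flag shares below threshold."""
    shares = df[column].value_counts(normalize=True)
    for group, share in shares.items():
        flag = "  <-- underrepresented" if share < threshold else ""
        print(f"{column}={group}: {share:.0%}{flag}")

representation_report(records, "gender")     # flags the small share of "F"
representation_report(records, "age_group")  # flags the small share of "50+"
```

Even a check this simple makes the transparency argument tangible: if a data set cannot be inspected, its biases cannot be found.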

Furthermore, lawsuits have been filed alleging everything from copyright violations to data privacy breaches to fair use disputes (De Vynck, 2023). In addition to these legal challenges, labor concerns factor into this conversation, as low-wage, uncontracted workers, largely from the global South, have been used to train AI away from violent and disturbing content (Perrigo, 2023). Because AI is transforming labor and the economy through automation, higher education must respond to AI’s potential to displace workers in many industries; we must teach our students the unique attributes and capabilities that humans bring to the labor market (Aoun, 2017).

Environmental factors add to the list of ethical concerns, especially in terms of energy and water consumption. For instance, ChatGPT uses as much electricity as 175,000 people (Ludvigsen, 2023), and the engine behind it (GPT-3) required an estimated 185,000 gallons (700,000 liters) of water to train; each use of ChatGPT consumes roughly a one-liter bottle of water (DeGeurin, 2023). In a related environmental move, New York became, in November 2022, the first state to enact a temporary ban on crypto mining permits at fossil fuel plants (Ferré-Sadurní & Ashford, 2022). Academia must be cognizant of the environmental costs of generative AI.

UNESCO AI Ethics

AI ethics is a set of guiding moral principles and techniques that help ensure the responsible development and use of artificial intelligence (AI). These principles are intended to ensure that AI is safe, secure, humane, and environmentally friendly, and that all stakeholders, from engineers to government officials, use and develop AI responsibly.

Ethics Recommendations

The Recommendation on the Ethics of Artificial Intelligence is a document adopted by UNESCO in November 2021. It provides a framework of values, principles, and actions to guide the responsible development and use of AI systems. The recommendation emphasizes the respect, protection, and promotion of human rights, human dignity, and the environment throughout the AI system life cycle. It highlights the importance of diversity, inclusiveness, fairness, transparency, accountability, and sustainability in AI governance. The document outlines policy areas such as ethical impact assessment, governance, data policy, international cooperation, and more. It aims to ensure that AI technologies work for the benefit of humanity while preventing harm and promoting peace, justice, and interconnectedness in societies (UNESCO, 2022, pp. 7–22).

Transparency

Transparency plays a crucial role in AI systems by promoting accountability, trust, fairness, ethical considerations, legal compliance, public scrutiny, explainability, bias detection and mitigation, user empowerment, and regulatory compliance. It ensures that AI systems are accountable for their actions and decisions and builds trust between AI systems and users. In March 2024, the U.S. Department of Commerce (Goodman & United States Department of Commerce, 2024) concurred with the need for data transparency from AI tech companies. The government proposed that tech companies provide an “AI warning label,” much like a nutrition label on food products, detailing how personal data is used to train AI models:

Standardizing a baseline disclosure using artifacts like model and system cards, datasheets, and nutritional labels for AI systems can reduce the costs for all constituencies evaluating and assuring AI. As it did with food nutrition labels, the government may have a role in shaping standardized disclosure, whatever the form (Goodman & United States Department of Commerce, 2024, p. 71).
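As a purely hypothetical illustration of what such a standardized disclosure might contain, the sketch below renders a nutrition-label-style model card as a small data structure. Every field name and value is invented for illustration; none reflects an official or proposed format.

```python
# Hypothetical "nutrition label"-style disclosure for an AI model,
# loosely inspired by model cards and datasheets. All field names
# and values here are invented for illustration only.
model_card = {
    "model_name": "example-assistant-v1",
    "developer": "Example Corp",
    "intended_use": "general-purpose text assistance",
    "training_data_sources": ["licensed corpora", "public web text"],
    "personal_data_used": True,  # the disclosure the proposal emphasizes
    "opt_out_mechanism": "https://example.com/opt-out",
    "known_limitations": ["may reproduce biases present in training data"],
    "last_independent_audit": "2024-03-01",
}

# Print the label the way a consumer-facing summary might display it.
for field, value in model_card.items():
    print(f"{field}: {value}")
```

Standardizing even a handful of such fields would let consumers and regulators compare AI systems much as nutrition labels let shoppers compare foods.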

In these ways, transparency helps to address biases and discrimination, enables public scrutiny and oversight, facilitates the explainability of AI systems, empowers users, and ensures legal and regulatory compliance (UNESCO, 2022, p. 22).

It is important to note that while AI technologies can greatly enhance education, according to the Recommendation they should always be used in a way that respects and protects the rights and well-being of students. Privacy, data security, and algorithmic transparency should be carefully addressed for the responsible and ethical use of AI in education.

According to UNESCO (2022), stakeholders that should be involved in the monitoring and evaluation processes include:

  • Governments are responsible for developing and implementing legal and regulatory frameworks, as well as ensuring compliance with international law and human rights obligations.
  • Intergovernmental organizations, such as the United Nations and its specialized agencies like UNESCO, can provide guidance, facilitate cooperation, and promote the adoption of ethical standards at the international level.
  • The technical community, including researchers, programmers, engineers, and data scientists, has the expertise to assess the technical aspects of AI systems and identify any potential risks or biases.
  • Civil society organizations, including non-governmental organizations (NGOs) and advocacy groups, can raise awareness, advocate for transparency and accountability, and ensure that the interests and rights of individuals and communities are protected.
  • Academia and researchers can conduct independent research, provide expertise, and contribute to the development of ethical guidelines and best practices.
  • The media can raise awareness, report on any potential risks or abuses, and hold AI actors accountable.
  • Policy-makers at the national and international levels can develop policies and regulations, review the impact of AI systems, and make informed decisions based on the findings of monitoring and evaluation processes.
  • Private companies have a responsibility to ensure that their AI systems are ethically implemented and to address any potential risks or biases.
  • Human rights and equality groups can provide guidance, investigate complaints, and ensure that AI systems comply with human rights standards.
  • Youth and children’s groups are directly affected by AI technologies; their perspectives and experiences should be considered to ensure that AI systems are inclusive and do not discriminate against or harm young people.

It is important to note that the involvement of these stakeholders should be inclusive and diverse, bringing together different perspectives, experiences, and interests. Collaboration and dialogue among these stakeholders are essential to ensure effective monitoring and evaluation of AI systems.

License


Optimizing AI in Higher Education: SUNY FACT² Guide, Second Edition Copyright © by Faculty Advisory Council On Teaching and Technology (FACT²) is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
