Using AI to Develop Research and Scholarship
AI Literacy
Given the nature of AI, ironic as it may sound at first, teaching AI should, first and foremost, be about teaching students to retain their agency and to participate in the training of the tools themselves (Dai et al., 2023). With a platform like ChatGPT, whose training data extends only to January 2022, students who are not offered basic, if not intermediate, AI literacy training will by default rely too heavily on these nascent tools, and the output will be suboptimal.
The teaching of AI literacy as it regards prompt engineering (or incremental prompting: breaking a larger goal down into smaller tasks), though far from the norm, will be key moving forward (Lingard, 2023; Wallter, 2024). This includes submitting the writing itself, i.e., the research under consideration and/or one's own research (from which the bulk of the training is imparted), along with the purpose, audience, tone, and length. These criteria help researchers connect the dots between their agency (training, prompting, critical thinking) and the end results.
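To make these criteria concrete, the sketch below shows one way incremental prompting might look in practice. It assumes the OpenAI Python client and an API key; the file name, model, and research topic are illustrative placeholders rather than a prescribed workflow, and the same breakdown can be carried out directly in a chat interface.

```python
# A minimal sketch of incremental prompting, assuming the OpenAI Python client (>=1.0)
# and an OPENAI_API_KEY environment variable; file name, topic, and model are placeholders.
from openai import OpenAI

client = OpenAI()

# Step 1: frame the task with the criteria named above: purpose, audience, tone, length.
framing = (
    "You are assisting with a literature review on retrieval practice in "
    "undergraduate biology courses.\n"
    "Purpose: identify gaps in existing studies.\n"
    "Audience: faculty researchers.\n"
    "Tone: formal and scholarly.\n"
    "Length: roughly 300 words."
)

# Step 2: submit the writing itself, i.e., the researcher's own draft (hypothetical file).
draft = open("my_draft_section.txt").read()

messages = [
    {"role": "system", "content": framing},
    {"role": "user", "content": "Here is my draft section:\n" + draft},
    {"role": "user", "content": "First, summarize the main claims in my draft."},
]
summary = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": summary.choices[0].message.content})

# Step 3: only after reviewing the summary, move to the next, narrower task.
messages.append({
    "role": "user",
    "content": "Now list three gaps in the literature that my draft does not address.",
})
gaps = client.chat.completions.create(model="gpt-4o", messages=messages)
print(gaps.choices[0].message.content)
```

The point is not the code itself but the pattern: the framing criteria and the researcher's own writing are supplied first, and the larger goal is broken into smaller tasks that the researcher reviews at each step, preserving their agency over the result.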
Opportunities
Opportunities for the use of AI can be found in all stages of the research process, beginning with idea generation and working through literature reviews, identifying and preparing data, determining and implementing testing frameworks, and analyzing results (Xames and Shefa, 2023). Tasks involved in securing funding for research, including writing funding proposals, are also prime candidates. AI may not only assist with these tasks; it is also being considered as a primary actor in them (Messeri and Crockett, 2024; Bano et al., 2023).
AI has certainly already been tagged for its ability to offer research assistance, be it for faculty or students. Circling back to the topic of agency in the AI interface, several new AI tools base their output on required input (i.e., PDFs), giving researchers the ability to train the tool as they use it (e.g., Humata, ChatPDF, Research Rabbit). Other AI tools have generative capacity already trained on specific scholarly writing [read: not the entire internet up to 2022] (e.g., Elicit, Scite, Consensus). And as with most AI platforms, logging in typically enables usage history to be recorded and, in the more adept tools, collections to be created and a user profile to be built, with tailored, trained output to follow, not wholly different from a commercial web browser.
Risks
All of the risks that come along with AI tools in other domains follow them into the research domain. Most of these risks apply to traditional forms of research as well, but they have new sources and take new forms when AI is involved. Bias may be introduced through missing and incomplete data sets as well as through the algorithms used to train the AI models (Bano et al., 2023). (Several types of bias are explained in Appendix B.) There are ethical concerns regarding the source of the data used to train the models and whether it has been appropriately credited, as well as questions about how to ensure the fairness and reproducibility of research conducted with the use of (or entirely by) AI. As AI is relied upon more and more, there are risks of reproducing old patterns of bias and of excluding diverse perspectives, while it becomes increasingly difficult to identify when this is happening (Messeri & Crockett, 2024; Shah, 2024). With all of this in mind, the European Commission has released a downloadable set of living guidelines on the responsible use of generative AI in research, based on reliability, honesty, respect, and accountability (Directorate-General for Research and Innovation, 2024).
And although the question of how to develop a student's confidence, independence, and scientific maturity is not new, easy access to tools that appear able to do the work of research for them introduces significant new implications. Students may not be confident about their contributions. It will be important to consider the following:
- As part of the research process, students are learning and using foundational principles to guide new discoveries; how will AI affect this?
- We need to make sure students still see their own value and offer their own original thoughts and contributions.
- We need to find the balance between using the tools and developing students' own skills and strengths.
- Student agency must be retained.
One new risk is in the area of privacy. Traditional research methods and processes closely guard intellectual property before ideas are published; normally a thesis might be embargoed, for example. New structures will need to be developed and put in place to address how similar constraints can apply in a world in which ideas might be ingested into AI models during the research process, whether inadvertently or intentionally, before they are ready to be shared.
Citing/Disclosing
As with any other source, it is essential that AI is cited and disclosed clearly. Several common citation resources now include guidance on citing information derived from generative AI. Some recent examples follow, although these standards are still evolving and should not be considered normative:
APA Style Example Text and Reference
Text
When prompted with “Is the left brain right brain divide real or a metaphor?” the ChatGPT-generated text indicated that although the two brain hemispheres are somewhat specialized, “the notion that people can be characterized as ‘left-brained’ or ‘right-brained’ is considered to be an oversimplification and a popular myth” (OpenAI, 2023).
Reference
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
MLA Style Example Text and Reference
Text
While the green light in The Great Gatsby might be said to chiefly symbolize four main things: optimism, the unattainability of the American dream, greed, and covetousness (“Describe the symbolism”), arguably the most important—the one that ties all four themes together—is greed.
Reference
“Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.
(Citation information is derived from the Harvard LibGuide on Citing Generative AI, which was itself adapted from a LibGuide created by Daniel Xiao, Research Impact Librarian at Texas A&M University Libraries.)