AI Tools Can Support Teaching: Examples

AI tools are useful for faculty as well as for students. One use of these tools is to generate ideas and content for our courses. Below are examples from Dr. Keith Landa (SUNY Purchase, Director of the Teaching, Learning, and Technology Center) and Stephanie Pritchard (SUNY Oswego).

Example 1: Course Design

Dr. Landa wanted to use Claude, a generative AI chatbot, to see what course learning objectives he might consider for a new class on AI Literacy. Here’s his exchange:

Prompt: “As a higher education faculty member designing a course on AI Literacy, what would be the top learning outcomes that you would list in your syllabus?”

AI: Here are some potential key learning outcomes I would aim for in a course focused on AI literacy:

  • Understand fundamental concepts of artificial intelligence – e.g. machine learning, neural networks, data dependence, training vs. inference.
  • Recognize common applications of AI – such as computer vision, natural language processing, recommender systems, robotics.
  • Evaluate societal impacts of AI – economic, legal, ethical issues surrounding use cases.
  • Detect misinformation about AI capabilities – critically assess claims in media/marketing vs reality.
  • Analyze AI risks and mitigation strategies – e.g. data bias, job automation, legal liability, superintelligence.
  • Discuss governance challenges – privacy, accountability, transparency, oversight.
  • Consider philosophical questions – machine consciousness, human-AI relationships, AI existential risk.
  • Empower learners as citizens – understand basics to follow policy issues, separate hype from reality, advocate responsibly.
  • Hands-on experience – interact with AI models, experiment with sample projects, complete risk assessment.

AI: The goal would be to not just impart technical knowledge but also develop students’ critical thinking on the real-world impacts of AI. Focusing on risks and responsible innovation priorities would equip learners to participate ethically and intelligently in our AI-integrated world.

There are too many learning outcomes here for a single course, so this list would need to be focused and refined to settle on the appropriate learning objectives for an AI Literacy course. But it is a good start, and faculty can ask the AI for more details to begin brainstorming class activities aligned to these outcomes (e.g., “What learning activities and assessments would you map onto the learning outcomes listed above?”). AI can also help faculty begin to assemble a list of course resources (e.g., “What textbooks, articles, web videos, and other learning resources would you assign for this class?”). Of course, faculty will still need to carefully vet the AI’s responses to ensure the information is reliable. This article from The New York Times provides additional information about generative AI and its hallucinations.
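For faculty comfortable with a bit of scripting, the same brainstorming can be done through a provider’s API rather than the chat interface. Below is a minimal sketch using the Anthropic Python SDK; the SDK usage and model name are our assumptions, not part of Dr. Landa’s example, and any current model can be substituted.

```python
# Minimal sketch: sending Dr. Landa's course-design prompt through the
# Anthropic Python SDK instead of the chat interface.
# The model name below is an assumption; substitute a current model.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "As a higher education faculty member designing a course on "
            "AI Literacy, what would be the top learning outcomes that "
            "you would list in your syllabus?"
        ),
    }],
)

print(response.content[0].text)
```

Running the prompt in a script makes it easy to save and compare several drafts of outcomes side by side.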

Example 2: Developing Question Banks for Formative Testing

This LinkedIn post, “Yes, ChatGPT can answer exam questions. But can it write them too?”, provides an interesting example of how ChatGPT integrates different levels of Bloom’s taxonomy to write multiple choice questions on natural selection. However, this sort of one-off question generation is not an efficient way to build a large test bank for repeated formative testing.

Here’s an example of the kind of prompt Dr. Landa has used to create test bank questions for a class he’s offered previously:

Prompt: “Please write 10 multiple choice questions based on the content at this web page, https://astrobiology.com/1998/04/what-is-astrobiology.htm, targeted at the remembering level of Bloom’s taxonomy, and provide the correct answer for each.”

Dr. Landa has used this prompt in the free version of ChatGPT and also in Claude; both chatbots recognize and can correctly reference the different levels of Bloom’s taxonomy and both are able to access the content using the external URL.

Dr. Landa also altered the prompt to generate questions at the “analyzing” level of Bloom’s taxonomy. While it is still necessary to review the questions that are produced by the chatbots, this approach makes it much easier for us to set up learning activities to promote ongoing retrieval practice by our students.
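For larger banks, the same prompt can be looped over several Bloom’s levels in a script. The sketch below reuses the API setup from Example 1 and is illustrative only, not Dr. Landa’s actual workflow; note too that, unlike the chat interfaces he used, an API model may not fetch the URL itself, so in practice the page text may need to be pasted into the prompt.

```python
# Illustrative sketch: building a question bank across several Bloom's
# taxonomy levels with repeated API calls. Not Dr. Landa's actual workflow.
# Caveat: an API model may not browse, so the page text may need to be
# included in the prompt directly rather than referenced by URL.
import anthropic

client = anthropic.Anthropic()
URL = "https://astrobiology.com/1998/04/what-is-astrobiology.htm"
LEVELS = ["remembering", "understanding", "applying", "analyzing"]

question_bank = {}
for level in LEVELS:
    prompt = (
        f"Please write 10 multiple choice questions based on the content "
        f"at this web page, {URL}, targeted at the {level} level of "
        f"Bloom's taxonomy, and provide the correct answer for each."
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model name
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    question_bank[level] = response.content[0].text

for level, questions in question_bank.items():
    print(f"=== {level} ===\n{questions}\n")
```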

To use AI to generate sample questions from PDF articles used in class, faculty can use tools that allow file uploads (like Claude) or that let them copy and paste in large source texts.
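For those PDFs, one scriptable route is to extract the text locally and include it in the prompt. A short sketch using the pypdf library (the filename is hypothetical):

```python
# Sketch: extracting text from a course PDF so it can be included in a
# question-generation prompt. The filename is hypothetical.
from pypdf import PdfReader

reader = PdfReader("course-article.pdf")  # hypothetical file
article_text = "\n".join(page.extract_text() or "" for page in reader.pages)

prompt = (
    "Please write 10 multiple choice questions based on the article below, "
    "targeted at the remembering level of Bloom's taxonomy, and provide "
    "the correct answer for each.\n\n" + article_text
)
# `prompt` can now be pasted into a chatbot that accepts long inputs,
# or sent through an API call like the ones sketched above.
```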

 

Example 3: Creating Rubrics

Some course-related material, like rubrics, can be extremely time-consuming to create, and generative AI can help faculty speed up that process. While AI won’t be able to create a perfect rubric on the first try, users can modify prompts to get a better template.

Here’s an example based on an assignment that Stephanie Pritchard (SUNY Oswego) created to assess student writing at the end of a summer Educational Opportunity Program (EOP). The assignment has been condensed to save space:

Prompt: Create a rubric to assess the following writing assignment: The purpose of writing persuasively is to get your reader to agree with your opinion. You have more freedom to make claims without always using research to back up what you’re saying.

What grade do you think you should receive for the summer program? You can use any examples from the program to make your point, but you should choose at least two reasons from your EOP experience to answer this question. Here are some options for you to consider (you are not limited to this list):

    • your class attendance and class participation (including reading workshops, writing workshops, etc.)
    • grades you received on major assignments
    • your attendance and participation at social events as well as your general attitude during the program
    • what you’ve learned (or not learned) from this experience

How you approach this is totally up to you – your tone can be formal or informal, funny or serious, creative or more traditional. Effective persuasive writing always has a purpose in mind.

AI: Criteria | Excellent (5) | Good (4) | Satisfactory (3) | Needs Improvement (2) | Inadequate (1)

Claim | The writer clearly states their position on the grade they should receive and is persuasive with compelling arguments. | The writer’s claim states their position on the grade they should receive, but might benefit from more elaboration. | The writer’s claim is somewhat clear but may require further elaboration or additional examples to strengthen the argument. | The writer’s claim lacks clarity or specificity, making it challenging to understand their intended grade or reasons for it. | The writer’s claim lacks clarity and specificity, and does not take a clear stance on the grade they should receive.

The AI’s first attempt at the rubric was hard to follow (it generated six additional paragraphs that looked like different versions of this on the following topics: use of examples, organization, persuasiveness, tone and style, and overall impact). Professor Pritchard decided to refine the prompt a little more:

Prompt: I find the formatting of your response confusing. Can you answer the same prompt but format your response differently?

AI: Sure! Let’s break down the rubric in a more straightforward and easy-to-read format:

Claim:

    • Excellent (5): The writer’s claim is clear, persuasive, and well-defined, taking a strong stance on the grade they should receive. Compelling arguments are presented.
    • Good (4): The writer’s claim is clear, but may benefit from further elaboration or more compelling arguments.
    • Satisfactory (3): The writer’s claim is somewhat clear, but lacks specificity and may require additional examples to strengthen the argument.
    • Needs Improvement (2): The writer’s claim lacks clarity and specificity, making it challenging to understand their intended grade or reasons for it.
    • Inadequate (1): The writer fails to take a clear stance on the grade they should receive.

Use of Examples:

    • Excellent (5): The writer effectively uses specific examples from the EOP program to support their claim, making their persuasive writing more convincing.
    • Good (4): The writer incorporates relevant examples from the EOP program to support their claim, but might benefit from stronger connections between examples and arguments.
    • Satisfactory (3): The writer includes some relevant examples from the EOP program, but their connection to the claim may be somewhat unclear or disjointed.
    • Needs Improvement (2): The writer provides limited or irrelevant examples to support their claim, weakening the persuasiveness of their writing.
    • Inadequate (1): The writer does not include any examples to support their claim, significantly weakening their persuasive writing.

While these two examples aren’t perfect, the chatbot was able to establish some general parameters and provided ideas about what parts of the assignment could be assessed. The AI initially suggested assessing based on the following criteria: claim, use of examples, organization, persuasiveness, tone and style, and overall impact. Professor Pritchard decided to ask the chatbot for some additional suggestions:

Prompt: I like the claim, use of examples, and organization points, but found persuasiveness and tone/style to not be as necessary. Can you come up with a few other points that we might consider when assessing student writing for this assignment?

AI: Certainly! Here are a few additional points that you might consider when assessing student writing for this assignment:

Engagement:

    • Excellent (5): Keeps the reader engaged throughout the writing with a compelling introduction, thought-provoking points, and an effective conclusion.
    • Good (4): Maintains reader’s interest but might benefit from a more captivating introduction or stronger conclusion.
    • Satisfactory (3): Some engagement, but the writing may lack a consistent flow and reader interest.
    • Needs Improvement (2): Fails to engage the reader effectively due to disjointed points or weak conclusion.
    • Inadequate (1): Lacks any form of engagement.

Clarity:

    • Excellent (5): Ideas are expressed clearly and concisely, making the writing easy to follow.
    • Good (4): Generally clear, but some points could use further clarification.
    • Satisfactory (3): Ideas are somewhat clear, but the writing might be confusing or overly verbose in places.
    • Needs Improvement (2): Lack of clarity, reader struggles to understand key points due to poor expression.
    • Inadequate (1): Extremely unclear, ideas are virtually impossible to comprehend.

Creativity:

    • Excellent (5): Demonstrates a high level of creativity in approach, examples, and presentation.
    • Good (4): Shows creativity, but could benefit from more original ideas or approaches.
    • Satisfactory (3): Some attempt at creativity, but the writing may feel somewhat predictable.
    • Needs Improvement (2): Lacks creativity and originality, ideas are mundane and unremarkable.
    • Inadequate (1): Completely devoid of creativity.
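Professor Pritchard’s back-and-forth can also be reproduced in a script: the trick is to append each reply to a running message history so the model can revise its own earlier rubric. A minimal multi-turn sketch, again assuming the Anthropic SDK from the earlier examples (the assignment text is elided here, as above):

```python
# Sketch of a multi-turn rubric-refinement conversation. Each follow-up
# prompt is sent with the full history so the model can revise its own
# earlier output. Assumes the SDK setup from the earlier sketches.
import anthropic

client = anthropic.Anthropic()
history = []

def ask(prompt: str) -> str:
    """Send a prompt with the conversation so far and record the reply."""
    history.append({"role": "user", "content": prompt})
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model name
        max_tokens=2048,
        messages=history,
    )
    reply = response.content[0].text
    history.append({"role": "assistant", "content": reply})
    return reply

draft = ask("Create a rubric to assess the following writing assignment: ...")
reformatted = ask(
    "I find the formatting of your response confusing. "
    "Can you answer the same prompt but format your response differently?"
)
print(reformatted)
```

Because the full history is resent on each turn, later follow-ups like “can you come up with a few other points?” behave just as they do in the chat window.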

 

License


Optimizing AI in Higher Education: SUNY FACT² Guide, Second Edition Copyright © by Faculty Advisory Council On Teaching and Technology (FACT²) is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
