Your Name and Title: Araminta S. Matthews, MFA, GCDF, DC / Senior Instructional Designer / Professor of Instructional Design, Ethics, Information Literacy, Psychology, Writing
Library, School, or Organization Name: Kennebec Valley Community College and the University of Maine
Co-Presenter Name(s): N/A
Area of the World from Which You Will Present: Maine (EST, GMT-5)
Language in Which You Will Present: English
Target Audience(s): Instructional designers, educators, and talent developers looking to integrate AI-generated content into professional or instructional practice and to apply the ACRL's Information Literacy Standards to that purpose
Short Session Description (one line): Apply the ACRL's Information Literacy Standards to AI-generated content: improve your professional productivity and instructional strategy, design student use cases, and detect AI-generated content.
Full Session Description (as long as you would like): Senior Instructional Designer and part-time professor of instructional design Araminta Matthews was recently accepted into the March 2025 edition of Advances in Online Education: A Peer-Reviewed Journal as lead co-author of the article "Pay Attention to the Chatbot Behind the Curtain: A Framework and Toolkit for Critical Thinking and Information Literacy when Integrating AI Is No Place Like Home." She has studied how Large Language Model chatbots are programmed to generate what is statistically likely, which is "not necessarily true" (Gent, 2023, p. 34), and how average users looking to improve productivity or introduce AI into instructional scenarios can both evaluate the credibility of this algorithmically generated content and correctly cite their use of such tools. From this work, Araminta has developed a toolkit and a choice-based flowchart to help you identify use cases for effective AI integration. Determining the need for intellectual property attribution or potential copyright infringement, identifying the source of generated content, applying a credibility test to that content, and distancing oneself from generated content by citing its generation are all methods we can use to bring AI into a proactive conversation that improves our efficacy with these tools. One example from the article: the co-authors asked a major LLM chatbot the same question daily for two months, "Is 2 pounds of feathers heavier than 1 pound of lead?", and received variations of the same incorrect response throughout that period: "No! 2 pounds (of feathers) is NOT heavier than 1 pound (of lead) because feathers have less density than lead," forgetting, of course, that a pound is a pound is a pound. Similar examples will be shared in this interactive session, where attendees will be encouraged to play with chatbots to test their reliability and to identify safe use cases in which AI can not only improve your productivity but also help you determine whether work you are reading is itself AI-generated.
Reference: Gent, E. (2023). How to think about AI... And how to live with it. New Scientist, 259(3449), 32–40.
Websites / URLs Associated with Your Session: Not at this time, but if selected, I will create a public-facing resource with CC-licensed content, shareable alongside the session's recording, to support attendees in playful engagement with LLM chatbots, attribution of AI-generated content, and examination of the credibility of AI-generated content.