AI and Generative AI in Adult Education

1.09 Evaluating and verifying AI outputs

Generative AI tools are undeniably impressive in their capacity to produce a vast array of content rapidly: they can draft text, create images, and generate audio or code with remarkable speed. However, these tools are not infallible; they do not always provide accurate or contextually appropriate answers. A significant concern is that AI can produce content that appears correct, coherent, and authoritative, yet actually contains factual errors, is misleading, or is entirely fabricated. This phenomenon of AI confidently inventing information that is not grounded in its training data or in reality is often referred to as an AI “hallucination.” For adult learners seeking reliable knowledge and educators aiming to provide high-quality instruction, the habit of carefully evaluating and rigorously verifying (checking against trusted sources) any AI-generated material is not just good practice; it is an essential skill for ensuring trustworthy, reliable, and effective learning.

WHY VERIFICATION MATTERS SO MUCH – THE POTENTIAL CONSEQUENCES

The implications of using unverified AI-generated content can be significant and far-reaching. Imagine an adult learner using an AI tool to help draft their Curriculum Vitae (CV). If the AI suggests technical skills, qualifications, or past experiences that the individual does not actually possess, and this unverified information is then submitted to a potential employer, it could lead to serious misrepresentation, embarrassment during an interview, or damage to their professional reputation. Consider an educator who uses an AI to quickly generate a quiz for an adult numeracy class. If some of the AI-generated questions contain mathematical errors, or if the provided answers are incorrect, learners may become deeply confused, internalise incorrect procedures, receive unfair assessment results, and lose confidence in the subject. In specialised contexts, such as the HER[AI]TAGE project, which deals with preserving authentic cultural heritage, presenting AI-generated information about historical events, traditional practices, or cultural interpretations that has not been meticulously verified by subject-matter experts and community knowledge holders could spread inaccuracies and misinterpretations, or even trivialise deeply meaningful cultural narratives. The ease of generation must be matched by the rigour of verification.

HOW TO EFFECTIVELY EVALUATE AND VERIFY AI OUTPUTS

A structured approach can help. The SIFT method (Stop, Investigate the source, Find better coverage, Trace claims) is a practical framework for evaluating online information, highly relevant for AI-generated content. Additionally, checklists focusing on accuracy, bias, and logic are useful.

The following outlines the key evaluation steps, incorporating elements of SIFT and other best practices:

STOP
Key actions: Pause before engaging deeply. Assess your initial trust in the AI tool or the platform presenting the AI content. If you feel overwhelmed, remind yourself of your information goal.
Questions to ask: Do I inherently trust this AI tool’s output without verification? Is the platform presenting this AI content known for reliability? Am I clear on what I need to verify, or am I going down a rabbit hole?

INVESTIGATE THE SOURCE (of the AI tool and its output)
Key actions: Identify the AI model or tool used, if known. Understand the provider’s reputation, potential biases, or commercial interests. For content about AI, investigate the author’s or publisher’s expertise and agenda.
Questions to ask: What specific AI model generated this (e.g., GPT-4o, Claude 4)? What is known about its training data or inherent biases? If this is an article about an AI tool, who wrote it, and what might be their perspective or purpose?

FIND BETTER COVERAGE (cross-check with reliable sources)
Key actions: Seek information on the same topic from multiple, diverse, and reliable human-expert sources (e.g., academic journals, reputable news outlets, official reports, textbooks). Compare the AI’s claims with these established sources.
Questions to ask: Do trusted, independent (non-AI) sources corroborate the factual claims made by the AI? Are there discrepancies, or alternative perspectives presented by human experts? Is the AI output presenting a fringe view as mainstream? Is the information up to date?

TRACE CLAIMS, QUOTES, AND MEDIA (verify specifics)
Key actions: If the AI provides citations or refers to specific data or studies, attempt to find and verify the original sources; be aware that AI can “hallucinate” sources. For AI-generated images or media, consider whether they accurately represent what they claim or could be misleading.
Questions to ask: Can I find the actual research paper, news article, or dataset the AI is supposedly referencing? Does the original source support the AI’s interpretation? For images and media: is the visual representation accurate, or could it be subtly biased or misrepresent reality?

SCRUTINISE FOR COMPLETENESS, NUANCE, AND CONTEXT
Key actions: Assess whether the information is comprehensive enough for the purpose, or whether it lacks important details or contextual understanding.
Questions to ask: Is the AI’s answer too simplistic? Does it miss key aspects of the topic? Is the information relevant to my specific situation or local context (e.g., for HER[AI]TAGE)?

ASSESS CLARITY, COHERENCE, AND SUITABILITY
Key actions: Read the AI-generated text critically. Does it make logical sense? Is the language clear, precise, and appropriate for the intended audience’s understanding and language proficiency?
Questions to ask: Is the text well structured and easy to follow? Is it free of jargon, or is jargon explained? Is the tone appropriate? Could it be misunderstood?

ACTIVELY LOOK FOR BIAS AND STEREOTYPES
Key actions: Critically review the content for signs of inherent bias, harmful stereotypes, or a lack of diverse and equitable representation. Check for influence from commercial or advocacy interests.
Questions to ask: Does the content unfairly favour one viewpoint or group? Is it culturally sensitive and respectful? Does it use neutral language, or is it overly dramatic or emotionally manipulative?

DEVELOPING AND HONING CRITICAL THINKING SKILLS

Adults typically bring a rich tapestry of life experience, accumulated knowledge, and common sense to their learning endeavours. These are invaluable assets when interacting with AI. If an AI-generated summary of current workplace safety procedures seems outdated or contradicts your practical experience, or if an AI-powered translation tool produces wording that sounds very odd, unnatural, or culturally inappropriate, trust your instincts. These “red flags” are important cues to investigate further, question the AI’s output, and diligently verify the information with a human expert or by consulting additional reliable sources. Critical thinking in the age of AI is not about outright rejection of the technology, but about engaging with it thoughtfully, sceptically, and with an evaluative mindset. Broader AI literacy frameworks, like UNESCO’s AI Competency Framework for Students and the developing EC/OECD AILit Framework, also highlight the importance of critical judgment.

LEVERAGING FEEDBACK AND COLLABORATIVE GROUP VERIFICATION

In adult education settings, fostering a collaborative approach to evaluating AI-generated materials can be highly effective and enriching. Working in pairs or small groups to review and critique AI outputs allows learners to share their diverse perspectives, pool their knowledge, and learn from each other’s critical insights. For example, after using an AI to generate a list of potential solutions to a complex community problem for a HER[AI]TAGE-related project, learners can engage in a structured group discussion to assess which solutions are the most realistic, ethically sound, culturally appropriate, and likely to be effective in their specific local context. They can also collaboratively identify which AI-suggested solutions might require significant improvement or further research, or might be entirely impractical. Institutional guidelines often state that AI-generated materials must be reviewed and adapted by teachers, and that automated grading should be treated as a draft.

By consistently and diligently practicing these strategies for evaluation and verification, adult learners and educators can more confidently and effectively harness the considerable power of AI tools. This critical engagement helps to minimize the inherent risks of errors, misinformation, and bias, ultimately leading to more robust, reliable, and meaningful learning outcomes.

PRACTICAL EXAMPLES

  • An adult learner is using an AI chatbot to get a detailed explanation of a complex scientific concept for their advanced biology course. After receiving the AI’s explanation, they meticulously compare it with the relevant chapter in their peer-reviewed university textbook, a recent review article from a reputable scientific journal, and lecture notes from their professor to ensure they have a correct, nuanced, and comprehensive understanding of the topic.
  • A workplace educator is tasked with creating a new internal policy document on ethical data handling for employees. They use two different AI text generators to create initial drafts of the policy. They then carefully compare the two outputs, identifying common core principles, noting differences in emphasis or suggested clauses, and meticulously checking everything against their company’s specific legal obligations (like GDPR), industry best practices, and existing HR guidelines before compiling a final, robust policy document.
  • A group of adult learners is collaborating on a presentation about significant local historical landmarks as part of a HER[AI]TAGE community engagement project. They use an AI research assistant to gather initial facts, dates, and anecdotes related to these landmarks. Subsequently, they divide the landmarks among themselves, and each member takes responsibility for rigorously verifying the AI-generated information by consulting local library archives, official museum websites, historical plaques, and, where possible, by conducting short interviews with members of the local historical society or long-term residents.
  • A tutor in a critical thinking workshop consistently encourages learners to adopt a “fact-checking mindset”, applying a method like SIFT to any surprising, novel, or significant piece of information they encounter, regardless of whether it originates from an AI tool, a website, a news report, or a statement made by a classmate. This practice helps develop a lifelong habit of critical inquiry and healthy scepticism.
  • An adult who is diligently revising for a comprehensive citizenship test uses an AI-powered app to practise answering a wide range of potential interview questions. However, they make it a strict rule to always cross-reference the AI’s answers regarding specific laws, government structures, key historical dates, and citizens’ rights with the official government-issued study materials and websites to ensure accuracy.
  • An educator is using an AI image generator to create a series of illustrations for a children’s storybook based on a traditional local folktale, intended for use in an intergenerational learning program connected to HER[AI]TAGE. Before finalizing the images, they arrange a feedback session with community elders who are deeply familiar with the tale. They show the AI-generated images to the elders to verify if the depictions of traditional clothing styles, historical settings, character representations, and symbolic elements are culturally appropriate, respectful, and accurate according to local traditions.
  • A learner writing a research paper on the socio-economic impacts of a new technology uses an AI tool to find supporting statistics. The AI provides several figures. The learner then traces each statistic back to its original source (e.g., a government report, an academic study) using the “Trace Claims” step of SIFT to verify its accuracy, methodology, and the context in which it was presented before including it in their paper.