
While Artificial Intelligence (AI) and its rapidly advancing subset, Generative AI (GenAI), offer significant benefits and transformative potential for adult education, it is crucial to approach their use with a strong ethical compass and a clear awareness of potential pitfalls, particularly the pervasive issue of bias. In the context of AI, ethics encompasses the principles and values that guide right, fair, and just conduct in the design, development, deployment, and use of these technologies, including consideration of their potential impact on individuals, groups, and society as a whole. Bias in AI refers to systematic and often unfair tendencies in an AI system’s outputs or decision-making processes, which may disproportionately favour or disadvantage certain ideas, groups of people (based on characteristics such as gender, race, age, socioeconomic status, or cultural background), or types of information. This bias is frequently, though not always, unintentional, often stemming from the data used to train the AI models.
UNDERSTANDING BIAS IN AI: ORIGINS AND MANIFESTATIONS
AI systems, especially the large-scale models that power GenAI, learn by analysing and identifying patterns in the vast quantities of data they are trained on. This training data is typically sourced from the internet, extensive collections of books, articles, images, and other human-created content. If this foundational data itself reflects existing human biases, societal stereotypes, historical inequities, or systematically underrepresents certain groups or perspectives, the AI model can inadvertently learn and internalise these biases. Consequently, the AI may reproduce, or even amplify, these biases in its outputs, leading to unfair, misleading, or harmful results.
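To make this mechanism concrete, the following minimal Python sketch (a deliberately simplistic toy, not a real training pipeline) shows how a purely statistical learner reproduces whatever skew its training data contains:

```python
from collections import Counter

# A deliberately skewed toy "training corpus": most sentences pair
# "engineer" with "he" and "nurse" with "she", mirroring patterns
# found in much real-world text.
corpus = [
    "he is an engineer", "he is an engineer", "he is an engineer",
    "she is an engineer",
    "she is a nurse", "she is a nurse", "she is a nurse",
    "he is a nurse",
]

def pronoun_counts(occupation: str) -> Counter:
    """Count which pronoun co-occurs with an occupation in the corpus."""
    counts = Counter()
    for sentence in corpus:
        if occupation in sentence:
            counts[sentence.split()[0]] += 1
    return counts

def most_likely_pronoun(occupation: str) -> str:
    """A naive 'model' that simply predicts the majority pronoun --
    exactly how a statistical learner absorbs the skew in its data."""
    return pronoun_counts(occupation).most_common(1)[0][0]

for job in ("engineer", "nurse"):
    print(job, "->", most_likely_pronoun(job), dict(pronoun_counts(job)))
# engineer -> he {'he': 3, 'she': 1}
# nurse -> she {'she': 3, 'he': 1}
```

Because the toy corpus pairs “engineer” mostly with “he” and “nurse” mostly with “she”, the naive model’s predictions mirror that imbalance exactly. Large language models learn far subtler patterns, but the underlying dynamic is the same.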
EXAMPLES OF HOW BIAS CAN MANIFEST
- Gender stereotypes: If an AI image generator is prompted to create a picture of a “software engineer” or a “nurse” and predominantly generates images of men for the former and women for the latter, it might be reflecting and reinforcing historical gender stereotypes present in its training data. This can subtly influence perceptions and limit aspirations.
- Cultural and geographic skews: If an AI text generator has been trained primarily on news articles and business documents from Western, English-speaking countries, its advice on topics like CV writing, business etiquette, or even its understanding of cultural nuances might be less relevant, less accurate, or potentially misleading for learners from different cultural, linguistic, or economic contexts.
- Linguistic bias: AI tools designed for analysing or moderating language could misinterpret or unfairly flag language patterns, dialects, or expressions commonly used by minority ethnic or linguistic groups as “incorrect,” “unprofessional,” or even “offensive” if the training data predominantly featured a specific standard dialect. This can lead to feelings of exclusion or misjudgement.
- Exclusion in data: If training data for a facial recognition system predominantly features faces from one ethnic group, the system may perform less accurately for individuals from other ethnic groups, leading to potential misidentification and its associated negative consequences.
In adult education, where fostering inclusivity and catering to diverse learners is paramount, and especially when dealing with sensitive topics such as cultural heritage (as in the HER[AI]TAGE project which aims to capture diverse voices), it is critically important for educators and learners alike to develop the skills to spot these potential biases. Openly discussing these issues, critically examining AI-generated content for stereotypes or lack of balanced representation, and learning to craft specific, nuanced prompts that actively encourage more equitable and diverse outputs are essential mitigation strategies.
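One practical way to develop this skill is a simple representation audit: generate several outputs for the same prompt and tally how different groups appear. The sketch below illustrates the idea in Python; `generate_caption` is a hypothetical stub standing in for whatever AI tool is actually used, and counting gendered words is a deliberately crude proxy:

```python
import re
from collections import Counter

def generate_caption(prompt: str) -> str:
    """Hypothetical stub standing in for a real GenAI call -- it
    returns canned text so the sketch runs as-is."""
    canned = {
        "a software engineer": "A young man coding at a desk.",
        "a nurse": "A woman in scrubs checking a patient's chart.",
    }
    return canned.get(prompt, "A person at work.")

# Crude mapping from gendered words to categories for the tally.
GENDERED = {"man": "male", "he": "male", "woman": "female", "she": "female"}

def audit(prompt: str, runs: int = 5) -> Counter:
    """Generate several outputs for the same prompt and tally gendered
    words -- a rough but revealing representation check."""
    tally = Counter()
    for _ in range(runs):
        for word in re.findall(r"[a-z]+", generate_caption(prompt).lower()):
            if word in GENDERED:
                tally[GENDERED[word]] += 1
    return tally

for prompt in ("a software engineer", "a nurse"):
    print(prompt, "->", dict(audit(prompt)))
# a software engineer -> {'male': 5}
# a nurse -> {'female': 5}
```

Even a rough tally like this can reveal whether a tool systematically associates certain roles with certain groups, and it gives learners concrete evidence to discuss rather than isolated impressions.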
PRIVACY, DATA PROTECTION, AND RESPONSIBLE USE
The use of AI tools, particularly those accessed online, frequently involves the collection and processing of user data. This can include the prompts users type and the content the AI generates for them, as well as metadata about their interactions.
- Data privacy and security: Adults must cultivate a strong awareness of what personal or sensitive information they share with AI tools. It is always good practice to carefully review the privacy policy and terms of service of any AI platform before extensive use. Avoid inputting highly confidential details (such as personal identification numbers, private family histories intended only for specific sharing, sensitive medical information, or proprietary work-related data) unless absolutely necessary and you have a high degree of trust in the platform’s security and data handling practices; a minimal sketch of screening prompts for such details appears after this list. For projects like HER[AI]TAGE, ensuring fully informed, consent-based data collection from participants (especially vulnerable individuals like seniors sharing personal stories) and implementing robust measures to protect that data throughout its lifecycle is a non-negotiable ethical imperative.
- Academic and professional integrity: The ease with which AI can generate text, code, or images presents new challenges to academic and professional integrity. Using AI to generate entire essays, reports, or exam answers and submitting them as one’s own original work constitutes plagiarism and is unethical. Adult learners must be guided to understand and use AI as a powerful supportive tool for learning, brainstorming, research assistance, or getting feedback on their own drafts, not as a shortcut to avoid the learning process or to misrepresent their own knowledge and abilities.
- Intellectual property (IP) and copyright: The legal status of AI-generated content and the use of copyrighted material in AI training data are complex and evolving. Be aware of the terms of service of AI tools regarding ownership of generated content. Understand that AI-generated images may not be copyrightable. Prefer tools trained on ethically sourced or commercially safe data (e.g., Adobe Firefly) where IP clarity is important.
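As a concrete illustration of data minimisation (referenced in the privacy point above), here is a minimal Python sketch that screens a prompt for obvious personal identifiers before it is sent to an online tool. The regular expressions are illustrative only; real identifier formats vary by country, and names or free-text personal details still require human review:

```python
import re

# Illustrative patterns for common identifiers. These catch only the
# most obvious cases (a sketch, not a substitute for proper
# data-protection review), and exact formats vary by country.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "id_number": re.compile(r"\b\d{6,}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely personal data with placeholders before the text
    is sent to any online AI tool (data minimisation in practice)."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REMOVED]", prompt)
    return prompt

print(redact("Contact Maria at maria.k@example.org or +44 7700 900123 "
             "about applicant 12345678."))
# Contact Maria at [EMAIL REMOVED] or [PHONE REMOVED] about applicant [ID_NUMBER REMOVED].
```

A sketch like this catches only the most obvious identifiers; it complements, rather than replaces, the habit of asking whether the information needs to leave your device at all.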
CHECKING FOR FAIRNESS, ACCURACY, AND APPROPRIATENESS (THE INDISPENSABLE “HUMAN-IN-THE-LOOP”)
Given the potential for errors and biases, it is essential to always critically evaluate AI-generated materials:
- Mistakes, inaccuracies, and “hallucinations”: AI systems can sometimes provide information that is factually incorrect or misleading, or even “hallucinate” – confidently presenting fabricated information as if it were true. Rigorous fact-checking against reliable sources is vital; a small sketch for flagging checkable claims follows this list.
- Outdated information: The training data for most AI models has a specific cut-off point in time. Therefore, information about very recent events, discoveries, or policy changes might be missing, incomplete, or inaccurate.
- Insensitive, unfair, or inappropriate content: If an AI generates a case study for a business ethics class that inadvertently reflects only one cultural viewpoint or contains culturally insensitive assumptions, an educator should actively seek to add diverse perspectives, prompt the AI for alternative scenarios, or use the biased output as a teaching moment to discuss these issues.
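To support that fact-checking habit, the Python sketch below flags the sentences in an AI-generated text most likely to contain checkable claims (years, percentages, other figures). The heuristics are illustrative assumptions; the goal is to focus a human reviewer’s attention, not to automate verification:

```python
import re

# Heuristics for "checkable claims": sentences containing years,
# percentages, or other figures usually need verification. This is an
# aid for a human reviewer, not an automated fact-checker.
CHECK_TRIGGERS = [
    (re.compile(r"\b(1[5-9]|20)\d{2}\b"), "year"),
    (re.compile(r"\b\d+(\.\d+)?\s*%"), "percentage"),
    (re.compile(r"\b\d[\d,.]*\b"), "number"),
]

def flag_claims(text: str) -> list:
    """Split AI-generated text into sentences and report which ones
    contain elements worth verifying against reliable sources."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        reasons = [label for pattern, label in CHECK_TRIGGERS
                   if pattern.search(sentence)]
        if reasons:
            flagged.append((sentence, reasons))
    return flagged

sample = ("The craft guild was founded in 1742. Weaving remains popular. "
          "Roughly 60% of local households owned a loom.")
for sentence, reasons in flag_claims(sample):
    print(reasons, "->", sentence)
# ['year', 'number'] -> The craft guild was founded in 1742.
# ['percentage', 'number'] -> Roughly 60% of local households owned a loom.
```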
It is also considered good ethical practice to transparently cite or acknowledge when AI tools have been significantly used to assist in the creation of materials, much like one would credit a human author, a research paper, or a specific website. A brief note is usually enough, for example: “This handout was drafted with the assistance of a generative AI tool and subsequently reviewed and edited by the author.”
DISCUSSING AI ETHICS IN ADULT LEARNING ENVIRONMENTS
Fostering open, informed, and critical conversations about the ethical dimensions of AI helps adult learners develop into discerning and responsible users of technology. Educators can facilitate group discussions, debates, or case study analyses focusing on topics like fairness in AI, data privacy rights, intellectual property in the age of AI, the potential for AI-induced misinformation, and the societal consequences of widespread AI adoption. For instance, learners involved in the HER[AI]TAGE project could engage in a rich discussion: “What are the specific ethical considerations if the HER[AI]TAGE project uses AI to generate a visual representation (e.g., an image or avatar) of a historical figure from our local community, based on limited historical descriptions? How can we ensure such a representation is respectful, avoids caricature, and acknowledges the speculative nature of the AI’s output?”
ADHERENCE TO KEY ETHICAL FRAMEWORKS AND REGULATIONS
Internationally recognised guidelines, such as UNESCO’s “Guidance for Generative AI in Education and Research” (last updated April 2025), provide valuable frameworks. These often emphasise core principles such as ensuring human agency and oversight, prioritising safety and security, promoting fairness and non-discrimination, demanding transparency and explainability in AI systems, and ensuring educator preparedness.
Emerging regulations, like the EU AI Act (entered into force August 2024), are also establishing legal frameworks for the responsible development and deployment of AI. Key provisions include those on prohibited AI practices and the requirement for AI literacy among staff dealing with AI systems (applicable from February 2025). Crucially for education, AI systems used to “determine access or admission or assign persons to educational and vocational training institutions” or “to evaluate learning outcomes” are classified as high-risk. This classification carries strict obligations, including robust risk assessment, high-quality data governance, human oversight, and transparency. The Act also prohibits emotion recognition systems in workplaces and education institutions. Educational institutions should therefore take practical steps such as maintaining an inventory of the AI systems they use and developing clear AI use policies.
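As a starting point for such an inventory, the sketch below shows one possible record structure in Python. The field names are illustrative assumptions, not terminology prescribed by the EU AI Act, but they reflect the themes the Act emphasises (risk level, data handling, human oversight, staff AI literacy):

```python
from dataclasses import dataclass

# Hypothetical record structure for an institutional AI inventory.
# The field names are illustrative, not prescribed by the EU AI Act.
@dataclass
class AIToolRecord:
    name: str
    purpose: str            # what the tool is used for in teaching
    risk_level: str         # e.g. "minimal", "limited", "high"
    data_shared: str        # what data is sent to the tool, if any
    human_oversight: str    # who reviews the tool's outputs
    staff_trained: bool     # AI-literacy obligation (from February 2025)

inventory = [
    AIToolRecord(
        name="Example text generator",
        purpose="Drafting discussion prompts for evening classes",
        risk_level="limited",
        data_shared="Lesson topics only; no learner data",
        human_oversight="Course tutor reviews every output before use",
        staff_trained=True,
    ),
]

for tool in inventory:
    print(f"{tool.name}: risk={tool.risk_level}, staff trained={tool.staff_trained}")
```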
The adult educator’s role increasingly includes that of an “ethical steward” – modelling responsible AI use, making ethically informed decisions about which tools to integrate and how, and ensuring that AI is always used for pedagogically sound purposes that genuinely enhance learning rather than detract from it or cause harm.
The following table summarises key ethical considerations:
| Ethical consideration / risk | Description / manifestation in AI | Mitigation strategy / responsible use guideline for adult learners / educators | Relevance to HER[AI]TAGE project |
| --- | --- | --- | --- |
| Algorithmic bias (e.g., in training data, gender stereotypes, cultural/geographic skews) | AI systems may perpetuate or amplify existing societal stereotypes or inequities if trained on biased data, leading to unfair or skewed outputs, e.g., image generators predominantly showing certain demographics in specific roles. | Critically evaluate AI outputs for stereotypes and lack of diverse representation. Prompt AI for more equitable and diverse outputs. Seek diverse training data where possible. Be aware of the EU AI Act’s emphasis on high-quality datasets for high-risk systems. | Ensuring AI-generated representations of Intangible Cultural Heritage (ICH) are respectful, accurate, and avoid cultural caricatures or misinterpretations. Ensuring diverse community voices are represented. |
| Data privacy and security | AI tools, especially online ones, collect user data (prompts, generated content, interaction data). This data could be stored, re-used for model training, or breached. | Scrutinise privacy policies and terms of service. Avoid inputting highly confidential or sensitive personal information. Use strong passwords and 2FA. Be aware of data rights (e.g., GDPR). Practice data minimisation. | Protecting the privacy of community members, especially elders, sharing personal stories or Traditional Ecological Knowledge (TEK). Ensuring informed consent and secure data handling for all collected heritage data. |
| Lack of transparency / “black box” effect | Some AI models make decisions or generate content without clear, understandable explanations of their reasoning process. | Seek AI tools that offer explanations for their outputs where possible. Advocate for transparency in AI systems. Understand that AI “hallucinations” can occur. | Ensuring that if AI is used to interpret or present cultural information, the basis for these interpretations is as clear as possible, and any speculative elements are acknowledged. |
| Misinformation and “hallucinations” | AI can generate plausible-sounding but factually incorrect, misleading, or entirely fabricated information. | Always critically evaluate and rigorously verify AI-generated factual claims against multiple reliable sources. Do not use AI output as a sole source of truth. | Preventing the spread of inaccurate historical or cultural information related to HER[AI]TAGE. Ensuring all presented facts are meticulously verified. |
| Academic and professional integrity | The ease of generating content with AI raises concerns about plagiarism and misrepresentation of one’s own work. | Use AI as a supportive tool for learning, brainstorming, or drafting, not as a replacement for original thought and effort. Properly cite or acknowledge AI assistance where appropriate. | Ensuring any AI-assisted creation of HER[AI]TAGE materials is ethically conducted, with proper attribution and without misrepresenting authorship or originality. |
| Intellectual property and copyright | The legal status of AI-generated content and the use of copyrighted material in AI training data are complex and evolving. | Be aware of the terms of service of AI tools regarding ownership of generated content. Understand that AI-generated images may not be copyrightable (e.g., Midjourney ruling). Prefer tools trained on ethically sourced or commercially safe data (e.g., Adobe Firefly). | Clarifying ownership and usage rights for any AI-generated content (text, images, audio) created for the HER[AI]TAGE project, especially if intended for public dissemination. |
By cultivating a strong awareness of these multifaceted ethical issues, both educators and adult learners can navigate the evolving landscape of AI tools more responsibly, fairly, and effectively, harnessing their benefits while mitigating potential risks.
PRACTICAL EXAMPLES
- A literacy teacher is using an AI image generator to create a series of pictures for a children’s story being co-written with adult learners. They notice that when prompted for “a family enjoying a picnic,” the AI consistently generates images of a very specific, stereotypical family structure (e.g., two parents of opposite genders and two children). Recognising this potential bias, the teacher then refines the prompt to explicitly ask for “diverse family structures enjoying a picnic, showing different ages, ethnicities, and family compositions” to obtain more inclusive and representative results.
- In a workshop specifically discussing the ethical dimensions of the HER[AI]TAGE project, learners and project staff engage in a deep discussion about the ethics of using AI to “recreate” or synthesise the voice of an elderly storyteller (even with their explicit consent) to narrate their collected story for a publicly accessible audiobook. They carefully consider aspects such as authenticity, the potential for misrepresentation if the AI voice lacks the original speaker’s unique emotional nuances, the rights of the storyteller’s family, and the long-term implications for how oral heritage is preserved and presented.
- An adult learner is using a new online AI writing assistant tool to help them draft an application for a competitive community development grant. Before inputting detailed information about their innovative project budget, personal contact information, and potentially sensitive community needs assessment data, they meticulously check the AI tool’s privacy policy and terms of service to understand precisely how their input data will be stored, used for model improvement, and protected from unauthorised access. They opt for a tool with strong data encryption and clear data deletion policies.
- A tutor delivering a course on academic research methods explicitly addresses the topic of academic integrity in the age of AI. They facilitate a discussion on how AI can be ethically used as a tool for brainstorming research questions, finding relevant literature, or checking grammar and style in a draft, but emphasise that submitting an essay, research paper, or any assignment substantially written by AI as one’s own original work is a serious breach of academic ethics and constitutes plagiarism. They also discuss appropriate methods for acknowledging AI assistance.
- During a digital skills workshop for older adults, the facilitator dedicates a session to online safety and digital citizenship. They specifically discuss how to create strong, unique passwords for AI platforms and other online accounts, how to recognise sophisticated phishing scams that might try to trick users into revealing personal data by impersonating legitimate AI services, and the importance of regularly reviewing privacy settings on social media and other platforms where AI algorithms are active.
- An educator preparing educational materials on local traditional crafts for the HER[AI]TAGE project uses an AI tool to summarise historical texts and ethnographic studies. Before incorporating this AI-generated summary into their teaching materials, they meticulously cross-reference all factual claims, dates, and interpretations with original scholarly sources and, where possible, consult with local historians and craft practitioners to ensure the information is accurate, culturally sensitive, and free from misinterpretations. They also add a note indicating that AI was used as an initial drafting tool.
- A group of adult learners is tasked with evaluating different AI-powered translation tools for a multilingual community project. They test each tool with the same complex passage containing cultural idioms. They then compare the translations for accuracy, nuance, and cultural appropriateness, leading to a discussion about how AI might inadvertently lose or distort meaning in cross-cultural communication and the importance of human review for sensitive translations.