On the Ethical Questions of AI in Adult Education

By Ryan Stivers and Joey Lehrman

Artificial Intelligence (AI) offers the promise of transformative change in education, creating new opportunities for teachers and students alike. Yet, as with any emerging technology, it comes with a set of questions we need to consider to ensure the technology promotes equity, accessibility, and meaningful learning for all.

In a recent workshop we led in partnership with the Coalition on Adult Basic Education (COABE), one topic generated a significant amount of discussion among educators: concerns around plagiarism, copyright, and bias.

This post aims to outline the ongoing debate surrounding these issues. Since AI is a relatively new technology, there’s no definitive answer or industry-wide agreement on these topics. So our goal here is to provide context as we navigate these questions together. 

AI-generated content: Is it plagiarism?

Plagiarism, simply defined, is using someone else’s work without giving them credit. In the digital age, it’s obviously quite easy to copy and paste information found online. But what about content created by AI? 

Tools like ChatGPT and Bard can generate entire articles. So if a student copies content from AI, is that the same as copying human-created content? 

The issue gets more complicated when considering how AI models are trained. For example, ChatGPT uses a dataset of publicly available text. So while the generated blog post might be ‘new’ in the sense that it hasn’t been published before, it’s stitched together from the work of others. Is that plagiarism? 

At their core, the questions about plagiarism and AI-generated content seem to be:

  • Should content generated by a robot be treated the same as content created by a human?
  • Does the source of the data used to train the AI change the perspective?

The team at Turnitin says no, since “no original work is being copied,” while other educators aren’t so sure. It’s a complicated issue without a clear answer. 

Our take on AI and plagiarism: Engage students in meaningful dialogue

There’s no simple answer to whether AI-generated content counts as plagiarism. What we do know is that AI’s role in education is only going to grow. It’s therefore essential that we as educators explore these new tools so we can figure out their role in our classrooms, and so we can help inform the wider policy conversations about the future of AI in teaching and learning.

We believe AI is a tool, similar to calculators and cell phones. Remember, when calculators first appeared, some educators viewed them as cheating. That conversation remains complicated, but most now agree that calculators can’t do the most important parts of learning: critical thinking, problem-solving, resilience, communication, and teamwork. With advancements in AI, however, it’s increasingly plausible that machines could emulate some of these “human-only” skills, making it even more essential that we be explicit with students about where and how AI should be used. Engaging in these conversations can help students shift their mindset about the goals of learning and the role of technology in the classroom.

Additionally, banning new tech in classrooms is impractical. If you find a classroom where cell phones are banned, you’ll probably also find a student using their phone under the desk. Some might hope software can detect when a student has used AI, but reliable detection isn’t currently feasible.

So our belief is that we have two options when it comes to AI: we can engage students in a critical conversation that helps everyone learn about AI and the learning process itself, or we can ban it and miss the opportunity to learn alongside our students about ethics and plagiarism.

Copyright in the Age of AI

AI’s role in adult education also brings up questions around copyright and intellectual property. AI has the capability to generate text, images, music, and videos, leading us to ask: who owns these materials? Is it the student who used the AI tool, the company that developed the AI, or perhaps the original creators of the content the AI was trained on?

To illustrate, imagine a hypothetical scenario where an educator incorporates AI-generated images into their slide deck. This could immediately raise red flags at the institutional level, as it would be unclear whether the educator, the AI software company, or the original creators of the data the AI was trained on holds the copyright to those images. That scenario would likely spark an urgent need for discussions and policy considerations regarding copyright, ownership, and proper attribution.

It’s also crucial to differentiate among various AI tools when discussing copyright and intellectual property issues. For instance, ChatGPT, one of the most popular tools, is trained largely on publicly available data. In contrast, organizations like ISTE are creating AI models specifically for educators. Similarly, some companies are training AI on their internal documents to enable staff to query that data. Getty Images is working with NVIDIA on a text-to-image generator that is commercially licensed for institutional use. In these latter cases, the copyright situation seems more straightforward because the tools are explicitly licensed for use. When it comes to AI tools trained on publicly available data, however, the copyright question is more complex.

Our take on AI and copyright: There’s no clear answer just yet

There’s no one-size-fits-all answer to these questions. Ownership can vary depending on the tool used. For example, using ChatGPT is different from using a model a company has trained on its own data and licensed to its employees. These larger questions are also currently being considered by Congress and the courts.

As classroom educators, it’s not our role to define copyright laws, but it is our role to help our students understand and engage in those conversations. So at a minimum, we believe these questions offer a valuable opportunity to open conversations with students about copyright, intellectual property, and attribution.

AI and the amplification of bias

A third ethical concern is bias. AI tools learn from the data they’re trained on, so if that data contains biases or misinformation, the AI will pick up on them. The lack of diversity in tech can also lead to built-in biases, as we’ve seen with facial recognition software.

In addition to inherent biases in the data, it’s worth considering how AI’s learning process can inadvertently amplify these biases. For example, if an AI model learns from social media data, it might absorb not just prevalent viewpoints but also the most extreme opinions, since these are often the loudest voices online. Educators and students should therefore be cautious when using AI-generated content in discussions, particularly on sensitive or controversial topics.

Our take on AI and bias: Powerful teaching moments

Being aware of these biases opens the door for impactful conversations with students. Educators can use examples of AI bias to spark critical discussions about the credibility and impartiality of sources. These conversations can be powerful learning moments, helping students to question the information they encounter and to understand the potential pitfalls and limitations of relying on AI tools for information.

And most importantly, it’s okay to develop and implement your own policy. Questions around the use and future development of AI are complex and yet to be definitively answered. So we suggest learning about these tools alongside students and collaboratively establishing a classroom use policy. Here are some examples that could serve as a starting point.

The complexity of AI ethics can be a powerful learning experience

To harness the full potential of AI responsibly, we believe educators should:

  • Talk with their students about the ethical implications of AI and the importance of academic integrity.
  • Promote creativity and critical thinking in student work.
  • Establish clear guidelines for the ethical use of AI-generated content.
  • Encourage students to use AI as a supplementary resource while emphasizing the importance of independent thinking and information verification.

The integration of AI into adult education raises ethical questions about plagiarism, copyright, and bias, as well as the potential for amplifying societal prejudices. By embracing AI responsibly and addressing its ethical complexities, adult educators can harness the benefits of technology while maintaining the integrity and purpose of education – to foster knowledge, creativity, and ethical engagement in the pursuit of lifelong learning.

If you’re interested in diving further into the topic of AI in adult education, consider joining our upcoming AI in Adult Education Bootcamp, developed in collaboration with COABE. This two-month program offers live talks, coaching sessions, and a nationwide peer learning community, all aimed at supporting educators in effectively using AI in the classroom. 

About the Authors

  • Ryan Stivers is a former adult education student and GED recipient who has dedicated himself to transforming adult education to support students like himself in having better life outcomes. Ryan is the Co-Founder of Impactful Education, and Principal Consultant at RP Stivers Consulting. Ryan has spent his career at the intersection of highly-effective teaching, data-based program improvement, and strategic educational leadership. He has held senior leadership positions for K-12 and Higher Education programs, including leading programming at Johns Hopkins University, Tulane University, and serving as a founding leader of the national award-winning Rooted School system. Connect with Ryan on LinkedIn.
  • Joey Lehrman is an award-winning leader in adult education and advocate for innovation in learning and teaching. As a Co-Founder and Technical Advisor for Impactful Education, Director of Community Learning Partners, and Project Manager for ISTE’s SkillRise, Joey has led hundreds of national, state, and local initiatives focused on online learning, technology transformation, and digital resilience. Joey’s accolades include receiving COABE’s 2019 State Innovation of the Year Award for developing Louisiana’s first fully online adult education program as well as being a 2x Finalist for COABE Adult Education Administrator of the Year. Connect with Joey on LinkedIn.