AI in education case report: Crafting responsible use policies for emerging technologies in US school districts

This article is based on "A Position Paper for NSF Grant 2334525: Responsible, Ethical, and Effective Acceptable Use Policies for the Integration of Generative AI in US School Districts and Beyond" by Pati Ruiz, Ed.D., Sana Karim, Amanda LaTasha Armstrong, PhD, Alison Shell, PhD, Babe Liberman

This article delves into the complexities surrounding the integration and adoption of emerging technologies, including GenAI, by diverse school districts across the United States. By sharing insights from this collaborative effort, we aim to equip the field with knowledge and resources to navigate the evolving landscape of emerging technologies and help education leaders make informed decisions that benefit their learners and communities.

Background

Hundreds of applications that integrate GenAI are emerging daily but lack evidence to support claims about their impact and utility for learning, which can affect the appropriate adaptation of policy [i]. Users are also rapidly adopting GenAI tools: The Walton Family Foundation (2023) recently surveyed 1,002 K-12 educators and 1,000 learners in the US and found that 51% of educators are using ChatGPT, and many of them (40%) reported using it at least once a week. Despite this rapid adoption, many people are uncertain about what GenAI is, how it works, and what the implications are for learners, families, and educators.

There are widespread ethical concerns around the adoption of GenAI tools in education [ii] as well as concerns about the environmental impact of these tools [iii]. More work also needs to be done to ensure that AI tools and systems are meeting the accessibility needs of all learners and educators.

In order to understand the integration of GenAI in schools and districts across the United States, and the implications for learners, educators, and families, the GenAI Working Group project convened a diverse group of education leaders to collaboratively make sense of GenAI in education and inform the development of exemplar Responsible/Acceptable Use Policies (R/AUPs) that centre responsibility, ethics, and effectiveness.

Research Activities

Digital Promise’s League of Innovative Schools (League) network brings together forward-thinking education leaders from 150 school districts in 38 states. The League is at the forefront of designing, piloting, and scaling solutions that can impact learning outcomes, with a focus on historically and systemically excluded learners. Recent network discussions have highlighted challenges that League leaders are facing regarding AI and emerging technologies for teaching and learning. The project team invited League districts to participate in the GenAI Working Group.

The GenAI Working Group research team and members met once per month during the 2023–2024 school year to discuss district progress on their responsible, ethical, and effective R/AUPs and the related ongoing conversations they were having with their district communities.

The GenAI Working Group team conducted two focus groups over the course of the project.

Outcomes

The focus groups, interviews, and monthly meetings highlighted the shifting priorities of the participating school districts based on the emergence of AI and emerging technologies in education. Although all GenAI Working Group members were developing strategies to better understand GenAI use in their districts, they understood that their communities’ modes of entry differed and that it was important not to rush into implementation.

GenAI Working Group members shared that educators are hesitant to integrate GenAI for several reasons, including the unknown impact of AI on the learning process, overall cost, and, in some cases, a lack of AI literacy and technical expertise. In conversations with district leaders, learners showed a mixed understanding of how generative AI works and uncertainty about how to use it productively. Community members shared how their workforces are using generative AI, but had concerns about how rapidly changing technology will affect both their own daily lives and those of learners in their communities.

GenAI Guidance Topics

Conversations among district leaders and their community members reiterated the importance of resources to help understand and integrate AI and other emerging technologies in educational environments. Based on these conversations with districts, and grounded in Digital Promise’s AI Literacy Framework [iv] [v] and the EdSafe AI Alliance SAFE Benchmarks [vi], we have determined six focus topics for R/AUPs: AI Literacy, Safety, Ethics, Transparency, Implementation Guidelines, and Evaluation and Impact. These topics are described in more detail below.

AI Literacy

This topic provides guidance on how to develop media literacy, digital literacy, and critical thinking skills to understand the safest and most ethical use of AI for learners, educators, and districts. Needs in this area were common across participating communities. We define AI literacy as the knowledge and skills that enable humans to critically understand, evaluate, and use AI systems and tools to safely and ethically participate in an increasingly digital world [vii].

Figure 1. Expanded AI Literacy Framework, including Core Values, Modes of Engagement, Types of Use, and AI Literacy Practices [viii].

Although AI literacy skills do not need to be explicitly stated in use policies, district leaders can offer guided language on “look-fors” when using GenAI tools. Additionally, rather than being taught as standalone courses, AI literacy skills and practices should be integrated across the curriculum and across grade levels where existing media literacy, digital readiness, and computational thinking skills are also addressed.

Safety

The safety topic prioritises protecting student, teacher, and community data and privacy while managing potential cybersecurity risks, building on federal policies such as the Family Educational Rights and Privacy Act (FERPA) [ix] and the Children’s Online Privacy Protection Rule (COPPA) [x]. These policies require parental consent before data is shared and set an age minimum for using AI tools, such as 13 years old for chatbots.

Ethics

GenAI users need to be responsible, fair, and equitable, and acknowledge the biases in both humans and the synthetic outputs of GenAI. Fairness, ethics, and bias have emerged as high priority concerns when using AI. The Working Group shared that educators were primarily concerned about cheating but were also concerned about how to support learners who may be potential victims of bias in AI systems and tools.

Transparency

Leaders should be open about the processes they use when selecting GenAI tools for their schools and about the development and changes in their guidance. As the Office of Educational Technology shares, “In education, decision makers will need more than notice—they will need to understand how AI models work in a range of general educational use cases, so they can better anticipate limitations, problems, and risks” [xi].

In addition, guiding language should suggest that learners and educators be transparent when using GenAI tools in their work.

Implementation Guidelines

Guiding language on implementation covers the ways that educators, learners, and others can use AI tools to the best of their abilities and generate results that benefit them and their communities. This topic was front of mind for students and educators: both groups wanted clarity on when it is and is not acceptable to use AI. Although users have agency to make the ultimate decision about whether to use AI tools, they still need direction, especially when it comes to emerging technologies.

Evaluation and Impact

The final topic addresses the importance of providing guiding language on how users should evaluate AI systems and tools, as well as the impact of their use, on an ongoing basis. Districts, educators, learners, and community members should continue to ask themselves whether individual tools are beneficial or causing harm. Guiding language on evaluation should provide scaffolding for deciding which tools to use for a given need and for weighing the consequences or benefits of that decision. Helpful guidance on the evaluation of AI can be found in AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology [xii].

The GenAI Working Group project’s primary deliverable, the Emerging Technology Guidance Language Deck [xiii], serves as a living document that will be updated as districts learn more about how to incorporate language about emerging technologies into their acceptable use policies.

Read the entire AI Guide for Teachers here.

Sources

[i] Lodge, J., Thompson, K., & Corrin, L. (2023). Mapping out a research agenda for generative artificial intelligence in tertiary education. Australasian Journal of Educational Technology, 39(1), 1–8. https://doi.org/10.14742/ajet.8695

[ii] Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., & Nguyen, B. T. (2023). Ethical principles for artificial intelligence in education. Education and Information Technologies, 28, 4221–4241. https://doi.org/10.1007/s10639-022-11316-w

[iii] UNESCO. (2023). ChatGPT and Artificial Intelligence in higher education: Quick start guide. United Nations. https://www.iesalc.unesco.org/wp-content/uploads/2023/04/ChatGPT-and-Artificial-Intelligencein-higher-education-Quick-Start-guide_EN_FINAL.pdf

[iv] Ruiz, P., Armstrong, A. L., Karim, S., Shell, A., Singmaster, H., Giang, M. (2024) Emerging Technology Acceptable Use Policy [Google slides]. Digital Promise. https://bit.ly/DPAUPLanguage

[v] Mills, K., Ruiz, P., Lee, K., Coenraad, M., Fusco, J., Roschelle, J. & Weisgrau, J. (2024, May). AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology. https://doi.org/10.51388/20.500.12265/218

[vi] EDSAFE AI Alliance. (2024). SAFE Benchmarks framework. EDSAFE AI. https://www.edsafeai.org/safe

[vii] Mills, K., Ruiz, P., Lee, K., Coenraad, M., Fusco, J., Roschelle, J. & Weisgrau, J. (2024, May). AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology. https://doi.org/10.51388/20.500.12265/218

[viii] Digital Promise. https://digitalpromise.org/initiative/artificial-intelligence-in-education/ai-literacy/

[ix] FERPA. https://studentprivacy.ed.gov/faq/what-ferpa

[x] COPPA. https://www.ftc.gov/legal-library/browse/rules/childrens-online-privacy-protection-rule-coppa

[xi] Office of Educational Technology. (2023). Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. U.S. Department of Education.

[xii] Mills, K., Ruiz, P., Lee, K., Coenraad, M., Fusco, J., Roschelle, J. & Weisgrau, J. (2024, May). AI Literacy: A Framework to Understand, Evaluate, and Use Emerging Technology. https://doi.org/10.51388/20.500.12265/218

[xiii] Digital Promise. Emerging Technology Guidance Language Deck. https://digitalpromise.dspacedirect.org/items/f84dedc7-762c-460e-b9ba-eb4ba28a9b51
