The use of AI in education requires careful planning, open communication and continuous development by teachers. This will ensure that the technology serves educational goals in a safe and pedagogical way. Teaching and assessment plans need to be adapted to ensure that the selection and use of AI tools is appropriate. New technologies should support teaching objectives and not be an additional burden.
If properly harnessed, rapidly evolving GenAI applications have the potential to boost the efficiency of everyday school routines, create teaching materials and promote learning.
It is the responsibility of education and training providers to assess how the use of AI applications is implemented in practice, while respecting data security, data protection, intellectual property rights and the right of learners and staff to their own data and privacy. At the same time, care must be taken to ensure that learners are not exposed to, for example, political or ideological influence, commercial advertising or marketing, or harmful or disturbing content.
Training providers must also assess whether an AI application meets the EU definition of an AI system and, if so, which obligations and conditions apply to it.
Particular attention should be paid to the ethical challenges of AI applications, to supporting teachers and students, and to the accessibility of AI tools. The use of AI applications should support the work of teachers, improve governance and promote learning for all students in line with curriculum objectives.
Key ethical issues in the educational use of AI
The European Commission has published [i] a list of key ethical questions on the use of AI, compiled by a group of experts, for education and training providers and teachers. The questions focus on human agency, transparency, diversity, social welfare, privacy, technical reliability and accountability. The following is a summary of the ethical aspects of the use of AI in education.
Ethical requirements for education and training providers
The purchase and use of AI-based systems, tools and applications should comply with local legislation and support curricula and learning objectives. Teachers’ representatives should be given the opportunity to participate in the procurement of systems and related decisions.
AI systems for educational use should be ethical, reliable, safe and human-centred. Human-centredness means, among other things, that AI systems should not be given autonomous decision-making power without human supervision, e.g. in student selection and assessment.
Laws and regulations
The EU’s AI Act [ii] classifies the use of AI in education as high-risk because it may compromise the fundamental rights of the user, such as privacy, non-discrimination or freedom of expression. In particular, EU legislation protects the digital rights of minors.
- Article 4 of the AI Act calls for measures to be taken: “Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
- AI systems must comply with the EU General Data Protection Regulation (GDPR). Access to data should be restricted to those who need it, and users should know what happens to their data and how it is used. Teachers and school leaders should know how to report privacy or data protection issues.
- Adequate security measures should be in place in case of a data breach, as well as control mechanisms for data processing (collection, storage, minimisation). Appropriate training should be provided for staff to comply with security guidelines.
- The potential risks of using AI tools, such as the growing influence of commercial interests or a widening digital divide, need to be monitored and critically assessed. This will help ensure that AI tools do not cause injustice or discrimination.
Accessibility
Systems should be accessible to all learners, regardless of their gender, ability or characteristics. Particular attention should be paid to learners with special needs.
When choosing AI solutions, care should be taken to ensure that they do not widen the existing digital divide. All learners should have access to the opportunities offered by AI.
It is therefore important to ensure that both teachers and students have access to the same AI tools. This will also facilitate collaboration, sharing of information and guidance, and in-service training for teachers.
Summary of the ethical aspects of the use of AI in education
- Human agency: The development and use of AI must consider the role and impact of people. It is important to ensure that people remain at the centre of decision-making and that their rights and needs are taken into account.
- Transparency: The principles and decision-making processes of AI should be clearly understood. Users and stakeholders must be able to understand how and why AI makes certain decisions.
- Diversity, non-discrimination and fairness: It is important to promote diversity and prevent discrimination in the development of AI. AI systems must be fair and must not reinforce existing prejudices or discriminatory practices.
- Social and environmental well-being: The use of AI must contribute to social and environmental well-being and AI solutions must support sustainable development and improve people’s quality of life.
- Privacy and data management: Privacy is a key issue in the use of AI. It is important to ensure that personal data is handled securely and that users have the power to control their own data.
- Safety and security: AI systems must be safe to use and they should work reliably and predictably. This requires careful planning and testing to minimise the risk of errors and failures.
- Responsibility and accountability: The developers and users of AI are responsible for its impact. It is important to define who is responsible for the decisions and possible consequences of AI.
Ethical expectations for teachers
AI offers new opportunities for teachers, but it also poses new ethical challenges.
Teachers should understand how AI technologies work and be aware of their potential, limitations and risks. They should also ensure in practice that AI is used ethically and in support of learning.
Teachers’ representatives should have the opportunity to participate in decision-making related to the acquisition and use of AI tools. Teachers should also have the opportunity to provide feedback on the tools and their performance. Staff should only use systems approved by the training provider in their teaching, in accordance with the instructions given.
Further training is needed to understand and use AI tools responsibly
Implementing AI systems requires training and some familiarity with, for example, how large language models are trained, how they work, and why their operation always involves uncertainties that make verification of outputs necessary.
Training providers should ensure that teachers have a good basic knowledge of the tools and software used and clear guidance on the functions, accessibility and limitations of the applications.
The provider’s data protection officer should provide teachers with information on what data the systems collect about students, how the data collected is processed, what it is used for and where it is stored. Ultimately, it is the teacher’s responsibility to ensure that the tools are used appropriately to support teaching and learning.
Teachers themselves should be aware of which tools have an AI component, how it is used and what the potential risks are. They also have a role to play in communicating to learners how to use the AI applications available in the school and in instructing learners how to use them safely.
AI pedagogy
AI tools should be used at pedagogically appropriate age levels and where they add value to learning. Teachers should be aware which tools are appropriate for different age groups and how they can be integrated into teaching to support learning.
Teaching should be designed to ensure that learners have equal access to AI tools, and accessibility gaps should be reported to the education provider. This will ensure that all learners have access to AI tools, regardless of their background. AI has proven to be a very useful tool in special education, offering many possibilities for differentiation, tailoring of individualised teaching materials and tutoring to support learning.
Student assessment should never rely solely on AI tools without human verification. Teachers should ensure that AI does not replace the role of the teacher but acts as a tool and aid to support teaching.
It is important for teachers to inform learners how and why to use AI tools. This transparency will help learners understand the role of AI in teaching and reduce potential doubts, fears and misuse.
Monitoring
Artificial intelligence tools and systems are evolving at a tremendous pace. Their performance should be continuously evaluated and staff should have the opportunity to provide feedback on their experiences to training providers. This feedback will help ensure that AI tools continue to evolve and improve.
Teachers who actively participate in the use and development of AI tools can create new opportunities for learning and improve teaching outcomes. This requires ongoing dialogue, training and monitoring to ensure that AI serves, rather than threatens, educational goals.
Teachers should be given sufficient basic knowledge to be able to detect and report potential security breaches. Teachers should also be aware that AI tools can be misused, for example for academic fraud, and should be able to identify and minimise the potential risks of misuse.
Guidance
Teachers and students should be provided with clear instructions on how to use AI tools.
The Finnish National Board of Education and the Ministry of Education and Culture will publish more detailed recommendations, together with guidance on the relevant legislation, in spring 2025, covering the use of AI in early childhood education, basic education, non-formal education, and upper secondary and vocational education and training. The aim of the support material “AI in education - legislation and recommendations” is to support education and training providers in leveraging AI as part of teaching and learning.
The support material will be published in Finnish, Swedish and English.
Points to note
AI is transforming our education system by providing more personalised learning and more individualised mentoring. At the same time, it brings significant challenges.
One of the biggest concerns is the misuse of data. It is important to ensure that the personal data of learners and teachers is handled responsibly and not used as input or prompts for GenAI programmes.
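One practical safeguard is to strip recognisable personal identifiers from text before it is sent to any external GenAI service. The following is a minimal illustrative sketch, not a complete anonymiser: the patterns, placeholder labels and example text are hypothetical, and a real deployment would need far more robust detection (names, addresses, national ID formats, and so on).

```python
import re

# Hypothetical patterns for a few common identifier types.
# A production system would need much broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace recognisable personal identifiers with placeholders
    before the text is used in a prompt to an external service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise feedback from anna.virtanen@example.fi, tel. +358 40 123 4567."
print(redact(prompt))
# → Summarise feedback from [EMAIL], tel. [PHONE].
```

A filter like this is only a first line of defence; organisational rules on which data may be entered into GenAI tools at all remain essential.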
Another major risk relates to the reliability of information. AI-generated information is not always completely reliable, and learners need to learn to critically evaluate AI outputs, e.g. by checking the reliability of sources.
There is also a risk that learning will be excessively outsourced to AI. If students rely too heavily on AI solutions, they may stop developing their own problem-solving skills and exploring topics independently, which can make learning superficial.
The commercialisation of education is also a worrying trend. AI application developers are often interested in financial gain, which can lead to situations where the content of training is tailored more to commercial than pedagogical interests.
Managing ethical risks requires teachers to be vigilant. Teachers, educational institutions and training providers need to work together to define clear rules for the use of AI in education. This requires ongoing discussion, training and monitoring to ensure that the use of AI serves educational objectives.
The European Commission has published “Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators” [iii], which contains a list of key issues for the use of AI in education, compiled by a group of experts. It is a very useful tool for developing ethical guidelines for schools and educational institutions.
Sitra’s Tarmo Toikkanen [iv] has summed up perfectly why AI outputs need human review and supervision:
Text-producing AI…
• Does not understand the meaning of words
• Has no opinions, gives no valuations
• Does not think, does not predict
• Does not feel, does not care
• Does not take responsibility
• Hallucinates
Read the entire AI Guide for Teachers here.
Sources
[i] European Commission, Directorate-General for Education, Youth, Sport and Culture, Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators, Publications Office of the European Union, 2022, https://data.europa.eu/doi/10.2766/153756
[ii] AI Act https://eur-lex.europa.eu/eli/reg/2024/1689/oj
[iii] European Commission, Directorate-General for Education, Youth, Sport and Culture, Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators, Publications Office of the European Union, 2022, https://data.europa.eu/doi/10.2766/153756
[iv] Tarmo Toikkanen (retrieved 31 December 2024) Tekoäly opetuksessa [AI in education] https://drive.google.com/file/d/1dFhZjEWXRJn3OYP4iC-zTQJuPd5Qz-u7/view