What should a teacher know about the EU AI Act?

“The deployment of AI systems in education is important to promote high-quality digital education and training and to allow all learners and teachers to acquire and share the necessary digital skills and competences, including media literacy, and critical thinking, to take an active part in the economy, society, and in democratic processes.” 

– AI Act, recital 56[i] 

AI solutions can be valuable in education, but they need to be developed and deployed responsibly. If we simply let the first AI chatbot we find assess students’ work and then enter the results in their official records, we are far from responsible. To make clear to everyone what responsible means, the EU AI Act sets out the situations in which we need to take particular care.

The responsibility for AI solutions in education lies with the institution, not the teacher. On the other hand, all organisations are obliged to ensure their employees are AI literate. The educational institution must therefore train its teachers to work responsibly with AI systems. In this short article, I will describe the general features of the AI Act to help teachers understand the big picture. The main message, however, is that the educational institution must provide teachers with adequate guidance on how to work with different types of AI systems.

Towards reliable AI services

AI can be added to almost anything: cars, toys, recruitment, law enforcement or even student assessment. It’s clear that poorly designed AI in, say, a self-driving car or the grading of students’ essays can cause all sorts of trouble.

There are two kinds of harm. AI can: 

  1. pose risks to people’s fundamental rights, or 
  2. lead to dangerous products.

Fundamental rights include the right to education, the right to work, freedom of expression, freedom of religion and equality.

Dangerous products are regulated through product regulation. The EU has laws regulating the safety of cars, lifts, food and electrical appliances, for example. The CE marking shows that the manufacturer promises that the product is safe. Similar thinking is being extended to the AI part of these products.

The aim of the AI Act is to ensure that only trustworthy AI services are available on the EU market. Conversely, this means that if AI can cause harm or pose risks, it must be made trustworthy – that is, it must be designed properly.

Doing it right takes a lot of work, but it can be summed up by one thing: good data governance. When data flows are identified, described and managed, it also becomes easier to ensure accountability and reliability. Data governance is the solution to both data protection issues and ensuring the trustworthiness of AI. Quality control and risk management should be integrated into the same exercise.

AI systems can be classified according to their risks

  1. Unacceptable risk: some extreme use cases that are not in line with European values are banned altogether. These include social scoring of citizens by combining different databases, real-time remote biometric identification of people in public spaces, and cognitive-behavioural manipulation of people.
  2. High risk: AI is part of a regulated product or may otherwise pose a risk to fundamental rights. This is permissible, but the AI must be properly built, deployed, documented and undergo a conformity assessment before being placed on the market.
  3. Limited risk: for example, chatbots, image generators and the like that produce content for human viewing. The only obligation is to inform the user that they are dealing with AI.
  4. No risk: certain background processes, such as spam filters or AI-generated terrains in video games. There are no obligations for these.

In the education sector, high-risk AI systems include:

  • Student selection
  • Assessment of learning outcomes
  • Assessing a person’s future educational needs
  • Monitoring students during exams

The above list is indicative, and the level of risk is influenced by how autonomously the AI system operates. If, for example, the AI only assists the teacher in assessing learning outcomes, it may not be high risk. If, on the other hand, it autonomously generates grading recommendations, the level of risk is higher.
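As a rough sketch, the four risk tiers and the autonomy rule above can be expressed as a toy decision function. This is purely illustrative, not a legal test: the use-case names and the "lower" label for assistive use are shorthand invented for this sketch, not terms from the Act.

```python
# Toy illustration of the AI Act's risk tiers applied to education use cases.
# The use-case names below are illustrative shorthand, not legal categories.

HIGH_RISK_USE_CASES = {
    "student_selection",
    "assessment_of_learning_outcomes",
    "assessing_future_educational_needs",
    "exam_monitoring",
}

def classify_risk(use_case: str, autonomous: bool = True) -> str:
    """Return an indicative risk tier for an educational AI use case."""
    if use_case == "social_scoring":
        return "unacceptable"  # banned outright
    if use_case in HIGH_RISK_USE_CASES:
        # If the AI merely assists a teacher who makes the final decision,
        # the risk level may be lower than "high".
        return "high" if autonomous else "lower"
    if use_case == "chatbot":
        return "limited"  # must tell users they are interacting with AI
    return "minimal"  # e.g. spam filters, game terrain generation

print(classify_risk("exam_monitoring"))  # -> high
print(classify_risk("assessment_of_learning_outcomes", autonomous=False))  # -> lower
```

The point of the sketch is only that the same use case can land in a different tier depending on how autonomously the system operates.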

How to acquire AI

An AI system provider needs to know the risk level of its service. The CE marking on an AI system indicates that it has been manufactured in accordance with EU quality standards.

The AI Act requires certain reliability assurance measures when deploying high-risk AI. However, the same measures are good practices that should be followed even if they are not mandatory.

Once the AI vendor has done their part and shared the relevant documentation with you, you as a deployer must do yours:

  • Data management: how the system is used, what data may be entered into it and who has access to it. Data confidentiality, data protection and ethical considerations must be addressed.
  • Quality control: how the system’s performance is monitored and how errors are detected. Ongoing monitoring and documentation by professionals is essential.
  • Risk management: how to recover from incidents, security problems and the like.

How to develop AI

As an AI system developer, you need to identify your customers’ use cases and their highest level of risk. If it is a high-risk system, the AI Act requires several steps, but they are worth taking.

  • Data management: Data sources, metadata, training algorithms, data access.
  • Quality control: Automated test suites, training-quality monitoring, in-service monitoring.
  • Risk management: Systemic risks, developer’s own risks, risks caused by customers’ activities. Risk management plan.

Additional information

Traficom is the single point of contact for the Finnish supervisory authorities, and its website provides more detailed guidance and contact details for the different authorities.

Sitra is producing a self-study course on AI regulation as part of the ABC of Data Economy course series. The course will be ready in spring 2025 and published at https://www.sitra.fi/en/projects/basics-of-the-data-economy/

Sources

[i] AI Act: Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance).

Read the entire AI Guide for Teachers here.

edu@faktabaari.fi
