Our society has changed irreversibly – or has it? AI has crept into our everyday life and into the world of education, partly stealthily, partly with a bang. When talking about AI, we always remember to mention that it has long been part of our devices and applications [i]. On the other hand, AI arrived in the midst of education in a single moment: with ChatGPT in November–December 2022.
For many teachers, generative AI has changed the way they plan lessons, approach learning, and learn alongside their community of learners. Numerous guides and explanations of AI and its uses are being written and produced in various places [ii] [iii] [iv] [v]. GenAI tools help with lesson planning and content creation. Pupils and students are happy to take advantage of a miracle machine that produces plausible-sounding answers. Today – faster than ever before in human history – teachers are learning to use these tools at the same time as their learners [vi].
Three attitudes
There are three main attitudes to learning about AI applications: denialist, techno-optimist and cautious. These can certainly be broken down in more subtle and specific ways, but these are probably the main lines.
The denialist has decided that there are so many risks, uncertainties and fears associated with AI that they refuse to engage with AI applications altogether: they will not use AI in teaching, they forbid students from using it in their work, and they decline to learn new applications.
The denialist invokes environmental concerns, highlights the risks of data retention, or worries that if we rely on AI, humans will forget skills we have learned over time. For the denialist, every litre of water consumed to cool an AI application's servers is proof that the use of AI is unsustainable from the outset, and nothing will make it otherwise.
There is no sustainable way, they say, to build data centres or to produce energy without degrading the environment, so it is not worth trying to improve AI. Literacy and writing are threatened and will wither if we use more AI applications, because like any intoxicant, they lead to addiction and decay.
The techno-optimist sees AI as an opportunity to take human potential to the next level. AI technology promises a productivity leap by freeing us from trivial tasks and opening up new spheres of creativity.
The techno-optimist sees only opportunities in AI applications, trusts in risk management and looks forward to the next steps. They do not wait for common government guidelines or national directives but fearlessly explore the possibilities of AI in education and in their own life. For the optimist, this new technology enables things and produces previously impossible applications across disciplines. They cite medicine, transport, personalised learning and rescue operations in previously inaccessible places as examples.
The cautious one stays firmly within set boundaries. They argue that experimentation can only take place once you have defined what is safe to experiment with. If we do not know where learners' data – any data, such as writing, videos or personal data – is stored, the cautious one will not dare to experiment with AI applications in education. The possibility of personal data ending up in the wrong hands is too great a threat. In their view, permission for AI use should come from the managerial level.
Those with a cautious approach scan forums and discussion boards before experimenting. They only trust an AI application after a trusted source has confirmed that it is safe to use. The cautious one wants all possible agreements to be in place and risk spots anticipated in advance, and is alarmed by uncontrolled use of AI [vii].
Why use AI and what to watch out for
There are already many different AI applications for different purposes, and more are being created all the time [viii]. At this point, between 2024 and 2025, we are certainly only at the beginning of the AI applications era. We expect great benefits, which some have already realised, but many are still cautious and afraid even to try. Those who remain denialists are tilting at windmills – AI is here to stay, at least in some form.
Above all, AI should be seen as a utility for handling huge amounts of data – data so vast that it is not worth trying to work through it by human effort or brainpower alone. Among other things, AI applications help extract what is relevant from source data far more effectively than was possible before generative AI. Of course, the skill of writing prompts is essential here: browsing the source material is only as effective and useful as the prompts you use to guide the application towards the desired information.
As for the material, the output is of course only as "good" and "real" as the data that has been put in. As mentioned elsewhere in this guide, the material used by an AI application can be corrupted, or its sources can be heavily skewed. This can lead to so-called "hallucination", which is also discussed elsewhere in this guide.
The key to using AI applications is source criticism. GenAI applications are not purely search engines – although even that line starts to blur as search engines are integrated with applications that use language models.
The most merciful approach to teaching and learning is to make the learner's source criticism the basis of everything. Users should be able to assess the following: whether the text generated by an AI application is correct; whether it is appropriate in the context; whether it can be improved; whether the data retrieved by the AI application can be trusted and whether it answers the question posed; and finally, whether the image, audio or video produced by the AI application is appropriate, correct and worth the time saved – also taking ethical considerations into account.
A final note on ethics
At the time of writing, there are both good and bad examples on the ethical side of AI development. The bad ones include environmental and legal issues. Developing and running AI evidently requires a huge amount of energy and natural resources. Not many forests have yet been cleared in Finland to make way for data centres, but they are certainly on the way.
Furthermore, additional energy production capacity will be needed if the development of AI is to be sustained. For example, big AI developers such as Google and Microsoft bought energy for their own needs from nuclear power plants in the US in 2024 [ix]. New technologies will require new solutions and possibly more consumption. At the same time, we are in an accelerating state of species loss and loss of biodiversity, which is exactly what we should be avoiding.
Legislation lags behind AI on many fronts: opinions on the GDPR diverge, copyright is unresolved [x] and different countries take different positions. Individual privacy and MyData need to be protected effectively.
On the positive side, we can mention Europe's investment in the AI Act and the versatility of the GDPR [xi], along with the strong drive to uphold the ethical side. This protects us against blindly going off the rails in a technological frenzy of non-intelligence [xii].
Read the entire AI Guide for Teachers here.
Sources
[i] Karmiva löytö TikTok-sovelluksesta: ”Poista se viimeistään nyt”, https://www.is.fi/digitoday/mobiili/art-2000009014304.html read 10.12.2024
[ii] Matleenan blogi: Generatiivinen tekoäly, https://www.matleenalaakso.fi/p/tekoaly.html, read 10.12.2024
[iii] Prompt Library — AI for Education, https://www.aiforeducation.io/prompt-library, read 10.12.2024
[iv] https://tekoalyoppimisentukena.wordpress.com/tot-oppaat/, read 10.12.2024
[v] Tekoäly koulutuksessa ─ lainsäädäntö ja suositukset, Opetushallitus, https://www.oph.fi/fi/kansainvalisyys/tekoaly-koulutuksessa-lainsaadanto-ja-suositukset
[vi] Change blindness - by Ethan Mollick - One Useful Thing, https://www.oneusefulthing.org/p/change-blindness, read 10.12.2024
[vii] AI Reaches Human-Level Reasoning: Should We Be Worried? by Fix Your Fin, https://www.nbcnewyork.com/news/business/money-report/why-big-tech-is-turning-to-nuclear-to-power-its-energy-intensive-ai-ambitions/5892398/, read 10.12.2024.
[viii] GenAI opetettava kone → Generation AI, https://www.generation-ai-stn.fi/genai-opetettava-kone/, read 10.12.2024
[ix] Why Big Tech is turning to nuclear to power its energy-intensive AI ambitions, https://www.nbcnewyork.com/news/business/money-report/why-big-tech-is-turning-to-nuclear-to-power-its-energy-intensive-ai-ambitions/5892398/, read 10.12.2024.
[x] The US Copyright Office says an AI can’t copyright its art - The Verge, https://www.theverge.com/2022/2/21/22944335/us-copyright-office-reject-ai-generated-art-recent-entrance-to-paradise, read 10.12.2024
[xi] https://technologymagazine.com/articles/data-privacy-day-data-protection-in-the-age-of-gen-ai, read 10.12.2024
[xii] The impact of the General Data Protection Regulation (GDPR) on artificial intelligence, https://www.europarl.europa.eu/RegData/etudes/STUD/2020/641530/EPRS_STU(2020)641530_EN.pdf, read 10.12.2024