Threat or opportunity? For generative AI, as for other types and applications of AI, the answer is both. This was already the view of Melvin Kranzberg, a professor of the history of technology, in the 1980s; the first of his well-known Kranzberg's Laws states that technology is neither good nor bad, nor is it neutral. Everything depends on how, in what context and for what purpose a technology is used [i].
This is worth remembering when faced with either dire warnings about AI or glowing reviews of it. Generative AI itself agrees.
When the generative AI application ChatGPT is asked that headline question, it answers as follows:
- ChatGPT reminds us that not everyone has access to AI, let alone the ability to use it. AI can also quickly produce propaganda and disinformation and manipulate our opinions in many ways. We also do not know the principles by which different AI applications operate.
- On the other hand, ChatGPT can list many good things about itself. AI makes it easier for citizens to become informed and active. AI supports decision-making, for example by developing scenarios or simplifying complex texts. Generative AI can be used as a tool to detect and combat online disinformation. It can also capture and provide unbiased, balanced perspectives on controversial issues, contributing to public debate.
The free version of ChatGPT produced the above answers in English and Finnish in no time. The ability to translate information quickly into different languages is excellent for democratic communication.
On the other hand, the summary of threats and opportunities says little about effects along the so-called value chain. Firstly, when designing an application, one can consider how it will affect democracy, for example the fundamental rights of citizens. Secondly, one should consider how the data is collected, its quality, and how it is used to train the model: is this done ethically and democratically or not? And finally, the way an application is used can either support or undermine democracy.
ChatGPT’s answers, while correct in themselves, illustrate the main challenges and opportunities for democracy presented by generative AI. Virtually every positive feature ChatGPT lists is true, but the same feature can also be a problem.
For example, if AI is a good tool for us, it is an equally excellent tool for producers of lies, propaganda, and hate speech. Likewise, since we often cannot get information on how an AI system works or what data it uses, we cannot be fully confident in the advice it gives, whether to us as citizens, in medical diagnoses, or to policymakers. Kranzberg's law thus applies to AI as well: there are both challenges and opportunities.
The answers also show how narrowly ChatGPT understands the large, complex set of concepts and practices we call democracy. For example, it neatly sidesteps privacy and profiling, such as through facial recognition technology. Similarly, it mentions neither the economics of AI (i.e. big business) nor the inequalities involved: we can use ChatGPT's free app, but paying for it would give us a better service [ii].
Democracy includes participation in society through work wherever possible, as the UN Universal Declaration of Human Rights recalls [iii]. Yet in the responses described above, ChatGPT does not list disappearing jobs or concerns about copyright [iv] as challenges to democracy. Nor does it mention issues related to sustainable development, particularly energy consumption, even though AI is an unprecedented energy guzzler [v]. Environmental crises are arguably central to modern societies – and to democracy.
With AI already ubiquitous and increasingly public, dozens of statements and regulations on ethical and democratic principles now exist. Most of them focus on risks – to workers, privacy, national security, and economic growth – while fewer mention the rights and opportunities of ordinary people [vi].
However, if another technology thinker is to be believed, every new technology changes our relationship with ourselves and each other. More than half a century ago, Marshall McLuhan noted that stories around the campfire built and maintained communities. The world became more individualistic when books became more accessible, since one could read them by oneself [vii]. According to McLuhan, the age of electronic communication gave birth to new types of communities where the same stories travel across geographical boundaries. Social media is a prime example of this.
Will generative AI bring us entirely new kinds of communities and societies? The greatest threat to democracy is that we will live with customised and always sympathetic AI-created virtual friends without trusting any of the information we receive – let alone each other – in real life. It is, therefore, time to insist that we all develop AI skills, understand the societal implications of AI, and have a say in how, where, why, and by whom generative AI is used [viii].
Read the entire AI Guide for Teachers here.
Sources
[i] Melvin Kranzberg (1986) Technology and History: “Kranzberg’s Laws”. Technology and Culture Vol. 27, No. 3 (Jul., 1986). https://doi.org/10.2307/3105385
[ii] E.g. https://openai.com/chatgpt/pricing/
[iii] UN Universal Declaration of Human Rights https://www.un.org/en/about-us/universal-declaration-of-human-rights; https://um.fi/yk-n-ihmisoikeusjulistus
[iv] Generative AI Has an Intellectual Property Problem. https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem; Tekoäly ja EU – myös tekoälyn on kunnioitettava tekijöiden oikeuksia [AI and the EU – AI, too, must respect authors' rights]. https://www.teosto.fi/teostory/tekoaly-ja-eu-myos-tekoalyn-on-kunnioitettava-tekijoiden-oikeuksia/
[v] AI has an environmental problem. https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about; Googlen päästöt karkasivat käsistä tekoälyn vuoksi [Google's emissions got out of hand because of AI]. https://yle.fi/a/74-20098239
[vi] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689; the World Economic Forum's The Presidio Recommendations on Responsible Generative AI. https://www.weforum.org/publications/the-presidio-recommendations-on-responsible-generative-ai/; and United Nations, Governing AI for Humanity: Final Report. https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf
[vii] Marshall McLuhan (1964/1994 3rd ed.) Understanding Media. The Extensions of Man. https://mitpress.mit.edu/9780262631594/understanding-media/
[viii] Kirsi Hantula (forthcoming 2025) Kuinka turvata tiedollisia oikeuksia tekoälyn aikakaudella? [How to safeguard epistemic rights in the age of AI?] Sitra; also e.g. Minna Horowitz, Hannu Nieminen, Katja Lehtisaari & Alessandro D'Arma (2024) Introduction. Epistemic Rights in the Era of Digital Disruption. Palgrave Macmillan. https://link.springer.com/book/10.1007/978-3-031-45976-4