Artificial intelligence requires continuous renewal of one’s own thinking

17.3.2026

Did you try using AI, but it didn’t produce the desired result? Or perhaps you ended up spending time fixing the mistakes made by the AI. It may be that the AI you used isn’t yet good enough to solve your problem, but it’s also very likely that you used it incorrectly, relying on old ways of working. AI works better when you dare to question your own approach: do things always have to be done this way?

Email takes twice as much working time compared to fax

Let’s imagine a world where an employee is used to typing a document on a typewriter and then faxing it to the recipient. The process is simple: once the document is ready, it’s placed in the fax machine and, with the push of a button, it’s sent. When the employee is told to use email instead of fax, they become frustrated. Now the work takes twice as long! First, the document has to be typed on a typewriter, and since it can’t be fed directly into a computer, it has to be retyped in a word processor before it can be sent by email. Surely working time could be used more efficiently? Email was supposed to make everyday life easier!

Only when the employee questions their way of working and changes it do they gain the real benefits of email. Why does the document have to be written on a typewriter first? Could it be written directly on a computer instead? Does a paper copy of the document even need to exist in a folder somewhere?

Many clever people can come up with good reasons to cling to old ways of working. For example, they might argue that writing on a computer weakens literacy and accuracy, because spelling errors are automatically highlighted and mistakes are too easy to fix, so you can write carelessly, whereas a typewriter demands real professional skill, and so on. But these are just excuses. The world is changing, and we must change with it.

AI shouldn’t be used because...

In higher education, the same arguments against using AI tend to come up repeatedly.
“Students will never learn the conventions of scientific writing if they use AI to write everything.”

To this I would pose a question: what are the conventions of scientific writing actually needed for? In my opinion, no longer for anything at all. Let AI do what it does well, and leave to humans what AI cannot achieve. I know many academically talented and intelligent people who simply are not good writers. Writing skill and the ability to conduct research and produce new knowledge do not go hand in hand. It is actually unfair that academic success has so far been measured by the ability to arrange words into an eloquent form.

“Writing is thinking, and if you don’t do it, you won’t learn to think.”

This argument makes me sad. I feel sorry for those who cannot think without writing. You can think and reflect better without pen and paper, if you give it time. For example, a one-hour walk outside gets your thoughts flowing in a completely different way than staring at a blank page. And going for a walk is something AI will never do on anyone’s behalf.

“AI consumes a lot of electricity and harms the environment.”

If someone wants to rely on this argument, I hope they also refrain from using social media, watching videos, or listening to music via streaming. Hopefully, they also don’t play video games or use a computer or phone for anything beyond absolute necessities. Of course AI uses electricity, but so does everything else. If this argument is used to avoid only AI, while continuing to use all other data center–driven services, it sounds somewhat hypocritical.

“AI doesn’t know how to write.”

Sometimes I wonder how people have the patience to use AI for brainstorming ideas and outlining structure, only to become frustrated when the AI can’t actually write the article. This is an example of how old ways of working prevent the effective use of AI. Many people cling to the idea that text is somehow the ultimate end goal of writing.
In my view, the text itself is secondary, unless we are talking about novels and poetry. More important than the text is conveying an idea, and that idea must come from the person themselves. I can get perfectly good articles out of AI when I provide it with the desired structure and clearly explain the main idea of the article, along with the key points of thinking and argumentation under each heading. The AI then produces exactly the kind of article I wanted. If something is off, I simply ask the AI to fix it. I don’t manually edit anything longer than individual sentences.

Of course, the AI’s text may not fully please you or may not feel like your own. What matters more, however, is asking yourself: does this say what I wanted to say? ChatGPT will probably never write in exactly the same way as I do, but that doesn’t matter. The style may feel unfamiliar, but if the content reflects my thinking, I’m satisfied. I have abandoned the old-fashioned idea of ownership of the text. For me, it is enough that I own the idea behind it.

What does the future of work life require?

Now, and increasingly in the future, we must break away from old practices. Nothing should be done just because it has always been done, or because it’s supposedly necessary, and certainly not because “it might be good to do.” Why does it have to be done? Why is it necessary? What are we really aiming for? Keep for yourself the intellectual core of the work, the part that requires expertise, and let AI handle everything else around it.

This requires creativity and self-criticism. Unfortunately, our education system does not support the development of creativity in all fields. This is a shame, because continuously adapting one’s way of working as AI evolves is not possible without creativity. I challenge you, dear reader, to reflect on how you could do your work differently. Start by dismantling the “musts” and question why you do what you do in the first place.
If this text provoked a reaction in you, I hope you reflect carefully on where that reaction comes from. In the end, the question is which side of history you want to be on after the AI revolution.

Metropolia’s AI research strongly featured in an international workshop

16.12.2025

Metropolia and the University of Eastern Finland jointly organized the IWCLUL workshop (International Workshop on Computational Linguistics for Uralic Languages), which brought together researchers of Finno-Ugric languages from across Europe. The workshop was held as part of the international ACL community and provided an up-to-date overview of language technology research on Uralic languages, especially in the era of artificial intelligence and large language models.

A broad range of Metropolia’s research

Metropolia’s AI research was exceptionally well represented at the workshop. Four full papers produced at Metropolia were accepted, addressing both pedagogical and language technology topics from multiple perspectives.

The paper From NLG Evaluation to Modern Student Assessment in the Era of ChatGPT: The Great Misalignment Problem and Pedagogical Multi-Factor Assessment (P-MFA) examined the impact of artificial intelligence on assessment practices in higher education. The study highlighted the so-called Great Misalignment Problem: assessment no longer measures what it is intended to measure when students can produce high-quality outputs using generative language models. The paper introduced a new Pedagogical Multi-Factor Assessment (P-MFA) model, which emphasizes the learning process, diverse forms of evidence, and pedagogical transparency rather than single final products.

A paper co-authored with Waseda University, Benchmarking Finnish Lemmatizers across Historical and Contemporary Texts, evaluated Finnish lemmatization tools on both contemporary and historical data. The study made use of the Project Gutenberg corpus and, for the first time, included the Trankit tool in a comparison of Finnish lemmatizers. A key finding was that Murre preprocessing significantly improves lemmatization results for dialectal and historical texts, while its impact on modern Finnish is minimal.
In the image, Aki Morooka is talking about normalization experiments.

A timely application of artificial intelligence to foresight was presented in the paper ORACLE: Time-Dependent Recursive Summary Graphs for Foresight on News Data Using LLMs. The study developed a new method in which temporally evolving recursive summary graphs are constructed from news data using large language models. The ORACLE approach enables the analysis of developments and emerging trends in news content by combining temporal structure with language model–based summarization.

The fourth paper, co-authored with the University of Helsinki, Evaluating OpenAI GPT Models for Translation of Endangered Uralic Languages: A Comparison of Reasoning and Non-Reasoning Architectures, focused on machine translation for endangered Uralic languages. The study compared reasoning-based and non-reasoning architectures of OpenAI’s GPT models and analyzed their performance on low-resource languages. The results provide valuable insights into which types of language model solutions are best suited for supporting small and endangered languages.

Metropolia’s lightning talks: agile openings on topical themes

Metropolia’s visibility at the IWCLUL workshop was not limited to full research papers but extended strongly to the lightning talks as well. The lightning talks provided a concise yet substantively rich overview of rapidly developing research directions that are central to language technology for Uralic and other small languages.

The lightning talk UralicMCP: Turning LLMs into Experts in Endangered Languages with MCP presented a new Model Context Protocol (MCP)–based extension to the UralicNLP library. The core idea of UralicMCP is to connect large language models with rule-based language technology tools such as a morphological analyzer, inflector, lemmatizer, and dictionaries.
This makes it possible for language models to perform NLP tasks even in endangered Uralic languages for which they have little to no training data. Experiments presented in the lightning talk showed that, with MCP, language models can succeed in tasks that would otherwise be impossible for them.

Lev Kharlashkin addressed the current state of the Karelian language.

The second lightning talk, From Toki Pona to Uralic: A Grammar-Constrained Pipeline for Low-Resource Language Generation, presented a methodological approach to training language models for low-resource languages. The work used the extremely controlled language Toki Pona as a testbed for grammatically guided synthetic data generation. The goal was not Toki Pona itself, but a scalable method that can be transferred to morphologically rich Uralic languages. The lightning talk highlighted how explicit grammatical constraints and validated synthetic data can compensate for the lack of large datasets.

The lightning talk Did Karelian Survive the Year? A Small Data Update provided an up-to-date snapshot of the digital vitality of the Karelian language. The talk presented a lightweight yet repeatable data collection process used to analyze Karelian-language online content, particularly news and article texts. The results showed that Karelian is actively produced online, especially in short news formats, and that even a small but regularly updated dataset can provide meaningful insights into the current state of an endangered language.

The fourth Metropolia lightning talk, Evaluating Finnish Dialect Normalization in GPT Models with and without Reasoning, focused on dialect normalization of Finnish using language models. The study compared traditionally fine-tuned GPT-style models with models explicitly equipped with reasoning (chain-of-thought).
The results showed that strong pretraining in Finnish matters more than explicit reasoning, and that reasoning-based fine-tuning can even degrade normalization performance in this task. The lightning talk offered important insights into when and how reasoning capabilities should be leveraged in language technology applications.

Artur Roos explained what Uralic languages can learn from synthetic languages.

From research to practice: AI in support of small languages

The IWCLUL workshop highlighted how Metropolia’s AI research brings together theoretical linguistics, practical language technology, and societal impact. Both the full research papers and the lightning contributions demonstrated that large language models are not viewed at Metropolia as standalone, general-purpose solutions, but rather as tools that can be guided, constrained, and complemented with linguistic expertise.

The common denominator across Metropolia’s presentations was the reality of endangered languages: limited datasets, rich morphology, and the need for transparent and maintainable solutions. Whether the focus was on rethinking assessment in education, translation of Uralic languages, the digital vitality of Karelian, or normalization of dialectal Finnish, the research emphasized approaches that work even when ready-made data or perfect models are not available.

The workshop reinforced Metropolia’s role in the international language technology community as an actor that brings together artificial intelligence, open-source development, and the needs of language communities. At the same time, it demonstrated that research on small languages is not a side track of AI development, but one of its most important testbeds: it is precisely there that the assumptions, limitations, and design choices underlying language models are forced into the open.
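The tool-calling pattern behind UralicMCP-style integrations can be sketched in miniature. The following is a toy dispatcher with a stub lemmatizer standing in for UralicNLP’s rule-based tools; the tool table, example words, and data are illustrative stand-ins, not the actual UralicMCP implementation.

```python
# Toy sketch of the tool-calling pattern: a language model that cannot
# reliably handle a low-resource language asks a rule-based tool for the
# answer instead of guessing. The lemma data below is a hard-coded stub.

STUB_LEMMAS = {
    ("koirien", "fin"): ["koira"],  # Finnish genitive plural -> lemma
    ("taloissa", "fin"): ["talo"],  # Finnish inessive plural -> lemma
}

def lemmatize_tool(word: str, lang: str) -> list[str]:
    """Stand-in for a rule-based lemmatizer exposed as an MCP tool."""
    return STUB_LEMMAS.get((word, lang), [])

TOOLS = {"lemmatize": lemmatize_tool}

def handle_tool_call(name: str, arguments: dict) -> dict:
    """Dispatch one tool call the way an MCP server would."""
    if name not in TOOLS:
        return {"error": f"unknown tool: {name}"}
    return {"result": TOOLS[name](**arguments)}

# The model issues a structured tool call instead of answering directly:
response = handle_tool_call("lemmatize", {"word": "koirien", "lang": "fin"})
print(response)  # {'result': ['koira']}
```

In a real setup, the stub would be replaced by calls into UralicNLP’s analyzers and dictionaries, and the dispatcher by an actual MCP server; the point here is only the division of labor between the model and the rule-based tool.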

Metropolia Develops AI Solutions for Internal Needs

23.6.2025

Under the leadership of Development Manager Mika Hämäläinen, Metropolia’s AI team is developing various solutions based on large language models to address the organization’s challenges. The core idea is to solve real problems in a user-centered and agile manner. Since large language models are constantly evolving, there is no longer a need to develop the AI itself; instead, our task is to adopt AI and integrate it into everyday life in an easy-to-use form.

Our team currently includes software developers Lev Kharlashkin, Melany Macías Morán, and Leo Huovinen, as well as student interns Yehor Tereshchenko, Sheng Tai, and Aki Morooka. The tools we have developed are named OpintoHain, Oracle, Grant Writing Assistant, Curriculum Tool, and Moodle AI Plugin.

OpintoHain

OpintoHain was developed as part of a project led by Sonja Saarikivi, with the goal of creating a tool for lifelong learners. The target audience consists of people outside Metropolia who wish to update their skills and study at Metropolia, whether by taking a single course or potentially pursuing a suitable Master’s degree. We responded to the challenge by developing a chatbot that understands the course offerings of Metropolia’s Open University. The tool is powered by a RAG (Retrieval-Augmented Generation) pipeline that is familiar with Metropolia’s courses and degree programs. It also includes a multi-agent system with dedicated agents for course and degree recommendations, as well as for study guidance. The OpintoHain tool is available for testing on Metropolia’s website.

Oracle

Foresight has taken on an increasingly important role at Metropolia: everyone is expected to anticipate future developments, but how? We set out to address this challenge with the Oracle tool, which ingests online content such as news articles and job postings. Based on this input, we analyze the data using vectorization and clustering techniques.
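As a rough illustration of the vectorize-then-cluster idea, the toy sketch below groups short texts by bag-of-words similarity. A real pipeline like Oracle’s would use proper embeddings and clustering algorithms; the vectorizer, the greedy grouping, the example headlines, and the similarity threshold here are all made up for the example.

```python
# Toy vectorize-and-cluster sketch: bag-of-words vectors plus a greedy
# cosine-similarity grouping. Not the actual Oracle pipeline.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words vector as a word -> count mapping."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def cluster(texts, threshold=0.3):
    """Greedily assign each text to the first cluster whose seed
    document is similar enough; otherwise start a new cluster."""
    clusters = []  # list of (seed_vector, member_texts)
    for text in texts:
        vec = vectorize(text)
        for seed, members in clusters:
            if cosine(vec, seed) >= threshold:
                members.append(text)
                break
        else:
            clusters.append((vec, [text]))
    return [members for _, members in clusters]

news = [
    "ai models improve translation quality",
    "new ai models released for translation",
    "city opens new tram line",
    "tram line extension opens downtown",
]
print(cluster(news))  # two clusters: the AI stories and the tram stories
```

Once texts are grouped like this, each cluster can be summarized or tracked over time, which is the kind of downstream analysis the foresight methods build on.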
We have already developed methods for identifying weak signals and megatrends, detecting drivers of change, conducting data-driven scenario work, and implementing an automated multi-agent version of the Delphi method. The guiding idea is that AI processes foresight data into a ready-to-use format, so that end users can gain maximum benefit from the insights even if they have little to no prior knowledge of foresight practices. In putting Oracle into practical use to support real-world applications, we are supported by the foresight working group led by Maani Nyqvist, along with foresight expert Marita Huhtaniemi.

Grant Writing Assistant

The importance of external funding is growing in the higher education sector. Competition for funding is fierce, and even strong applications often go unfunded. We are developing an AI tool in collaboration with Maarit Haataja, Director of RDI and Project Services, and her team to improve Metropolia’s chances of securing external funding. In EU Horizon funding calls in particular, it is crucial that every section of the call for proposals is addressed in the application; even a strong application can be rejected if it fails to mention a single sub-point. Grant Writing Assistant automatically analyzes the call for proposals and compares it with the content of the application. Any missing elements are clearly reported to the user, who can then correct them manually or have the AI insert the missing content automatically. The tool can also identify risks and break the project down into work packages.

Curriculum Tool

Writing curricula is a time-consuming process. Each course-level curriculum should reflect both the goals of sustainable development and the Arene competencies. To support this, we developed the Curriculum Tool, which analyzes curricula and visualizes the content of degree programs from the perspectives of Arene competencies and sustainable development.
In the development of this tool, Metsälintu Pahkin played a valuable role as a liaison with the degree coordinators. You can read the scientific publication describing the tool for more details.

Moodle AI Plugin

The Moodle AI Plugin was developed for teachers, enabling them to automatically generate assignments directly in Moodle based on their own course materials. The core idea has been to integrate AI directly into a familiar tool rather than creating a separate system. Senior Lecturer Tricia Cleland-Silva served as a valuable liaison with the teaching staff during development. You can read the scientific publication describing this tool for further insights.