Year: 2025

Metropolia’s AI research strongly featured in an international workshop

16.12.2025

Metropolia and the University of Eastern Finland jointly organized the IWCLUL workshop (International Workshop on Computational Linguistics for Uralic Languages), which brought together researchers of Finno-Ugric languages from across Europe. The workshop was held as part of the international ACL community and provided an up-to-date overview of language technology research on Uralic languages, especially in the era of artificial intelligence and large language models.

A broad range of Metropolia’s research

Metropolia’s AI research was exceptionally well represented at the workshop. Four full papers produced at Metropolia were accepted, addressing both pedagogical and language technology topics from multiple perspectives.

The paper From NLG Evaluation to Modern Student Assessment in the Era of ChatGPT: The Great Misalignment Problem and Pedagogical Multi-Factor Assessment (P-MFA) examined the impact of artificial intelligence on assessment practices in higher education. The study highlighted the so-called Great Misalignment Problem: assessment no longer measures what it is intended to measure when students can produce high-quality outputs using generative language models. The paper introduced a new Pedagogical Multi-Factor Assessment (P-MFA) model, which emphasizes the learning process, diverse forms of evidence, and pedagogical transparency rather than single final products.

A paper co-authored with Waseda University, Benchmarking Finnish Lemmatizers across Historical and Contemporary Texts, evaluated Finnish lemmatization tools on both contemporary and historical data. The study made use of the Project Gutenberg corpus and, for the first time, included the Trankit tool in a comparison of Finnish lemmatizers. A key finding was that Murre preprocessing significantly improves lemmatization results for dialectal and historical texts, while its impact on modern Finnish is minimal.
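As an aside, lemmatizer benchmarks of this kind are typically scored as accuracy over gold-annotated (word, lemma) pairs. The sketch below illustrates only that scoring idea; the toy lexicon and gold pairs are invented and are not the tools or corpora used in the paper.

```python
# Toy sketch: how lemmatizer accuracy is typically scored against gold data.
# The gold pairs and the mock "lemmatizer" are illustrative stand-ins.

def accuracy(predict, gold_pairs):
    """Fraction of (word, gold_lemma) pairs the lemmatizer gets right."""
    correct = sum(1 for word, lemma in gold_pairs if predict(word) == lemma)
    return correct / len(gold_pairs)

# A trivial mock "lemmatizer": a lookup table with a fallback to the surface form.
LEXICON = {"taloja": "talo", "kissat": "kissa", "meni": "mennä"}
mock_lemmatizer = lambda w: LEXICON.get(w, w)

gold = [("taloja", "talo"), ("kissat", "kissa"), ("meni", "mennä"), ("ovi", "ovi")]
print(accuracy(mock_lemmatizer, gold))  # 1.0 on this toy set
```

In a real benchmark the mock lemmatizer would be replaced by calls to the compared tools, and the gold pairs would come from annotated contemporary and historical corpora.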
In the image, Aki Morooka is talking about normalization experiments.

A timely application of artificial intelligence to foresight was presented in the paper ORACLE: Time-Dependent Recursive Summary Graphs for Foresight on News Data Using LLMs. The study developed a new method in which temporally evolving recursive summary graphs are constructed from news data using large language models. The ORACLE approach enables the analysis of developments and emerging trends in news content by combining temporal structure with language model–based summarization.

The fourth paper, co-authored with the University of Helsinki, Evaluating OpenAI GPT Models for Translation of Endangered Uralic Languages: A Comparison of Reasoning and Non-Reasoning Architectures, focused on machine translation for endangered Uralic languages. The study compared reasoning-based and non-reasoning architectures of OpenAI’s GPT models and analyzed their performance on low-resource languages. The results provide valuable insights into which types of language model solutions are best suited for supporting small and endangered languages.

Metropolia’s lightning talks: agile openings on topical themes

Metropolia’s visibility at the IWCLUL workshop was not limited to full research papers but extended strongly to the lightning talks as well. The lightning talks provided a concise yet substantively rich overview of rapidly developing research directions that are central to language technology for Uralic and other small languages.

The lightning talk UralicMCP: Turning LLMs into Experts in Endangered Languages with MCP presented a new Model Context Protocol (MCP)–based extension to the UralicNLP library. The core idea of UralicMCP is to connect large language models with rule-based language technology tools such as a morphological analyzer, inflector, lemmatizer, and dictionaries.
This makes it possible for language models to perform NLP tasks even in endangered Uralic languages for which they have little to no training data. Experiments presented in the lightning talk showed that, with MCP, language models can succeed in tasks that would otherwise be impossible for them.

Lev Kharlashkin addressed the current state of the Karelian language.

The second lightning talk, From Toki Pona to Uralic: A Grammar-Constrained Pipeline for Low-Resource Language Generation, addressed a methodological approach to training language models for low-resource languages. The work used Toki Pona, an extremely controlled language, as a testbed for grammatically guided synthetic data generation. The goal was not Toki Pona itself but a scalable method that can be transferred to morphologically rich Uralic languages. The lightning talk highlighted how explicit grammatical constraints and validated synthetic data can compensate for the lack of large datasets.

The lightning talk Did Karelian Survive the Year? A Small Data Update provided an up-to-date snapshot of the digital vitality of the Karelian language. The talk presented a lightweight yet repeatable data collection process used to analyze Karelian-language online content, particularly in news and article texts. The results showed that Karelian is actively produced online, especially in short news formats, and that even a small but regularly updated dataset can provide meaningful insights into the current state of an endangered language.

The fourth Metropolia lightning talk, Evaluating Finnish Dialect Normalization in GPT Models with and without Reasoning, focused on dialect normalization of Finnish using language models. The study compared traditionally fine-tuned GPT-style models with models explicitly equipped with reasoning (chain-of-thought).
The results showed that strong pretraining in the Finnish language is more crucial than explicit reasoning, and that reasoning-based fine-tuning can even degrade normalization performance in this task. The lightning talk highlighted important insights into when and how reasoning capabilities should be leveraged in language technology applications.

Artur Roos explained what Uralic languages can learn from synthetic languages.

From research to practice: AI in support of small languages

The IWCLUL workshop highlighted how Metropolia’s AI research brings together theoretical linguistics, practical language technology, and societal impact. Both the full research papers and the lightning contributions demonstrated that large language models are not viewed at Metropolia as standalone, general-purpose solutions, but rather as tools that can be guided, constrained, and complemented with linguistic expertise.

The common denominator across Metropolia’s presentations was the reality of endangered languages: limited datasets, rich morphology, and the need for transparent and maintainable solutions. Whether the focus was on rethinking assessment in education, translation of Uralic languages, the digital vitality of Karelian, or normalization of dialectal Finnish, the research emphasized approaches that work even when ready-made data or perfect models are not available.

The workshop reinforced Metropolia’s role in the international language technology community as an actor that brings together artificial intelligence, open-source development, and the needs of language communities. At the same time, it demonstrated that research on small languages is not a side track of AI development, but one of its most important testbeds: it is precisely there that the assumptions, limitations, and design choices underlying language models are forced into the open.

Metropolia Develops AI Solutions for Internal Needs

23.6.2025

Under the leadership of Development Manager Mika Hämäläinen, Metropolia’s AI team is developing various solutions based on large language models to address the organization’s challenges. The core idea is to solve real problems in a user-centered and agile manner. Since large language models are constantly evolving, there is no longer a need to develop the AI itself; instead, our task is to adopt AI and integrate it into everyday life in an easy-to-use form.

Our team currently includes software developers Lev Kharlashkin, Melany Macías Morán and Leo Huovinen, as well as student interns Yehor Tereshchenko, Sheng Tai and Aki Morooka. The tools we have developed are named OpintoHain, Oracle, Grant Writing Assistant, Curriculum Tool, and Moodle AI Plugin.

OpintoHain

OpintoHain was developed as part of a project led by Sonja Saarikivi, with the goal of creating a tool for lifelong learners. The target audience consists of individuals external to Metropolia who wish to update their skills and study at Metropolia, whether by taking a single course or potentially pursuing a suitable Master’s degree. We responded to the challenge by developing a chatbot that understands the course offerings of Metropolia’s Open University. The tool is powered by a RAG (Retrieval-Augmented Generation) model that is familiar with Metropolia’s courses and degree programs. It also includes a multi-agent system with dedicated agents for course and degree recommendations, as well as for study guidance. The OpintoHain tool is available for testing on Metropolia’s website.

Oracle

Foresight has taken on an increasingly important role at Metropolia: everyone is expected to anticipate future developments, but how? We set out to address this challenge with the Oracle tool, which ingests online content such as news articles and job postings. Based on this input, we can analyze the data using vectorization and clustering techniques.
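A toy version of that vectorize-and-cluster step, for illustration only: headlines become bag-of-words vectors and similar ones are grouped by cosine similarity. Real systems like Oracle use embedding models and proper clustering algorithms; the headlines and threshold here are invented.

```python
# Toy sketch of vectorize-and-cluster for news data: bag-of-words vectors
# plus greedy grouping by cosine similarity. Invented headlines.

import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def cluster(texts, threshold=0.3):
    """Greedy clustering: join a headline to the first cluster it resembles."""
    clusters = []
    for text in texts:
        vec = vectorize(text)
        for group in clusters:
            if cosine(vec, vectorize(group[0])) >= threshold:
                group.append(text)
                break
        else:
            clusters.append([text])
    return clusters

headlines = [
    "AI chip shortage slows car production",
    "car production halted by chip shortage",
    "new bakery opens in Helsinki",
]
print(cluster(headlines))
```

The two chip-shortage headlines end up in one cluster and the unrelated one in its own, which is the kind of grouping that weak-signal and trend analysis can then build on.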
We have already developed methods for identifying weak signals and megatrends, detecting drivers of change, conducting data-driven scenario work, and implementing an automated multi-agent version of the Delphi method. The guiding idea is that AI processes foresight data into a ready-to-use format, so that the end user can gain maximum benefit from the insights, even if they have little to no prior knowledge of foresight practices themselves. In putting Oracle into practical use to support real-world applications, we are supported by the foresight working group led by Maani Nyqvist, along with foresight expert Marita Huhtaniemi.

Grant Writing Assistant

The importance of external funding is growing in the higher education sector. Competition for funding is fierce, and often even strong applications go unfunded. We are developing an AI tool in collaboration with Maarit Haataja, Director of RDI and Project Services, and her team, to enhance Metropolia’s chances of securing external funding. In EU Horizon funding calls in particular, it is crucial that every section of the call for proposals is addressed within the application. Even a strong application can be rejected if it fails to mention even a single sub-point. Grant Writing Assistant automatically analyzes the call for proposals and compares it with the content of the application. Any missing elements are clearly reported to the user, who can then choose to correct them manually or have the AI automatically insert the missing content. The tool is also capable of identifying risks and breaking the project down into work packages.

Curriculum Tool

Writing curricula is a time-consuming process. Each course-level curriculum should reflect both the goals of sustainable development and the Arene competencies. To support this, we developed the Curriculum Tool, which analyzes curricula and visualizes the content of degree programs from the perspectives of Arene competencies and sustainable development.
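The kind of analysis a curriculum tool performs can be sketched very simply: score how strongly each course description touches a set of competency themes. The theme keywords and course text below are invented for illustration; the actual Curriculum Tool uses LLM-based analysis rather than keyword counting.

```python
# Sketch of competency-theme profiling for a course description.
# Theme keywords are invented placeholders, not the official Arene categories.

THEMES = {
    "sustainable development": ["sustainability", "climate", "circular"],
    "internationality": ["global", "international", "intercultural"],
}

def theme_profile(course_text):
    """Count keyword hits per theme in a course description."""
    words = course_text.lower().split()
    return {theme: sum(words.count(kw) for kw in kws)
            for theme, kws in THEMES.items()}

course = ("The course covers circular economy and sustainability "
          "in a global business context.")
print(theme_profile(course))  # {'sustainable development': 2, 'internationality': 1}
```

Profiles like this, computed per course, are what a visualization of a whole degree program could aggregate.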
In the development of this tool, Metsälintu Pahkin played a valuable role as a liaison with the degree coordinators. You can read the scientific publication describing the tool for more details.

Moodle AI Plugin

The Moodle AI Plugin was developed for teachers, enabling them to automatically generate assignments directly in Moodle based on their own course materials. The core idea has been to integrate AI directly into a familiar tool, rather than creating a separate system. Senior Lecturer Tricia Cleland-Silva served as a valuable liaison with the teaching staff during the development process. You can read the scientific publication describing this tool for further insights.

Metropolia AI at Pedaforum 2025: Demos, Dialogue and a Lot of Excitement

18.6.2025

Earlier this month, the Metropolia AI team, led by Mika Hämäläinen, had the exciting opportunity to take part in Pedaforum 2025, Finland’s largest higher education pedagogy conference, held right in our home base of Myllypuro, Helsinki. Our team ran an interactive demo booth showcasing five AI-powered tools we’ve been developing to support educators, researchers and students. Throughout the day, we had the pleasure of engaging with a diverse and enthusiastic crowd - from university lecturers and curriculum designers to grant officers and edtech innovators. And yes, we gave out plenty of candy too 🍬.

What We Showcased: 5 Practical AI Tools for Higher Education

We believe that the best way to explore AI is to build with it - and that’s exactly what we’ve been doing. Here are the tools we presented.

🎓 AI Moodle Plugin

A favorite among many visitors! A tool designed for teachers to analyze their Moodle course content and slide decks. By using large language models (LLMs), this plugin helps educators uncover structural gaps, identify learning objectives and reflect on the clarity and alignment of their materials.

📘 Curriculum AI

A co-pilot for working with degree programme curricula - whether you’re revising learning outcomes, mapping competencies or aligning to national frameworks. Curriculum AI helps make curriculum work more collaborative and intelligent.

💸 Grant Writing AI

GrantWritingAI helps researchers draft and refine funding proposals using LLMs. The tool streamlines ideation, structure, and writing support, significantly reducing the time and stress involved in grant applications.

🔮 Oracle – Foresight Tool

This tool empowers educators and researchers to conduct horizon scanning and foresight exercises using AI. It synthesizes large volumes of trend data to support future-oriented thinking in academic planning and innovation strategies.
🎯 OpintoHain – AI-Powered Course Recommender

One of the stars of the day, OpintoHain uses GPT-4o and multi-agent workflows to recommend relevant courses to learners - whether they’re Metropolia students or lifelong learners from outside our university. Attendees were able to upload their CVs, define learning goals and receive curated learning paths in real time.

Community Response: Insightful, Encouraging and Energizing

Leo Huovinen and Melany Macías Morán

The response we received was overwhelmingly positive. Participants appreciated the hands-on nature of our demos and the clear focus on augmenting, not replacing, educators' expertise. It was especially rewarding to hear feedback from colleagues at Aalto University, University of Helsinki, LUT University and other institutions across Finland. Attendees expressed excitement not just about the tools themselves, but about the broader message: AI can be a meaningful part of academic life today - not just tomorrow.

What We Took Away

Sheng Tai, Lev Kharlashkin, Yehor Tereshchenko and Aki Morooka

Beyond showcasing technology, Pedaforum gave us a chance to listen deeply. Educators shared their needs, aspirations and concerns around AI. These conversations have already sparked ideas for the next iterations of our tools. And yes - we had a lot of fun too!

What’s Next?

We're continuing to develop, refine and open up these tools to more users across Metropolia and beyond. If you’re curious to explore how AI might fit into your teaching, research or learning journey, we’d love to hear from you. Stay tuned for more demos, workshops and community-building around responsible and empowering uses of AI in education. 🚀

Metropolia students created a context-aware AI NPC during the Supercell Hackathon

16.6.2025

Non-player characters (NPCs) in games have typically followed one rule: players act, and the character reacts - within strict, predefined limits. NPCs usually guide the player through a story predefined by the creator, whether that is the main storyline, a side quest or something else. Even when there are several such paths, there is rarely much diversity in the route players take through a game. It can be fun the first time, but on replay it becomes boring, and even at the start players have limited ability to step outside the "borders" defined by the developer. That is where AI can help: it can react to almost anything, and its reaction is unique even in the same circumstances. So why not use it in game development to make the final product and the user experience even more inspiring?

What happens when that reactivity becomes context-aware, unscripted, and emotionally intelligent? That's exactly what a group of Metropolia students set out to explore during the Supercell AI Hackathon, hosted by Junction in Helsinki this May. Their entry, developed in just over 24 hours, landed an impressive 3rd place among nearly 50 international teams and offered a glimpse into the future of interactive AI.

The team after receiving the "Supercell award".

The teammates worked on the Myllypuro campus for two days: on Friday, they brainstormed ideas and looked for inspiration, and on Saturday they threw themselves into prototype development. After several hours of efficient discussion, they came up with the idea of building the following reactive-NPC prototype:

“Purr-suit of Attention” illustration showing the cat and its AI witch companion.

A game where the player isn't in control - the AI is

The team's prototype, Purr-suit of Attention, flips the classic power dynamic between player and NPC.
Instead of commanding the world, the player steps into the paws of a curious cat living with an AI-powered fantasy witch - a non-playable character that responds dynamically to everything the cat does. The task of the game is to use different activities to show the AI what the cat wants and make the NPC perform a certain action.

So how does the cat-witch (player-to-NPC) interaction go? Meow? She speaks. Knock over a bottle? She sighs, laughs, or reacts with surprise. Jump on a table? She might ignore it, become indignant, or think that the cat is hungry. Meowing, scratching the front door, staring at it for a long time? The NPC will definitely conclude that the cat wants to go inside - but will she open the door? It depends on her mood and a bunch of other possible factors that cannot be described with "if-else statements".

The cat meows at a locked door as the witch offers to open it.

The twist? None of these reactions are scripted. They're generated in real time using a large language model, meaning each playthrough is unique and emotionally rich. And the most impressive thing is that the player has no limits or boundaries! The "cat" can do whatever it likes, and the AI will interpret the player's actions on its own.

How does it work?

The system relies on a full-stack integration of modern tools:

1. Unity 6 and C# for gameplay mechanics, API calls, animation control, etc.

Unity Editor scene view with the cat, witch, and setup.

2. Python (Flask) for backend logic, handling game events and states, and LLM calls;

3. Google Gemini for generating context-aware intent and choosing the animation and speech to play;

Split-screen of the Python backend code alongside the Unity Play window and debug logs.

4. Sesame Voice for natural, spoken responses.

Actions taken by the cat are tagged and flagged as events (meow, jump, scratch, etc.) and states (looking at and being near objects).
These are processed by the backend, where the AI evaluates them in context using techniques like high and low temperature, prompt enhancement (system, static and dynamic prompts; a moving "IMPORTANT" flag), reattempts, and random selection weighted by parameters from the AI response. The result? A witch NPC that doesn't just respond - she feels alive and believable (though a little slow-witted, because responses from a free LLM API take some time 😊).

Unity Editor with the cat script visualized and the backend answer in the logs.

The Metropolia team worked closely together, handling different roles from backend AI logic to design and animation scripting. The final prototype - developed from scratch in just one day - impressed judges and participants with its creativity, interactivity, and potential for further development.

More Than a Game

The cat sleeps under the task list while the AI witch teases, “Already sleeping, kitty?”

The significance of this project goes beyond the podium. It reflects a broader shift happening in AI and game design: toward responsive, emotional, and emergent behavior in digital characters. As large language models become more accessible, fast and controllable, the line between code and personality continues to blur. This isn't just a whimsical experiment; it's a prototype for a world where games don't just entertain - they converse, react, and surprise.

🎮 Watch the gameplay demo
📄 Junction submission
🏆 Award ceremony moment

Final cozy art: the witch sips a warm drink by the fire as the cat naps.

The contents of this blog reflect the collective effort of the Metropolia students (Yehor Tereshchenko, Artur Roos, Unai San Segundo, Kartik Patel) who participated in the hackathon. With gratitude to Metropolia for the opportunity to work from Myllypuro’s co-working spaces (especially on Saturday), for the knowledge gained in our studies, and for bringing the team members together as first-year students; thanks also to Supercell, Junction, and all participants.
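The event-to-reaction pipeline described in this post can be sketched roughly as follows. This is an illustrative reconstruction, not the team's actual code: all names are invented, and a canned rule stands in for the real Gemini call.

```python
# Illustrative sketch (not the team's code) of the NPC event pipeline:
# the game tags cat actions as events and states, the backend folds them
# into a prompt, and the LLM's JSON reply selects an animation and speech.

import json

def build_prompt(events, states, mood):
    return (
        "You are a witch NPC living with a cat. React in character.\n"
        f"Mood: {mood}\nEvents: {', '.join(events)}\nStates: {', '.join(states)}\n"
        'Reply as JSON: {"animation": ..., "speech": ...}'
    )

def mock_llm(prompt):
    # Stand-in for the LLM call: door-related context -> open the door.
    if "scratch_door" in prompt or "near_door" in prompt:
        return json.dumps({"animation": "open_door", "speech": "Out again, kitty?"})
    return json.dumps({"animation": "shrug", "speech": "What now?"})

def handle(events, states, mood="cheerful"):
    """Backend step: prompt the model and parse its reply for the game client."""
    reply = json.loads(mock_llm(build_prompt(events, states, mood)))
    return reply["animation"], reply["speech"]

print(handle(["meow", "scratch_door"], ["near_door", "looking_at_door"]))
# ('open_door', 'Out again, kitty?')
```

The real system replaces the canned rule with a Gemini request plus the temperature, prompt-enhancement and retry techniques mentioned above, which is exactly what makes the reactions non-deterministic.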

Will artificial intelligence take over?

The rapid development of artificial intelligence has led people to wonder whether AI might one day take power into its own hands. There are plenty of people reassuring us that everything is in human hands and that, ultimately, humans are responsible for everything. But are we? AI may not seize power in the sense of enslaving humanity, but we are already effectively outsourcing power and responsibility to it today.

We Worship the Machine

Clumsy and poorly designed systems are part of our everyday lives. We already have to spend time clicking buttons in the correct order or remembering to do something in a certain system. And if we fail in these rituals of machine worship, we must sacrifice more working time at the machine’s altar, repeating magic words like “oh, that went wrong,” “hold on, what happened here” and “how did we get here again”. The more time we spend trying to please the machine, the more it heats up its processor – the human is enslaved. Humans must press buttons in the correct order, lest the machine get angry and punish them.

And how often have you found yourself in a situation where nothing could be done because the machine wouldn’t allow it? These situations have surely happened to many of us. A public transit ticket couldn’t be bought because the app froze, or you missed out on loyalty points at the grocery store because the system didn’t recognize your card. Luckily, a human is ultimately responsible – the same human who can only shrug, because the real power lies with the machine.

AI is Already Guiding Us

Who gets to decide what truth we believe in? To a large extent, that decision-making power has already been outsourced to artificial intelligences. We often solve our problems by Googling them, but Google doesn't give us answers based on their usefulness or truthfulness – the answers are ranked by AI. Where is the human who takes responsibility when Google's AI feeds us false information or hides things from us?
Nowhere – the power lies with AI.

AI Easily Learns Which Strings to Pull

Large international online stores like Amazon and Temu boldly use AI to steer users toward certain products. Sometimes the cheapest options are hard to find because the smart search has figured out you’re willing to pay more. The responsibility, of course, lies with the person – well, you bought it, didn’t you?

We Are Eagerly Handing Over More Power to AI

Probably nothing in this text is surprising to anyone; what’s most surprising is the contradiction in our values. The same people who fear AI dominance are often the ones outsourcing more power to AI to make their work easier. One of the funniest examples from the academic world is Turnitin and the automatic checking of essays using AI.

We humans are happily giving AI the keys to power.

Let’s go ahead and let AI decide whose thesis gets approved or rejected, and who gets what grade for an essay. The final responsibility lies with the teacher – who may be incapable of evaluating the reliability of the AI. What could possibly go wrong with this setup?

Does AI only repeat what it has learned?

24.3.2025

Artificial intelligence is often criticized with claims that it can only repeat its training data, and therefore always produces plagiarized and average output. Is there any truth to these claims?

Claim 1: AI retrieves its answers from a database

I’ve encountered this claim often. The idea is that AI retrieves answers from its database, and thus it plagiarizes or fails to find the correct answer. Large language models and image-generating AI models do not, by default, have access to any kind of database. Instead, these models have learned to generate responses independently. The image or poem produced by AI, for example, does not exist as-is in any database.

Large language models don’t use databases, but they can be connected to one

Today, large language models can indeed be connected to a database. Currently, the most common method for doing this is the so-called RAG model (Retrieval Augmented Generation). In this setup, the AI can retrieve information from a database to support its answer. However, the AI still writes the response itself.

Claim 2: AI only produces average answers

This claim is more complex, as there are many types of generative AI models. Images are often produced using diffusion models, which begin with a random mess of pixels and gradually transform that noise into a better image. The AI aims to reach some sort of average optimal output, so its tendency is toward the mean. Diffusion models run iteratively – each iteration creates a better image, one that’s also closer to the average. Somewhere between the initial noise and the average optimum lies an iteration where the AI produces good images that haven’t yet converged into uniform, average-looking ones. These images are by no means simply average, even if they inevitably share something with the optimal average.

With an update, Adobe Firefly began producing better, though very similar, images.

What about large language models?
They also aim to produce the best possible answer, which often results in an average-like response, depending on the prompt. However, large language models have a feature that allows you to adjust the temperature, which influences how average or creative the responses are. At the extremes, adjusting the temperature can make the model generate either extremely bland text or pure nonsensical gibberish.

Emergent intelligence

The intelligence of large language models is emergent. They can generalize from what they’ve learned to completely new tasks. This simply means that AI models can generate responses to questions they’ve never encountered in their training data. These responses are not merely average repetitions of what’s already been learned, as the AI does not just mimic its data like a parrot would.

Adobe Firefly’s training data guides it so heavily that it cannot generate a wine glass filled to the brim.

Image-generating AI models do not show the same level of emergent intelligence, as their training data influences their output more heavily than with text models. It can often be nearly impossible to get certain kinds of images from them.

Average or not?

The claim that AI only produces average responses oversimplifies things. Training data influences AI more or less depending on the model, but that doesn’t mean AI is only capable of producing dull, obvious answers. AI also doesn’t just repeat what it has learned, since it’s trained to provide responses to problems it has never encountered before.
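The temperature knob mentioned above can be made concrete: next-token probabilities come from a softmax over the model's scores divided by the temperature. Low temperature sharpens the distribution toward the single "average best" token; high temperature flattens it toward noise. The logits below are toy numbers, not from any specific model.

```python
# Temperature in language models, sketched with the standard formula:
# probabilities = softmax(logits / T). Toy logits for three candidate tokens.

import math

def softmax_with_temperature(logits, t):
    scaled = [x / t for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                 # model scores for three candidate tokens
for t in (0.2, 1.0, 5.0):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
```

Running this shows the effect directly: at T = 0.2 nearly all probability mass sits on the top token (bland, predictable text), while at T = 5.0 the three tokens are almost equally likely (creative at best, gibberish at worst).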

Does artificial intelligence only look into the past?

Image: a robot and a human back to back; the robot looks left, the human right.
14.3.2025

Lately, an interesting argument has come to my attention: ChatGPT only looks into the past, whereas humans can look into the future. This idea stems from the fact that AI is trained on past data, and for instance, ChatGPT's knowledge of the world is limited to the last date of its training material. However, this does not directly mean that AI is only looking at the past.

Machine Learning Always Faces the Unknown

The fundamental principle of machine learning has always been to train AI on past data and test it on new, unseen data. This ensures that AI functions as expected even when encountering entirely new information.

Machine learning aims to work with new data.

Before large language models, language technology-based machine learning models often struggled when faced with completely new types of data. For example, AI trained on product reviews did not perform well in identifying positive and negative expressions in literary texts. However, these limitations have been overcome with large language models, as they can generalize their learning to perform many different tasks.

Do Humans Really Look into the Future Any Better?

When we humans encounter something new, we often rely on past knowledge to act. Our own "training data" also ends at the present moment. If we see an unfamiliar furry creature on a leash walking towards us, we logically assume it is a dog. This assumption is based on previous knowledge. If it turns out to be a completely new and unknown animal species, we are surprised by the encounter. Similarly, AI relies on existing knowledge when encountering new things. The difference is that, at present, we do not have AI tools capable of dynamically learning from their experiences and updating themselves. AI will always assume that the furry creature is a dog until its training data includes information that a new pet-friendly species has been discovered. A human, however, would learn this instantly.
Foreseeing the Future is Reasoning

Just as humans predict the future using reasoning and scenario planning, AI can also predict the future by drawing logical conclusions. Large language models are already capable of reasoning and performing tasks that require thought. AI can therefore look into the future if properly guided with prompts to make predictions. Many AI tools, such as ChatGPT and Perplexity, can also fetch additional information from the web, allowing them to base their reasoning on up-to-date data.

Can AI Be Used to Forecast Change with MLPESTEL?

Image: people worshiping a singularity in an office.
10.3.2025

Dr Khalid Alnajjar and Dr Mika Hämäläinen explored in their MBA thesis the capability of artificial intelligence (AI) to forecast change in the operational environment of companies. For this task, they employed a large language model (LLM) and developed a new theoretical framework called MLPESTEL.

The Paradigm Shift that Made Forecasting Possible

Traditionally, machine learning (ML) techniques have relied on learning patterns from data for individual tasks. Therefore, such models have been able to formulate predictions only in a very limited application area, such as weather forecasting or financial forecasting. However, the dawn of LLMs made it possible for AI to conduct reasoning in domains outside of narrow topics and on textual data instead of numerical data.

A Call for a New Framework

Although LLMs such as ChatGPT have incredible capabilities in terms of reasoning and answering a variety of prompts, they cannot tackle a problem as difficult as forecasting change with a mere prompt. LLMs can reason, but they need to be given the tools to do so - just like us humans. Furthermore, such a complex task must be split into smaller subproblems.

The MLPESTEL framework by Alnajjar & Hämäläinen (2024)

The researchers developed a new framework called MLPESTEL, which draws its inspiration from PESTEL, a framework traditionally used in business research, and from Ecological Systems Theory, a framework commonly used to understand the social development of a child. The former is important for the business application area of the research, whereas the latter was used to divide each individual PESTEL category into four different subsystems – micro, exo, meso and macro systems. The resulting framework would be quite complex for a person to conduct analysis with, but not at all too demanding for an LLM, which can easily operate at such a level of complexity.
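The structure of that grid can be sketched in a few lines: six PESTEL categories crossed with the four subsystems give 24 analysis cells, each of which could be handed to an LLM as its own subproblem. Only the category/subsystem grid below comes from the framework; the prompt wording is invented for illustration.

```python
# Sketch of enumerating the MLPESTEL grid into per-cell LLM prompts.
# The grid (6 PESTEL categories x 4 subsystems) follows the framework;
# the prompt template itself is an invented placeholder.

from itertools import product

PESTEL = ["Political", "Economic", "Social", "Technological", "Environmental", "Legal"]
SUBSYSTEMS = ["micro", "exo", "meso", "macro"]

def mlpestel_prompts(company):
    return {
        (cat, sub): (f"Forecast {cat.lower()} changes in the {sub} system "
                     f"of {company}'s operational environment.")
        for cat, sub in product(PESTEL, SUBSYSTEMS)
    }

prompts = mlpestel_prompts("Nokia")
print(len(prompts))  # 24 analysis cells
print(prompts[("Technological", "macro")])
```

Splitting the task this way is what makes the problem tractable for an LLM: each cell is a narrow, answerable prompt, and the full analysis is the union of the 24 answers.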
Early Results on AI-based Forecasting

The researchers investigated the viability of their method by studying the predictive capabilities of an LLM using the MLPESTEL framework on two international companies: Nokia and Tesla. The method was able to correctly predict the opportunity 5G technology brought to Nokia and the difficulties the global chip shortage caused for Tesla. The results obtained in the thesis work are promising and serve as a proof of concept. LLMs have reached such a maturity level that they can be used in forecasting tasks. MLPESTEL has extended the theoretical capability of conducting forecasting in the context of the operational business environment. This research has paved the road for future studies on LLM-driven forecasting and futures studies. The findings serve as a stepping stone for a more comprehensive platform to be developed at Metropolia University of Applied Sciences.