Metropolia AI at Pedaforum 2025: Demos, Dialogue and a Lot of Excitement
Earlier this month, the Metropolia AI team, led by Mika Hämäläinen, had the exciting opportunity to take part in Pedaforum 2025, Finland’s largest higher education pedagogy conference, held right in our home base of Myllypuro, Helsinki. Our team ran an interactive demo booth showcasing five AI-powered tools we’ve been developing to support educators, researchers and students. Throughout the day, we had the pleasure of engaging with a diverse and enthusiastic crowd - from university lecturers and curriculum designers to grant officers and edtech innovators. And yes, we gave out plenty of candy too 🍬.

What We Showcased: 5 Practical AI Tools for Higher Education

We believe that the best way to explore AI is to build with it - and that’s exactly what we’ve been doing. Here are the tools we presented.

🎓 AI Moodle Plugin

A favorite among many visitors! This tool lets teachers analyze their Moodle course content and slide decks. Using large language models (LLMs), the plugin helps educators uncover structural gaps, identify learning objectives and reflect on the clarity and alignment of their materials.

📘 Curriculum AI

A co-pilot for working with degree programme curricula - whether you’re revising learning outcomes, mapping competencies or aligning to national frameworks. Curriculum AI helps make curriculum work more collaborative and intelligent.

💸 Grant Writing AI

GrantWritingAI helps researchers draft and refine funding proposals using LLMs. The tool streamlines ideation, structure and writing support, significantly reducing the time and stress involved in grant applications.

🔮 Oracle – Foresight Tool

This tool empowers educators and researchers to conduct horizon scanning and foresight exercises using AI. It synthesizes large volumes of trend data to support future-oriented thinking in academic planning and innovation strategies.
🎯 OpintoHain – AI-Powered Course Recommender

One of the stars of the day, OpintoHain uses GPT-4o and multi-agent workflows to recommend relevant courses to learners - whether they’re Metropolia students or lifelong learners from outside our university. Attendees were able to upload their CVs, define learning goals and receive curated learning paths in real time.

Community Response: Insightful, Encouraging and Energizing

Leo Huovinen and Melany Macías Morán

The response we received was overwhelmingly positive. Participants appreciated the hands-on nature of our demos and the clear focus on augmenting, not replacing, educators’ expertise. It was especially rewarding to hear feedback from colleagues at Aalto University, University of Helsinki, LUT University and other institutions across Finland. Attendees expressed excitement not just about the tools themselves, but about the broader message: AI can be a meaningful part of academic life today - not just tomorrow.

What We Took Away

Sheng Tai, Lev Kharlashkin, Yehor Tereshchenko and Aki Morooka

Beyond showcasing technology, Pedaforum gave us a chance to listen deeply. Educators shared their needs, aspirations and concerns around AI. These conversations have already sparked ideas for the next iterations of our tools. And yes - we had a lot of fun too!

What’s Next?

We’re continuing to develop, refine and open up these tools to more users across Metropolia and beyond. If you’re curious to explore how AI might fit into your teaching, research or learning journey, we’d love to hear from you. Stay tuned for more demos, workshops and community-building around responsible and empowering uses of AI in education. 🚀
Metropolia students created an AI-powered, context-aware NPC during the Supercell Hackathon
Non-player characters (NPCs) in games have typically followed one rule: players act, and the character reacts - within strict, predefined limits. NPCs usually guide the player through a story laid out by the creator, be it the main storyline or a side quest. However varied those branches may be, there is rarely much real diversity in the path a player can take. That can be fun the first time through, but on replay it becomes predictable - and even on a first run, players have limited ability to step outside the “borders” defined by the developer. This is where AI can help: it can react to almost anything, and its reaction is unique even in identical circumstances. So why not use it in game development to make the final product and the player experience even more inspiring?

What happens when that reactivity becomes context-aware, unscripted, and emotionally intelligent? That’s exactly what a group of Metropolia students set out to explore during the Supercell AI Hackathon, hosted by Junction in Helsinki this May. Their entry, developed in just over 24 hours, landed an impressive 3rd place among nearly 50 international teams and offered a glimpse into the future of interactive AI.

The team after receiving the “Supercell award”.

The teammates worked on the Myllypuro campus for two days: on Friday they brainstormed ideas and looked for inspiration, and on Saturday they dove into intensive prototype development. After several hours of efficient discussion, they settled on building the following reactive-NPC prototype:

“Purr-suit of Attention” illustration showing the cat and its witch AI companion

A game where the player isn’t in control - the AI is

The team’s prototype, Purr-suit of Attention, flips the classic power dynamic between player and NPC.
Instead of commanding the world, the player steps into the paws of a curious cat living with an AI-powered fantasy witch - a non-playable character that responds dynamically to everything the cat does. The goal of the game is to use the cat’s different activities to make the AI understand what the cat wants, and so get the NPC to perform a certain action. So how does the cat-witch (player-to-NPC) interaction go?

Meow? She speaks. Knock over a bottle? She sighs, laughs, or reacts with surprise. Jump on a table? She might ignore it, become indignant, or decide the cat is hungry. Meow, scratch the front door, stare at it for a long time? The NPC will certainly conclude that the cat wants to go out - but will she open the door? That depends on her mood and a bunch of other possible factors that cannot be captured in “if-else statements”.

The cat meows at a locked door as the witch offers to open it.

The twist? None of these reactions are scripted. They’re generated in real time using a large language model, meaning each playthrough is unique and emotionally rich. And most impressively, the player has no limits or boundaries: the “cat” can do whatever it wants, and the AI interprets the player’s actions on its own.

How it works

The system relies on a full-stack integration of modern tools:

1. Unity 6 and C# for gameplay mechanics, API calls, animation control, and more.

Unity Editor scene view with the cat, witch, and setup.

2. Python (Flask) for backend logic: handling game events and states, and making the LLM calls.

3. Google Gemini for generating context-aware intent, choosing the animation to play, and producing speech.

Split-screen of the Python backend code alongside the Unity Play window and debug logs.

4. Sesame Voice for natural, spoken responses.

Actions taken by the cat are tagged as events (meow, jump, scratch, etc.) and states (looking at and being near objects).
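To make the event flow concrete, here is a minimal, illustrative sketch of how a backend like this might process a tagged cat event and ask an LLM for the witch’s reaction. All names here (call_llm, build_prompt, react_to_event, the prompt layers) are our own illustrative assumptions, not the team’s actual code; a real deployment would replace the stub with a call to the Google Gemini API and serve this logic behind a Flask route.

```python
# Illustrative sketch only: event -> layered prompt -> LLM -> reaction.
import json
import random

def call_llm(prompt: str, temperature: float) -> str:
    """Stand-in for a real Gemini API call; returns a JSON reaction string."""
    return json.dumps({"animation": "look_at_cat", "speech": "Meow yourself!"})

def build_prompt(system: str, static_ctx: str, dynamic_ctx: str, event: dict) -> str:
    # Layered prompt: system prompt, static world context, dynamic state,
    # an emphasized instruction, and finally the tagged event itself.
    return "\n".join([
        system,
        static_ctx,
        dynamic_ctx,
        "IMPORTANT: respond only as the witch, as JSON with 'animation' and 'speech'.",
        f"Event: {event['type']}; states: {event['states']}",
    ])

def react_to_event(event: dict) -> dict:
    prompt = build_prompt(
        system="You are a fantasy witch living with a curious cat.",
        static_ctx="The cottage has a door, a table, and potion bottles.",
        dynamic_ctx=f"Current mood: {random.choice(['cheerful', 'grumpy'])}",
        event=event,
    )
    # Retry at decreasing temperature if the model returns unparseable output.
    for temperature in (0.9, 0.4, 0.1):
        try:
            return json.loads(call_llm(prompt, temperature))
        except json.JSONDecodeError:
            continue
    return {"animation": "idle", "speech": ""}  # safe fallback

reaction = react_to_event({"type": "meow", "states": ["near_door"]})
print(reaction["animation"])
```

In this sketch, Unity would POST each event to the backend and play back the returned animation and speech; the retry loop and temperature sweep mirror the robustness tricks the team describes.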
These are processed by the backend, where the AI evaluates them in context using techniques such as alternating high and low temperature, prompt enhancement (system, static and dynamic prompts; a moving “IMPORTANT” flag), retries, and random selection weighted by parameters from the AI response. The result? A witch NPC that doesn’t just respond - she feels alive and believable (if a little slow on the uptake, since responses from a free LLM API take some time 😊).

Unity editor with the cat script visualized and the backend answer in the logs

The Metropolia team worked closely together, covering roles from backend AI logic to design and animation scripting. The final prototype - developed from scratch in just one day - impressed judges and participants with its creativity, interactivity, and potential for further development.

More Than a Game

The cat sleeps under the task list while the AI witch teases, “Already sleeping, kitty?”

The significance of this project goes beyond the podium. It reflects a broader shift happening in AI and game design: toward responsive, emotional, and emergent behavior in digital characters. As large language models become more accessible, fast and controllable, the line between code and personality continues to blur. This isn’t just a whimsical experiment - it’s a prototype for a world where games don’t just entertain; they converse, react, and surprise.

🎮 Watch the gameplay demo
📄 Junction submission
🏆 Award ceremony moment

Final cozy art: the witch sips a warm drink by the fire as the cat naps.

The contents of this blog reflect the collective effort of the Metropolia students (Yehor Tereshchenko, Artur Roos, Unai San Segundo, Kartik Patel) who participated in the hackathon. With gratitude to Metropolia for the opportunity to work from Myllypuro’s co-working spaces (especially on Saturday), for the knowledge gained in our studies, and for bringing the team members together as first-year students; thanks also to Supercell, Junction, and all participants.
Will artificial intelligence take over?
The rapid development of artificial intelligence has led people to wonder whether AI might one day take power into its own hands. There are plenty of people reassuring us that everything is in human hands and that, ultimately, humans are responsible for everything. But are we? AI may not seize power in the sense of enslaving humanity, but in practice we are already outsourcing power and responsibility to it today.

We Worship the Machine

Clumsy and poorly designed systems are part of our everyday lives. We already have to spend time clicking buttons in the correct order or remembering to do something in a certain system. And if we fail in these rituals of machine worship, we must sacrifice more working time at the machine’s altar, repeating magic words like “oh, that went wrong,” “hold on, what happened here” and “how did we get there again”. The more time we spend trying to please the machine, the more it heats up its processor - the human is enslaved. Humans must press buttons in the correct order, lest the machine get angry and punish them.

And how often have you found yourself in a situation where nothing could be done because the machine wouldn’t allow it? These situations have surely happened to many of us. A public transit ticket couldn’t be bought because the app froze, or you missed out on loyalty points at the grocery store because the system didn’t recognize your card. Luckily, a human is ultimately responsible - the same human who can only shrug, because the real power lies with the machine.

AI Is Already Guiding Us

Who gets to decide what truth we believe in? To a large extent, that decision-making power has already been outsourced to artificial intelligences. We often solve our problems by Googling them, but Google doesn’t give us answers based on their usefulness or truthfulness - the answers are ranked by AI. Where is the human who takes responsibility when Google’s AI feeds us false information or hides things from us?
Nowhere - the power lies with AI.

AI Easily Learns Which Strings to Pull

Large international online stores like Amazon and Temu boldly use AI to steer users toward certain products. Sometimes the cheapest options are hard to find because the smart search has figured out you’re willing to pay more. The responsibility, of course, lies with the person - well, you bought it, didn’t you?

We Are Eagerly Handing Over More Power to AI

Probably nothing in this text is surprising to anyone; what is most surprising is the contradiction in our values. The same people who fear AI dominance are often the ones outsourcing more power to AI to make their own work easier. One of the funniest examples from the academic world is Turnitin and the automatic checking of essays using AI.

We humans are happily giving AI the keys to power

Let’s go ahead and let AI decide whose thesis gets approved or rejected, and who gets what grade for an essay. The final responsibility lies with the teacher - who may be incapable of evaluating the reliability of the AI. What could possibly go wrong with this setup?