The conversation around European AI often circles around catching up to the scale of LLMs like DeepSeek AI. But what if the real opportunity lies in going in a different direction altogether? At Iris.ai, we believe Europe’s future in AI isn’t about building the biggest models – it’s about creating the smartest, most domain-specialized ones.

As featured in Tech.eu, our CEO Anita Schjøll Abildgaard and CTO Victor Botev shared why Small Language Models (SLMs) are key to Europe’s competitive edge in AI.

Why SLMs?
✔️ Cheaper and greener, using up to 60,000x less compute
✔️ Easier to fine-tune for specific business needs
✔️ More practical for real-world workflows in chemistry, healthcare, IP, and beyond
✔️ Aligned with Europe’s sustainability and sovereignty goals

“If you can distill just the knowledge you need into a smaller model, it’s more efficient and cost-effective.” – Victor Botev

💡 Our collaboration with Sigma2 AS / Norwegian Research Infrastructure Services (NRIS), Norway’s supercomputing infrastructure, enables us to train, fine-tune, and evaluate domain-adapted models (1–9B parameters) efficiently and securely. It’s not just about performance; it’s about sustainable and transparent AI innovation aligned with Europe’s values.

We’ve also launched a powerful RAG (retrieval-augmented generation) system, a decade in the making, built on an agent-based architecture powered by small models, not giants.

📖 Link to the full article in the comments!

#AI #RAG #TechEU
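For readers wondering what a retrieval-augmented generation loop "powered by small models, not giants" can look like, here is a minimal, illustrative sketch: embed a handful of domain documents, retrieve the most relevant ones for a question, and let a compact generator answer from that context. The model names (all-MiniLM-L6-v2, google/flan-t5-small) and the toy documents are generic open-source placeholders chosen for the example, not Iris.ai's actual stack, and the agent orchestration layer is omitted for brevity.

```python
# Illustrative small-model RAG sketch (not Iris.ai's production system).
import numpy as np
from sentence_transformers import SentenceTransformer
from transformers import pipeline

documents = [
    "Perovskite solar cells degrade faster under high humidity.",
    "Lithium iron phosphate batteries trade energy density for thermal stability.",
    "Patent claims must be supported by the description in the specification.",
]

# Small embedding model (~22M parameters) handles the retrieval step.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question (cosine similarity)."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q_vec
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

# Compact generator (~80M parameters) answers strictly from the retrieved context.
generator = pipeline("text2text-generation", model="google/flan-t5-small")

question = "Why might a battery chemistry be chosen for safety over range?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(generator(prompt, max_new_tokens=96)[0]["generated_text"])
```

The point of the sketch is that every component fits comfortably on commodity hardware: retrieval narrows the problem so the generator never needs frontier-scale capacity.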
Comment (Expert in Data Engineering and NLP):
Instead of building massive generalist models, imagine SLMs as experts with deep domain knowledge, like someone with a PhD in physics, computer science, or law: focused, efficient, and incredibly smart in their specific field. And from a cost and sustainability perspective, the advantage is undeniable. Why spend millions training a giant when a compact, specialized model can deliver sharper results?