
Our colleague and AI Person of the Year 2024 Noëlle shared her practical approach to Responsible AI at an event discussing the challenges of AI in parliamentary contexts.
Many organisations now recognise the importance of Responsible AI at a conceptual level, but translating abstract principles into concrete practice remains a challenge: what is considered fair in one scenario may be inappropriate in another.
Our colleagues developed a methodology that bridges this gap by translating high-level Responsible AI principles into specific, actionable software requirements. It helps institutions define what each Responsible AI principle means within their specific context.
Our colleague Noëlle Cicilia, winner of the AI Person of the Year 2024 award, presented at the event "AI Dilemmas in Parliaments", which took place at the European Parliament in Brussels on 17 October 2025.
Organised by the European Centre for Parliamentary Research & Documentation, the event gathered parliaments from Europe and beyond to discuss the challenges of AI in parliamentary contexts, with Responsible AI playing a crucial role.
An enthusiastic data scientist and co-founder of Brush AI, a sister company within De Cronos Groep, Noëlle has a unique passion for Responsible AI and makes algorithmic fairness accessible through intuitive visualisations that render complex concepts comprehensible.
She developed a methodology that translates abstract ethical principles into concrete technical requirements and working software; for Noëlle, Responsible AI is less about guidelines, ethical discussions, and policies than about technical implementation.
We sat down with her to hear how exciting it was to share her practical approach on such a stage!
The main insight I shared was our practical approach to implementing Responsible AI principles. While many organisations recognise the importance of Responsible AI on a conceptual level, the challenge lies in translating abstract principles into concrete practice.
Everyone agrees that AI solutions should be explainable, fair, and transparent. However, the difficulty emerges when we try to implement these principles in real systems. My colleagues and I have developed a methodology that bridges this gap by translating high-level Responsible AI principles into specific, actionable software requirements. This approach makes Responsible AI tangible and measurable, moving it from aspiration to implementation.
By converting principles into requirements, we provide development teams with clear guidelines they can work with, test against, and verify. This transforms Responsible AI from a philosophical discussion into a practical framework that can be systematically applied throughout the AI development lifecycle.
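As a purely illustrative sketch of what "converting a principle into a testable requirement" can look like in practice, consider fairness expressed as a bound on the gap in selection rates between demographic groups. The function names, the example data, and the 0.2 threshold below are hypothetical assumptions, not the actual methodology described here; the point is only that such a requirement becomes something a development team can test against and verify automatically.

```python
# Hypothetical example: the principle "the system must treat demographic
# groups fairly" rewritten as a verifiable software requirement, e.g.
# "the selection-rate gap between any two groups must not exceed 0.2".
# All names, data, and the threshold are illustrative assumptions.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 predictions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group):
    """Largest difference in selection rate between any two groups."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Illustrative model outputs per group (0 = rejected, 1 = selected).
predictions = {
    "group_a": [1, 0, 1, 1, 0, 1, 0, 1],  # selection rate 0.625
    "group_b": [1, 0, 1, 0, 0, 1, 0, 1],  # selection rate 0.500
}

gap = demographic_parity_gap(predictions)  # 0.125 here
# The requirement as an automated check a CI pipeline could run:
assert gap <= 0.2, f"Fairness requirement violated: gap={gap:.3f}"
```

A check like this can run on every model release, which is what turns a principle into something measurable rather than aspirational.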
I believe there should always be an interplay between regulation and practical implementation. Regulations provide the framework; they set the boundaries and standards that guide us. We desperately need this in AI, particularly because it is a technology that is constantly evolving.
From a technical perspective, we often face a challenge: there are very few established best practices because the technology is so new. We do take inspiration from other disciplines, drawing on lessons from software engineering, data protection, and risk management, but regulation plays a crucial complementary role by helping us define quality standards and establish what 'good' looks like in practice.
However, regulation alone is not enough. The AI Act sets important requirements, but organisations still need practical methodologies to meet those requirements. This is where implementation frameworks become critical. They translate regulatory obligations into actionable steps that technical teams can follow.
What we are seeing is that the most effective approach combines both: regulation provides the 'what', the standards and outcomes we need to achieve, while practical implementation methodologies provide the 'how', the concrete processes and tools to get there. When these two work in tandem, we create an ecosystem where AI systems are not just compliant on paper, but genuinely responsible in practice.
It is essential because it brings together complementary strengths to accelerate trustworthy AI adoption in Europe. My colleagues and I specialise in the practical implementation of Responsible AI: the technical methodology and the concrete tools and frameworks needed to translate principles into practice.
Cronos Europa brings extensive experience working within EU institutions, understanding the specific challenges, procurement processes, and organisational dynamics that public sector organisations face.
We are then able to combine technical expertise with deep industry insights and understanding of the European public sector landscape.
I always try to stay away from making predictions, especially in a field that evolves as rapidly as AI. However, I do believe that Responsible AI will play an increasingly crucial role in defining how European public services develop and deploy AI systems.
One of the fundamental challenges we face is that regulating AI is extremely difficult because the technology moves so fast. Regulations struggle to keep pace with innovation. But here is where I see a powerful opportunity: if we focus more on Responsible AI and AI ethics as foundational elements, we can make our solutions more future-proof.
There is an important principle at work here: laws are very often rooted in ethical principles. Whether it is fairness, transparency, dignity, or non-discrimination, these ethical values tend to remain constant even as technology and specific regulations evolve. If we embed these ethical principles deeply into our AI solutions from the start, it becomes much easier to comply with future regulations, whatever form they may take.
So, in five years, I believe truly "responsible" AI in European public services will be characterised not just by compliance with the AI Act or other frameworks, but by systems that have ethics and responsibility built into their foundation.
Organisations seeking expert support in developing and deploying sustainable, transparent, and ethically aligned AI systems can look to Cronos Europa’s services.
Thanks to our more than 650 sister companies within De Cronos Groep, such as Brush.ai, we can rely on the knowledge and experience of 10,000 experts.
From initial exploration to full-scale implementation, Cronos Europa helps public institutions unlock their full potential through innovative and customised AI solutions.