
AI Governance: A Season’s Greetings

19. 12. 2024
Overview

Deep dive into AI governance and building trustworthy, ethical AI systems. And a poem, because even AI deserves a sprinkle of holiday cheer.

Howdy AI friends,

As we flip the calendar to 2025, a new era of AI regulation is dawning. The first provisions of the EU AI Act will start to apply in February, marking a significant milestone for AI solutions. While the Act quietly entered into force on August 1st, 2024, its full impact will be felt in the coming months. This groundbreaking legislation introduces a complex regulatory framework involving multiple EU and Member State actors.

At its heart, the EU AI Act aims to ensure the safe and trustworthy use of AI. Rather than regulating the technology itself, the Act targets the potential risks arising from its practical applications. By categorizing those risks into four levels – unacceptable, high, limited, and minimal – the Act aims to establish a robust framework for AI development and deployment. Check out the European Commission’s webpage for a deeper dive into the specifics.
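
To make the four-tier idea concrete, here is a minimal, purely illustrative sketch in Python. The tier names mirror the Act, but the example use cases and the classify_use_case helper are my own hypothetical illustrations – a real classification requires a legal assessment of the concrete use case.

```python
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned practices, e.g. social scoring
    HIGH = "high"                  # e.g. AI used in recruitment or credit scoring
    LIMITED = "limited"            # transparency duties, e.g. chatbots
    MINIMAL = "minimal"            # e.g. spam filters, no extra obligations

# Hypothetical mapping, for illustration only; real classification requires
# a legal assessment of the concrete use case against the Act's annexes.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "cv screening for hiring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def classify_use_case(description: str) -> Optional[RiskTier]:
    """Return the illustrative tier for a known example, else None (unclear)."""
    return EXAMPLE_USE_CASES.get(description.lower())

if __name__ == "__main__":
    for case in ["CV screening for hiring", "robotic lawn mower"]:
        tier = classify_use_case(case)
        print(f"{case}: {tier.value if tier else 'classification unclear'}")
```

The None branch in the sketch mirrors the point below: for many real systems, the classification is simply not yet clear.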

A recent study by appliedAI revealed a concerning trend: 18% of AI systems are categorized as high-risk, while the risk classification for a staggering 40% remains unclear. This uncertainty is hindering investment and innovation in the AI space. As we navigate this murky landscape, it’s crucial to ask: What does this mean for the practical application of AI?

The problem of ensuring responsible AI is multifaceted. Let me break it down into its core components:

  • Transparent and Trustworthy AI Ingredients: How can we ensure that the data and models used to build AI systems are transparent and reliable?
  • The AI Kitchen Context: In what specific business context and for what purpose is the AI solution being deployed? Understanding the use case is crucial.
  • The AI Diners: How does the AI solution impact the stakeholders? Considering the potential effects on users, employees, and society is essential.

The ingredients

Data governance is crucial for AI governance because it ensures that the data feeding AI systems is accurate, secure, and well-managed. By implementing a structured approach to metadata management – such as the Data Governance Tool assessment service that my colleagues from the Data team have designed – we can identify and address gaps in data security and access control. This protects sensitive information and ensures that AI models are trained on reliable and well-organized data. Effective data governance also helps map business requirements to metadata needs, making critical data assets easily accessible for AI applications. Ultimately, this leads to more trustworthy and efficient AI systems, because they rely on high-quality data that is consistently managed and protected.

Data governance is just one piece of the AI puzzle. We also need strong model governance to ensure the smooth operation of AI systems. My colleague Ivan Krizanic’s blog on MLflow highlights the importance of this aspect. MLflow, an open-source platform, acts as a trusty sidekick for data scientists, streamlining the entire machine learning lifecycle. From data collection to model deployment, MLflow simplifies complex tasks, allowing teams to focus on building innovative AI solutions. By tracking experiments, managing models, and ensuring data lineage, MLflow helps maintain high standards of AI governance. Its compatibility with popular ML frameworks and flexibility for custom models make it a valuable tool for any AI team.
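
To give a flavour of what that looks like in practice, here is a minimal sketch using MLflow’s Python tracking API; the experiment name, model, and dataset are placeholders of my own, not taken from Ivan’s post.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Group related runs under one experiment so they stay auditable later.
mlflow.set_experiment("governance-demo")  # placeholder experiment name

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 3}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    # Log parameters, metrics, and the model artifact for traceability.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```

Each run recorded this way keeps its parameters, metrics, and model artifact together – exactly the kind of traceability that model governance relies on.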

[Image: a human and a robot assembling puzzle pieces – data governance as one piece of the AI puzzle]

The context

However, the EU AI Act requires a broader perspective on AI governance. This involves capturing crucial information about the context in which AI solutions are developed. Several frameworks have emerged to address this point, and they emphasize the importance of mapping the technical AI lifecycle against the business context while considering the impact on relevant stakeholders. By understanding the interplay between technology and business, we can ensure that AI solutions are developed responsibly and ethically.

The AI Risk Repository, developed by MIT researchers, is a valuable tool that maps the AI risk landscape across three dimensions: a database of risks from existing frameworks, a risk taxonomy, and a domain taxonomy. This tool helps identify less obvious but impactful areas for AI use cases.

The stakeholders

The final pillar of AI governance is focused on stakeholders. This area is still evolving, and organizations are navigating uncharted territory. Recent conferences like AI@HPI have highlighted the challenges and opportunities.

[Image: a girl with a short black bob and round glasses weighing the challenges and opportunities of AI governance]

On the positive side, general guidelines for human-centric AI are available, and large organizations like SAP and Infineon are leading the way in implementing ethical principles. Yet smaller organizations, particularly SMEs, may need specialized expertise to keep pace.

One significant challenge is ensuring that AI systems are trustworthy by design. While standardized measures for ethical AI, akin to car safety tests, are still in development (see, for example, the work done by TÜV ai.Lab), it’s clear that such measures will need to be tailored to specific use cases.

Additionally, ethical dilemmas continue to emerge. For example, the “functionality dilemma” raises questions about how to access sensitive information without compromising privacy, and the “automation dilemma” highlights the complexities of maintaining human oversight in increasingly autonomous AI systems.

These challenges can be particularly daunting for SMEs, which may lack the resources to establish dedicated ethics committees. However, as highlighted in my conversation with Auxane Boch, it’s essential to prioritize ethical considerations even in resource-constrained environments.

Let’s raise a glass to a year filled with AI innovation and ethical considerations. As we savor the holiday season, may the new year bring a feast of AI advancements that nourish our world.

A Festive Ode to Trustworthy AI – by Giulia Solinas and GenAI

In a world of tech so bright and new,
Where AI stirs the festive stew,
We gather ’round this cheerful clime,
To ponder ethics, quite sublime.

With algorithms that knead and rise,
Like Christmas loaves or mincemeat pies,
We must ensure they rightly blend,
Fairness baked in, start to end.

Governance strong, like gingerbread,
Shapes AI’s path, where we tread,
Transparency, our guiding light,
To keep each recipe just right.

Ethics folded, policies clear,
Spiced with trust, to hold so dear,
No bitter bias, hidden snares,
Just a platter of fairness and cares.

As we whisk and feast with glee,
Let’s craft a code for AI’s decree,
For in this season, sweet and true,
Ethical AI starts with me and you.

Happy holidays, and may your kitchen – and your AI – be filled with joy and balance!
