I joined a German enterprise and finally understood what AI governance actually means
January 2025 · Enterprise AI · 13 min read

My first week at the new company, I tried to paste a code snippet into ChatGPT. My colleague physically reached over and closed my laptop. "We do not do that here." That was my introduction to real AI governance.

The Culture Shock

Before joining my current employer - a subsidiary of a large German company in the energy sector - I had spent years in startups and small agencies. "Move fast and break things" was not just a motto; it was the operating system. Ship on Friday, fix on Monday, apologize if needed.

My first week at the new company ended that worldview permanently.

I was debugging a Symfony controller and wanted to ask Claude about an edge case. Standard practice at my previous jobs. I opened the browser, started pasting the code, and my colleague Markus - a senior developer who had been with the company for eleven years - reached across the desk and closed my laptop lid.

"We do not do that here," he said. Not angry. Just matter-of-fact, like telling me the coffee machine was broken.

"Why not?" I asked, genuinely confused.

"Because that controller handles asset fault data for energy infrastructure. If that data ends up in a training set, we have a compliance problem that goes all the way to Berlin."

He was right. And that moment changed how I think about AI forever.

Understanding What Is At Stake

The company builds software for managing physical assets in the energy sector. When a piece of equipment breaks down - a transformer, a pipeline valve, a distribution node - a ticket opens in our system. It gets classified, assigned, tracked, and resolved. The data includes equipment specifications, location data, maintenance histories, and sometimes information that could be classified as critical infrastructure.

This is not a social media app where a data leak means embarrassing photos. This is infrastructure data where a breach could have physical-world consequences. The German legal framework around this is not a suggestion - it is the law, enforced with real penalties.

The Request (And the Resistance)

Six months after I joined, management asked the engineering team to "explore AI integration." They had seen the productivity gains reported by other companies. They wanted the same.

The developers were excited. GitHub Copilot, ChatGPT, Claude - finally, the tools everyone else was using. The legal department was less enthused. Their response was a fourteen-page memo explaining every way AI tools could violate GDPR, the EU AI Act, and internal data classification policies.

Both sides were right. The developers needed modern tools to stay competitive. Legal needed to protect the company from existential compliance risk. Someone had to bridge the gap.

That someone ended up being me.

Building the Framework

I did not just implement AI tools. I built a governance framework that legal, management, and developers could all agree on. It took three months of meetings, drafts, reviews, and compromises. Here is what we ended up with:

AI Usage Policy. Not a vague "be careful" document. A specific, detailed policy that answers questions developers actually have. Which tools are approved? What data can go through them? What happens if you accidentally paste something sensitive? How do you report an incident? Every developer reads it, signs it, and gets a refresher quarterly.

.aiignore Configuration. We adopted a system modeled on .gitignore but for AI tools. It is part of the AI tooling ecosystem, not something I built from scratch, but I configured it extensively for our codebase. Certain directories, file patterns, and even code patterns are automatically excluded from AI context. Configuration files with connection strings? Excluded. Modules that handle customer data? Excluded. Migration files with database schemas? Excluded. The AI tools literally cannot see the sensitive parts of the codebase.
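To give a feel for it, here is a trimmed-down, hypothetical example of such an ignore file. The paths and patterns are illustrative, not our actual configuration:

```
# Hypothetical .aiignore - patterns are illustrative, not our real config

# Credentials and environment configuration
.env
.env.*
config/packages/*secrets*

# Modules that touch customer or asset data
src/Module/CustomerData/**
src/Module/AssetFault/**

# Migrations and dumps leak table and column names
migrations/**
**/*.sql
```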

Data Censoring Pipeline. For situations where developers need AI help with sensitive code patterns, I built a preprocessing step. Before code leaves the network, a script redacts sensitive values while preserving structure. Variable names stay (they are needed for context), but values get masked. Connection strings become placeholders. API keys become [REDACTED]. The AI sees the pattern without the data.
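A minimal sketch of the idea, assuming simple regex rules. A production pipeline needs a proper secrets scanner (entropy checks, provider-specific key formats, a review step), but the principle looks like this:

```python
import re

# Illustrative redaction rules - names stay, values get masked.
RULES = [
    # DSN-style connection strings: keep the scheme, mask the rest
    (re.compile(r"\b(\w+)://[^\s'\"]+"), r"\1://[REDACTED]"),
    # Quoted values assigned to suspicious names (api_key, password, ...)
    (re.compile(r"(?i)((?:api[_-]?key|password|secret|token)\s*[:=]\s*)(['\"]).*?\2"),
     r"\1\2[REDACTED]\2"),
    # Long alphanumeric tokens that are probably keys
    (re.compile(r"\b[A-Za-z0-9]{32,}\b"), "[REDACTED]"),
]

def redact(source: str) -> str:
    """Mask sensitive values while keeping names and structure intact."""
    for pattern, replacement in RULES:
        source = pattern.sub(replacement, source)
    return source

if __name__ == "__main__":
    snippet = 'DB_DSN = "mysql://svc_user:hunter2@prod-db:3306/assets"'
    print(redact(snippet))  # DB_DSN = "mysql://[REDACTED]"
```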

Local LLM Deployment. For the most sensitive operations - analyzing production logs, reviewing security-critical code, processing anything that touches personal data - we deployed local models on company hardware. They are slower. They are less capable. But the data never leaves the building. For some operations, that is the only acceptable option.
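For illustration, a call to such a model might look like this, assuming an Ollama-style API on an internal host. The hostname and model name are made up:

```python
import json
from urllib import request

# Hypothetical internal endpoint - assumes an Ollama-style API running
# on company hardware; host and model name are illustrative.
LOCAL_LLM_URL = "http://llm.internal.example:11434/api/generate"

def ask_local_llm(prompt: str) -> str:
    """Send a prompt to the on-premises model; data never leaves the network."""
    payload = json.dumps({
        "model": "llama3.1:70b",   # whatever fits the local hardware
        "prompt": prompt,
        "stream": False,           # single JSON response instead of a stream
    }).encode("utf-8")
    req = request.Request(LOCAL_LLM_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# e.g. ask_local_llm("Summarize these production log lines: ...")
```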

The Shared Configuration

Once the governance framework was in place, I built the tools to enforce it. A shared configuration for our AI coding agents - Claude Code and Codex - that every developer installs. It includes twelve custom skills I wrote specifically for our workflow:

Jira integration that pulls ticket context without exposing customer data. Bitbucket PR workflows that enforce commit conventions and branch naming. Confluence access for documentation lookup. Outlook integration for scheduling. Laravel and Symfony development workflows with automated worktree management for parallel tasks. PHP testing skills that validate code quality before it leaves the developer's machine.
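To illustrate the principle behind the Jira skill: request only an allowlist of fields, so customer data living in other fields never enters the agent's context. This is a hypothetical sketch - host, auth handling, and field list are placeholders:

```python
import json
from urllib import request

JIRA_BASE = "https://jira.internal.example"
SAFE_FIELDS = "summary,description,issuetype,status,priority,labels"

def ticket_context(issue_key: str, token: str) -> dict:
    """Fetch only the allowlisted fields of a Jira issue."""
    url = f"{JIRA_BASE}/rest/api/2/issue/{issue_key}?fields={SAFE_FIELDS}"
    req = request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with request.urlopen(req) as resp:
        issue = json.loads(resp.read())
    # Hand the agent a flat dict of safe fields - and nothing else.
    fields = issue["fields"]
    return {
        "key": issue["key"],
        "summary": fields.get("summary"),
        "description": fields.get("description"),
        "status": fields["status"]["name"] if fields.get("status") else None,
    }
```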

Four specialized sub-agents that handle common tasks: one for parallel worktree management (so agents can work on multiple Jira tickets simultaneously), one for code simplification and refactoring, one for fast codebase exploration, and one for research that can search the web without exposing internal context.
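The worktree idea is simple enough to sketch. Something along these lines, with an illustrative branch naming convention:

```python
import subprocess
from pathlib import Path

# Rough sketch of what the worktree sub-agent does: one isolated checkout
# per Jira ticket, so agents can work on several tickets in parallel.
# Paths and branch naming are illustrative.
def create_worktree(repo: Path, issue_key: str) -> Path:
    """Create a git worktree on a fresh branch named after the ticket."""
    branch = f"feature/{issue_key.lower()}"
    worktree = repo.parent / f"{repo.name}-{issue_key.lower()}"
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add", "-b", branch,
         str(worktree), "main"],
        check=True,
    )
    return worktree  # each agent gets its own directory and branch

# e.g. create_worktree(Path("/srv/repos/asset-manager"), "EAM-1234")
```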

The result: every developer has the same guardrails, the same integrations, the same workflow. AI-assisted development that legal signed off on.

What Changed

Development velocity increased. That was expected. But the more important change was cultural. Developers stopped seeing compliance as an obstacle and started seeing it as a feature. "Our AI workflow is GDPR-compliant" became a point of pride, not a burden.

The legal team went from blocking AI adoption to actively participating in improving the framework. They started asking technical questions. We started asking legal questions. The wall between departments got thinner.

The Lesson I Carry Forward

This experience fundamentally rewired how I think about AI adoption. Before this role, I would have started any AI project with "which model should we use?" Now I start with three different questions:

What data do you have? What are the rules around that data? And who gets fired if those rules are broken?

Speed is not the only metric that matters. Responsibility, traceability, and compliance are not obstacles to innovation. They are prerequisites for sustainable innovation. The companies that understand this will still be here in ten years. The ones that do not will be cautionary tales in compliance textbooks.

The best AI governance does not slow teams down. It gives them confidence to move faster, because they know the guardrails are real.
Igor Gawrys
AI Engineer & IT Consultant · Katowice, Poland