Why I study Law while building AI systems
November 2024 · Career · 9 min read


The first time a lawyer asked me "but who is liable when the AI makes a wrong decision?" I realized I had been building systems without understanding the world they operate in.

The Question That Broke My Brain

I was presenting our AI governance framework to the company's legal team when their lead counsel asked a question I had never considered: "If your AI tool generates code that introduces a security vulnerability, and that vulnerability leads to a data breach affecting 50,000 people, who is liable? The developer who accepted the code? The company that approved the tool? The AI provider whose model generated it?"

I opened my mouth. Nothing came out. I had spent months building sophisticated AI systems and never once thought about this question.

That evening, I enrolled in a law program. Not to become a lawyer. To become an engineer who understands the world his systems operate in.

The Two Languages

At my employer, I quickly became the person sitting between two departments that spoke completely different languages. The legal team talked about data controllers, processing purposes, legitimate interests, and proportionality tests. The engineering team talked about API endpoints, model context windows, token limits, and inference latency.

Both teams were making valid points. Neither could understand the other. Legal would say "you cannot use AI on personal data without a legal basis." Engineering would say "but we need AI to be productive." Both were right. Someone had to translate.

I found myself writing documents that used phrases like "the vector embedding process constitutes data processing under Article 4(2) GDPR because the personal data is transformed into numerical representations that could theoretically be reversed." Try saying that at a standup meeting.
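To make that sentence concrete, here is a minimal sketch of why embedding counts as processing. The toy_embed function below is something I invented for illustration; a real pipeline would call an embedding model over an API. But the legal point survives the simplification: personal data goes in, a representation derived from that data comes out, and it lands in a searchable index.

```python
import hashlib

def toy_embed(text: str, dims: int = 8) -> list[float]:
    """Stand-in for a real embedding model: hash character trigrams
    into a small fixed-size vector. Illustrative only."""
    vec = [0.0] * dims
    for i in range(len(text) - 2):
        trigram = text[i:i + 3]
        bucket = int(hashlib.md5(trigram.encode()).hexdigest(), 16) % dims
        vec[bucket] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [round(v / norm, 3) for v in vec]

# The input plainly contains personal data: a name and an email address.
chunk = "Ticket from Jan Kowalski <jan.kowalski@example.com>: login fails"

# What lands in the vector store is just numbers - but those numbers were
# derived from personal data, and most RAG pipelines store the raw chunk
# next to the vector anyway. The obligations follow the data into the index.
print(toy_embed(chunk))
```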

What Law Teaches an Engineer

Law school restructured how I think about edge cases. In engineering, an edge case is "what if the user enters an emoji in the phone number field." In law, an edge case is "what if the user's data is processed in a way that violates their fundamental right to privacy under the Charter of Fundamental Rights of the European Union."

The stakes are different. But the thinking pattern is the same: anticipate what could go wrong and build systems that handle it gracefully.
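For the engineering half of that habit, here is a toy version of the emoji-in-the-phone-field case handled gracefully. The regex and the error message are illustrative, not a production validator:

```python
import re

# Digits, spaces, parentheses, '-' and an optional leading '+'.
PHONE_RE = re.compile(r"^\+?[0-9 ()-]{7,20}$")

def parse_phone(raw: str) -> str:
    """Validate a phone number and fail with a message the user can act on,
    instead of silently storing garbage for a batch job to choke on later."""
    cleaned = raw.strip()
    if not PHONE_RE.fullmatch(cleaned):
        raise ValueError(
            f"'{cleaned}' is not a valid phone number: use digits, spaces, "
            "parentheses, '-' and an optional leading '+'."
        )
    return cleaned

# The edge case from above: rejected with a clear explanation,
# not a stack trace three layers deeper in the billing pipeline.
try:
    parse_phone("+48 123 456 789 📞")
except ValueError as err:
    print(err)
```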

Law also taught me about precedent - every legal decision builds on previous ones, creating a living body of knowledge that evolves over time. Software engineering works the same way. Every library builds on previous libraries. Every architecture pattern evolved from previous patterns. The meta-skill of understanding how knowledge accumulates and compounds is identical in both fields.

The EU AI Act and What It Means

The EU AI Act is now in force. It classifies AI systems by risk level and imposes requirements accordingly. High-risk systems - which include many enterprise AI applications - require risk assessments, human oversight, transparency, and documentation.
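The tier structure fits in a few lines of code. The tier names below match the Act; the helper function and the obligation strings are my own simplification for illustration, not legal text:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g. social scoring)"
    HIGH = "allowed, with strict obligations"
    LIMITED = "transparency duties (e.g. disclose the user is talking to an AI)"
    MINIMAL = "no specific obligations"

# The obligations the Act attaches to high-risk systems, per the summary above.
HIGH_RISK_OBLIGATIONS = [
    "risk assessment and risk management",
    "human oversight measures",
    "transparency and instructions for deployers",
    "technical documentation and logging",
]

def compliance_checklist(tier: RiskTier) -> list[str]:
    """Hypothetical helper: map a risk tier to the work it creates."""
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError("This system may not be placed on the EU market.")
    return HIGH_RISK_OBLIGATIONS if tier is RiskTier.HIGH else []

print(compliance_checklist(RiskTier.HIGH))
```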

Most engineers I talk to have never read it. Most lawyers I talk to do not understand the technical implications. The gap between these two groups is where most AI compliance failures will happen in the next five years.

I intend to be in that gap, building bridges.

The Practical Value

At my current employer, my growing legal knowledge directly shaped our AI governance framework. I wrote policies that made sense technically AND legally. I explained to developers WHY certain restrictions existed, not just WHAT they were. When a developer asked "why can't I use GPT for code review?" I could answer with both the technical risk (data leakage through a third-party API) and the legal risk (a potential GDPR violation if the code contains personal data references).
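That kind of answer translates naturally into a control. Below is a rough sketch of the gate I mean: scan an outbound snippet for obvious personal data before it ever reaches an external API. The patterns are deliberately crude placeholders; a real deployment would use a proper PII-detection library, but the shape of the control is the point:

```python
import re

# Crude patterns for data that should never leave the building in a prompt.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d \-()]{7,}\d"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def scan_for_pii(code: str) -> list[str]:
    """Return the kinds of personal data found in a code snippet."""
    return [kind for kind, pat in PII_PATTERNS.items() if pat.search(code)]

def safe_to_submit(code: str) -> bool:
    """Gate an outbound LLM call: block the request instead of leaking data."""
    findings = scan_for_pii(code)
    if findings:
        print(f"Blocked: snippet appears to contain {', '.join(findings)}.")
        return False
    return True

snippet = 'TEST_USER = {"name": "Jan Kowalski", "email": "jan@example.com"}'
safe_to_submit(snippet)  # prints: Blocked: snippet appears to contain email.
```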

When I consult with other companies about AI adoption, I bring both perspectives. Here is what the technology can do. Here is what the law requires. Here is the overlap where business value lives without legal risk. That overlap is larger than most people think - but finding it requires understanding both sides.

The Future I Am Building Toward

I believe the next generation of AI engineers will need legal literacy. Not to pass a bar exam, but to build responsible systems by default. Understanding data protection law, liability frameworks, consent mechanisms, and compliance requirements should be as fundamental to an AI engineer's education as understanding algorithms and data structures.

The engineers who understand both code and law will build the AI systems that society actually trusts. Not because they are the most technically sophisticated, but because they are designed with an understanding of the human and legal context they operate in.

That is the future I am studying toward. One semester at a time, between deployments.

The best AI systems are not just technically sound. They are legally defensible, ethically grounded, and designed by engineers who understand the world beyond the terminal.
Igor Gawrys
AI Engineer & IT Consultant · Katowice, Poland