I built my own CI/CD orchestrator because existing tools were not enough
February 2025 · Developer Tools · 11 min read

The third time a broken test made it to the remote repository because a developer forgot to run the test suite locally, I decided to solve the problem permanently.

The Problem That Kept Recurring

It was a Friday afternoon when our CI pipeline caught a failing test in a merge request. The fix was trivial - a missing null check that any local test run would have caught. But the developer had pushed without testing because "it was just a small change."

This was the third time that month. And each time, the cycle was the same: push, wait 15 minutes for CI, see the failure, fix locally, push again, wait another 15 minutes. Thirty minutes wasted per incident, multiplied across a team, multiplied across months.

Existing solutions did not fit. Jenkins was overkill for local validation. GitHub Actions could not run before the push. Docker Compose alone had no orchestration logic. Pre-commit hooks were too limited - we needed full environment setup, not just linting.

So I built my own.

The 9-Stage Pipeline

Dev Orchestrator runs a complete pipeline every time you push. Not after - before. A Git pre-push hook intercepts the push, spins up an isolated environment, runs the full test suite, and only allows the push through if everything passes.
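The gate itself fits in a few lines of shell. In this sketch, run_pipeline is a placeholder standing in for the orchestrator's nine stages, so the example is self-contained; the real hook would invoke the orchestrator's CLI instead:

```shell
#!/bin/sh
# .git/hooks/pre-push - sketch of the quality gate. run_pipeline is a
# placeholder for the orchestrator's nine stages; the real hook calls
# the orchestrator CLI here.
run_pipeline() {
    echo "running 9-stage pipeline..."
    return 0   # a non-zero return would represent a failed stage
}

if run_pipeline; then
    echo "pre-push: all stages passed, push allowed"
else
    echo "pre-push: pipeline failed, push rejected" >&2
    exit 1     # a non-zero exit from pre-push makes Git abort the push
fi
```

Git runs pre-push before transferring any objects, so a failing pipeline stops the push on the developer's machine instead of fifteen minutes later in CI.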

The pipeline has nine stages, each solving a specific problem:

Stage 1: Branch Validation. Verify that all required branches exist in the repository. This catches the "I forgot to create the feature branch" problem.

Stage 2: Port Allocation. Dynamically allocate ports for HTTP, database, SMTP, and Vite dev server. Each environment gets its own ports with collision detection via ss -tlnp. No more "port 3000 is already in use."
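A minimal version of that collision check can be sketched as a scan upward from a base port. The base ports are illustrative, and the function degrades gracefully (returning the base port) when ss is unavailable:

```shell
# Sketch of stage 2: scan upward from a base port until ss -tlnp shows
# no listener on it. Base ports are illustrative.
find_free_port() {
    port=$1
    while ss -tlnp 2>/dev/null | grep -q ":$port "; do
        port=$((port + 1))
    done
    echo "$port"
}

HTTP_PORT=$(find_free_port 8080)
DB_PORT=$(find_free_port 5432)
VITE_PORT=$(find_free_port 5173)
echo "allocated http=$HTTP_PORT db=$DB_PORT vite=$VITE_PORT"
```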

Stage 3: Worktree Creation. Create Git worktrees on an ext4 filesystem. For developers using WSL2 on Windows, this is critical - filesystem operations on NTFS are 5-10x slower than on native ext4. The orchestrator keeps its worktrees in a dedicated directory for maximum performance.
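The mechanics here are plain git worktree. A self-contained demo with a scratch repository (paths and branch names are illustrative):

```shell
# Self-contained demo of stage 3: one isolated worktree per feature
# branch, created under a dedicated directory. Under WSL2 that
# directory should live on the native ext4 filesystem.
g() { git -c user.email=dev@example.com -c user.name=Dev "$@"; }

repo=$(mktemp -d)
cd "$repo"
git init -q
g commit -q --allow-empty -m "init"

worktrees="$repo/worktrees"
git worktree add -b my-feature "$worktrees/my-feature"
git worktree list
```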

Stage 4: Branch Merging. Merge multiple feature branches sequentially with per-branch conflict resolution. This allows testing combinations of features before they reach the remote.
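Sequential merging can be sketched as a loop that stops at the first conflict so it can be resolved for that branch alone. Again a scratch repository keeps the demo self-contained; branch names are illustrative:

```shell
# Self-contained demo of stage 4: merge feature branches one by one
# into an integration branch; the first failure aborts so the conflict
# can be resolved per branch.
g() { git -c user.email=dev@example.com -c user.name=Dev "$@"; }

repo=$(mktemp -d)
cd "$repo"
git init -q
g commit -q --allow-empty -m "base"
base=$(git rev-parse --abbrev-ref HEAD)

for b in feature-a feature-b; do
    git checkout -q -b "$b" "$base"
    echo "$b" > "$b.txt"
    git add "$b.txt"
    g commit -q -m "$b"
done

git checkout -q -b integration "$base"
for b in feature-a feature-b; do
    # --no-edit keeps the merge non-interactive; a failure means a conflict
    g merge -q --no-edit "$b" || { echo "conflict while merging $b" >&2; exit 1; }
done
```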

Stage 5: Docker Services. Start required services (PostgreSQL, Mailpit, Redis) using dynamically generated Docker Compose files. Each environment gets its own isolated database instance.
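Generating the Compose file is ordinary templating over the ports allocated earlier. A sketch - the service names, images, and ports here are illustrative, not the orchestrator's actual output:

```shell
# Sketch of stage 5: render a per-environment Compose file using the
# ports allocated earlier. All values are illustrative.
ENV_NAME="${ENV_NAME:-my-feature}"
DB_PORT="${DB_PORT:-54321}"
SMTP_PORT="${SMTP_PORT:-10250}"

compose_file=$(mktemp)
cat > "$compose_file" <<EOF
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: ${ENV_NAME}
      POSTGRES_PASSWORD: secret
    ports:
      - "${DB_PORT}:5432"
  mailpit:
    image: axllent/mailpit
    ports:
      - "${SMTP_PORT}:1025"
EOF

# Bringing the stack up would then be:
#   docker compose -p "$ENV_NAME" -f "$compose_file" up -d
```

Using the environment name as the Compose project name (-p) is what keeps each environment's containers and volumes isolated from every other one.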

Stage 6: Configuration. Render environment-specific config files from templates. Database URLs, API keys, feature flags - all generated from a single config.yml per project.
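A minimal rendering step, with sed substitution standing in for the real template engine and illustrative variable names:

```shell
# Sketch of stage 6: fill {{PLACEHOLDERS}} in a template with the
# environment's values. The real orchestrator reads these from a
# per-project config.yml; the names here are illustrative.
ENV_NAME="${ENV_NAME:-my-feature}"
DB_PORT="${DB_PORT:-54321}"
SMTP_PORT="${SMTP_PORT:-10250}"

tmpl=$(mktemp)
cat > "$tmpl" <<'EOF'
DATABASE_URL=postgresql://app:secret@127.0.0.1:{{DB_PORT}}/app
MAIL_HOST=127.0.0.1
MAIL_PORT={{SMTP_PORT}}
APP_ENV={{ENV_NAME}}
EOF

render() {
    sed -e "s/{{DB_PORT}}/$DB_PORT/g" \
        -e "s/{{SMTP_PORT}}/$SMTP_PORT/g" \
        -e "s/{{ENV_NAME}}/$ENV_NAME/g" "$1"
}

render "$tmpl"
```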

Stage 7: Build. Run project-specific build scripts. For Laravel: composer install, npm install, asset compilation. For Symfony: similar but with Symfony-specific tooling.

Stage 8: Database. Import database dumps, run migrations, seed fixtures. The test environment mirrors production data structure exactly.

Stage 9: Serve & Test. Start application servers and run the test suite. PHPUnit, Pest, whatever the project uses. Only if all tests pass does the push proceed.

Designed for AI Agents

Here is where it gets interesting. I designed the orchestrator to work not just with human developers but with AI coding agents.

In my workflow, I use Claude Code and Codex for development. These agents pick up Jira tickets, create feature branches, write code, and prepare pull requests. But how do you ensure AI-generated code meets quality standards?

The orchestrator is the answer. My shared agent configuration includes workflow skills that integrate directly with it. An agent picks up a ticket, creates an isolated worktree via the orchestrator, writes the implementation, runs the full 9-stage pipeline, and only creates a PR if everything passes. The agent cannot bypass testing - the pre-push hook enforces it at the Git level.

Multiple agents can work simultaneously on different tickets, each in their own isolated worktree with their own database and ports. No conflicts. No interference. Parallel development at machine speed with human-grade quality gates.

The Web Dashboard

Because not everything should happen in a terminal, I built a web dashboard. It runs on Caddy with local domains - each environment gets its own *.local address (my-feature.local, api.my-feature.local). The dashboard streams pipeline output in real time over SSE (server-sent events) and shows environment status, port allocations, and log streams from all running services.
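The Caddy side is a handful of site blocks mapping each environment's *.local names onto its allocated ports. A sketch of how they might be rendered - hostnames and ports are illustrative, and *.local name resolution (hosts entries or local DNS) is assumed:

```shell
# Sketch: render a Caddyfile that reverse-proxies each environment's
# *.local names to its allocated ports. All values are illustrative.
HTTP_PORT="${HTTP_PORT:-8081}"

caddyfile=$(mktemp)
cat > "$caddyfile" <<EOF
my-feature.local {
	reverse_proxy 127.0.0.1:${HTTP_PORT}
}

api.my-feature.local {
	reverse_proxy 127.0.0.1:${HTTP_PORT}
}

dev-orchestrator.local {
	reverse_proxy 127.0.0.1:3000
}
EOF

# caddy run --config "$caddyfile" --adapter caddyfile
```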

When a colleague asks "is the staging environment up?" they do not need to SSH into anything. They open dev-orchestrator.local in their browser and see everything at a glance.

What I Learned

Building developer tools is humbling. Your users are engineers who will judge every rough edge, report every unclear error message, and find every undocumented assumption. The bar for developer experience is set by tools like Laravel Artisan and Symfony Console - polished, intuitive, helpful.

I learned more about Docker internals, Git plumbing, process management, and port allocation than I ever would have from tutorials. Sometimes the best way to deeply understand a technology is to build something that orchestrates it.

The orchestrator is now used by our team daily. No broken tests have reached the remote since we deployed it. The pre-push hook is non-negotiable, and nobody complains about it - because the alternative (waiting 15 minutes for CI to tell you what you should have caught locally) is worse.

The best developer tools do not add friction. They move friction to where it belongs - before the code leaves your machine, not after.
Igor Gawrys
AI Engineer & IT Consultant · Katowice, Poland