Senior Data Engineer · AI-First Engineering · Workflow Automation · 11+ Years
Same principles at every scale — infrastructure as code, containerize everything, automate away toil, monitor proactively, and design for reliability so a small team can manage a large surface area.
Two-node setup (Raspberry Pi + Ubuntu server) running 30+ containerized services. Self-healing, zero-touch operation with automatic DNS, backups, rebuilds, and wake-on-demand.
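Much of that self-healing behavior falls out of Docker primitives: restart policies plus healthchecks let the daemon recover failed services without manual intervention. A minimal sketch, assuming a generic service with an HTTP health endpoint (service name, image, and port are placeholders, not the actual stack):

```yaml
# Illustrative self-healing pattern: restart policy + healthcheck.
services:
  app:
    image: ghcr.io/example/app:latest   # placeholder image
    restart: unless-stopped             # come back automatically after crashes and reboots
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```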
De facto tech lead of a 2–3 person team. Full ownership of infrastructure, ingestion, orchestration, and warehouse. Transformed a platform from daily failures to near-zero incidents.
Company details anonymized — architecture patterns and technology choices are my own.
Designed and built the company-wide big data infrastructure from scratch. Decoupled analytics from a 20-year-old production Oracle database, eliminating availability risk and slashing costs.
Company details anonymized — major European online retailer with large-scale e-commerce data.
Whether it's a production data platform on AWS or a Raspberry Pi at home, the same principles apply. This table maps the consistent engineering behaviors across professional and personal infrastructure.
| Pattern | Professional | Home lab |
|---|---|---|
| IaC everything | Terraform + Helm | Ansible + Jinja2 |
| GitOps auto-deploy | ArgoCD + GitHub CI | Webhook + auto-rebuild |
| Containerized workloads | K8s Pod Operator on EKS | Docker Compose on Pi |
| Reverse proxy | EKS Ingress / Karpenter | Caddy + lazy wake |
| Secrets management | AWS Secrets Manager | Ansible vault + env |
| Auto backups | S3 lifecycle policies | rclone nightly to cloud |
| Monitoring | Datadog + Slack alerts | Healthchecks.io + ntfy |
| Multi-source ingestion | DLT / Airbyte / SQS | Hermes agent scrapers |
| Enabling non-tech users | Streamlit + dbt | Telegram bot for family |
| AI-first engineering | AI Code Review + Claude/GPT | Hermes multi-agent system |
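The "auto backups" row on the home-lab side can be as small as one cron-driven script. A hedged sketch, assuming a hypothetical rclone remote and a Healthchecks.io ping URL (every path, remote name, and URL here is illustrative; `DRY_RUN` defaults to printing commands instead of running them):

```shell
#!/usr/bin/env bash
# Nightly off-site backup sketch (hypothetical paths and remote names).
# DRY_RUN=1 prints the commands instead of executing them.
set -euo pipefail

DRY_RUN="${DRY_RUN:-1}"
SRC="/srv/appdata"                        # hypothetical data directory
DEST="remote:homelab-backup"              # hypothetical rclone remote
PING_URL="https://hc-ping.com/your-uuid"  # hypothetical Healthchecks.io check

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "DRY RUN: $*"
  else
    "$@"
  fi
}

# Sync data to the cloud remote, then report success to the monitor.
run rclone sync "$SRC" "$DEST" --transfers 4 --checksum
run curl -fsS -m 10 --retry 3 "$PING_URL"
```

Wiring the monitor ping after the sync means a silent cron failure shows up as a missed check-in, not as data loss discovered months later.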
Full-stack applications built end-to-end with AI-assisted workflow automation. Each app was designed, built, and iterated using AI under experienced supervision — from architecture decisions to deployment. All apps are responsive, mobile-friendly, and served securely via Tailscale mesh VPN.
Nutrition tracking app with daily macro targets, a food database, per-item nutritional breakdown, and total composition tracking. Keto-friendly, with net-carb tracking and a liquid-oil target for ketones.
Task management application with nested categories, drag-and-drop organization, rich task editing modal, and priority management. Designed to replace off-the-shelf solutions with a tailored workflow.
Recurring task reminder system with configurable intervals (hourly, daily, custom), Telegram notification integration, countdown display, and snooze controls. Tracks everything from medication to vehicle inspections.
Custom dashboard for managing the home lab infrastructure. Real-time system metrics (CPU, memory, load, uptime), Docker container status with start/stop controls, static app launcher, and media server management.
From civil engineering to self-taught developer to senior data engineer — a consistent pattern of owning large infrastructure scope and making systems run themselves.
The bottleneck in engineering has shifted. It's no longer writing code — it's the criteria behind what gets built, how it's architected, and whether the output is sound. This is where engineering is going, and I've already been working this way for over a year.
Workflows, team structures, and communication layers were designed for a world where writing code was the bottleneck. It's not anymore. The bottleneck is now criteria: knowing what to build, recognizing when the AI's output is wrong, and translating stakeholder needs directly into technical direction.
Companies that restructure workflows around AI-assisted development — not just “giving AI to workers” — can become dramatically more dynamic. When AI writes the code in minutes and the test suite alongside it, you need fewer handoffs and more technically sharp people who understand the business need, guide the AI, validate the output, and ship.
11+ years of platform engineering, self-taught from civil engineering, with a proven track record of applying strong judgment to AI-assisted workflows. I generate multiple architectural options, evaluate tradeoffs, and catch when solutions are heading in the wrong direction — that's the skill that matters now.
In my current role, I automated most of my engineering workflow with AI. The result? My work gets done in a fraction of the time. But instead of the company capturing that speed, I'm waiting — for reviews, for approvals, for processes designed around the old pace. This is happening everywhere, not just at my company.
The problem isn't that companies lack AI tools — it's that their workflows and team structures were designed for a world where writing code was the bottleneck. Now the bottleneck is criteria: knowing what to build, recognizing when the AI's output is wrong, and translating stakeholder needs directly into technical direction. Traditional handoff chains — PM to developer to QA to DevOps — become overhead when one person with strong judgment can guide AI through the full cycle.
I come from civil engineering — one of the hardest technical degrees in Spain — and taught myself to code well enough to work professionally for 11 years across Scala, Python, Spark, Kafka, Flink, AWS, Terraform, Snowflake. That path proves I can understand systems deeply. But what I've learned recently is that my real value was never the typing — it was the judgment. When I build something with AI, I generate multiple architectural options, evaluate tradeoffs, and catch when the solution is heading in the wrong direction. I've built complete applications this way: multi-container Docker systems, full-stack apps, AI agent platforms, and self-hosted tooling.
A team that already operates this way, or is committed to getting there — where the role is to apply technical criteria, guide AI-driven development, and interface directly with stakeholders. Full remote, and a company that sees this shift as urgent, not optional.
“At every role I've taken messy or nonexistent systems and applied solid engineering fundamentals to make them run themselves. Now I'm applying that same instinct to how engineering work itself is done — building AI-infrastructure-ready workflows instead of layering AI on top of legacy processes.”
The tools exist. The question is whether your operating model can capture the value.
If your best engineer automated 80% of their workflow tomorrow, would your organization capture that speed?
Or would the work sit in review queues, approval chains, and sprint ceremonies designed around the assumption that building takes weeks?
When the frontier models evolve, ask yourself:
What each phase actually looks like
What I bring
I don't build what you need today. I build what you'll wish you'd started six months ago.
11+ years of data engineering. Civil Engineering degree from one of Spain's hardest technical programs. Self-taught into a professional software career. Built production platforms from scratch, stabilized failing infrastructure with 2-person teams, and run 30+ containerized services from my home lab with the same engineering discipline I apply at work.
The differentiator: I've been working AI-first for over a year — not as an experiment, as my daily operating model. I generate architectural options with AI, evaluate tradeoffs, catch failures early, and ship. I've built multi-user AI agent systems, automated my entire development workflow, and learned firsthand what the infrastructure needs to look like. My value isn't writing code. It's the criteria behind what gets built, how it's evaluated, and whether it's sound.
"We're already doing this" — great, let's get into specifics.
"This sounds extreme" — we should talk sooner.
Let's talk →