My journey with technology started as a kid, hacking away on a neighbour’s 8-bit MSX. From there, I taught myself everything I could get my hands on. Linux has been my playground since 1995.
Most of my career has been deep in the trenches, building and fixing the stuff nobody notices until it stops working. Mail systems at massive scale, CDN caches, fleets of thousands of nodes; I make sure the invisible plumbing just works.
When I debug, I imagine myself as a packet zipping through the system, tracing every hop and figuring out what’s really going on at each step. This mindset turned me into a generalist; once you get how things connect, you can tackle anything. The details change, but the patterns feel like old friends.
Take a byte-range CDN caching trick I built in 2013; it’s basically what vLLM’s PagedAttention rediscovered for LLM KV caches a decade later. Or the LD_PRELOAD shim I hacked together in the 90s; not far off from what eBPF and APM auto-instrumentation do now. The names change, the problems stay the same.
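The shared idea is splitting large objects into fixed-size blocks so overlapping requests hit the same cache entries. Here is a toy sketch of that block-alignment step; the block size and function name are mine for illustration, not the original implementation:

```python
def range_to_blocks(start, end, block=1 << 20):
    """Map an inclusive byte range onto fixed-size, block-aligned ranges.

    Two clients asking for overlapping ranges then fetch the same
    blocks, so the cache serves both -- the same paging trick a KV
    cache uses for shared prompt prefixes.
    """
    first, last = start // block, end // block
    return [(b * block, (b + 1) * block - 1) for b in range(first, last + 1)]
```

With a 64-byte block, a request for bytes 0–100 expands to the aligned blocks `(0, 63)` and `(64, 127)`, both reusable by any other request touching them.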
These days, as a Senior Principal Engineer on the DevOps side, I shape the platforms that other teams at Flutter ship through. My team sets the standards for how code gets built; the brands keep autonomy over what and when they deploy. The magic happens at the seams: smooth where it matters, clear lines where teams need their own space. I set the direction, build the reference implementation, and get dozens of downstream teams to adopt it because the thing is actually better, not because a memo said so.
The surface spans supply-chain hardening, automated performance regression detection, release gates for CVEs and branch sync, and cross-repo context for AI-powered code review. Each one touches every repo in the org once it lands.
The hardest part isn’t the tech. It’s helping developers and stakeholders enjoy the transformation journey, keeping expectations in check, and making sure we solve real problems rather than chase the next shiny thing. Technology is cool; solving problems is better.
Lately, I’ve been going deeper into AI: modernising services, using LLMs in supply chain analysis, and helping teams get comfortable with these tools.
In my spare time, I pour energy into LLM inference infrastructure: cache-aware routing, prefix alignment, wild KV cache setups, and mixing local models with paid APIs for the best bang per task. These feel a lot like CDN origin selection problems, just with faster cache churn and GPU economics on top.
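A minimal sketch of what cache-aware routing means here: send each request to the worker whose warm KV cache shares the longest token prefix with it, falling back to load on ties. The worker shape and scoring are illustrative assumptions, not a real router:

```python
def shared_prefix_len(cached, request):
    """Number of leading tokens the cached prefix shares with the request."""
    n = 0
    for a, b in zip(cached, request):
        if a != b:
            break
        n += 1
    return n

def pick_worker(request, workers):
    """Route to the worker with the longest shared prefix; break ties
    by lowest load -- much like picking a CDN origin by cache warmth."""
    best = max(workers, key=lambda w: (shared_prefix_len(w["cached"], request), -w["load"]))
    return best["name"]
```

The tie-break matters: when no worker has a useful prefix, the request should land on the least-loaded one instead of piling onto whoever happened to sort first.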
When I’m not working, you’ll find me on a motorcycle or in my homelab. I’m Brazilian, living in Dublin, and I never stop being curious.