We built something customers wanted. It went into open beta. People used it. Then it died anyway.
This is the story of Nexodus — what we built, why the technical problems were interesting, and why it's sleeping with the fishes.
What Nexodus was
The pitch: zero-trust networking for any workload, regardless of where it runs.
The problem it solved is real. In a world where workloads are spread across clouds, on-prem infrastructure, edge nodes, and developer laptops, getting them to talk to each other securely is harder than it should be. Traditional VPNs assume a perimeter that doesn't exist. Overlay networks require per-cloud configuration. The result is either sprawling complexity or giving up and leaving things wider open than you'd like.
Nexodus initially used WireGuard as the data plane - fast, modern, kernel-native. But WireGuard by itself is just a toolkit. The interesting work was the control plane: identity, access policy, and the machinery to distribute both to every device in a network.
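To make that split concrete, here's roughly what the data-plane half looks like from Go, using the wgctrl library: pushing a peer into a WireGuard interface is a few lines. Everything above that - deciding which peers belong in the list, with which allowed IPs - is the control plane's job. This is an illustrative sketch, not Nexodus's actual code; the interface name, addresses, and the locally generated key are placeholders.

```go
// Sketch: applying one peer to a local WireGuard interface with wgctrl.
// The easy half of the problem - the hard half is deciding what goes here.
package main

import (
	"log"
	"net"
	"time"

	"golang.zx2c4.com/wireguard/wgctrl"
	"golang.zx2c4.com/wireguard/wgctrl/wgtypes"
)

func main() {
	client, err := wgctrl.New()
	if err != nil {
		log.Fatalf("wgctrl: %v", err)
	}
	defer client.Close()

	// In a real mesh the peer's public key arrives from the control plane;
	// we generate one here so the sketch is self-contained.
	priv, err := wgtypes.GeneratePrivateKey()
	if err != nil {
		log.Fatalf("generate key: %v", err)
	}
	peerKey := priv.PublicKey()

	keepalive := 25 * time.Second // keeps NAT bindings alive for roaming peers
	_, meshAddr, _ := net.ParseCIDR("100.64.0.2/32") // the peer's mesh address

	// Requires root and an existing wg0 interface.
	err = client.ConfigureDevice("wg0", wgtypes.Config{
		Peers: []wgtypes.PeerConfig{{
			PublicKey:                   peerKey,
			Endpoint:                    &net.UDPAddr{IP: net.ParseIP("203.0.113.7"), Port: 51820},
			AllowedIPs:                  []net.IPNet{*meshAddr},
			PersistentKeepaliveInterval: &keepalive,
			ReplaceAllowedIPs:           true,
		}},
	})
	if err != nil {
		log.Fatalf("configure wg0: %v", err)
	}
}
```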
We ran the control plane as a public service, but you could also run your own instance if you wanted to.
The hard technical problem
The identity model had two layers. OIDC tokens handled user identity - who owns or operates this device. WireGuard keys handled device identity - the cryptographic identity of the endpoint itself. The interesting problem was linking the two and propagating access control decisions across the mesh.
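Here's a simplified sketch of what that linkage might look like as data. The field names are hypothetical - this is not Nexodus's actual schema - but the shape is the point: the user row is keyed by the OIDC subject claim, the device row is keyed by its WireGuard public key, and the device's owner field ties the two layers together.

```go
// Hypothetical, simplified shapes - not Nexodus's actual schema.
package model

import "time"

// User identity comes from the identity provider. The OIDC subject claim
// is the stable key; everything else is profile data.
type User struct {
	ID    string // OIDC "sub" claim
	OrgID string // owning Organization
	Email string
}

// Device identity is cryptographic: the WireGuard public key both names
// the endpoint and authenticates its traffic on the data plane.
type Device struct {
	PublicKey string // WireGuard public key (base64): the device's identity
	OwnerID   string // -> User.ID: the link between the two identity layers
	NetworkID string // the Network (VPC) this device is enrolled in
	Endpoint  string // last observed ip:port - may be stale at any moment
	TunnelIP  string // stable mesh address assigned by the control plane
	LastSeen  time.Time
}
```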
Concretely: a user authenticates and gets an OIDC token. If the token allowed it, that user could then enroll devices interactively. If you've ever logged in to GitHub using the gh CLI, you'll be familiar with the flow: the CLI prints a short code and a URL, and you approve the device in a browser.
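That flow is the OAuth 2.0 Device Authorization Grant (RFC 8628). Here's a minimal sketch of the client side in Go using golang.org/x/oauth2, assuming the identity provider behind the control plane exposes the standard device-authorization and token endpoints; the URLs and client ID are placeholders.

```go
// Sketch of the client side of the RFC 8628 device-authorization flow.
package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/oauth2"
)

func main() {
	ctx := context.Background()
	conf := &oauth2.Config{
		ClientID: "nexodus-cli", // placeholder client ID
		Scopes:   []string{"openid", "profile"},
		Endpoint: oauth2.Endpoint{
			DeviceAuthURL: "https://auth.example.com/oauth2/device",
			TokenURL:      "https://auth.example.com/oauth2/token",
		},
	}

	// Step 1: ask the IdP for a one-time user code.
	da, err := conf.DeviceAuth(ctx)
	if err != nil {
		log.Fatalf("device auth: %v", err)
	}
	fmt.Printf("Visit %s and enter code %s\n", da.VerificationURI, da.UserCode)

	// Step 2: poll the token endpoint until the user approves in a browser
	// (or the code expires). The resulting token carries the user identity
	// the control plane needs to authorize the enrollment.
	tok, err := conf.DeviceAccessToken(ctx, da)
	if err != nil {
		log.Fatalf("token: %v", err)
	}
	fmt.Println("enrolled; token expires", tok.Expiry)
}
```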
When a device is enrolled, the control plane has to answer: which other devices is this one allowed to reach? And then distribute the relevant WireGuard peer configurations to everyone involved, consistently, without races or stale state.
At small scale this is manageable. As the number of devices and networks grows, the state synchronisation problem gets harder. You're essentially building a distributed system whose job is to keep cryptographic keys and access policies consistent across a mesh of endpoints... and some of them are constantly changing IP addresses and going offline! We built all of the control plane infrastructure from scratch - the API, the device registration flow, the OIDC integration, the peer reconciliation logic, the Organization, User, Network (VPC) hierarchy, everything.
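At its core, the reconciliation step is a diff: compute the peer set each device should have from the policy model, compare it against what the device currently has, and emit the changes. Here's a sketch of that step with hypothetical types - the shape of the problem, not Nexodus's implementation.

```go
// Sketch of peer reconciliation: desired state comes from policy, actual
// state is whatever the device last reported. Hypothetical types.
package reconcile

import "slices"

// Peer is the slice of control-plane state one endpoint needs to know
// about another: identity, reachability, and the addresses it may claim.
type Peer struct {
	PublicKey  string
	Endpoint   string
	AllowedIPs []string
}

// Diff compares the peers a device should have against the peers it
// currently has, keyed by public key, and returns what to add or update
// and what to remove. Idempotent: after convergence, both slices are empty.
func Diff(desired, actual map[string]Peer) (upsert, remove []Peer) {
	for key, want := range desired {
		if have, ok := actual[key]; !ok || !equalPeer(have, want) {
			upsert = append(upsert, want)
		}
	}
	for key, have := range actual {
		if _, ok := desired[key]; !ok {
			remove = append(remove, have) // revoked, or left the network
		}
	}
	return upsert, remove
}

func equalPeer(a, b Peer) bool {
	return a.PublicKey == b.PublicKey &&
		a.Endpoint == b.Endpoint &&
		slices.Equal(a.AllowedIPs, b.AllowedIPs)
}
```

The diff itself is the easy part; making each pass idempotent is what lets the control plane re-run it on every change - a new device, a revoked key, a roamed endpoint - rather than trying to track deltas against a moving target.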
It shipped
Nexodus went into open beta. It was public, the code was open source, and people actually used it. We ran it for our own private networks, not just for demonstrations. The technology was sound, the product was getting better every day... but that success was short-lived.
What killed it
Nexodus was incubated in the Office of the CTO - which meant we had the agency to start it, explore the problem space, and validate that customers wanted it. What we needed next was an engineering team to commit to owning it long-term: hardening it, supporting it, building it into a product with us. After months of searching, it became clear that there was no such team. I believe this is because the scope was ambitious - it would have impacted three different lines of business - and that same breadth made it harder for any one group to step up and own the whole thing end-to-end.
What I took from it
The hardest part wasn't the technology. The technology was interesting and largely solved. The hardest part was navigating an org structure that wasn't set up to absorb innovation from the edges.
Office of the CTO teams are good at starting things. They're not good at handing things off, and the org isn't always good at receiving them. That's a structural problem, not a people problem - the teams we were talking to weren't being unreasonable. Each one was making a locally rational decision. The irrationality was at the system level.
Some projects teach you technology. This one taught me something about how large organisations do and don't absorb new ideas. Both kinds of lesson are worth having.