Understanding how AI
actually changes
product development

Our approach

Research first.
Then build what's needed.

Every AI tool company starts with an opinion about what teams need. We started with a question: what changed when teams adopted AI?

The answer, drawn from 37 sources covering 160,000 engineers, surprised us. Individual productivity went up significantly. Organizational delivery didn't move. The bottleneck shifted downstream, into review, integration, and handover. The process never adapted to match the new speed.

That finding shapes everything we do. We publish our research openly. We update it as the landscape evolves. And the tools we build are designed to solve the problems the research identifies, not the problems we assumed existed before we looked.

Our research draws on the Stanford AI Index, Stack Overflow Developer Survey, DORA State of DevOps, Faros AI, McKinsey Global AI Survey, Foundation Capital, Designer Fund, UX Tools, Deloitte, and Gartner. We use Perplexity as our primary evidence engine and Claude as our analysis and drafting partner. The findings were published in March 2026 and continue to evolve as the landscape shifts.

The core question

How do teams move fluidly
through an AI-native process?

AI is rewriting how individuals work. Designers are generating code. Engineers are prototyping interfaces. Product managers are building working demos. The roles are converging, but the handoffs between them haven't changed.

The code that a designer produces with AI today gets thrown away and rebuilt by engineering. The component a developer ships isn't the one design signed off on. The gap between what's designed and what ships is widening, not closing.

We're building toward a process where the code that anyone produces is valuable and reusable in production: where governance isn't a gate you pass through, but the architecture that makes fast work safe work. As we learn more, we'll keep publishing what we find.

Who we are

Two people + AI. Researching and
working as one unit.

Rob Surpateanu

Research, process, and product direction

Over seventeen years in product design and development, I've been fortunate to work across some incredible teams. At InVision, I led the work on Streams, a design visibility tool targeting large enterprise teams. I've led design at Fresha, JustEat, and bpPulse, and spent time consulting at household names like Microsoft, Deliveroo, Zipcar, and Reckitt Benckiser. Academically, my path took me from a BA in Graphic Design and Illustration to a Master's at Central Saint Martins in Applied Imagination in the Creative Industries, which shaped how I think about the intersection of design, systems, and emerging technology.

Over the past two years, I turned my attention entirely to early-stage AI startups, offering product design consultancy to teams at Mattoboard, Ai71, VinoVoss, and Multiverse. What started as designing AI products became something deeper: designing with AI, for AI. Along the way, I saw firsthand how the process itself breaks down when everyone on a team suddenly has access to generative tools. Understanding process has always been my deepest professional interest, and the arrival of AI made that interest urgent.

At LP5, I authored the white paper that grounds everything we build. I operate the full AI-native product development process firsthand, using Claude, Cursor, and Perplexity as integrated research and production tools. The research isn't theoretical. It comes from doing the work it describes.

David Lazar

Systems, components, and critical AI practice

My background is in front-end development, with over five years working across React, Next.js, TypeScript, and Tailwind, specialising in exactly the areas our research keeps surfacing as critical: design systems, component architecture, and the full path from design tokens to production UI. Before LP5, I built the chat experience at Happl, an AI startup. Academically, I took an unusual route, from a BSc in Electrical Engineering to an MFA in Computational Arts at Goldsmiths, University of London.

That path matters because my relationship with AI is both technical and deeply critical. Through my artistic practice, I explore technological systems as infrastructures that shape identity, behaviour, and public space. I've exhibited at Tate Modern through the Tate × Anthropic AI residency, shown work in London, Berlin, and Bucharest, and published writing on the ethics of creative AI. I tend to work with small-scale, locally run systems informed by principles of sustainability and transparency, which gives me a rare scepticism toward the very tools I build with.

At LP5, that combination is the point. I know how to ship components, and I know how to question the systems those components operate within. It shapes how we think about governance: not as a gate, but as architecture that reflects how humans and machines should responsibly work together.

Lagrange Point 5

Build the right system and quality sustains itself. Not through constant correction, but through architecture that makes good outcomes the natural state.

In orbital mechanics, L5 is one of only two naturally stable equilibrium points. Objects that reach it stay there, self-organising without intervention. LP5 builds tools that work the same way.