# How AI Tools Made Our Grindr Engineering Team More Productive

Since mid-2025, Grindr’s Product Engineering team has been on an aggressive journey to embed AI into every layer of how we build software. Six months later, the data is in, and it’s striking. Here’s what we learned.

## The Headline: ~1.5x Productivity Gain, Across the Board

In January 2026, we surveyed 50 of our 65 engineers to understand how AI tools like Claude Code, Cursor, and Firebender are changing the way they work. The results paint a clear picture:
- 92% of engineers believe their productivity has increased by 1.5x or more
- 58% believe they’re operating at 2–3x their pre-AI output
- 94% are running 1–5 AI agents in parallel during a typical development session
- 64% use at least one agent for most of their working time
These aren’t aspirational targets — they’re self-reported numbers from engineers in the trenches, shipping real features every week.

## What’s Actually Changed for Engineers

When we asked engineers to describe the single most valuable shift AI has brought to their work, five themes emerged clearly:
**Speed and throughput.** Task completion time has dropped by nearly 50% for many engineers. Faster iteration cycles mean we can test more ideas, ship more often, and recover from mistakes more quickly.

**Parallelization.** Instead of working through tasks one at a time, engineers are delegating work to agents and focusing their attention on higher-order architecture and design decisions. Context switching, one of engineering’s biggest hidden costs, is down significantly.

**Automation of the mundane.** Boilerplate code, unit test generation, code cleanup, small one-off tasks: AI handles these now. Engineers are freed up for the complex, judgment-intensive work that actually requires a human.

**Faster debugging and code comprehension.** AI can analyze a codebase, surface relevant files, and identify root causes faster than any manual search. This is especially valuable for engineers onboarding to unfamiliar systems.

**Greater confidence and reach.** Engineers are taking on projects they would previously have considered out of scope. AI acts as a force multiplier for individual capability, letting people move confidently in areas where they might once have hesitated.

## Data Beyond the Survey

The self-reported data is compelling, but we also have hard numbers from our tooling:
Cursor alone accounts for roughly 30% of the code we write. That’s a substantial share, and it reflects how deeply integrated AI-assisted coding has become in our day-to-day workflow.
Our GitHub data corroborates this: there’s been a dramatic increase in the volume of code changes across all platforms. Engineers are making larger, more confident commits — a signal that they feel supported rather than stretched.

## Where We’re Still Limited

Honest assessment matters as much as the wins. Our engineers called out real friction points:
- 60% feel limited by their ability to context switch effectively between multiple agents
- 42% want to add more agents but are still building the muscle for managing them
- 28% are hitting hardware constraints — not enough screen space or compute to run the workflows they want
- 20% don’t yet fully trust agents to auto-deploy without human review
These aren’t blockers — they’re a roadmap.

## What We’re Going After Next

When we asked engineers what would take AI usage to the next level, four priorities emerged:
**Standardization and shared practices.** Teams have developed their own approaches organically, but we need clearer guidelines and documented patterns so agents can navigate our codebase more reliably, and so we’re not duplicating effort across teams.

**Training, demos, and dedicated experimentation time.** The AI space is moving fast. Engineers want structured time to learn, experiment, and share what’s working. Workshops, a training budget, and regular knowledge-sharing sessions are at the top of the wish list.

**More autonomous agents.** The next frontier is fully agentic automation: bug fixing, SDK updates, UI-to-code generation, and enhanced code review handled end-to-end by agents. We’re early here, but that’s the direction.

**Better tooling and integrations.** Engineers want access to more models (Gemini, Grok Code), stronger MCP connectivity with tools like Figma and GitHub, and a faster path to getting new integrations approved and deployed.

## Key Takeaways

**The shift is real and measurable.** AI has fundamentally changed the volume and velocity of code our team produces. Lines of code and PR counts are imperfect proxies, but the confidence engineers feel taking on larger, more complex changes is a meaningful signal.

**The next challenge isn’t adoption; it’s intentionality.** As we scale up agent usage, we need to be more rigorous about code review, quality gates, and the processes that keep our codebase healthy even as output accelerates.
We’re just getting started. Come join us!

*This post summarizes findings from our January 2026 AI Usage Report, based on a survey of 50 Product Engineering team members. Data from Cursor and GitHub internal tooling was also included.*


