
How We Built Diverss
Using Claude Code & Cowork

TEN Labs is an AI-native venture studio. That's not just a positioning statement — it means AI tools are woven into how we actually build. This is a transparent account of how we used Anthropic's Claude Code and Cowork to take Diverss from idea to product.

- 72 hrs to first working prototype
- 2 AI tools at the core
- ~60% faster than a traditional build
- 1 core team member needed

We talk a lot about building AI-native companies. But it's easy to position yourself as AI-native while running the same build process as everyone else — just with ChatGPT open in a browser tab. At TEN Labs, we've been deliberate about this. Diverss, our investment intelligence venture, was the first company where we systematically used Anthropic's Claude Code and Cowork across the entire development lifecycle. This is what that actually looked like.

What Diverss Is — and Why It's Complex to Build

Diverss is an investment research and portfolio intelligence platform built around StockSense — an AI-native engine that analyses equities across five analytical dimensions: momentum signals, fundamental health, technical patterns, sectoral correlation, and sentiment. It isn't a stock screener. It isn't a news aggregator. It's closer to having a research analyst who never sleeps and can hold an entire portfolio in context simultaneously.
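To make the five-dimension idea concrete, here is a rough sketch of what a multi-dimensional rating might look like in code. Diverss has not published StockSense's internals, so every field name and weight below is an illustrative assumption, not the real implementation:

```python
from dataclasses import dataclass

# Hypothetical shape for a StockSense-style rating. The five dimensions
# mirror the article's description; the weights are invented.
@dataclass
class DimensionScore:
    momentum: float       # momentum signals
    fundamentals: float   # fundamental health
    technicals: float     # technical patterns
    correlation: float    # sectoral correlation
    sentiment: float      # sentiment

def composite_score(s: DimensionScore, weights=None) -> float:
    """Blend the five dimensions (each scored 0-100) into one rating."""
    weights = weights or {
        "momentum": 0.2, "fundamentals": 0.3, "technicals": 0.2,
        "correlation": 0.1, "sentiment": 0.2,
    }
    return sum(getattr(s, dim) * w for dim, w in weights.items())

score = DimensionScore(momentum=72, fundamentals=85, technicals=60,
                       correlation=55, sentiment=68)
print(round(composite_score(score), 1))  # → 71.0
```

The point of a structure like this is that each dimension stays separately inspectable — a user can see *why* a stock rates well — while the composite gives a single comparable number.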

That kind of product has real technical complexity underneath. It requires financial data pipelines, rule-based analytical engines, portfolio modelling, a recommendation layer, and a user interface that makes deeply complex analysis feel simple. The traditional approach would be: hire a team of engineers, spend six months building, then start learning from users. We didn't have six months and we didn't want to over-engineer before we understood what the product should be.

"The goal wasn't to replace engineers with AI. It was to think more clearly, move more quickly, and skip the parts of the build that don't need human judgement."

The Two Tools We Used — and What Each One Did

Before getting into the specifics, it's worth being clear on what these tools actually are, because they solve different problems.

Claude Code is Anthropic's agentic coding tool — a command-line interface where Claude operates with the full context of your codebase, reads and writes files, runs commands, and handles multi-step technical tasks autonomously. It's not autocomplete. It's closer to a senior engineer who understands the whole project and can reason across it.

Cowork is a desktop tool from Anthropic that gives Claude the ability to work with your files, documents, and workflows — outside of code. Research synthesis, structured writing, document creation, task organisation, and content pipelines. If Claude Code is your engineering partner, Cowork is your ops and research partner.

- Claude Code (Engineering Partner): Codebase-aware and autonomous. Reads, writes, and runs code. Used for architecture, feature development, debugging, and iteration.
- Cowork (Research & Ops Partner): Works with files, docs, and workflows. Used for research synthesis, product documentation, content, and project organisation.
- Claude Code (Data Pipeline Architecture): Designed and scaffolded the financial data ingestion, normalisation, and processing layer, reasoning across the entire schema from day one.
- Cowork (Market & User Research): Synthesised competitor analysis, investment research frameworks, and user interview notes into structured product requirements.

Phase One: Research and Product Definition

Before a single line of code was written, we needed to understand the problem space. Investment tools are a crowded category. Zerodha Kite, Smallcase, Tickertape, Trendlyne, Screener.in — each does something well, and each has significant gaps when it comes to genuine AI-powered insight rather than data presentation.

We used Cowork to conduct and synthesise that landscape analysis. In practice, this meant: ingesting publicly available information about how retail investors in India actually make decisions, what they find confusing about existing platforms, where they currently seek research outside of apps, and what they'd pay for something better. Cowork structured this into a product brief — clear problem statements, prioritised features, and the specific hypothesis we were building Diverss to test.

Cowork
What it produced: A 12-page product requirements document covering user personas, feature prioritisation, analytical framework definitions for StockSense's five dimensions, and a competitor gap analysis. Work that would typically take a product manager two weeks was complete in under two days — with more structure than most teams ever produce.

Phase Two: Architecture and First Build

The product brief went directly into Claude Code as context. What happened next is where AI-native development diverges most sharply from the traditional process.

Rather than spending days on architecture diagrams and then more days translating those into code, Claude Code reasoned about the architecture and began scaffolding it simultaneously. It understood the requirements — five analytical dimensions, a live signal layer, a portfolio correlation engine — and proposed a technical structure that could support all of it, flagging the trade-offs in each approach before proceeding.

The first working prototype of StockSense's recommendation engine — capable of ingesting stock data and returning a multi-dimensional analysis — was running in 72 hours. Not a polished product. But something real that we could put in front of people and learn from.

- 72 hrs from brief to working prototype
- 5 analytical dimensions built concurrently
- 1 engineer required for the first phase

This is the compounding advantage of Claude Code: it doesn't just write the function you ask for. It holds the context of the whole codebase, understands what exists, and reasons about how new code fits — or doesn't fit — with what's already there. Debugging in this context is faster, refactoring is less painful, and architectural drift is caught earlier.

Claude Code
Specifically used for: the data ingestion pipeline for NSE/BSE instruments, the five-dimension scoring engine, the Live Signal alert layer, the portfolio correlation matrix, and the API layer that StockSense's frontend consumes. Also used for code review, test generation, and performance debugging across the stack.
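The ingestion-and-normalisation layer is the least glamorous part of that list, but it illustrates what "pipeline" means in practice: raw feeds from different exchanges arrive in different shapes and must be mapped into one canonical schema. The sketch below is illustrative only — the field names and schema are assumptions, not Diverss's actual data model:

```python
from typing import Iterator

# Fake raw rows mimicking the problem: two exchanges, two different
# field layouts for the same instrument. Field names are invented.
RAW_ROWS = [
    {"exchange": "NSE", "symbol": "INFY", "ltp": "1,482.55", "vol": "1204500"},
    {"exchange": "BSE", "scrip": "INFY", "last_price": "1482.60", "volume": "88210"},
]

def normalise(rows) -> Iterator[dict]:
    """Yield records in a single canonical shape, whichever feed they came from."""
    for row in rows:
        if row["exchange"] == "NSE":
            yield {"exchange": "NSE", "symbol": row["symbol"],
                   "price": float(row["ltp"].replace(",", "")),
                   "volume": int(row["vol"])}
        else:  # the BSE feed uses different field names for the same data
            yield {"exchange": "BSE", "symbol": row["scrip"],
                   "price": float(row["last_price"]),
                   "volume": int(row["volume"])}

for rec in normalise(RAW_ROWS):
    print(rec["exchange"], rec["symbol"], rec["price"])
```

Everything downstream — the scoring engine, the correlation matrix, the signal layer — can then assume one schema, which is exactly the kind of cross-cutting design decision the article describes Claude Code reasoning about from day one.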

Phase Three: Iteration Without Friction

The most underrated advantage of this stack isn't the first build — it's what happens on the tenth iteration. In conventional development, each iteration has overhead: context-switching, documentation gaps, the time it takes a developer to re-familiarise themselves with code they wrote three weeks ago. That overhead compounds.

With Claude Code maintaining full project context, iterations on Diverss happened without that tax. When we decided the Correlation Engine needed a different approach — moving from a simpler cosine similarity model to a more sophisticated rolling-window correlation that could surface regime changes in sector behaviour — the change was reasoned through, implemented, and tested within a single session.
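In miniature, that modelling change looks something like the sketch below: a single static similarity number is replaced by a correlation computed over each trailing window, so a shift in how two sector return series co-move becomes visible as the series flipping sign. The window size and data here are invented for illustration; this is not Diverss's actual Correlation Engine:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rolling_correlation(a, b, window=5):
    """Correlation of the two series over each trailing window."""
    return [pearson(a[i - window:i], b[i - window:i])
            for i in range(window, len(a) + 1)]

# Two sector return series that start in sync, then decouple.
it_returns   = [0.01, 0.02, -0.01, 0.015, 0.005, -0.02, 0.03, -0.01, 0.02, -0.015]
bank_returns = [0.012, 0.018, -0.008, 0.014, 0.006, 0.02, -0.025, 0.012, -0.018, 0.016]

series = rolling_correlation(it_returns, bank_returns, window=5)
print([round(c, 2) for c in series])  # early windows near +1, later ones negative
```

A static measure would average those two regimes into one misleading number; the rolling series is what lets a regime change in sector behaviour surface at all.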

Meanwhile, Cowork was running in parallel on everything around the product: drafting user onboarding flows, writing help documentation, preparing investor updates, and creating the content that Diverss needed to explain itself to users who were encountering AI-powered investment analysis for the first time. That's not a small surface area. Explaining complex financial analytics in plain language to a user who doesn't have a finance degree, without dumbing it down, is genuinely hard content work.

Real example — Cowork in use
When we built the "Portfolio Health Score" feature, we needed: user-facing copy explaining what the score means, help documentation covering edge cases, a blog post contextualising it against how traditional financial advisors assess portfolio risk, and an email sequence for onboarding users to the feature. Cowork produced first drafts of all four artefacts — grounded in the technical spec Claude Code had generated — in a single session. Review, edit, publish.

What This Changes About How a Venture Studio Operates

The honest answer to "what does using Claude Code and Cowork actually change?" isn't that it makes individual tasks faster. It's that it changes what's possible with a small team — and therefore what you choose to build and how you choose to build it.

When we started Diverss, the mental model was: we need to hire an engineer, a product manager, and a content person before we can move. With this stack, that mental model is wrong. A founder with domain knowledge and good judgement can make meaningful progress on all three fronts simultaneously — not because the AI does everything, but because it handles the parts that don't require human judgement while you focus on the parts that do.

At TEN Labs, this changes how we assess ventures. The question used to be "what's the minimum team to build this?" The question now is "what's the minimum team to make good decisions about this?" Those are very different numbers. And the gap between them is where AI-native development creates leverage.

"The question is no longer what's the minimum team to build this. It's what's the minimum team to make good decisions about this. Those are very different numbers."

What Doesn't Change

It would be dishonest to end here without being clear about what AI tools don't replace. They don't replace founder judgement on what to build — and that's the highest-leverage decision in any venture. They don't replace the need to talk to real users constantly. They don't replace the specific expertise that Diverss needs in financial markets: understanding how Indian retail investors actually think about their portfolio, what makes them trust a platform, why they churn. That knowledge comes from humans, from conversations, from building in the market over time.

What Claude Code and Cowork removed was the bureaucratic overhead between having a good idea and testing it — the translation cost between intent and execution. In a startup context, that overhead is not just inefficiency. It's risk. Every week a hypothesis sits unbuilt is a week you're not learning. Compress that cycle, and you compress risk.

The Broader Point for AI-Native Ventures

We publish this not as a product review of Anthropic's tools — though we think both are genuinely excellent — but as a proof point for the model TEN Labs is built around. If you're going to claim to be an AI-native studio, you need to actually build AI-native. That means the tools aren't an afterthought. They're the methodology.

Diverss is live. StockSense is processing real portfolios. And the velocity at which we're improving it — adding instruments, refining the analytical models, expanding the signal layer — continues to reflect the same advantage we had in the first 72 hours. That's the compounding return on getting your build process right from the start.

TEN Labs Ventures

Building your own AI-native product?

If you're working on a venture and want to understand how TEN Labs approaches AI-native development — as a co-founder, not a consultant — let's talk.

Start a Conversation