Building DocProof: A Solo Dev Journey with AI as My Co-Pilot

I shipped DocProof recently—a privacy-first document verification service. But this post isn't about what it does. It's about how it got built, and what it's like to develop a product in 2026 with AI assistance.

Short version: I couldn't have done it this fast without Claude for code assistance and ChatGPT for ideation.

And I want to be honest about what that actually looked like.

The starting point

I'm a full-stack software engineer. I know my way around code.

But DocProof touched areas where I'm not an expert: blockchain integration, cryptographic hashing in the browser, smart contract deployment, Stripe payment flows, and a dozen other things I'd never done exactly this way before.

A few years ago, this would've meant weeks of documentation rabbit holes, Stack Overflow threads, and trial and error. I'd still have built it—but slower, with more frustration, and probably with more architectural mistakes baked in.

This time, I had a different approach.

Claude as a thinking partner

I started using Claude not just for code snippets, but as a genuine collaborator. And that changed how I work.

Here's what that looked like in practice:

Architectural decisions. Early on, I wasn't sure how to structure the blockchain interaction. Should the hash go directly on-chain? Should I batch transactions? What's the cost tradeoff? I talked through the options with Claude, weighing pros and cons until the right approach became clear. It wasn't Claude telling me what to do—it was a conversation that helped me think better.

Learning on demand. I'd never worked with Base or deployed a smart contract to mainnet with real money on the line. Instead of spending days reading documentation, I could ask targeted questions: "What's the difference between Base Sepolia and mainnet deployment?" or "How do I estimate gas costs for this transaction?" Instant, contextual answers. Then I'd verify and implement.
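The gas arithmetic itself, once explained, is simple: total fee = gas units consumed × gas price, denominated in wei (10^18 wei = 1 ETH). A back-of-envelope helper—the numbers in the comment are illustrative, not Base's live prices:

```typescript
// Back-of-envelope transaction cost: fee (wei) = gas units * gas price (wei).
// 1 ETH = 10^18 wei. BigInt keeps the wei multiplication exact; the final
// division to ETH is only for human-readable display.
function txCostEth(gasUnits: bigint, gasPriceWei: bigint): number {
  const feeWei = gasUnits * gasPriceWei;
  return Number(feeWei) / 1e18;
}

// e.g. a ~50k-gas storage write at a 1 gwei (10^9 wei) gas price:
// txCostEth(50_000n, 1_000_000_000n) -> 0.00005 ETH
```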

Code review and debugging. When something didn't work, I'd paste the error and the relevant code. Claude would spot the issue faster than I could—often something obvious I'd been staring at for too long. Fresh eyes, even artificial ones, help.

Writing and copy. The landing page, the explanations, the documentation—writing clear copy for a technical product is hard. I'd draft something, Claude would suggest improvements, we'd iterate. The result was clearer than what I'd have written alone.

Thinking through edge cases. "What happens if someone tries to verify a document that was never registered?" "How should I handle failed blockchain transactions?" These conversations caught problems before they became bugs.
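The unregistered-document case, for example, wants an explicit "not found" result rather than an exception or a silent failure. A hypothetical shape—the in-memory registry and all names here are illustrative, standing in for whatever the real on-chain lookup does:

```typescript
// Hypothetical sketch: model "never registered" as an explicit result
// instead of an error path. The Map stands in for an on-chain lookup.
type VerifyResult =
  | { status: "verified"; registeredAt: number } // Unix timestamp of registration
  | { status: "not_found" };

function verifyHash(registry: Map<string, number>, docHash: string): VerifyResult {
  const registeredAt = registry.get(docHash);
  return registeredAt === undefined
    ? { status: "not_found" }
    : { status: "verified", registeredAt };
}
```

Modeling the miss as data rather than an exception forces every caller to handle it—exactly the kind of problem these conversations caught before it became a bug.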

What AI assistance actually feels like

There's a misconception that using AI means "the AI builds it for you." That's not how it works—at least not for anything meaningful.

It's more like having a knowledgeable colleague available 24/7 who never gets tired of questions, doesn't judge you for forgetting syntax, and can context-switch instantly between frontend, backend, blockchain, and marketing copy.

I still made every decision. I still wrote and reviewed every line of code. I still own the architecture, the bugs, and the tradeoffs. But I got there faster, with fewer dead ends, and with more confidence.

The best analogy I have: it's like pair programming with someone who's read all the documentation you haven't.

The productivity shift

I don't want to overstate this, but I also don't want to understate it.

DocProof would have taken me significantly longer without AI assistance. Not because Claude wrote the code for me—but because it removed friction at every step. Less time stuck. Less time context-switching to Google. Less time second-guessing architectural choices.

That time adds up. For a solo developer working on a side project, it's the difference between shipping and abandoning.

Some honest caveats

It's not magic. Claude gets things wrong sometimes—especially with newer APIs or very specific library versions. I learned to verify, not just trust. The answers are a starting point, not gospel.

And there's a skill to using it well. Vague questions get vague answers. The better I got at explaining context and asking precise questions, the more useful the responses became. It's a tool that rewards intentional use.

Where this leaves me

DocProof is live. It works. Real users can create real proofs.

Building it taught me something about how software development is changing. The gap between "I have an idea" and "I shipped a product" is shrinking—not because AI does the work for you, but because it removes the friction that used to slow everything down.

I'm still the developer. I still need to understand what I'm building. But I have a collaborator now that makes the whole process feel less lonely and more efficient.

If you're a solo dev or indie hacker hesitating to start something because it feels too big—try working this way. You might surprise yourself with what you can ship.


DocProof is at [docproof.dk]. If you want to follow along with what I'm building, I'm on Mastodon at [handle] and posting updates here.
