Hey HN! In 2025, I've spent more time than ever conversing with AI coding agents, particularly Claude Code. These conversations are an intimate look into how we think and solve problems. Every chat with the agent contains valuable solutions, patterns, decisions, and mistakes. So being able to search, analyze, and learn from those interactions isn't just convenient; it's becoming essential.
To help me do this, I built a tool to process Claude Code conversations:
https://github.com/sujankapadia/claude-code-analytics
* Import and search your entire conversation history across projects
* Analyze sessions with any of over 300 LLM models via OpenRouter to extract insights and patterns (decisions made, error patterns, how you use AI agents)
* Share insights as GitHub Gists (as long as the text passes a security scan)
* View basic aggregate statistics on Claude Code usage
The tool is built with Python, Streamlit, SQLite with FTS5, OpenRouter, and Gitleaks.
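As a rough sketch of how the import-and-search piece can work (the table schema, field names, and transcript layout below are my assumptions for illustration, not the tool's actual code; Claude Code's JSONL format varies by version):

```python
import json
import sqlite3
from pathlib import Path

# In-memory for the example; swap in a file path like "conversations.db" to persist.
DB = sqlite3.connect(":memory:")
DB.execute(
    "CREATE VIRTUAL TABLE IF NOT EXISTS messages USING fts5(project, role, content)"
)

def import_sessions(root: Path = Path.home() / ".claude" / "projects") -> int:
    """Walk per-project JSONL transcripts and index each message in FTS5."""
    count = 0
    for path in root.glob("*/*.jsonl"):
        for line in path.read_text().splitlines():
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or corrupt lines
            msg = record.get("message") or {}
            content = msg.get("content")
            if isinstance(content, list):  # content blocks -> plain text
                content = " ".join(
                    block.get("text", "")
                    for block in content
                    if isinstance(block, dict)
                )
            if content:
                DB.execute(
                    "INSERT INTO messages VALUES (?, ?, ?)",
                    (path.parent.name, msg.get("role", "unknown"), content),
                )
                count += 1
    DB.commit()
    return count

def search(query: str, limit: int = 10):
    """Full-text search, best matches first, with a short highlighted snippet."""
    return DB.execute(
        "SELECT project, role, snippet(messages, 2, '[', ']', '...', 10) "
        "FROM messages WHERE messages MATCH ? ORDER BY rank LIMIT ?",
        (query, limit),
    ).fetchall()
```

FTS5's built-in `snippet()` and `rank` make "find that one session where we fixed the migration bug" a single query instead of grepping JSONL files.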
I made this for myself, and I'm sharing it in case it helps you too. Once your conversations are in a database, you can start asking questions like "What were the key technical decisions on this project?", "How did the agent help research and prototype this feature?", "What steps did I take to implement this?", and "What errors does the agent commonly make?"
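The analysis step can be sketched against OpenRouter's OpenAI-compatible chat-completions endpoint; the prompt wording and default model ID here are illustrative assumptions, not the tool's actual implementation:

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_analysis_prompt(messages: list[dict]) -> str:
    # Flatten a session transcript and ask for decisions and error patterns
    # (the question wording is just one example of what you might ask).
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    return (
        "Analyze this Claude Code session. List the key technical decisions "
        "made and any errors the agent repeatedly made.\n\n" + transcript
    )

def analyze(messages: list[dict], api_key: str,
            model: str = "anthropic/claude-3.5-sonnet") -> str:
    # The model ID is illustrative; OpenRouter lets you pick from hundreds.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": build_analysis_prompt(messages)}],
    }).encode()
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because the endpoint is OpenAI-compatible, swapping models is just a string change rather than a new client library.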
It’s a work in progress, and I'm planning on adding more features. Currently only tested on macOS 14.7 with Claude Code 2.0.21. If you’re curious what your Claude Code sessions may reveal, take it for a spin!
But the AI coding agent can then ask you follow-up questions, consider angles you may not have, and generate other artifacts in context: documentation, data generation and migration scripts, tests, CRUD APIs. If you can reliably do all that from plain pseudocode, that's way less verbose than writing out every different representation of the same underlying concept by hand.
Sure, some of that, like CRUD APIs, you can generate via templates as well. Heck, you can even have the coding agent generate the templates and the code that will process/compile them, or generate the code that generates the templates given a set of parameters.
I expect less time spent on boilerplate and documentation, and more time spent on iterating, experimenting, and increasing customer satisfaction. I also wouldn't accept "I don't know how to do that" as an answer. Instead, I'd encourage "I don't know how to do that, but I can use AI to learn faster, and also seek out someone with experience to help review my work".
Add LLM-powered chat to your app. Translate English into executable JSON commands using just TypeScript, Node, and OpenAI (no frameworks!). Start extremely simple and get better at prompt engineering before delving into things like tool calls and MCP servers.
That is funny, because I was a Java developer for many years, then Scala for a few years, and these days I mainly write Python, but the last thing I reach for is creating a class. That's generally only when a set of functions in a single-responsibility module needs to share or mutate some state, and a class is more self-documenting than passing around dictionaries.
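As a hypothetical illustration of that rule of thumb (the name and behavior are invented for the example), a class earns its keep once several functions in one module mutate the same state:

```python
class RateLimiter:
    """Several operations share and mutate one counter, so a class is
    clearer and more self-documenting than threading a dict through
    free functions."""

    def __init__(self, limit: int):
        self.limit = limit
        self.calls = 0

    def allow(self) -> bool:
        # Permit the call only while we're under the limit.
        if self.calls < self.limit:
            self.calls += 1
            return True
        return False

    def reset(self) -> None:
        self.calls = 0
```

The dict-passing alternative (`allow(state)`, `reset(state)`) works too, but the invariant "calls never exceeds limit" lives nowhere in particular.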