Congrats on the launch!
As someone who is ignorant and not following all of the recent releases: isn't Claude Code a proprietary version of https://aider.chat/ in the first place?
> Isn't Claude-code a proprietary version of https://aider.chat/ in the first place?
It might be, but aider is Apache 2 licensed and I don't think Anthropic would be able to change the license. Either way, these tools are simple enough that it's not hard to replicate what they do. So simple that it's kind of pointless for Anthropic to get proprietary and protective about the whole thing. The moat they have is the model quality.
I saw a very similar tool, which also uses the Claude API, demoed at the local Kotlin meetup in Berlin just one day after Anthropic announced Claude Code.
https://github.com/xemantic/claudine
Kotlin might not be everybody's cup of tea, but it actually seems to be an effective tool for the job. The simplicity of the code base is what I found the most shocking. There's almost nothing to it: tiny bits of code to read/write files and execute commands, and a very simple loop where it interacts with Claude via a plain REST API. You can read through the code base (a couple of hundred lines of Kotlin) in a few minutes and get a feel for what it does. We had some fun trying to get the tool to improve itself during the meetup. It worked pretty well.
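The loop described above is small enough to sketch. This is a Python illustration, not the project's actual (Kotlin) code; the `call_model` callback and the tool-call message shape are invented for the example:

```python
import subprocess
from pathlib import Path

# The model only needs a handful of primitive tools; the "agent" behavior
# emerges from the loop around them. (Names are illustrative.)
def read_file(path: str) -> str:
    return Path(path).read_text()

def write_file(path: str, content: str) -> str:
    Path(path).write_text(content)
    return f"wrote {len(content)} bytes to {path}"

def run_command(command: str) -> str:
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS = {"read_file": read_file, "write_file": write_file, "run_command": run_command}

def agent_loop(call_model, task: str, max_turns: int = 20) -> str:
    """Send the task to the model; execute whatever tool call it requests,
    append the result to the transcript, and repeat until it answers in text."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = call_model(messages)       # in reality: a POST to the chat API
        messages.append({"role": "assistant", "content": reply})
        if reply.get("tool") is None:      # plain text answer: we're done
            return reply["text"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    return "gave up"
```

Everything else (diff display, permission prompts, retries) is UI polish on top of this loop.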
Probably the most complex part of the whole code base is the system prompt used to orchestrate everything. That looks like some thought went into it, and there are probably a few gotchas yet to be addressed. But even that is doable. A determined person could probably replicate what this does in a day or so from scratch, and a lot faster if you use one of these tools to build the next one. This isn't rocket science.
I'm guessing we'll be seeing a lot of tools like this in the next few months. It will be interesting to see what refinements people come up with. I think most of the innovation is going to be in giving these agents better/smarter tools to work with. For example, there's a lot of stuff encapsulated in IDEs that is probably a lot easier to use than trying to do the same thing with random CLI commands.
> Simple enough that it's not hard to replicate…
Yet the Product Hunt crowd mostly (a) hustles to ship its own takes on each other's feature innovations, and (b) still fails to match each other's results.
There are under-the-hood differences, and whole-is-greater-than-the-parts differences. Despite the arms race, most are not beating Aider's effectiveness.
Claude Code works similarly to Aider in that they both run in the terminal and write code using LLMs, but it shares no code with Aider as far as I know. Aider is written in Python and Claude Code in JavaScript, among other reasons to think that it is not derived from Aider.
The tools also work very differently when you're actually using them, with Claude Code doing more of an agent loop and having very different methods of putting together the context to pass to the model.
I've struggled to get both to do anything useful. That said, Claude Code is consistent and doesn't error a lot.
Similar concept but entirely different implementation.
(aider is Python, Claude Coder is TypeScript, for one)
Licensing and compatibility could be an interesting challenge.
https://github.com/anthropics/claude-code/blob/main/LICENSE.... and https://github.com/dnakov/anon-kode/blob/main/LICENSE.md
If you want one that was shipped 3 months earlier, is true FOSS (Apache 2.0), and can even work reasonably with qwen-32b-coder-instruct, check https://github.com/ai-christianson/RA.Aid
Disclaimer: am maintainer
If you try it and have feedback, I'm very curious to hear it.
Nice! Could you maybe share some cool things that you've used it for? I love that your demo is it adding a feature to itself. We did the exact same thing for our OSS autonomous agent:
https://github.com/bosun-ai/kwaak
Disclaimer: built by my co-founder (and some contributors!)
We opted to have the agent execute its tasks in docker containers fully in parallel to the developer's environment. This way the developer can have the agent work on something while doing something else themselves.
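That one-container-per-task setup can be approximated with a thin wrapper around the Docker CLI. This is a guess at the mechanism, not kwaak's actual implementation; the `agent-sandbox` image name and `agent` entrypoint are hypothetical:

```python
import subprocess

def docker_cmd(repo: str, image: str = "agent-sandbox") -> list[str]:
    # Mount the task's working copy (a clone, not the developer's own
    # checkout) at /workspace, so the agent's edits stay isolated.
    return [
        "docker", "run", "--rm", "--detach",
        "--volume", f"{repo}:/workspace",
        "--workdir", "/workspace",
        image,
    ]

def start_agent_task(repo: str, task: str) -> str:
    """Launch one detached container per task and return its container id,
    so several tasks run in parallel while the developer keeps working."""
    cmd = docker_cmd(repo) + ["agent", "--task", task]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()
```

Detached containers plus per-task clones are what make the "agent works while you do something else" workflow safe: nothing the agent does can clobber your uncommitted changes.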
You can check out a list of merge requests it made and got merged here: https://github.com/SwabbieBosun?tab=overview&from=2025-02-01...
Can I give you money?
I'll try it out tonight.
Does running it inside of VS Code sandbox this? Claude keeps failing whenever it tries to run a shell command.
Fingers crossed :)
https://github.com/anthropics/claude-code/issues/249#issueco...
I'm holding off on using it until the IDFK license is OSI approved.
Yeah, hopefully they'd let it slide
Cool project! Though I'd advise dialing down the edge factor.
Yeah, "Terminal-based AI coding retard" is unnecessary.
README is now less retarded as per author: https://github.com/dnakov/anon-kode/commit/320265694838fb2ae...
Thanks a lot! Just wanted to say that for me, no Chromium install is complete without Little Rat there.
Very handy, thanks again for that.
For those who want to check it out, it's an extension to monitor/block network access of other extensions, it also lets you turn them on/off from the same screen:
https://github.com/dnakov/little-rat
Thanks! Love to hear people are still using it, even though it needs a flag toggle now.
> Fixes your spaghetti code
> Explains wtf that function does
> Runs tests and other bullshit
Love it
Are there examples of LLMs fixing spaghetti code in the wild?
If your spaghetti code is written in React, Claude Code might be able to do it given enough time, and at the cost of a few million tokens.
Are there examples of non-spaghetti react out in the wild?
How can it be a fork of Claude-code when it’s not open source?
Look into goose¹ and aider² as well
[1] https://github.com/block/goose
[2] https://aider.chat/
As an aider devotee I am now all in on Claude Coder.
Was fun while it lasted.
Is it so much better? What's the difference? How does it do embeddings?
- More polished UI.
- Better tools.
- Tighter integration with the model/prompt.
Aider seems great unless you're constantly working in a ton of repos.
What is the disadvantage of aider when using a lot of repos?
I guess the whole point of Claude Code was to show which prompts work the best and what stuff they fine-tune on
I wonder why it doesn't use RAG for code context retrieval across the codebase? Is there some fundamental reason why init in, for example, Claude Coder makes this stupid claude.md file instead of vectorizing and indexing the codebase locally?
The fundamental reason is that RAG kind of sucks and requires a ton of effort/optimization to reach a high degree of reliability for most applications. RAG solutions are not Anthropic's core product. Just reading all the relevant files is more expensive but is more effective and efficient from a dev time perspective.
Why spend Anthropic developer time on charging you less when an MVP impl can charge you $1 for indexing a simple small Helm chart?
The interesting part, to me, is that claude-code is made by people who know their models well. Having the same tool work with other models lets us get a better feel for how to make better coding agents.
Maybe it's easier to just set up a proxy mirroring Anthropic's API, but pointing to whatever model you want? Genuine question.
It's not much different from your proxy idea. It's implemented as a transformation between the internal message structure (close to the Anthropic API's) and the OpenAI message spec, and vice versa. Then it calls all the other models using the openai-node client, as pretty much everyone supports that now (OpenRouter, Ollama, etc.).
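The core of that transformation is small. A minimal Python sketch (the project itself is TypeScript, and this handles only text content, not tool_use/tool_result blocks): Anthropic keeps the system prompt as a separate parameter and allows content-block arrays, while OpenAI expects a flat list of `{role, content}` messages.

```python
def anthropic_to_openai(system: str, messages: list[dict]) -> list[dict]:
    """Map an Anthropic-style (system, messages) pair to an OpenAI-style
    message list. Simplified: only text content blocks are handled."""
    out = [{"role": "system", "content": system}]  # system becomes a message
    for m in messages:
        content = m["content"]
        if isinstance(content, list):  # Anthropic content-block arrays
            content = "".join(b["text"] for b in content if b.get("type") == "text")
        out.append({"role": m["role"], "content": content})
    return out
```

The reverse direction plus tool-call mapping is the bulk of the remaining work, since the two APIs represent function/tool calls quite differently.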
Cool! The only downside would be keeping your code in sync with upstream, then.