I saw this comment a little bit back and I don’t think the OP expanded on it, but this looks like a fantastic idea to me:
sam0x17 20 days ago:
Didn't want to bury the lead, but I've done a bunch of work with this myself. It goes fine as long as you give it both the textual representation and the ability to walk along the AST. You give it the raw source code, and then also give it the ability to ask a language server to move a cursor that walks along the AST, and then every time it makes a change you update the cursor location accordingly. You basically have a cursor in the text and a cursor in the AST and you keep them in sync so the LLM can't mess it up. If I ever have time I'll release something but right now just experimenting locally with it for my rust stuff

On the topic of LLMs understanding ASTs, they are also quite good at this. I've done a bunch of applications where you tell an LLM a novel grammar it's never seen before _in the system prompt_ and that plus a few translation examples is usually all it takes for it to learn fairly complex grammars. Combine that with a feedback loop between the LLM and a compiler for the grammar where you don't let it produce invalid sentences and when it does you just feed it back the compiler error, and you get a pretty robust system that can translate user input into valid sentences in an arbitrary grammar.
https://news.ycombinator.com/item?id=44941999
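To make the dual-cursor idea concrete, here is a minimal sketch using the tree-sitter Rust bindings. This is not sam0x17's code, just an illustration of the bookkeeping: apply an edit to the source, report it to the parse tree, reparse incrementally, and re-derive the AST position from the new byte offset (grammar setup and row/column tracking are omitted).

```rust
// Minimal sketch (assumed usage, not sam0x17's code): keep a byte offset into
// the source and a tree-sitter parse tree in sync after every edit, so the
// text cursor and the AST cursor can't drift apart.
use tree_sitter::{InputEdit, Parser, Point, Tree};

struct DualCursor {
    source: String,
    tree: Tree,
    byte_offset: usize, // the "text cursor"
}

impl DualCursor {
    /// Replace `old_len` bytes at the cursor with `new_text`, report the edit
    /// to tree-sitter, reparse incrementally, and re-derive the AST position.
    fn splice(&mut self, parser: &mut Parser, old_len: usize, new_text: &str) {
        let start = self.byte_offset;
        self.source.replace_range(start..start + old_len, new_text);

        self.tree.edit(&InputEdit {
            start_byte: start,
            old_end_byte: start + old_len,
            new_end_byte: start + new_text.len(),
            // Row/column positions elided for brevity; a real implementation
            // computes them from the surrounding text.
            start_position: Point { row: 0, column: 0 },
            old_end_position: Point { row: 0, column: 0 },
            new_end_position: Point { row: 0, column: 0 },
        });
        self.tree = parser.parse(&self.source, Some(&self.tree)).unwrap();

        // Advance the text cursor, then find the AST node that now covers it.
        self.byte_offset = start + new_text.len();
        let node = self
            .tree
            .root_node()
            .descendant_for_byte_range(self.byte_offset, self.byte_offset)
            .unwrap_or_else(|| self.tree.root_node());
        println!("AST cursor now at a `{}` node", node.kind());
    }
}
```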
One thing to take care with in cases like this, it probably needs to handle code with syntax errors. It's not uncommon for developers to work with code that doesn't parse (e.g. while you're typing, to resolve merge conflicts, etc).
In general, a drum I beat regularly is that during development the code spends most of its time incorrect in one way or another. Syntax errors, doesn't type check, missing function implementations, still working out the types and their relationships, etc. Any developer tooling that only works on valid code immediately loses a lot of its value.
Isn't that the benefit of tree-sitter? I was under the impression that it's more tolerant of these kinds of errors, at least to the degree that you can get enough info to fix them.
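For reference, that is roughly how tree-sitter behaves: it returns a best-effort tree containing ERROR nodes instead of failing outright. A hedged sketch using the Rust bindings (the exact grammar setup call varies a bit between tree-sitter and grammar crate versions):

```rust
// Sketch: tree-sitter still yields a walkable tree for code that doesn't
// parse. Shown with the Rust grammar crate; the set_language call differs
// slightly across tree-sitter versions.
use tree_sitter::Parser;

fn main() {
    let mut parser = Parser::new();
    parser
        .set_language(&tree_sitter_rust::LANGUAGE.into())
        .expect("grammar version mismatch");

    // Half-typed code: missing expression and closing brace.
    let broken = "fn main() { let x = ";
    let tree = parser.parse(broken, None).unwrap();
    let root = tree.root_node();

    println!("has_error: {}", root.has_error()); // true, but the tree is usable
    let mut cursor = root.walk();
    for child in root.children(&mut cursor) {
        println!("{} [{}..{}]", child.kind(), child.start_byte(), child.end_byte());
    }
}
```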
> thread 'main' (17953) panicked at ck-cli/src/main.rs:305:41: byte index 100 is not a char boundary
I seem to have gotten 'lucky' and it split an emoji just right.
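For context, that panic is Rust refusing to slice a string at a byte offset that falls inside a multi-byte UTF-8 character (an emoji is 4 bytes). A minimal sketch of the usual fix, which is to snap the index back to the nearest char boundary before slicing:

```rust
// `&s[..100]` panics if byte 100 lands inside a multi-byte character.
// Snapping back to the previous char boundary avoids the panic.
fn truncate_at_boundary(s: &str, mut max_bytes: usize) -> &str {
    if max_bytes >= s.len() {
        return s;
    }
    while !s.is_char_boundary(max_bytes) {
        max_bytes -= 1;
    }
    &s[..max_bytes]
}

fn main() {
    let s = "x".repeat(99) + "🦀 plus more text"; // the emoji straddles byte 100
    // let _ = &s[..100];                         // would panic like the report above
    println!("{}", truncate_at_boundary(&s, 100)); // safely stops before the emoji
}
```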
---
For anyone curious: this is great for large, disjointed, and/or poorly documented code bases. If you've kept yours tight, with files smaller than ~600 lines, it is almost always better to nudge LLMs into reading whole files.
Nice catch - should be fixed in the latest.
There's also https://github.com/bartolli/codanna, that's similarly new. I'll have to try that again, and this one.
I've benchmarked the code search MCPs extensively, and agents with LSP-aware MCPs outperform agents using raw indexed stores quite handily. Serena, as janky as it is, is a better enabler than Codanna.
This generalizes to a whole new category of tools: UX which requires more thought and skill, but is way more powerful. Human devs are mostly too lazy to use them, but LLMs will put in the work.
> UX which requires more thought and skill, but is way more powerful. Human devs are mostly too lazy to use
Really? My thinking is more that human devs are way too likely to sink time into powerful but complex tools that may end up being a yak shave with minimal/no benefit in the end. "too lazy to use" doesn't seem like a common problem from what I've seen.
Not that the speed of an agent being able to experiment with this kind of thing isn't a benefit... but not how I would have thought to pose it.
I actually have a WIP library for this. The indexing server isn't where I want it just yet, but I have an entire agent toolkit that does this stuff, and the indexing server is quite advanced, with self-tuning, RAPTOR/LSP integration, solving for an optimal result set as a knapsack problem, etc.
https://github.com/sibyllinesoft/grimoire
I have to know, what is the Lens SPI? The link in your readme is broken, and Kagi results for this cannot possibly be right.
Lens is basically a Rust, local-first, mmapped file-based search store. It combines RAPTOR with LSP, semantic vectors, and a dual dense/sparse encoding, and can learn a function over those to tune the weights of the relevance sources adaptively per query using your data. It also uses linear programming to select an "efficient" set of results that minimizes mutual information between result atoms -- regular RAG/rerank pipelines just dump the top K, but those often have a significant amount of overlap, so you bloat context for no benefit.
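I can't speak to Lens's actual LP formulation, but the general idea of selecting a budgeted, low-overlap result set (rather than dumping the top K) can be sketched with a greedy approximation in the spirit of MMR: score each candidate, then penalize anything too similar to what has already been picked.

```rust
// Illustrative greedy sketch (not Lens's actual solver): pick results under a
// token budget while penalizing overlap with already-selected results. A real
// system would use embedding similarity and a proper LP/knapsack solver
// instead of word-overlap Jaccard and greedy selection.
use std::collections::HashSet;

struct Candidate {
    text: String,
    relevance: f32, // fused score from whatever retrieval stages ran
    tokens: usize,
}

fn jaccard(a: &str, b: &str) -> f32 {
    let sa: HashSet<&str> = a.split_whitespace().collect();
    let sb: HashSet<&str> = b.split_whitespace().collect();
    let inter = sa.intersection(&sb).count() as f32;
    let union = sa.union(&sb).count() as f32;
    if union == 0.0 { 0.0 } else { inter / union }
}

fn select(mut pool: Vec<Candidate>, budget: usize, lambda: f32) -> Vec<Candidate> {
    let mut picked: Vec<Candidate> = Vec::new();
    let mut used = 0;
    while !pool.is_empty() {
        // Marginal value = relevance minus a redundancy penalty vs. the picked set.
        let (best_idx, best_val) = pool
            .iter()
            .enumerate()
            .map(|(i, c)| {
                let overlap = picked
                    .iter()
                    .map(|p| jaccard(&c.text, &p.text))
                    .fold(0.0_f32, f32::max);
                (i, c.relevance - lambda * overlap)
            })
            .max_by(|a, b| a.1.total_cmp(&b.1))
            .unwrap();
        let cand = pool.swap_remove(best_idx);
        // Stop once the marginal value or the token budget is exhausted.
        if best_val <= 0.0 || used + cand.tokens > budget {
            break;
        }
        used += cand.tokens;
        picked.push(cand);
    }
    picked
}

fn main() {
    let pool = vec![
        Candidate { text: "fn load_config reads config.toml".into(), relevance: 0.9, tokens: 40 },
        Candidate { text: "fn load_config reads the config file".into(), relevance: 0.85, tokens: 38 },
        Candidate { text: "struct Config fields and defaults".into(), relevance: 0.6, tokens: 30 },
    ];
    // The near-duplicate second result is skipped in favor of the novel third one.
    for c in select(pool, 100, 0.8) {
        println!("picked ({} tokens): {}", c.tokens, c.text);
    }
}
```

Here lambda trades relevance against diversity: higher values push the selection toward novel chunks even when their raw scores are lower.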
Cool. Some AI fluff can be detected in the README.
For example, under the "Why CK?" section, "For teams" has no substance compared to "For developers".
Well, there's also mine, https://github.com/VectorOps/know, with some details on what it does and how: https://vectorops.dev/blog/post-1/
This is cool, but I don't understand why it tries to re-implement (a subset of) grep. Not only that, but the grep-like behaviour is the default and I need to opt-in to the semantic search using the --sem flag. If I want grep I can use grep/ripgrep.
Fair comment - the initial thinking was to have both, and in fact a hybrid mode too, which fuses results so you can get chunks that match both semantically and on keyword search in one result set. A reranker could be added later too.
Or, to look at it another way: how big a penalty are we talking about for semantic search vs conventional grep?
My thinking is that for a large codebase, sorting embedding matches may be more efficient than reading all the files, and hence there is no point in putting semantic search behind a --semantic flag.
The reason to overload grep is that the agents already understand most of the semantics and are primed to use it, so it's a small lift to get them to call a modified grep with some minor additional semantics.
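For anyone wondering what fusing the keyword and semantic result sets could look like, one common and simple approach is reciprocal rank fusion over the two rankings. The sketch below is illustrative only; I don't know whether ck's hybrid mode actually works this way.

```rust
// Hedged sketch of reciprocal rank fusion (RRF) over a keyword ranking and a
// semantic ranking. Not necessarily what ck does - just one common way to
// fuse two ranked lists into a single result set.
use std::collections::HashMap;

fn rrf(rankings: &[Vec<&str>], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for ranking in rankings {
        for (rank, doc) in ranking.iter().enumerate() {
            // 1 / (k + rank) rewards documents that rank highly in any list.
            *scores.entry((*doc).to_string()).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut fused: Vec<(String, f64)> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.total_cmp(&a.1));
    fused
}

fn main() {
    let keyword = vec!["parser.rs", "lexer.rs", "main.rs"];
    let semantic = vec!["ast.rs", "parser.rs", "visitor.rs"];
    // parser.rs appears in both lists, so it floats to the top.
    for (doc, score) in rrf(&[keyword, semantic], 60.0) {
        println!("{doc}: {score:.4}");
    }
}
```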
I tried it in my relatively small project. All I got was a spinning M2 Mac fan after a minute, and I gave up.

Interesting - can I ask you to try a ck --index . ?
It'd be nice if it respected .gitignore. It's turning my M4 MBP into a space heater too.
coming up next.
FYI, I just grabbed the same lib that ripgrep uses. That bit is extracted into its own crate, IIRC, and was quite nice and simple to use.
I saw that you added it, thanks! I'll give this a shot for a few days.
The biggest improvement to CC would be it using the TypeScript LSP to immediately get type feedback and inspect types.
I added the VSCode plugin but it didn't seem to help; likewise, searching around yesterday I surprisingly didn't see anything.
Man, that's a great thing! Really waiting to see Ruby and Elixir. Fingers crossed for you!
Added Ruby, but Elixir isn't very well supported by tree-sitter.
This looks very useful.
Looks like you have to build an index. When should it be rebuilt? Any support for automatic rebuilds?
Yes - files are hashed and checked whenever you search, so the index should always remain up to date. Only changed files are reindexed. You can also inspect the metadata (chunking semantics, embeddings); it's all in the .ck sidecar.
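As a rough illustration of what that freshness check amounts to (a sketch under my own assumptions, not ck's actual implementation): hash each file's contents at search time, compare against the hash recorded at index time, and reindex only the files that differ.

```rust
// Illustrative sketch of hash-based index freshness (not ck's actual code):
// compare each file's current content hash against the one stored at index
// time and reindex only the files that changed.
use std::collections::HashMap;
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};

fn content_hash(path: &str) -> std::io::Result<u64> {
    let bytes = fs::read(path)?;
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    Ok(h.finish())
}

/// Returns the files whose content no longer matches the stored hashes,
/// updating the store as it goes.
fn stale_files(store: &mut HashMap<String, u64>, files: &[&str]) -> Vec<String> {
    let mut stale = Vec::new();
    for &path in files {
        let now = match content_hash(path) {
            Ok(h) => h,
            Err(_) => continue, // deleted/unreadable: a real indexer would evict it
        };
        if store.get(path) != Some(&now) {
            store.insert(path.to_string(), now);
            stale.push(path.to_string()); // needs re-chunking/re-embedding
        }
    }
    stale
}
```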
What model are you using to create the embeddings?
BAAI/bge-small-en-v1.5, but considering switching this to Google's latest Gemma embedding model - it's fairly switchable.
This is so cool; is there any other tool that is more mature?
Roo has codebase indexing that it'll instruct the agent to use if enabled.
It uses whatever arbitrary embedding model you want to point it at and backs it with a qdrant vector db. Roo's documents point you toward free cloud services for this, but I found those to be dreadfully slow.
Fortunately, it takes about 20 minutes to spin up a qdrant docker container and install ollama locally. I've found the nomic text embed model is fast enough for the task even running on CPU. You'll have an initial spin up as it embeds existing codebase data then it's basically real-time as changes are made.
FWIW, I've found that the indexing is worth the effort to set up. The models are generally better about finding what they need without completely blowing up their context windows when it's available.
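For anyone going the local Ollama route, here is a hedged sketch of what the embedding call can look like from Rust, assuming Ollama's documented /api/embeddings endpoint and the nomic-embed-text model (reqwest with the blocking and json features, plus serde_json, are assumed as dependencies):

```rust
// Hedged sketch: calling a local Ollama instance's embeddings endpoint with
// the nomic-embed-text model. Endpoint and field names follow Ollama's
// /api/embeddings API; adjust if your version uses /api/embed instead.
use serde_json::{json, Value};

fn embed(text: &str) -> Result<Vec<f64>, Box<dyn std::error::Error>> {
    let resp: Value = reqwest::blocking::Client::new()
        .post("http://localhost:11434/api/embeddings")
        .json(&json!({ "model": "nomic-embed-text", "prompt": text }))
        .send()?
        .json()?;
    let vector = resp["embedding"]
        .as_array()
        .ok_or("unexpected response shape")?
        .iter()
        .filter_map(Value::as_f64)
        .collect();
    Ok(vector)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let v = embed("fn parse_config(path: &Path) -> Config")?;
    println!("embedding dimensions: {}", v.len()); // 768 for nomic-embed-text
    Ok(())
}
```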
I recently saw SemTools [0], but have not tried it out yet myself.
[0] https://github.com/run-llama/semtools
I don't see how these are apples-to-apples given its "send me all your content" approach <https://github.com/run-llama/semtools#:~:text=get%20your%20a...>
versus https://github.com/BeaconBay/ck#:~:text=yes%2C%20completely%...
LlamaIndex has been batting a thousand since their inception. Can't go wrong with this tool, either.
Agreed - Logan is a legend. This is similar but simpler - no dependency on external models (might add that).
Thanks!
Seems like CLI tools are all the rage these days
We really are living in the golden age of the terminal. I thought this would take a chunk out of the TypeScript/Node market share among young coders, but I'm starting to see more and more of these animals building TUIs using nothing but npm packages.
Have they no shame?
Last week I built my own CLI coding agent tool using just Node.js and zero dependencies! It is a lot of fun to build; really, I think everyone should try it.
Help make it mature :D File any issues you run into.
At this point, we aren't even saying it's written in Rust anymore, we just mention it in the title whenever possible.
I did look into the core features and I gotta say, that looked quite cool. It's like Google search, but for the codebase. What does it take to support other languages?
It supports most languages but needs a bit of tree-sitter setup to do semantic chunking. Let me know what languages you’d like added
Java would be useful as well for larger backend codebases.
Thanks for your quick response, most large codebases I've been fiddling on is Ruby!
I'd love to see elixir support.
Clojure would be awesome