Show HN: Semantic grep with local embeddings

(github.com)

171 points | by Runonthespot a day ago

48 comments

  • MarkMarine a day ago

    I saw this comment a little bit back and I don’t think the OP expanded on it, but this looks like a fantastic idea to me:

    sam0x17 20 days ago:

    Didn't want to bury the lead, but I've done a bunch of work with this myself. It goes fine as long as you give it both the textual representation and the ability to walk along the AST. You give it the raw source code, and then also give it the ability to ask a language server to move a cursor that walks along the AST, and then every time it makes a change you update the cursor location accordingly. You basically have a cursor in the text and a cursor in the AST and you keep them in sync so the LLM can't mess it up. If I ever have time I'll release something but right now just experimenting locally with it for my rust stuff.

    On the topic of LLMs understanding ASTs, they are also quite good at this. I've done a bunch of applications where you tell an LLM a novel grammar it's never seen before _in the system prompt_ and that plus a few translation examples is usually all it takes for it to learn fairly complex grammars. Combine that with a feedback loop between the LLM and a compiler for the grammar where you don't let it produce invalid sentences and when it does you just feed it back the compiler error, and you get a pretty robust system that can translate user input into valid sentences in an arbitrary grammar.

    https://news.ycombinator.com/item?id=44941999
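
    For the curious, here's a rough sketch of the two-cursor idea using tree-sitter's incremental parsing (sam0x17 talks about a language server; the types and names below are illustrative, not from any released code):

        use tree_sitter::{InputEdit, Node, Parser, Point, Tree};

        struct SyncedCursor {
            source: String,
            tree: Tree,
            byte_offset: usize, // the "text cursor"
        }

        // Row/column for a byte offset, needed for tree-sitter's edit bookkeeping.
        fn point_at(src: &str, byte: usize) -> Point {
            let prefix = &src[..byte];
            let row = prefix.matches('\n').count();
            let column = byte - prefix.rfind('\n').map_or(0, |i| i + 1);
            Point { row, column }
        }

        impl SyncedCursor {
            // The "AST cursor": the smallest node covering the text cursor.
            fn node_at_cursor(&self) -> Node<'_> {
                self.tree
                    .root_node()
                    .descendant_for_byte_range(self.byte_offset, self.byte_offset)
                    .unwrap_or_else(|| self.tree.root_node())
            }

            // Every textual edit updates both cursors so they can't drift apart.
            // `parser` is assumed to already have its language set.
            fn apply_edit(&mut self, parser: &mut Parser, start: usize, old_end: usize, new: &str) {
                let start_position = point_at(&self.source, start);
                let old_end_position = point_at(&self.source, old_end);
                self.source.replace_range(start..old_end, new);
                let new_end_byte = start + new.len();
                self.tree.edit(&InputEdit {
                    start_byte: start,
                    old_end_byte: old_end,
                    new_end_byte,
                    start_position,
                    old_end_position,
                    new_end_position: point_at(&self.source, new_end_byte),
                });
                // Re-parse incrementally and move the text cursor past the edit.
                self.tree = parser.parse(&self.source, Some(&self.tree)).unwrap();
                self.byte_offset = new_end_byte;
            }
        }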

    • rictic 21 hours ago

      One thing to take care with in cases like this: it probably needs to handle code with syntax errors. It's not uncommon for developers to work with code that doesn't parse (e.g. while you're typing, while resolving merge conflicts, etc.).

      In general, a drum I beat regularly is that during development the code spends most of its time incorrect in one way or another. Syntax errors, doesn't type check, missing function implementations, still working out the types and their relationships, etc. Any developer tooling that only works on valid code immediately loses a lot of its value.

      • digdugdirk 21 hours ago

        Isn't that the benefit of treesitter? I was under the impression that it's more accepting of these types of errors, at least to a degree where you can get enough info to fix it.

  • athrowaway3z 20 hours ago

    > thread 'main' (17953) panicked at ck-cli/src/main.rs:305:41: byte index 100 is not a char boundary

    I seem to have gotten 'lucky' and it split an emoji just right.
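
    (For context, the usual fix is to never slice at an arbitrary byte offset; a minimal sketch, with a hypothetical helper name:)

        // Walk a byte offset back to the nearest char boundary before slicing,
        // so multi-byte characters like emoji are never split.
        fn truncate_at_char_boundary(s: &str, mut idx: usize) -> &str {
            if idx >= s.len() {
                return s;
            }
            while !s.is_char_boundary(idx) {
                idx -= 1;
            }
            &s[..idx]
        }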

    ---

    For anyone curious: this is great for large, disjointed, and/or poorly documented code bases. If you keep yours tight, with files smaller than ~600 lines, it is almost always better to nudge LLMs into reading whole files.

    • Runonthespot 17 hours ago

      Nice catch- should be fixed in latest

  • dorian-graph 21 hours ago

    There's also https://github.com/bartolli/codanna, which is similarly new. I'll have to try that again, and this one.

    • CuriouslyC 20 hours ago

      I've benchmarked the code search MCPs extensively, and agents with LSP-aware MCPs outperform agents using raw indexed stores quite handily. Serena, as janky as it is, is a better enabler than Codanna.

  • ozten a day ago

    This generalizes to a whole new category of tools: UX which requires more thought and skill, but is way more powerful. Human devs are mostly too lazy to use, but LLMs will put in the work to use them.

    • abeyer 19 hours ago

      > UX which requires more thought and skill, but is way more powerful. Human devs are mostly too lazy to use

      Really? My thinking is more that human devs are way too likely to sink time into powerful but complex tools that may end up being a yak shave with minimal/no benefit in the end. "too lazy to use" doesn't seem like a common problem from what I've seen.

      Not that the speed of an agent being able to experiment with this kind of thing isn't a benefit... but not how I would have thought to pose it.

  • CuriouslyC 20 hours ago

    I actually have a WIP library for this. The indexing server isn't where I want it just yet, but I have an entire agent toolkit that does this stuff, and the indexing server is quite advanced, with self-tuning, RAPTOR/LSP integration, solving for an optimal result set using a knapsack formulation, etc.

    https://github.com/sibyllinesoft/grimoire

    • threecheese 18 hours ago

      I have to know, what is the Lens SPI? The link in your readme is broken, and Kagi results for this cannot possibly be right.

      • CuriouslyC 16 hours ago

        Lens is basically a Rust, local-first, mmapped file-based search store. It combines RAPTOR with LSP, semantic vectors, and a dual dense/sparse encoding, and it can learn a function over those to tune the weights of the relevance sources adaptively per query using your data. It also uses linear programming to select an "efficient" set of results that minimizes mutual information between result atoms -- regular RAG/rerank pipelines just dump the top K, but those often have a significant amount of overlap, so you bloat context for no benefit.
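
        To make the selection idea concrete, here's a toy greedy sketch of it (the real thing is an LP formulation, and all names here are made up): prefer relevant chunks, penalize overlap with what's already picked, and stay under a context-token budget.

            #[derive(Clone)]
            struct Chunk { text: String, relevance: f64, tokens: usize }

            // Crude redundancy proxy: fraction of shared words between two chunks.
            fn overlap(a: &str, b: &str) -> f64 {
                use std::collections::HashSet;
                let aw: HashSet<&str> = a.split_whitespace().collect();
                let bw: HashSet<&str> = b.split_whitespace().collect();
                let denom = aw.len().min(bw.len()).max(1);
                aw.intersection(&bw).count() as f64 / denom as f64
            }

            // Greedily take the chunk with the best marginal score until the budget is spent.
            fn select(mut pool: Vec<Chunk>, budget: usize, lambda: f64) -> Vec<Chunk> {
                let mut picked: Vec<Chunk> = Vec::new();
                let mut used = 0usize;
                while !pool.is_empty() {
                    let score = |c: &Chunk| {
                        let redundancy = picked
                            .iter()
                            .map(|p| overlap(&c.text, &p.text))
                            .fold(0.0_f64, f64::max);
                        c.relevance - lambda * redundancy
                    };
                    let i = (0..pool.len())
                        .max_by(|&x, &y| score(&pool[x]).partial_cmp(&score(&pool[y])).unwrap())
                        .unwrap();
                    let c = pool.swap_remove(i);
                    if used + c.tokens <= budget {
                        used += c.tokens;
                        picked.push(c);
                    }
                }
                picked
            }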

  • rane a day ago

    Cool. Some AI fluff can be detected in the README.

    For example, under the "Why CK?" section, "For teams" is of no substance compared to "For developers".

  • joecarpenter 13 hours ago

    Well, there's also mine https://github.com/VectorOps/know with some details what it does and how: https://vectorops.dev/blog/post-1/

  • 0x696C6961 a day ago

    This is cool, but I don't understand why it tries to re-implement (a subset of) grep. Not only that, but the grep-like behaviour is the default and I need to opt-in to the semantic search using the --sem flag. If I want grep I can use grep/ripgrep.

    • Runonthespot a day ago

      Fair comment - the initial thinking was to have both, and in fact a hybrid mode too, which fuses results so you can get chunks that match both semantically and on keyword search in one result set. A reranker could be added later too.
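
      To sketch the fusion part: something like reciprocal rank fusion is one common way to combine the two lists (toy example, not necessarily the exact scoring ck will ship):

          use std::collections::HashMap;

          // Reciprocal rank fusion: each list contributes 1 / (k + rank), so a chunk
          // ranked highly by either the keyword or the semantic search floats to the top.
          fn reciprocal_rank_fusion(rankings: &[Vec<String>], k: f64) -> Vec<(String, f64)> {
              let mut scores: HashMap<String, f64> = HashMap::new();
              for ranking in rankings {
                  for (rank, id) in ranking.iter().enumerate() {
                      *scores.entry(id.clone()).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
                  }
              }
              let mut fused: Vec<(String, f64)> = scores.into_iter().collect();
              fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
              fused
          }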

      • alvis a day ago

        Or another way of thinking about it: how much of a penalty are we talking about for semantic vs. conventional grep?

        My thinking is that for a large codebase, sorting embedding matches may be more efficient than reading all files, and hence there is no point in putting semantic search behind a --sem flag.

    • CuriouslyC 20 hours ago

      The reason to overload grep is that the agents already understand most of the semantics and are primed to use it, so it's a small lift to get them to call a modified grep with some minor additional semantics.

  • rane a day ago

    I tried it in my relatively small project.

        ~/c/l/web % ck --sem 'error handling'
        ℹ Semantic search: top 10 results, threshold ≥0.6
        ⠹ Searching with semantic mode...
    
    All I got was a spinning M2 Mac fan after a minute, so I gave up.

    • Runonthespot a day ago

      interesting - can I ask you to try a ck --index . ?

      • postalcoder 21 hours ago

        It'd be nice if it respected gitignore. It's turning my M4 MBP into a space heater too.

        • Runonthespot 21 hours ago

          coming up next.

          • mijoharas 20 hours ago

            FYI, I just grabbed the same lib that ripgrep uses. That bit is extracted into its own crate, IIRC, and it was quite nice and simple to use.
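
            A minimal sketch, assuming the lib in question is the ignore crate (ripgrep's directory walking and gitignore matching, extracted):

                use ignore::WalkBuilder;

                fn main() {
                    // Walks the tree while skipping anything matched by .gitignore, .ignore, etc.
                    for entry in WalkBuilder::new(".").build().flatten() {
                        if entry.file_type().map_or(false, |ft| ft.is_file()) {
                            println!("{}", entry.path().display());
                        }
                    }
                }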

          • postalcoder 10 hours ago

            I saw that you added it, thanks! I'll give this a shot for a few days.

  • nwienert 20 hours ago

    The biggest improvement to CC would be having it use the TypeScript LSP to immediately get type feedback and inspect types.

    I added the VS Code plugin but it didn't seem to help; likewise, searching around yesterday I didn't find anything, surprisingly.

  • jarek83 19 hours ago

    Man, that's a great thing! Really looking forward to Ruby and Elixir. Fingers crossed for you!

    • Runonthespot 17 hours ago

      Added Ruby, but Elixir isn't very well supported by tree-sitter.

  • skybrian a day ago

    This looks very useful.

    Looks like you have to build an index. When should it be rebuilt? Any support for automatic rebuilds?

    • Runonthespot a day ago

      Yes - files are hashed and checked whenever you search, so the index should always remain up to date. Only changed files are reindexed. You can also inspect the metadata (chunking semantics, embeddings); it's all in the .ck sidecar.
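
      In other words, roughly this shape (hypothetical names, not the actual ck internals):

          use std::collections::HashMap;
          use std::fs;
          use std::hash::{Hash, Hasher};
          use std::path::{Path, PathBuf};

          // Content hash of a file; any stable hash works for change detection.
          fn file_hash(path: &Path) -> std::io::Result<u64> {
              let bytes = fs::read(path)?;
              let mut hasher = std::collections::hash_map::DefaultHasher::new();
              bytes.hash(&mut hasher);
              Ok(hasher.finish())
          }

          // Compare current hashes against those stored in the sidecar index and
          // return only the files that need re-chunking and re-embedding.
          fn stale_files(
              stored: &HashMap<PathBuf, u64>,
              candidates: &[PathBuf],
          ) -> std::io::Result<Vec<PathBuf>> {
              let mut stale = Vec::new();
              for path in candidates {
                  let current = file_hash(path)?;
                  if stored.get(path) != Some(&current) {
                      stale.push(path.clone()); // new or modified since the last index
                  }
              }
              Ok(stale)
          }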

  • abyesilyurt a day ago

    What model are you using to create the embeddings?

    • Runonthespot a day ago

      BAAI/bge-small-en-v1.5, but considering switching this to Google's latest Gemma embedding model - it's fairly switchable.

  • dprophecyguy a day ago

    This is so cool. Is there any other tool that is more mature?

    • commandar a day ago

      Roo has codebase indexing that it'll instruct the agent to use if enabled.

      It uses whatever arbitrary embedding model you want to point it at and backs it with a qdrant vector db. Roo's documentation points you toward free cloud services for this, but I found those to be dreadfully slow.

      Fortunately, it takes about 20 minutes to spin up a qdrant docker container and install ollama locally. I've found the nomic text embed model is fast enough for the task even running on CPU. You'll have an initial spin-up as it embeds the existing codebase, and then it's basically real-time as changes are made.

      FWIW, I've found that the indexing is worth the effort to set up. The models are generally better about finding what they need without completely blowing up their context windows when it's available.

    • redhale a day ago

      I recently saw SemTools [0], but have not tried it out yet myself.

      [0] https://github.com/run-llama/semtools

      • mdaniel 21 hours ago

        I don't see how these are apples-to-apples given its "send me all your content" approach <https://github.com/run-llama/semtools#:~:text=get%20your%20a...>

        versus https://github.com/BeaconBay/ck#:~:text=yes%2C%20completely%...

      • fakebizprez a day ago

        LlamaIndex has been batting a thousand since their inception. Can't go wrong with this tool, either.

        • Runonthespot a day ago

          Agreed - Logan is a legend. This is similar but simpler - no dependency on external models (might add it).

          • cheesyFishes a day ago

            Thanks!

            Seems like CLI tools are all the rage these days

          • fakebizprez a day ago

            We really are living in the golden age of the terminal. I thought this would take a chunk out of the TypeScript/Node market share among young coders, but I'm starting to see more and more of these animals building TUIs using nothing but npm packages.

            Have they no shame?

            • floydnoel a day ago

              Last week I built my own CLI coding agent tool using just Node.js and zero dependencies! It was a lot of fun to build; really, I think everyone should try it.

    • Runonthespot a day ago

      Help make it mature :D File any issues you find.

  • Alifatisk a day ago

    At this point, we aren't even saying it's written in Rust anymore; we just mention it in the title whenever possible.

    I did look into the core features and I gotta say, that looked quite cool. It's like Google search, but for the codebase. What does it take to support other languages?

    • Runonthespot a day ago

      It supports most languages but needs a bit of tree-sitter setup to do semantic chunking. Let me know what languages you’d like added

      • t0mas88 a day ago

        Java would be useful as well for larger backend codebases.

      • Alifatisk a day ago

        Thanks for your quick response, most large codebases I've been fiddling on is Ruby!

      • benzible a day ago

        I'd love to see elixir support.

      • Bigsy a day ago

        Clojure would be awesome

  • dang 18 hours ago

    [stub for offtopicness]