Show HN: Luminal – Open-source, search-based GPU compiler

(github.com)

87 points | by jafioti 11 hours ago

47 comments

  • Alifatisk 10 hours ago

    So wait, am I understanding this correctly?

    Instead of applying just predetermined optimization rules or patterns, the compiler formulates the problem as searching through many possible configurations or versions of the code. Each possible version can have different arrangements, tiling sizes, thread block configurations, memory access patterns, and instruction sequences, right?

    And from my understanding, the “search space” is just a collection of all potential versions of the code (kernels) that the compiler can generate from the original input. So for example, the space might include

    - Different ways to partition workloads among GPU threads and blocks

    - Varying memory access strategies (using shared memory, global memory)

    - Various instruction-level optimizations or reordering

    - Alternative loop unroll factors or vectorization strategies

    The compiler then programmatically produces a large number of candidate kernels by combining different optimizations and configurations. Among these millions of candidates, the compiler tries to find the one that performs best.

    In that case, can the compiler print out which GPU configuration works best for that computer? And will that configuration be applicable to all computers with the same setup?

    This is such an interesting technique.

    • jafioti 4 hours ago

      yup! we build a search space by iteratively applying rewrite rules in every possible order (using e-graphs to do this efficiently). the rewrites alter stuff like looping / tiling structures, and include algebraic rewrites like softmax to online softmax (and then flash attention).

      yes, optimized kernels for one system will work on other systems with the same hardware. it's fine to take a long time compiling if you just compile once and run a lot.
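
      roughly, here's the e-graph mechanic in miniature, using the egg crate with a couple of made-up algebraic rules (illustrative only, not Luminal's actual rules or IR):

          use egg::{rewrite as rw, *};

          fn main() {
              // two toy rewrite rules; the real ones cover tiling, loop
              // structure, and algebraic rewrites like online softmax
              let rules: &[Rewrite<SymbolLang, ()>] = &[
                  rw!("mul-comm"; "(* ?a ?b)" => "(* ?b ?a)"),
                  rw!("mul-assoc"; "(* (* ?a ?b) ?c)" => "(* ?a (* ?b ?c))"),
              ];

              // seed the e-graph and apply the rules in every possible order;
              // equal subterms are shared, so the space stays compact
              let expr: RecExpr<SymbolLang> = "(* (* a b) c)".parse().unwrap();
              let runner = Runner::default().with_expr(&expr).run(rules);

              // extract the cheapest equivalent term (AST size here; measured
              // kernel runtime in the scheme described above)
              let extractor = Extractor::new(&runner.egraph, AstSize);
              let (cost, best) = extractor.find_best(runner.roots[0]);
              println!("best (cost {cost}): {best}");
          }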

      • _0ffh 3 hours ago

        Is/will it be possible to just write a model component with Luminal and then use that as a building block in e.g. Torch or JAX?

      • almostgotcaught 26 minutes ago

        > take a long time compiling

        Lol, NP-hard is still NP-hard no matter how you slice it (especially given vague objective functions).

    • jakestevens2 9 hours ago

      Your description is exactly right. We create a search space of all possible kernels and find the best ones based on runtime. The best heuristic is no heuristic.

      This obviously creates a combinatorial problem that we mitigate with smarter search.

      The kernels are run on the computer the compiler is running on. Since runtime is our gold standard, it will search for the best configuration for your hardware target. As long as the setup is mostly the same, the optimizations should carry over, yes.
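
      In (hypothetical) code, the selection step really is that direct: time every candidate on the actual device and keep the fastest. The Candidate type and launch hook below are made up for illustration.

          use std::time::{Duration, Instant};

          // hypothetical candidate: anything we can launch on the target device
          struct Candidate {
              launch: fn(),
          }

          // time each candidate on the real hardware and keep the fastest;
          // measured runtime is the whole cost model, no heuristics
          fn pick_fastest(candidates: &[Candidate], reps: u32) -> (&Candidate, Duration) {
              let reps = reps.max(1);
              candidates
                  .iter()
                  .map(|c| {
                      let start = Instant::now();
                      for _ in 0..reps {
                          (c.launch)();
                      }
                      (c, start.elapsed() / reps)
                  })
                  .min_by_key(|(_, t)| *t)
                  .expect("at least one candidate")
          }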

      • UncleOxidant 9 hours ago

        How long does this typically take? It sounds time consuming. Also, it seems like this could be similar to doing a GA?

        • jakestevens2 9 hours ago

          That depends on the model architecture and how it was written since that informs the size of the search space.

          The typical range is 10 mins to 10 hours. It won't be fast but you only have to do it once and then those optimizations are set for every forward pass.

          • sitkack 2 hours ago

            Do you learn the capabilities of the underlying hardware relative to the kernel src? You should be able to start predicting perf using learned static profiling.

            • jakestevens2 2 hours ago

              Not today but we will implement memoization of kernels for each hardware backend, yes.

        • jakestevens2 9 hours ago

          You can also set a time budget for how long you'd like the search to run, to avoid wasting time on diminishing returns.

      • pilooch 5 hours ago

        Is this a bit similar to what TensorRT does, but in a more open manner?

  • diggan 10 hours ago

    > Luminal can run Q8 Llama 3 8B on M-series Macbooks at 15-25 tokens per second. The goal is to become the fastest ML framework for any model on any device.

    Great that some numbers are provided, but in isolation I'm not sure what they tell us. It would be helpful to also share what tok/s you'd get with llama.cpp or something else on the same hardware, so we can actually understand if it's faster or not :) Also including the prompt processing speed would be a bonus!

    • Reubend 24 minutes ago

      Yeah those numbers look very low to me for something that's supposed to represent a state of the art optimization technique. I think that's lower than other implementations, although it depends on the MacBook.

      Nonetheless this project looks very cool, and I hope they can continue improving it to the point where it indeed beats human-led optimizations.

    • jafioti 9 hours ago

      a lot of the search is still being optimized, so we don't match super hand-optimized kernels like the ones llama.cpp has and def don't match their tps yet, but i want to make a perf tracking page to see improvements over time and prevent regressions

  • aleinin 9 hours ago

    Cool project! How do you think about targeting hardware-specific ISAs directly? There’s an interesting paper from Citadel (https://arxiv.org/pdf/1804.06826) that highlights inefficiencies in nvcc for the Volta architecture. Do you see Luminal’s search-based paradigm eventually extending beyond outperforming handwritten kernels, towards actually competing with NVIDIA’s compiler optimizations at the PTX level?

    • jafioti 9 hours ago

      yep! currently we're emitting cuda / metal, but once the search is better i want to directly emit ptx / low-level asm on other hardware.

      • Lerc 4 hours ago

        I don't suppose you have an eye towards Verilog in the long term?

        I'm curious as to the breadth of possibilities that could be searched. I would imagine something like this could invent flash attention if it cast its net wide enough, but that is a pretty broad net. [Edit: I scrolled back and saw flash attention was explicitly mentioned, cool stuff]

        • jafioti 2 hours ago

          you suppose correctly ;)

  • sakras 8 hours ago

    I see you guys are using Egg/Egglog! I've been mildly interested in e-graphs for quite a while, glad to see they're gaining traction!

    • PoignardAzur 6 hours ago

      Right, my first thought when reading the blurb was "kinda sounds like e-graphs?"

      • jafioti 4 hours ago

        e-graphs are awesome! none of this would be possible without them.

  • helltone 6 hours ago

    I have a background in program analysis, but I'm less familiar with the kind of kernels you are optimising.

    - Can you give some more insight on why 12 ops suffice for representing your input program?

    - With such a small number of ops, isn't your search space full of repeated patterns? I understand the desire to have no predefined heuristics, but it seems that learning some heuristics/patterns would massively help reduce the space.

    • jafioti 6 hours ago

      we're just optimizing linear algebra, which is mostly made up of patterns of simple ops. for instance, matmul is just broadcasted multiply -> sum reduce.

      the search does common subexpression elimination by default. if two patterns are unioned in the search space, that union applies to every occurrence of the pattern at the same time, so e-graphs help keep the search space smaller.
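
      the matmul-from-primitives point, as a plain scalar sketch (not Luminal's IR, just the shape of the computation):

          // matmul as a broadcasted elementwise multiply followed by a
          // sum-reduce over the shared k dimension (row-major slices)
          fn matmul(a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32> {
              let mut out = vec![0.0f32; m * n];
              for i in 0..m {
                  for j in 0..n {
                      // sum-reduce over the products a[i, :] * b[:, j]
                      out[i * n + j] = (0..k).map(|p| a[i * k + p] * b[p * n + j]).sum();
                  }
              }
              out
          }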

      • helltone 5 hours ago

        Right I think I see it.

        This is insanely cool.

        But then there are performance tradeoffs in reusing intermediates vs recomputing that I think you can't represent.

        Some of these may affect numerical stability btw. See e.g. https://herbie.uwplse.org/

        There is so much potential in this project.

        • jafioti 4 hours ago

          ah i see the confusion. we do common subexpression elimination of the terms in the search space (which lets a single rewrite application cover many repeated patterns), but the search can still choose to re-use or recompute patterns of terms when we extract dags after the search space is built. so various levels of recomputation are searched.

          right now, since we're profiling kernels and have a reference output from the unoptimised version, we can directly measure deviation of the profiled outputs "for free" since we're already computing them for runtime. tbh this isn't what i want long term; i want to bake numerical stability natively into the search space, so we only extract dags that would produce stable outputs. hopefully that'll be solved soon.
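
          the "for free" check amounts to something like this (sketch; the tolerance is made up):

              // compare a candidate kernel's profiled output against the
              // reference output of the unoptimised version
              fn max_abs_deviation(candidate: &[f32], reference: &[f32]) -> f32 {
                  candidate
                      .iter()
                      .zip(reference)
                      .map(|(c, r)| (c - r).abs())
                      .fold(0.0_f32, f32::max)
              }

              fn numerically_acceptable(candidate: &[f32], reference: &[f32]) -> bool {
                  // illustrative tolerance; a relative-error check would be stricter
                  max_abs_deviation(candidate, reference) < 1e-3
              }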

  • ttoinou 3 hours ago

    Is it possible that, with all the models you’re testing, you’re going to find simple rules for optimizing kernels so that we won’t need a meta-optimizer in the future, and could just write something straightforward that applies the most important optimizations? Maybe the current search always ends up producing the same kind of code in the end.

    • jakestevens2 2 hours ago

      See my comment on a deeper thread about this. Eventually we will implement static profiling for common kernels so the search doesn't actually have to manually run all of them; many will have a known runtime that we can tie to them.
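
      A sketch of that memoization idea (the cache key and launch hook are hypothetical):

          use std::collections::HashMap;
          use std::time::{Duration, Instant};

          // cache measured runtimes keyed by (kernel hash, hardware id) so
          // previously profiled candidates don't need to be re-run
          struct ProfileCache {
              runtimes: HashMap<(u64, String), Duration>,
          }

          impl ProfileCache {
              fn runtime(&mut self, kernel_hash: u64, device: &str, launch: impl Fn()) -> Duration {
                  *self
                      .runtimes
                      .entry((kernel_hash, device.to_string()))
                      .or_insert_with(|| {
                          let start = Instant::now();
                          launch();
                          start.elapsed()
                      })
              }
          }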

  • AkashKarnatak 10 hours ago

    Very cool project. tinygrad used to have ~25 ops but it has now grown to 86, and I believe that is primarily to support hardware features like tensor cores and TMA. I don't think Luminal supports tensor cores as of now; how do you think the ops will evolve as the library matures?

    • jafioti 10 hours ago

      we do support tensor cores, but those ops are only part of the search space, so there's virtually no overhead for them. the frontend and main ir is only 12 ops, and we can add hardware-specific ops into the search space with only a bit of code in the codegen pass to support them.
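
      for a rough picture, a 12-op primitive set looks something like this (op names are illustrative guesses, not necessarily Luminal's exact list):

          // a small linear-algebra IR in the spirit described above; matmul,
          // softmax, attention, etc. all lower to compositions of these
          #[derive(Debug, Clone, PartialEq)]
          enum PrimOp {
              // unary
              Exp2,
              Log2,
              Sin,
              Sqrt,
              Recip,
              // binary (broadcasted elementwise)
              Add,
              Mul,
              Mod,
              LessThan,
              // reductions over one axis, plus a layout op
              SumReduce(usize),
              MaxReduce(usize),
              Contiguous,
          }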

  • cedws 4 hours ago

    Around the time DeepSeek R2 was released there was chatter about how DeepSeek had used an “undocumented” PTX instruction to squeeze as much performance as possible from their hardware. My understanding is that it wasn’t any kind of secret instruction, but just a novel way that they put the instructions together.

    Would Luminal be capable of rediscovering this trick?

    • jafioti 4 hours ago

      hopefully! i don't know the exact trick they used, but the idea is to design the search space such that that trick is discoverable.

  • giacomoforte 4 hours ago

    Hey, I have been following your project for a while because I'm kinda interested in program synthesis. Anyway, my question is: how scalable is the search process itself? Is it a good fit for GPU clusters? I guess benchmarking of candidate kernels takes much longer than generating candidate kernels, or not?

    • jafioti 2 hours ago

      yep, parallelized profiling across many devices is definitely something i want to add.

  • fancyfredbot 5 hours ago

    This is a good idea. Do you use a cost model for the search or are you actually executing kernels? What kind of heuristics do you use to keep the search space from becoming intractable?

    • matthewjgunton 5 hours ago

      our cost function right now is just the latency of the kernel. we execute on the hardware, as it's really the only accurate way to see how fast the kernel will run

    • jafioti 4 hours ago

      we're working on techniques like mcts and RL (e.g. AlphaGo) to manage the search space, but you'd be surprised how far you can get if you carefully design the search space to prevent explosions.

  • efnx 8 hours ago

    Cool! How is this project different from the tuning process in TVM?

    • jafioti 8 hours ago

      basically autotuning on steroids. instead of searching single dimensions of optimization (tile sizing, etc.) we search through full algebraic rewrites (like rewriting softmax to online softmax) and various loop / tiling structures in the same unified search space.
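
      for reference, the online-softmax rewrite in scalar form: one pass keeps a running max and rescales the normalizer, instead of separate max and exp-sum passes (sketch, not Luminal code):

          // online softmax: a single pass maintains a running max `m` and a
          // running normalizer `d`, rescaling `d` whenever the max changes
          fn online_softmax(x: &[f32]) -> Vec<f32> {
              let (mut m, mut d) = (f32::NEG_INFINITY, 0.0_f32);
              for &v in x {
                  let m_new = m.max(v);
                  d = d * (m - m_new).exp() + (v - m_new).exp();
                  m = m_new;
              }
              x.iter().map(|&v| (v - m).exp() / d).collect()
          }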

  • GregarianChild 5 hours ago

    How is this different from superoptimisation?

    Also, how do you ensure that newly generated kernels are correct w.r.t. the original naive kernel that you use as specification?

    • jafioti 4 hours ago

      very similar to superoptimisation, but most superoptimisers try to tackle Turing-complete code. by sticking to a very limited space of computation (linear algebra with 12 primitive ops), the search remains tractable.

      the search space is designed to remain logically equivalent at all times, by virtue of how it's built (we only apply rewrite rules we know don't change the logical equivalence).

  • dvdplm 9 hours ago

    This is very cool. Do you have any advice on papers to read to understand the details of search based compilation a bit more?

  • UncleOxidant 8 hours ago

    When you say (in the video) that you can target more exotic hardware, what about things like FPGA accelerators (maybe taking advantage of TVM's FPGA backend)?

    Also, what about CUDA alternatives like ROCm?

    • matthewjgunton 8 hours ago

      Yup. We are totally hardware agnostic

      • matthewjgunton 8 hours ago

        i should add this applies to the language too. we currently support Metal (Apple's language) and CUDA, with extensions planned for others

  • fairestvalue 5 hours ago

    how do you differ from tinygrad?