So wait, am I understanding this correctly?
Instead of just applying predetermined optimization rules or patterns, the compiler formulates the problem as a search through many possible configurations or versions of the code. Each version can have different arrangements, tiling sizes, thread block configurations, memory access patterns, and instruction sequences, right?
And as I understand it, the “search space” is just the collection of all potential versions of the code (kernels) that the compiler can generate from the original input. So, for example, the space might include:
- Different ways to partition workloads among GPU threads and blocks
- Varying memory access strategies (using shared memory, global memory)
- Various instruction-level optimizations or reordering
- Alternative loop unroll factors or vectorization strategies
The compiler then programmatically produces a large number of candidate kernels by combining different optimizations and configurations. Among these millions of candidates, the compiler tries to find the one that performs best.
In that case, can the compiler print out which GPU configuration works best for that computer? And will that configuration be applicable to all computers with the same setup?
This is such an interesting technique.
yup! we build a search space by iteratively applying rewrite rules in every possible order (using e-graphs to do this efficiently). the rewrites alter things like loop / tiling structures, and include algebraic rewrites like softmax to online softmax (and from there, flash attention).
yes, optimized kernels for one system will work on other systems with the same hardware. it's fine to take a long time compiling if you just compile once and run a lot.
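For readers unfamiliar with equality saturation, here is a minimal illustrative sketch using the egg crate mentioned elsewhere in this thread (toy algebraic rules only; this is not Luminal's actual rewrite set or API, and it assumes egg as a dependency). Rewrite rules are applied in every possible order inside an e-graph, and the cheapest equivalent term is extracted with a stand-in cost function:

```rust
use egg::{rewrite as rw, AstSize, Extractor, RecExpr, Rewrite, Runner, SymbolLang};

fn main() {
    // Toy algebraic rewrite rules (illustrative only).
    let rules: Vec<Rewrite<SymbolLang, ()>> = vec![
        rw!("mul-comm"; "(* ?a ?b)" => "(* ?b ?a)"),
        rw!("mul-one";  "(* ?a 1)"  => "?a"),
        rw!("add-zero"; "(+ ?a 0)"  => "?a"),
    ];

    // Start term: (a * 1) + 0
    let start: RecExpr<SymbolLang> = "(+ (* a 1) 0)".parse().unwrap();

    // Saturate the e-graph: rules are applied in every possible order,
    // but equivalent terms are shared instead of enumerated one by one.
    let runner = Runner::default().with_expr(&start).run(&rules);

    // Extract the cheapest equivalent term. AstSize is a stand-in cost
    // function here; the cost described in this thread is measured kernel latency.
    let extractor = Extractor::new(&runner.egraph, AstSize);
    let (cost, best) = extractor.find_best(runner.roots[0]);
    println!("best (cost {cost}): {best}"); // extracts the simplified term "a"
}
```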
Is/will it be possible to just write a model component with Luminal and then use that as a building block in e.g. Torch or JAX?
> take a long time compiling
Lol, NP-hard is still NP-hard no matter how you slice it (especially given vague objective functions).
Your description is exactly right. We create a search space of all possible kernels and find the best ones based on runtime. The best heuristic is no heuristic.
This obviously creates a combinatorial problem that we mitigate with smarter search.
The kernels are run on the computer the compiler is running on. Since runtime is our gold standard, the search will find the best configuration for your hardware target. As long as the setup is mostly the same, the optimizations should carry over, yes.
How long does this typically take? It sounds time-consuming. Also, it seems like this could be similar to doing a GA?
That depends on the model architecture and how it was written since that informs the size of the search space.
The typical range is 10 mins to 10 hours. It won't be fast but you only have to do it once and then those optimizations are set for every forward pass.
Do you learn the capabilities of the underlying hardware relative to the kernel src? You should be able to start predicting perf using learned static profiling.
Not today but we will implement memoization of kernels for each hardware backend, yes.
You can also set a time budget for how long you'd like the search to run for to avoid wasting time on diminishing returns.
Is this a bit similar to what TensorRT does, but in a more open manner?
> Luminal can run Q8 Llama 3 8B on M-series Macbooks at 15-25 tokens per second. The goal is to become the fastest ML framework for any model on any device.
Great that some numbers are provided, but in isolation I'm not sure how much they tell us. It would be helpful to also share what tok/s you'd get with llama.cpp or something else on the same hardware, so we can actually tell whether it's faster or not :) Including prompt processing speed would be a bonus!
Yeah those numbers look very low to me for something that's supposed to represent a state of the art optimization technique. I think that's lower than other implementations, although it depends on the MacBook.
Nonetheless this project looks very cool, and I hope they can continue improving it to the point where it indeed beats human-led optimizations.
a lot of the search is still being optimized, so we don't match the super hand-optimized kernels llama.cpp has and we def don't match their tps yet. i want to make a perf tracking page to see improvements over time and prevent regressions.
Cool project! How do you think about targeting hardware-specific ISAs directly? There’s an interesting paper from Citadel (https://arxiv.org/pdf/1804.06826) that highlights inefficiencies in nvcc for the Volta architecture. Do you see Luminal’s search-based paradigm eventually extending beyond outperforming handwritten kernels, towards actually competing with NVIDIA’s compiler optimizations at the PTX level?
yep! currently we're emitting cuda / metal, but once the search is better i want to directly emit ptx / low-level asm on other hardware.
I don't suppose you have an eye towards verilog in the long term?
I'm curious as to the breadth of possibilities that could be searched. I would imagine something like this could invent flash attention if it cast its net wide enough, but that is a pretty broad net. [Edit: I scrolled back and saw flash attention was explicitly mentioned, cool stuff]
you suppose correctly ;)
I see you guys are using Egg/Egglog! I've been mildly interested in egraphs for quite a while, glad to see they're gaining traction!
Right, my first thought when reading the blurb was "kinda sounds like e-graphs?"
e-graphs are awesome! none of this would be possible without them.
I have a background in program analysis, but I'm less familiar with the kind of kernels you are optimising.
- Can you give some more insight on why 12 ops suffice for representing your input program?
- With such a small number of ops, isn't your search space full of repeated patterns? I understand the desire to have no predefined heuristics, but it seems that learning some heuristics/patterns would massively help reduce the space.
we're just optimizing linear algebra, which is mostly made up of patterns of simple ops. for instance, matmul is just broadcasted multiply -> sum reduce.
the search does common subexpression elimination by default. if two patterns are unioned in the search space, that union applies to every occurrence of the pattern at the same time, so using e-graphs helps keep the search space smaller.
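To make the decomposition above concrete, here is a minimal sketch in plain Rust (not Luminal code; shapes and names are illustrative) of a matmul written as exactly those two primitives: a broadcasted elementwise multiply into an [m, n, k] intermediate, followed by a sum reduction over k.

```rust
/// Matmul built from two primitive steps: broadcasted multiply, then sum-reduce.
/// `a` is [m, k] row-major, `b` is [k, n] row-major, result is [m, n].
fn matmul_via_primitives(a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32> {
    // Step 1: broadcasted multiply -> intermediate tensor of shape [m, n, k].
    let mut prod = vec![0.0f32; m * n * k];
    for i in 0..m {
        for j in 0..n {
            for p in 0..k {
                prod[(i * n + j) * k + p] = a[i * k + p] * b[p * n + j];
            }
        }
    }
    // Step 2: sum-reduce over the trailing k axis -> [m, n].
    let mut out = vec![0.0f32; m * n];
    for i in 0..m {
        for j in 0..n {
            out[i * n + j] = prod[(i * n + j) * k..(i * n + j + 1) * k].iter().sum();
        }
    }
    out
}
```

A real compiler would of course never materialize the [m, n, k] intermediate; fusing the multiply into the reduction is exactly the kind of rewrite the search is meant to find.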
Right I think I see it.
This is insanely cool.
But then there are performance tradeoffs in reusing intermediates vs recomputing that I think you can't represent.
Some of these may affect numerical stability btw. See eg https://herbie.uwplse.org/
There is so much potential in this project.
ah i see the confusion. we do common subexpression elimination of the terms in the search space (which lets a single application of a rewrite cover many repeated patterns), but the search can still choose to re-use or recompute patterns of terms when we extract dags after the search space is built. so various levels of recomputation are searched.
right now, since we're profiling kernels and we have a reference output from the unoptimised version, we can measure the deviation of profiled outputs "for free", because we're already computing them for the runtime measurement. tbh this isn't what i want long term: i want to bake numerical stability natively into the search space so we only extract dags that produce stable outputs. hopefully that'll be solved soon.
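As a rough illustration of that check (function names and the tolerance are made up; this is not Luminal's code): compare each candidate's profiled output against the reference output of the unoptimised graph and reject candidates that deviate too much.

```rust
/// Largest elementwise absolute difference between reference and candidate outputs.
fn max_abs_deviation(reference: &[f32], optimized: &[f32]) -> f32 {
    reference
        .iter()
        .zip(optimized)
        .map(|(r, o)| (r - o).abs())
        .fold(0.0f32, f32::max)
}

/// Accept a candidate only if its output stays within `tol` of the reference.
fn outputs_match(reference: &[f32], optimized: &[f32], tol: f32) -> bool {
    reference.len() == optimized.len() && max_abs_deviation(reference, optimized) <= tol
}
```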
Is it possible that, with all the models you're testing, you'll end up finding simple rules for optimizing kernels, so that we won't need a meta-optimizer in the future? You could then just write something straightforward that applies the most important optimizations. Maybe the current search always ends up at the same kinds of code in the end.
See my comment on a deeper thread about this. Eventually we will implement static profiling for common kernels so the search doesn't actually have to manually run all of them; many will have a known runtime that we can tie to them.
Very cool project. tinygrad used to have ~25 ops but has now grown to 86, and I believe that is primarily to support hardware features like tensor cores and TMA. I don't think Luminal supports tensor cores as of now; how do you think the ops will evolve as the library matures?
we do support tensor cores, but those ops only live in the search space, so there's virtually no overhead for them. the frontend and main ir are only 12 ops, and we can add hardware-specific ops into the search space with just a bit of extra code in the codegen pass to support them.
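A purely hypothetical sketch of that structure (these are not Luminal's actual 12 ops): a tiny primitive IR the frontend lowers everything to, plus hardware-specific ops that exist only as extra variants in the search space, so codegen needs just one more case to emit them.

```rust
// Hypothetical op sets for illustration only; not Luminal's actual IR.
#[allow(dead_code)]
enum PrimOp {
    // a handful of primitives the frontend lowers everything to
    Add,
    Mul,
    Exp,
    Recip,
    SumReduce { axis: usize },
    MaxReduce { axis: usize },
}

#[allow(dead_code)]
enum SearchOp {
    // every primitive is also a search-space op
    Prim(PrimOp),
    // hardware-specific op a rewrite rule may introduce during search;
    // codegen only needs one extra match arm to emit it
    TensorCoreMatmul { m: usize, n: usize, k: usize },
}
```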
Around the time DeepSeek R2 was released, there was chatter about how DeepSeek had used an “undocumented” PTX instruction to squeeze as much performance as possible from their hardware. My understanding is that it wasn't any kind of secret instruction, just a novel way they put instructions together.
Would Luminal be capable of rediscovering this trick?
hopefully! i don't know the exact trick they used, but the idea is to design the search space such that that trick is discoverable.
Hey, I have been following your project for a while because I'm kinda interested in program synthesis. Anyway, my question is: how scalable is the search process itself? Is it a good fit for GPU clusters? I guess benchmarking candidate kernels takes much longer than generating them, or not?
yep, parallelized profiling across many devices is definitely something i want to add.
This is a good idea. Do you use a cost model for the search or are you actually executing kernels? What kind of heuristics do you use to keep the search space from becoming intractable?
our cost function right now is just the latency of the kernel. we execute on the hardware as it's really the only accurate way to see how fast the kernel will run.
we're working on techniques like MCTS and RL (e.g. AlphaGo) to manage the search space, but you'd be surprised how far you can get if you carefully design the search space to prevent explosions.
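As a toy illustration of that latency-as-cost idea (assumed names, not Luminal's profiler): time a candidate kernel a few times on the target hardware and keep the best observed wall-clock latency.

```rust
use std::time::Instant;

/// Toy latency "cost function": run a candidate several times and keep the
/// best observed wall-clock time. A real profiler would also handle warmup,
/// device synchronization, and outliers; this is only a sketch.
fn measure_latency_secs<F: FnMut()>(mut run_candidate: F, iters: u32) -> f64 {
    let mut best = f64::INFINITY;
    for _ in 0..iters {
        let start = Instant::now();
        run_candidate();
        best = best.min(start.elapsed().as_secs_f64());
    }
    best
}

fn main() {
    // Dummy stand-in for launching a compiled candidate kernel.
    let dummy_kernel = || {
        let v: Vec<f32> = (0..1_000_000).map(|i| (i as f32).sqrt()).collect();
        std::hint::black_box(v);
    };
    println!("best latency: {:.6}s", measure_latency_secs(dummy_kernel, 10));
}
```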
Cool! How is this project different from the tuning process in TVM?
basically autotuning on steroids. instead of searching single dimensions of optimization (tile sizing, etc.) we search through full algebraic rewrites (like rewriting softmax to online softmax) and various loop / tiling structures in the same unified search space.
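For reference, the softmax-to-online-softmax rewrite mentioned here fuses the running max and the sum of exponentials into a single pass by rescaling the running sum whenever the max changes. A minimal sketch (illustrative names only, not Luminal code):

```rust
/// One-pass ("online") softmax normalizer: maintains a running max `m` and a
/// rescaled running sum `d`, so exponentials never overflow and the max/sum
/// phase needs only a single traversal of the input.
fn online_softmax(xs: &[f32]) -> Vec<f32> {
    let (mut m, mut d) = (f32::NEG_INFINITY, 0.0f32);
    for &x in xs {
        let m_new = m.max(x);
        // Rescale the old sum to the new max before adding the new term.
        d = d * (m - m_new).exp() + (x - m_new).exp();
        m = m_new;
    }
    xs.iter().map(|&x| (x - m).exp() / d).collect()
}
```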
How is this different from superoptimisation?
Also, how do you ensure that newly generated kernels are correct w.r.t. the original naive kernel that you use as specification?
very similar to superoptimisation, but most superoptimisers try to tackle Turing-complete code. by restricting ourselves to a very limited space of computation (linear algebra with 12 primitive ops), the search remains tractable.
the search space is designed to remain logically equivalent at all times, by virtue of how it's built (we only apply rewrite rules we know don't change the logical equivalence).
This is very cool. Do you have any advice on papers to read to understand the details of search based compilation a bit more?
a lot of the ideas luminal is built on are here: https://arxiv.org/abs/2304.04332
When you say (in the video) that you can target more exotic hardware, what about things like FPGA accelerators (maybe taking advantage of TVM's FPGA backend)?
Also, what about CUDA alternatives like ROCm?
Yup. We are totally hardware-agnostic.
i should add that this applies to the language too. we currently support Metal (Apple's language) and CUDA, with extensions planned for others.
how do you differ from tinygrad?