Partially Matching Zig Enums

(matklad.github.io)

128 points | by ingve 9 hours ago

83 comments

  • judofyr 7 hours ago

    This is one of the reasons I find it so silly when people disregard Zig «because it’s just another memory unsafe language»: There’s plenty of innovation within Zig, especially related to comptime and metaprogramming. I really hope other languages are paying attention and steal some of these ideas.

    «inline else» is also a very powerful tool to easily abstract away code with no runtime cost.

    • chrismorgan 6 hours ago

      What I’ve seen isn’t people disregarding Zig because it’s just another memory-unsafe language, but rather disqualifying Zig because it’s memory-unsafe, and they don’t want to deal with that, even if some other aspects of the language are rather interesting and compelling. But once you’re sold on memory safety, it’s hard to go back.

      • Sytten 6 hours ago

        This is really the crust of the argument. I absolutely love the Rust compiler, for example; going back to Zig would feel like a regression to me. There is a whole class of bugs that my brain now assumes the compiler will handle for me.

        • pron 6 hours ago

          The problem is, just as they say the stock market has predicted nine of the last five recessions, the Rust compiler stops nine of every five memory safety issues. Put another way, while both Rust and Zig prevent memory safety issues, Zig does it with false negatives while Rust does it with false positives. This is by necessity when using the type system for that job, but it does come at a cost that disqualifies Rust for others...

          Nobody knows whether Rust and/or Zig themselves are the future of low-level programming, but I think it's likely that the future of low-level programming is that programmers who prefer one approach would use a Rust-like language, while those who prefer the other approach would use a Zig-like language. It will be interesting to see whether the preferences are evenly split, though, or whether one of them has clear majority support.

          • tialaramex 5 hours ago

            C++ already illustrates this idea you're talking about and we know exactly where this goes. Rust's false positives are annoying, so programmers are encouraged to further improve the borrowck and language features to reduce them. But the C++ or Zig false negatives just means your program malfunctions in unspecified ways and you may not even notice, so programmers are encouraged to introduce more and more such cases to the compiler.

            The drift over time is predictable, compared to ten years ago Rust has fewer false positives, C++ has more false negatives.

            You are correct to observe that there is no middle choice here, that's Rice's Theorem, non-trivial semantic correctness is Undecidable. But I would argue we already know what you're calling the "false positive" scenario is also not useful, we're just not at the point where people stop doing it anyway.

            • pron 4 hours ago

              > C++ already illustrates this idea you're talking about and we know exactly where this goes.

              No, it doesn't. Zig is safer than C++ (and it's much simpler, which also has an effect on correctness).

              You're making up a binary distinction and then deciding that, because C++ falls on the same side of it as Zig (except it doesn't, because Zig eliminates out-of-bounds access to the same degree as Rust, not C++), what applies to one must apply to the other. There is simply no justification to make that equivalence.

              > There is no middle choice here, that's Rice's Theorem, non-trivial semantic correctness is Undecidable.

              That's nothing to do with Rice's theorem. Proving some properties with the type system isn't a general algorithm; it's a proof you have to work for in every program you write individually. There are languages (Idris, ATS) that allow you to prove any correctness property using the type system, with no false positives. It's a matter of the effort required, and there's nothing binary about that.

              To get a sense of the theoretical effort (the practical effort is something to be measured empirically, over time) consider the set of all C programs and the effort it would take to rewrite an arbitrary selection of them in Rust (while maintaining similar performance and footprint characteristics). I believe the effort is larger than doing the same to translate a JS program to a Haskell program.

              • tialaramex 3 hours ago

                > There is simply no justification to make that equivalence.

                I explained in some detail exactly why this equivalence exists. I actually have a small hope that this time there are enough people who think it's a bad idea that we don't have to watch this play out for decades before the realisation as we did with C and C++.

                Yes it's exactly Rice's Theorem, it's that simple and that drastic. You can choose what to do when you're not sure, but you can't choose (no matter how much effort you imagine applying) to always be sure†, that Undecidability is what Henry Rice proved. The languages you mention choose to treat "not sure" the same as "nope", like Rust does, you apparently prefer languages like Zig or C++ which instead treat "not sure" as "it's fine". I have explained why that's a terrible idea already.

                The underlying fault, which is why I'm confident this reproduces, is in humans. To err is human. We are going to make mistakes and under the Rust model we will curse, perhaps blame the compiler, or the machine, and fix our mistake. In C++ or Zig our mistake compiles just fine and now the software is worse.

                † For general purpose languages. One clever trick here is that you can just not be a general purpose language. Trivial semantic properties are easily decided, so if your language can make the desired properties trivial then there's no checking and Rice's Theorem doesn't apply. The easy example is, if my language has no looping type features, no recursive calls, nothing like that, all its programs trivially halt - a property we obviously can't decidably check in a general purpose language.

                • pron 2 hours ago

                  > I explained in some detail exactly why this equivalence exists.

                  No, you assumed that Zig and C++ are equivalent and concluded that they'll follow a similar trajectory. It's your premise that's unjustified.

                  A problem you'd have to contend with is that Rust is much more similar to C++ than Zig in multiple respects, which may matter more or less than the level of safety when predicting the language trajectory.

                  > But you can't choose (no matter how much effort you imagine applying) to always be sure

                  That is not Rice's theorem. You can certainly choose to prove every program correct. What you cannot do is have a general mechanism that would prove all programs in a certain language correct.

                  > One clever trick here is that you can just not be a general purpose language.

                  That's not so much a clever trick as the core of all simple (i.e. non-dependent) type systems. Type-safety in those languages then trivially implies some property, which is an inductive invariant (or composable invariant) that's stronger than some desired property. E.g. in Rust, "borrow/lifetime-safety" is stronger than UAF-safety.

                  However, because an effort to prove any property must exist, we can find it for some language that trivially offers it by looking at the cost of translating a correct program in some other language that doesn't guarantee the property to one that does. The reason why it's more of a theoretical point than a practical one is that it could be reasonably argued that writing a memory-safe program in C is harder than doing it in Rust in the first place, but either way, there's some effort there that isn't there when writing the program in, say, Java.

              • surajrmal 3 hours ago

                What is your reason to claim zig is safer than c++?

                • messe 3 hours ago

                  Bounds safety by default, nullability is opt-in and checks are enforced by the type-system, far less "undefined behaviour", less implicit integer casting (the ergonomics could still use some work here), etc.

                  This is on top of the cultural part, which has led to idiomatic Zig being less likely to heap allocate in the first place, and more likely to consider ownership in advance. This part shouldn't be underestimated.

                  • tialaramex 3 hours ago

                    > This part can't be underestimated.

                    You presumably intend "shouldn't be underestimated" rather than "can't be". I agree that culture is crucial, but the technology needs to support that culture and in this respect Zig's technology is lacking. I would love to imagine that the culture drives technology such that Zig will fix the problem before 1.0, but Zig is very much an auteur language like Jai or Odin, Andrew decides and he does not seem to have quite the same outlook so I do not expect that.

                    • messe 3 hours ago

                      > You presumably intend "shouldn't be underestimated" rather than "can't be".

                      Good call, I've fixed that.

              • CyberDildonics 3 hours ago

                > Zig is safer than C++

                Maybe if someone bends over backwards to rationalize it, but not in any real sense. Zig doesn't have automatic memory management or move semantics.

                In C++ you can put bounds checking in your data structures and it is already in the standard data structures. You can't build RAII and moves into zig.

                • pron 2 hours ago

                  > Maybe if someone bends over backwards to rationalize it, but not in any real sense.

                  In a simple, real sense. Zig prevents out-of-bounds access just as Rust does; C++ doesn't. Interestingly, almost all of Rust's complexity is invested in the less dangerous kind of memory unsafety (https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html).

                  > You can't build RAII and moves into zig.

                  So RAII is part of the definition of memory safety now?

                  Why not just declare memory safety to be "whatever Rust does", say that anything that isn't exactly that is worthless, and be done with that, since that's the level of the arguments anyway.

                  We could, of course, argue over which of Rust, Zig, and C++ offers the best contribution to correctness beyond the sound guarantees they make, except these are empirical arguments with little empirical data to make any determination, which is part of my point.

                  Software correctness is such a complicated topic and, if anything, it's become more, not less, mysterious over the decades (see Tony Hoare's astonishment that unsound methods have proven more effective than sound methods in many regards). It's now understood to be a complicated game of confidence vs cost that depends on a great many factors. Those who claim to have definitive solutions don't know what they're talking about (or are making unfounded extrapolations).

                  • CyberDildonics 33 minutes ago

                    > C++ doesn't.

                    Then why do my data structures detect if I go out of bounds?

                    > Interestingly, almost all of Rust's complexity is invested in the less dangerous kind of memory unsafety

                    I didn't say anything about rust.

                    > So RAII is part of the definition of memory safety now?

                    Yes. You can clean up memory allocations automatically with destructors and have value semantics for memory that is on the heap.

                    > Why not just declare memory safety to be "whatever Rust does", say that anything that isn't exactly that is worthless, and be done with that, since that's the level of the arguments anyway.

                    Why are you talking about rust here? Focus on what I'm saying.

                    > We could, of course, argue over which of Rust, Zig, and C++

                    > if anything, it's become more, not less, mysterious over the decades

                    Says who?

                    I don't care about Rust or Zig, I'm saying that these are solved problems in C++ and I don't have to deal with them. Zig does not have destructors and move semantics.

                    • pron 3 minutes ago

                      > Then why do my data structures detect if I go out of bounds?

                      I didn't mean you can't write C++ code that enforces that, I said C++ itself doesn't enforce it.

                      > Yes. You can clean up memory allocations automatically with destructors and have value semantics for memory that is on the heap.

                      Surely there are other ways to do that. E.g. Zig has defer. You can say that you may forget to write defer, which is true, but the implicitness of RAII has caused (me, at least) many problems over the years. It's a pros-and-cons thing, and Zig chooses the side of explicitness.

                      > Says who?

                      Says most people in the field of software correctness (and me https://pron.github.io). In the seventies, the prevalent opinion was that proofs of correctness would be the only viable approach to correctness. Since then, we've learnt two things, both of which were surprising.

                      The first was new results in the computational complexity of model checking (not to be confused with the computational complexity of model checkers; we're talking about the intrinsic computational complexity of the model checking problem, i.e. the problem of knowing whether a program satisfies some correctness property, regardless of how we learn that). This included results (e.g. by Philippe Schnoebelen) showing that even though one might reasonably expect language abstractions to make the problem easier, they don't, even in the worst case.

                      The second was that unsound techniques, including engineering best practices, have proven far more effective than was thought possible in the seventies.

                      As a result, the field of software correctness has shifted its main focus from proving programs correct to finding interesting confidence/cost tradeoffs to reduce the number of bugs, realising that there's no single best path to more correctness (as far as we know today).

                      > I'm saying that these are solved problems in C++ and I don't have to deal with them. Zig does not have destructors and move semantics.

                      That's true, but these are not memory safety guarantees. These are mechanisms that could mitigate bugs (though perhaps cause others), and Zig has other, different mechanisms to mitigate bugs (though perhaps cause others). E.g. see how easy it is to write a type-safe printf in Zig compared to C++, or how Zig handles various numeric overflow issues compared to C++. So it's true that C++ has some features we may find helpful that Zig doesn't, and vice versa, but we can't judge which of them leads to more correct programs. All I said was that Zig offers more safety guarantees than C++, which it does.

              • rowanG077 3 hours ago

                Unless you actually use the simplicity to apply formal methods, I don't think simplicity makes a language safer. The exact opposite. You can see it play out in the C vs C++ arena. C++ is essentially just a more complex C. But I trust modern C++ much more in terms of memory safety.

                • pron 3 hours ago

                  > Unless you actually use the simplicity to apply formal methods, I don't think simplicity makes a language safer.

                  That depends what you mean by "safer", but it is an empirical fact that unsound methods (like tests and code reviews) are extremely effective at preventing bugs, so the claim that formal methods are the only way is just wrong (and I say this as a formal methods guy, although formal methods have come a long way since the seventies, when we thought the point was to prove programs correct).

                  > The exact opposite. You can see it play out in the C vs C++ arena. C++ is essentially just a more complex C. But I trust modern C++ much more in terms of memory safety.

                  I don't understand the logical implication. From the fact that there exists a complicating extension of a language that's safer in some practical way than the original you conclude that complexity always offers correctness benefits? This just doesn't follow logically, and you can immediately see it's false because Zig is both simpler and safer than C++ (and it's safer than C++ even if its simplicity had no correctness benefits at all).

                  • rowanG077 3 hours ago

                    > That depends what you mean by "safer", but it is an empirical fact that unsound methods (like tests and code reviews) are extremely effective at preventing bugs, so the claim that formal methods are the only way is just wrong (and I say this as a formal methods person)

                    I agree that tests and reviews are somewhat effective. That's not the point. The point is that if you look at the history of programming languages, simplicity in general goes against safety. Simplicity also goes against human understanding of code. C and assembly are extremely simple compared to Java, Python, C#, TypeScript, etc., yet programs written in C and assembly are much harder for humans to understand. This isn't just a PL thing either. Simplicity is not the same as easy; it is often the opposite.

                    > I don't understand the logical implication. From the fact that there exists a complicating extension of a language that's safer in some practical way than the original you conclude that complexity always offers correctness benefits? This just doesn't follow logically, and you can immediately see it's false because Zig is both simpler and safer than C++ (and it's safer than C++ even if its simplicity had no correctness benefits at all)

                    It's the clearest example: you take a simple language, add a ton of complexity, and it becomes safer. You are right that Zig is simpler and safer, but it's a greenfield language. Otherwise I might as well say Rust is safer than Zig and also more complex. The point is to isolate simplicity as the factor as much as possible.

                    I would even say that Zig willingly sacrifices safety on the altar of simplicity.

                    • pron 2 hours ago

                      > The point is that if you look at the history of programming languages, simplicity in general goes against safety... C and assembly are extremely simple compared to Java, Python, C#, TypeScript

                      But Java and Python are simpler yet safer than C++, so I don't understand what trend you can draw if there are examples in both directions.

                      > It's the greatest example of you take a simple language, you add a ton of complexity and it becomes more safe.

                      But I didn't mean to imply that it's not possible to add safety with complexity. I meant that when the sound guarantees are the same in two languages, there's an argument to be made that the simpler one would be easier to write more correct programs in. Of course, in this case Zig is not only simpler than C++, but actually offers more sound safety guarantees.

          • Sytten 2 hours ago

            So far I think the adoption in critical infrastructure (Linux, AWS, Windows, etc.) is clearly in Rust's favor, but I agree that something at some point will replace Rust. My belief is that more guardrails will end up winning no matter the language, since the last 50 years of programming have shown us we can't rely on humans to write bug-free code, and it is even worse with LLMs.

        • Zambyte 3 hours ago

          In practice, almost all memory safety related bugs caught by the Rust compiler are caught by the Zig safe build modes at run time. This is strictly worse in isolation, but when you factor in the fact that the rest of the language is much easier to reason about, the better C interop, the simple yet powerful metaprogramming, and the great built in testing tools, the tradeoffs start to become a lot more interesting.

          • dnautics 2 hours ago

            Catching at compile time is much better, though. There are plenty of strange situations that can happen that you'll not reach at runtime (for example, odds of running into a tripwire increase over time, things that can only happen after a certain amount of memory fragmentation -- maybe you forgot an errdefer somewhere, etc.)

        • dnautics 2 hours ago

          Would you be satisfied if there was a static safety checker? (Or if it were a compiler plugin that you trigger by running a slightly different command?) Note that Zig compiles as a single object, so if you import a library and the library author does not do safety checking, your program would still do the safety checking as long as it doesn't cross a C ABI boundary.

          https://www.youtube.com/watch?v=ZY_Z-aGbYm8

        • munchler 3 hours ago

          Nit: I think you want crux in that phrase, not crust.

          • Sytten 2 hours ago

            Thanks! Can't edit anymore, I guess I was feeling hungry this morning

        • conorbergin 4 hours ago

          I think the problem with this attitude is that the compiler becomes a middle manager you have to appease rather than a collaborator. Certainly there are advantages to having a manager, but if you go off the beaten track with Rust, you will not have a good time. I write most of my code in Zig these days and I think being able to segfault is a small price to pay to never have to see `Arc<RefCell<Foo<Bar<Whatever>>>>` again.

          • Sytten 2 hours ago

            I view it as a wonderful collaborator: it tells me automatically where my code is wrong and it gets better with every release; I can't complain really. I think a segfault is a big price to pay, but it depends on the criticality of it, I guess.

          • surajrmal 3 hours ago

            I can't imagine writing C++ or C these days without static analysis or the various LLVM sanitizers. I would think the same applies to Zig. Rather than needing these additional tools, Rust gives you most of their benefits in the compiler. Being able to write bugs and have the code run isn't really something to boast about.

            • conorbergin 2 hours ago

              I would rather rely on a bunch of sanitizers and static analysis because it is more representative of the core problem I am solving: Producing machine code. If I want Rust to solve these problems for me I now have to write code in the Rust model, which is a layer of indirection that I have found more trouble than it's worth.

          • the__alchemist 3 hours ago

            You can write Rust without overusing traits. Regrettably, many Rust libs and domains encourage patterns like that. It's one of the two biggest drawbacks of the Rust ecosystem.

          • ViewTrick1002 3 hours ago

            How do you guard concurrent access in your multithreaded code?

            Due diligence every single time after the tenth refactor?

          • pron 4 hours ago

            Not to mention that `Arc` uses a GC (and not a stellar one, at that)...

            • surajrmal 3 hours ago

              You can use alternative GC such as crossbeam if you want. You're not locked into using an Arc.

    • diegocg 7 hours ago

      As someone who uses D and has been doing things like what you see in the post for a long time, I wonder why other languages would pay attention to these tricks and steal them when they have completely ignored them forever when done in D. Perhaps Zig will make these features more popular, but I'm skeptical.

      • brabel 6 hours ago

        I was trying to implement this trick in D using basic enum, but couldn't find a solution that works at compile-time, like in Zig. Could you show how to do that?

        • Snarwin 2 hours ago

            import std.meta: AliasSeq;
          
            enum E { a, b, c }
          
            void handle(E e)
            {
                // Need label to break out of 'static foreach'
                Lswitch: final switch (e)
                {
                    static foreach (ab; AliasSeq!(E.a, E.b))
                    {
                        case ab:
                            handleAB();
                            // No comptime switch in D
                            static if (ab == E.a)
                                handleA();
                            else static if (ab == E.b)
                                handleB();
                            else
                                static assert(false, "unreachable");
                            break Lswitch;
                    }
                    case E.c:
                        handleC();
                        break;
                }
            }

          • sixthDot 8 minutes ago

            This is simpler using STYX's (D-inspired) "Enum Sets":

                enum U { A, B, C }
                alias UBitSet = bool[U]; // packed
            
                function demo(U u) {
                    const UBitSet setified = u;
                    switch setified do {
                        on [A,B], A, B do { // matches (1 << A | 1 << B), (1 << A), (1 << B)
                            handleAB();
                            switch u do {
                                on A do handleA();
                                on B do handleB();
                                else do assert(0, "unreachable");
                            }
                        }
                        on C do handleC();
                    }
                }

    • ozgrakkurt 4 hours ago

      This perspective that many people take on memory-safety of Rust seems really "interesting".

      Unfortunately for all fanatics, language really doesn't matter that much.

      I have been using KDE for years now and it works perfectly well for me. It has no issues/crashes, it has many features in terms of desktop environment, and also many programs that come with it, like a music player, video player, text editor, terminal, etc., and they all work perfectly well for me. Almost all of this is written in C++. No need to mention the classics like Linux/Chromium, etc., which are all written in C++/C.

      I use Ghostty which is written in zig, it is amazingly polished and works super well as well.

      I have built and used a lot of software written in Rust as well and they worked really well too.

      At some point you have to admit that what matters is the people writing the software, the amount of effort that goes into it, etc.; it is not the language.

      As far as memory safety goes, it really isn't close to being the most important thing unless you are writing security-critical stuff. Even then, just using Rust isn't as good as you might think; I encountered a decent amount of segfaults, random crashes, etc. using very popular Rust libraries as well. In the end you just need to put in the effort.

      I'm not saying language doesn't matter but it isn't even close to being the most important thing.

      • Eliah_Lakhin 4 hours ago

        > As far as memory-safety goes, it really isn't close to being the most important thing unless you are writing security critical stuff.

        Safety is the selling point of Rust, but it's not the only benefit from a technical point of view.

        The language semantics force you to write programs in a way that is most convenient for the optimizing compiler.

        Not always, but in many cases, it's likely that a program written in Rust will be highly and deeply optimized. Of course, you can follow the same rules in C or Zig, but you would have to control more things manually, and you'd always have to think about what the compiler is doing under the hood.

        It's true that neither safety nor performance are critical for many applications, but from this perspective, you could just use a high-level environment such as the JVM. The JVM is already very safe, just less performant.

    • bobajeff 3 hours ago

      I've seen a few new languages come along that were inspired by Zig's concept of comptime/metaprogramming in the same language.

      Zig, I think, has potential, but it hasn't stabilized enough yet for broad adoption. That means it'll be a while before it builds an ecosystem (libraries, engines, etc.) that is useful to developers who don't care about language design.

    • the__alchemist 3 hours ago

      Concur. This is a great feature I wish Rust had. I've been bitten by the unpleasant syntax this article laments.

    • pron 6 hours ago

      > just another memory unsafe language

      Also, treating all languages that don't ensure full memory safety as if they're equally problematic is silly. The reason not ensuring memory safety is bad is that memory unsafety is at the root of bugs that are common, dangerous, and hard to catch. But not all kinds of memory unsafety are equally problematic: Zig does ensure the lack of the most dangerous kind of unsafety (out-of-bounds access) while making the other kind (use-after-free) easier to find.

      That the distinction between "fully memory safe" and "not fully memory safe" is binary is also silly, not just because of the above, but because no language, not even Java, is truly "fully memory safe", as programs continue to employ components not written in memory safe languages.

      Furthermore, Zig has (or intends to have) novel features (among low-level languages) that help reduce bugs beyond those caused by memory unsafety.

      • hitekker 4 hours ago

        If you one day write a blog, I would want to subscribe.

        Your writing feels accessible. I find it makes complex topics approachable. Or at least, it gives me a feel of concepts that I would otherwise have no grasp on. Other online writing tends to be permeated by a thick lattice of ideology or hyper-technical arcanery that inhibits understanding.

        • pron 4 hours ago

          Thank you!

          I did have one once (https://pron.github.io) but I don't know how accessible it is :) (two post series are book-length)

        • Ygg2 2 hours ago

          > Your writing feels accessible. I find it makes complex topics approachable

          Yeah, by omitting a large swath of nuance. It reeks of "you can approximate a cow with a sphere the size of Jupiter". It's bafflingly ludicrous.

          Any rhetorical device that equates Java/C# (any memory-safe Turing-complete language) safety with C is most likely a fallacy.

          • pron an hour ago

            > Any rhetorical device that equates Java/C# (any memory-safe Turing-complete language) safety with C is most likely a fallacy.

            I agree, but I didn't do any of that. If anything, my point was that 1. safety is clearly not a binary thing and no one really treats it as such (even those who claim it is a binary distinction), and 2. that trying to extrapolate from one language to another based on choosing some property that we think is the most relevant one may be assuming that which we seek to prove.

            Saying that C, C++, and Zig are "the same" because they all make fewer guarantees than Rust is as silly as saying C, C++, Zig, and Rust are the same because they all offer fewer guarantees than ATS, or that Rust and Java are the same because they offer similar guarantees but with very different complexity costs.

            Also, the focus on memory safety is justified because of the security bugs it causes, but the two major kinds of unsafety (out-of-bounds access and use-after free) aren't equally dangerous, and Rust pays most of its complexity cost to prevent the less dangerous of the two (https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html). There's even more nuance here, because some techniques focus on reducing the risk of exploitable use-after-free bugs without preventing it or even making it easier to detect at all (https://www.cl.cam.ac.uk/~tmj32/papers/docs/ainsworth20-sp.p...).

            It's all a matter of degree, both when it comes to the risk as well as to the cost of avoiding it. Not much here, beyond the very basics, is simple or obvious.

            If you want to read some even more nuanced things I've written about software correctness, you can find some old stuff here: https://pron.github.io

          • hudon 2 hours ago ago

            I interpreted his post as saying it's not binary safe/unsafe, but rather a spectrum, with Java safer than C because of particular features that have pros and cons, not because of a magic free safe/unsafe switch. He's advocating for more nuance, not less.

      • IshKebab 5 hours ago ago

        Right but I think people are disappointed because we finally got a language that has memory safety without GC, so Zig seems like a step backwards. Even if it is much much better than C (clearly), it's hard to get excited about a language that "unsolves" a longstanding problem.

        > not even Java, is truly "fully memory safe", as programs continue to employ components not written in memory safe languages.

        This is a silly point.

        • pron 4 hours ago ago

          > I think people are disappointed because we finally got a language that has memory safety without GC, so Zig seems like a step backwards

          Memory safety (like soundly ensuring any non-trivial property) must come at a cost (that's just complexity theory). You can pay for it with added footprint (Java) or with added effort (Rust). Some people are disappointed that Zig offers more safety than C++ but less than Rust in exchange for other important benefits, while others are disappointed that the price you have to pay for even more safety in Rust is not a price they're happy to pay.

          BTW, many Rust programs do use GC (that's what Rc/Arc are), it's just one that optimises for footprint rather than speed (which is definitely okay when you don't use the GC as much as in Java, but it's not really "without GC", either, when many programs do rely on GC to some extent).
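          For concreteness, a minimal sketch of the kind of reference counting `Rc` provides (the example values are mine, purely illustrative): the allocation is freed when the last handle drops, which is collection work done incrementally at drop time, optimizing for footprint rather than throughput.

          ```rust
          use std::rc::Rc;

          fn main() {
              // Two handles to one heap allocation; the count tracks liveness.
              let a = Rc::new(String::from("shared"));
              let b = Rc::clone(&a); // bumps the refcount, no deep copy
              assert_eq!(Rc::strong_count(&a), 2);

              drop(b); // count falls back to 1
              assert_eq!(Rc::strong_count(&a), 1);

              // When `a` goes out of scope the count hits 0 and the
              // String is freed.
              println!("{}", a);
          }
          ```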

          > This is a silly point.

          Why? It shows that even those who wish to make the distinction seem binary themselves accept that it isn't, and really believe that it matters just how much risk you take and how much you pay to reduce it.

          (You could even point out that memory corruption can occur at the hardware level, so not only is the promise of zero memory corruption not necessarily worth any price, but it is also unattainable, even in principle, and if that were truly the binary line, then all of software is on the same side of it.)

          • IshKebab an hour ago ago

            > You can pay for it with added footprint (Java) or with added effort (Rust)

            ... or runtime errors (C, Zig presumably).

            Ok Zig is clearly better than C in that regard but I think it remains to be seen if it is better enough.

            > many Rust programs do use GC (that's what Rc/Arc are)

            This is not what most people mean when they say GC.

            > Why?

            Because when we're talking about the memory safety of a language we're talking about the code you write in that language (and excluding explicit opt-in to memory unsafe behaviour, e.g. `unsafe` or Python's `ctypes`).

            Saying "Java isn't memory safe because you can call C" is like saying "bicycles can fly because you can put them on a plane".

    • CyberDildonics 3 hours ago ago

      If you make advancements but disregard the advancements that came before you, you have a research language, not a modern usable language.

      • loeg an hour ago ago

        By this definition, every major programming language in use today (C, C++, Java, Python, ...) is merely a research language.

        • CyberDildonics 32 minutes ago ago

          All of your examples were created three decades ago or more.

    • surajrmal 3 hours ago ago

      I can't take zig as seriously as rust due to lack of data race safety. There are just too many bugs that can happen when you have threads, share state between those threads and manually manage memory. There are so many bugs I've written because I did this wrong for many years but didn't realize until I wrote rust. I don't trust myself or anyone to get this right.
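      The pattern being alluded to, sketched minimally (names are mine): shared mutable state across threads must go through types like `Arc<Mutex<_>>`, because handing threads a bare `&mut` to the same value simply does not compile.

      ```rust
      use std::sync::{Arc, Mutex};
      use std::thread;

      fn main() {
          // Arc gives shared ownership across threads; Mutex gives
          // exclusive access for mutation. A plain `&mut i32` shared
          // between these threads would be rejected by the compiler.
          let counter = Arc::new(Mutex::new(0));

          let handles: Vec<_> = (0..4)
              .map(|_| {
                  let counter = Arc::clone(&counter);
                  thread::spawn(move || {
                      for _ in 0..1000 {
                          *counter.lock().unwrap() += 1;
                      }
                  })
              })
              .collect();

          for h in handles {
              h.join().unwrap();
          }

          // Deterministic despite concurrency: no data race is
          // expressible here without `unsafe`.
          assert_eq!(*counter.lock().unwrap(), 4000);
      }
      ```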

    • Ygg2 7 hours ago ago

      > «inline else» is also very powerful tool to easily abstract away code with no runtime cost.

      Sure, but you lose the clarity of errors. The error wasn't in `comptime unreachable` but in `inline .a .b .c`.

      • jeltz 7 hours ago ago

        I disagree, I would say the error is in "comptime unreachable" or maybe the whole "switch (ab)".

      • binary132 6 hours ago ago

        Adding a new case is legitimate, failing to handle it (by reaching unreachable) is an error.

  • loeg an hour ago ago

    Just having a comptime unreachable feature seems pretty cool. Common C++ compilers have the worst version of this with __builtin_unreachable() -- they don't do any verification the site is unreachable, and just let the optimizer go to town. (I use/recommend runtime assert/fatal/abort over that behavior most days of the week.)
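    Rust sits between the two: `unreachable!()` is a runtime assertion, while `std::hint::unreachable_unchecked()` behaves like `__builtin_unreachable()` (the optimizer trusts you, nothing verifies the claim, hence it is `unsafe`). A sketch of the checked flavor, with a made-up function for illustration:

    ```rust
    // `unreachable!()` panics if reached -- the runtime-checked flavor
    // recommended above. The unchecked variant would let the optimizer
    // assume the arm is dead with no verification at all.
    fn describe(n: u8) -> &'static str {
        match n % 3 {
            0 => "zero",
            1 => "one",
            2 => "two",
            // The compiler can't see that n % 3 < 3, so this arm is
            // required; the checked unreachable documents the invariant
            // and aborts loudly if it's ever wrong.
            _ => unreachable!("n % 3 is always < 3"),
        }
    }

    fn main() {
        assert_eq!(describe(5), "two");
    }
    ```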

  • agons an hour ago ago

    Is there a reason the Zig compiler can't perform type-narrowing for `u` within the `U::A(_) | U::B(_)` "guard", rendering just the set of 2 cases entirely necessary and sufficient (obviating the need for any of the solutions in the blog post)?

    I'm not familiar with Zig, but also ready to find out I'm not as familiar with type systems as I thought.

    • rvrb an hour ago ago

      it can narrow the payload: https://zigbin.io/7cb79d

      I think the post would be more helpful if it had a concrete use case. let's say a contrived bytecode VM:

        dispatch: switch (instruction) {
            inline .load, .load0, .load1, .load2, .load3 => |_, tag| {
                const slot = switch (tag) {
                    .load => self.read(u8),
                    else => @intFromEnum(tag) - @intFromEnum(.load0),
                };
                self.push(self.locals[slot]);
                continue :dispatch self.read(Instruction);
            },
            // ...
        }
      
      "because comptime", this is effectively the same runtime performance as the common:

        dispatch: switch (instruction) {
            .load => {
                self.push(self.locals[self.read(u8)]);
                continue :dispatch self.read(Instruction);
            },
            .load0 => {
                self.push(self.locals[0]);
                continue :dispatch self.read(Instruction);
            },
            .load1 => {
                self.push(self.locals[1]);
                continue :dispatch self.read(Instruction);
            },
            .load2 => {
                self.push(self.locals[2]);
                continue :dispatch self.read(Instruction);
            },
            .load3 => {
                self.push(self.locals[3]);
                continue :dispatch self.read(Instruction);
            },
            // ...
        }
      
      and this is in a situation where this level of performance optimization is actually valuable to spend time on. it's nice that Zig lets you achieve it while reusing the logic.
  • spiffyk 7 hours ago ago

    This post shows how versatile Zig's comptime is, not only for expressing what to pre-compute before the program ever runs, but also for doing arbitrary compile-time bug checks like these. At least to me, the former is a really obvious use case and I have no problem using it to my advantage. But I often seem to overlook the latter, even though it could prove really valuable.

    • dwattttt 7 hours ago ago

      I love the idea, but something being "provable" in this way feels like relying on optimisations.

      If a dead code elimination pass didn't remove the 'comptime unreachable' statement, you'll now fail to compile (I expect?)

      • teiferer 5 hours ago ago

        It's inherently an incomplete heuristic. Cf. the halting problem.

        Doesn't mean it's not useful.

      • anonymoushn 7 hours ago ago

        A lot of Zig relies on compilation being lazy in the same sort of way.

        • dwattttt 7 hours ago ago

          For the validity of the program? As in, a program will fail to compile (or compile but be incorrect) if an optimisation misbehaves?

          That sounds as bad as relying on undefined behaviour in C.

          • Laremere 7 hours ago ago

            It's not an optimization. What gets evaluated via the lazy evaluation is well defined. Control flow which has a value defined at comptime will only evaluate the path taken. In the op example, the block is evaluated twice, once for each enum value, and the inner switch is followed at comptime so only one prong is evaluated.

          • 9029 6 hours ago ago

            Nope, this is not relying on optimization, it's just how compile time evaluation works. The language guarantees "folding" here regardless of optimization level in use. The inline keyword used in the original post is not an optimization hint, it does a specific thing. It forces the switch prong to be evaluated for all possible values. This makes the value comptime, which makes it possible to have a comptime unreachable prong when switching on it.

            There are similarities here to C++ if constexpr and static_assert, if those are familiar to you.
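            The Rust equivalents give the same guarantee-not-optimization behavior: a failing compile-time assertion is a hard error at every optimization level. A minimal sketch (the specific checks are mine, just for illustration):

            ```rust
            // Evaluated during type-checking, at every optimization
            // level -- like Zig's comptime and C++'s static_assert.
            // Change the condition to false and this fails to compile;
            // no optimizer involvement.
            const _: () = assert!(std::mem::size_of::<u64>() == 8);

            fn main() {
                // const blocks (stabilized in Rust 1.79) force
                // evaluation at compile time inside function bodies too.
                let bits = const { u32::BITS as usize };
                assert_eq!(bits, 32);
            }
            ```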

          • anonymoushn 7 hours ago ago

            Well, for example you may have some functions which accept types and return types, which are not compatible with some input types, and indicate their incompatibility by raising an error so that compilation fails. If the program actually does not pass some type to such a function that leads to this sort of error, it would seem like a bug for the compiler to choose to evaluate that function with that argument anyway, in the same way that it would be a bug if I had said "template" throughout this comment. And it is not generally regarded as a deficiency in C++ that if the compiler suddenly chose to instantiate every template with every value or type, some of the resulting instantiations would not compile.

            • dwattttt 7 hours ago ago

              To take an extreme example, what if I asserted the Riemann hypothesis in comptime? It's relying on comptime execution to act as a proof checker.

              Which is fine for small inputs and uses, but it's not something that would scale well.

  • uecker 2 hours ago ago

    It is great to see other languages getting the same compile-time meta programming features as C ;-)

    https://godbolt.org/z/P1r49nTWo

  • rvrb 2 hours ago ago

    I did not realize you could inline anything other than an `else` branch! This is a very cool use for that.

  • the__alchemist 3 hours ago ago

    I love how this opens with the acknowledgement we've made a mess of choice-like data structure terminology!

  • veber-alex 5 hours ago ago

    I don't understand. Isn't this only useful if the value you match on is known at compile time?

    • sekao 4 hours ago ago

      The code example will work even if `u` is only known at runtime. That's because the inner switch is not matching on `u`, it's matching on `ab`, which is known at compile time due to the use of `inline`.

      That may be confusing, but basically `inline` is generating different code for the branches .a and .b, so in those cases the value of `ab` is known at compile time. So, the inner switch is running at compile time too. In the .a branch it just turns into a call to handle_a(), and in the .b branch it turns into a call to handle_b().
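      A loose Rust analogue of that specialization (my own sketch, not from the post): a const-generic parameter plays the role of the comptime-known `ab`, so each instantiation is a separate monomorphized function and the inner branch folds away at compile time.

      ```rust
      // Each value of IS_A produces its own monomorphized copy of
      // `handle`, so the `if IS_A` inside is resolved at compile time --
      // roughly what Zig's `inline` prongs do with the comptime tag.
      fn handle<const IS_A: bool>() -> &'static str {
          if IS_A {
              "handled a" // the .a specialization
          } else {
              "handled b" // the .b specialization
          }
      }

      // The runtime dispatch just picks a precompiled specialization.
      fn dispatch(is_a: bool) -> &'static str {
          if is_a { handle::<true>() } else { handle::<false>() }
      }

      fn main() {
          assert_eq!(dispatch(true), "handled a");
          assert_eq!(dispatch(false), "handled b");
      }
      ```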

    • alpinisme 4 hours ago ago

      The problem this is meant to solve is that sometimes a human thinking about the logic of the program can see it is impossible to reach some code (ie it is statically certain) but the language syntax and type system alone would not see the impossibility. So you can help the compiler along.

      It is not meant for asserting dynamic “unreachability” (which is more like an assertion than a proof).

  • dlahoda 7 hours ago ago

    fn main() {

        if false {
    
            const _:() =  panic!();
    
        }
    
    }

    Fails to compile in Rust.

    • Sharlin 7 hours ago ago

      Sure, because it's compile-time code inside a (semantically) run-time check. In recent Rust versions you can do

          fn main() {
              const {
                  if false {
                      let _:() = panic!();
                  }
              }
          }
      
      which compiles as expected. (Note that if the binding were `const` instead of `let`, it'd still have failed to compile, because the semantics don't change.)
      • tialaramex 5 hours ago ago

        Perhaps more succinctly:

            fn main() {
               const _:() = const { if false { panic!() } };
            }
        
        It's fine that we want a constant, it's fine that this constant would, when being computed at compile time, panic if false was true, because it is not.
      • dlahoda 4 hours ago ago

        not sure it is equivalent to zig.

        in zig, only one branch is const.

        in your rust example, the whole control flow is const, which is not equivalent to zig. so how do you have non-const branches?

    • the__alchemist 3 hours ago ago

      I have no idea what that's trying to do. A demonstration that rust is a large language with different dialects! A terse statement with multiple things I don't understand:

        - Assigning a const conditionally?
        - Naming a const _ ?
        - () as a type?
        - Assigning a panic to a constant (or variable) ?
      
      To me it might as well be:

        fn main() {
          match let {
              if ()::<>unimplemented!() -> else;
          }
      }
    • Ygg2 7 hours ago ago

      Why would it? If I recall correctly, const and static stuff basically gets inlined at the beginning of the program.