The Expression Problem and its solutions (2016)

(eli.thegreenplace.net)

100 points | by andsoitis a day ago ago

82 comments

  • kragen a day ago ago

    We were discussing this a few days ago in https://news.ycombinator.com/item?id=45121080.

  • amluto a day ago ago

    This article discusses details of trying to use language features to model a fairly basic problem but doesn’t discuss the fundamentals of the problem very much.

    If I have O operations (e.g. evaluate, stringify) and T types, then I need O•T implementations. If I fix O, I can add types until the cows come home by adding O implementations per new type. If I fix T, I can add new operations with T implementations per operation. If I want to let O and T vary, then the number of implementations I need to add per additional operation/type varies depending on what I’ve added before. No amount of programming language magic will change this basic count.

    ISTM what’s really going on in the article is that the author is using the programming language as a registry of implementations. Those O•T implementations need to be stored somewhere so their callers can find them. In C++ this can be done with very verbose inheritance and virtual calls. In other languages, it’s less verbose. If multiple dispatch is built in, then one can use it directly. One can, of course, also build an actual table indexed on operation and type as a data structure in just about any language and use it, and in a language like C++ the result may well be quite a bit nicer than using a class hierarchy.
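
    For concreteness, here is a rough sketch of such a registry in Rust (Rust only to match the other code in this thread; the names are made up). Note that a missing (operation, type) combination only surfaces at runtime, as a failed lookup:

        use std::any::{Any, TypeId};
        use std::collections::HashMap;

        // Hypothetical registry: maps (operation name, concrete type) to an implementation.
        struct Registry {
            table: HashMap<(&'static str, TypeId), Box<dyn Fn(&dyn Any) -> String>>,
        }

        impl Registry {
            fn new() -> Self {
                Registry { table: HashMap::new() }
            }

            // Register an implementation of `op` for the concrete type T.
            fn register<T: Any>(&mut self, op: &'static str, f: impl Fn(&T) -> String + 'static) {
                self.table.insert(
                    (op, TypeId::of::<T>()),
                    Box::new(move |v: &dyn Any| f(v.downcast_ref::<T>().expect("keyed by TypeId"))),
                );
            }

            // Dispatch on the (operation, runtime type) pair; None means nobody registered it.
            fn call(&self, op: &'static str, v: &dyn Any) -> Option<String> {
                self.table.get(&(op, v.type_id())).map(|f| f(v))
            }
        }

        fn main() {
            struct Constant(f64);
            struct BinaryPlus(f64, f64);

            let mut reg = Registry::new();
            reg.register("stringify", |c: &Constant| c.0.to_string());
            reg.register("stringify", |b: &BinaryPlus| format!("{} + {}", b.0, b.1));

            let expr: Box<dyn Any> = Box::new(BinaryPlus(1.0, 2.0));
            println!("{:?}", reg.call("stringify", expr.as_ref())); // Some("1 + 2")
        }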

    • layer8 18 hours ago ago

      The problem is how to statically ensure that you don’t run into the case of a missing implementation for some combination at runtime, in particular in languages with separate compilation. If you have a library A that defines a basic (T, O) set, and two libraries B and C that aren’t aware of each other but both use A, where B adds a new type and C adds a new operation, and finally you have a program D that uses both B and C, how do you statically detect or prevent the case where D tries to use the new type from B with the new operation from C when no one has provided an implementation for that combination, while still supporting the case where the required implementation is present?

      • HarHarVeryFunny 18 hours ago ago

        > The problem is how to statically ensure that you don’t run into the case of a missing implementation

        Static (compile-time) guarantees are only applicable to languages with static binding rules, in which case there is no problem - the compiler will just report "cannot resolve overloaded function" or something similar.

        This is of course one of the downsides to languages with dynamic types and dynamic binding ... that many errors can only be detected at runtime.

    • HarHarVeryFunny 19 hours ago ago

      > In C++ this can be done when very verbose inheritance and virtual calls.

      No - in C++ you'd just define the operators (either as global functions, or member functions of the types/classes in question), and the language's overload resolution rules would select the right implementation for each usage.

      You could use inheritance (and virtual overrides if needed), or templates/generics, if you wanted to - C++ certainly gives you a lot of ways to make things more complex and shoot yourself in the foot, but the normal/obvious way would just be to define the overloaded operators and be done with it.

      • layer8 18 hours ago ago

        The expression problem is about dynamic dispatch, not static dispatch. For example, you call a function F that returns an object of one of the relevant types, and another function G that returns one of the relevant operations, and then you apply the operation to the object. How can you have F extend the set of types, and also G extend the set of operations, without having to recompile the caller, without F and G having to coordinate, and while also avoiding a runtime error due to a missing implementation of some operation for some type?

        • HarHarVeryFunny 18 hours ago ago

          I agree (although others in this thread do not), but the comment I was responding to was claiming that in C++ operator overloading requires inheritance and virtual methods, and I was just pointing out that this is not true.

          • layer8 18 hours ago ago

            The comment you responded to doesn’t mention overloading.

            • HarHarVeryFunny 18 hours ago ago

              > In C++ this can be done when very verbose inheritance and virtual calls

              Virtual methods overload the corresponding method in the parent class they are inheriting from. It could be a named method like "add()", or an operator like "+()".

              The author of that comment is implicitly assuming that all types are derived via inheritance from a common base type that defines all the (virtual) methods/operators being overloaded.

              • layer8 18 hours ago ago

                The operations are different methods, not overloads of each other, and the different types are the types on which the methods are defined, not arguments to the methods. That’s also how the article presents it. There are no overloads in the scenario.

                • HarHarVeryFunny 17 hours ago ago

                  The article presents examples in many languages. His C++ inheritance/override example was an example of how NOT to do it, since if you add a new virtual method to the base class then all the derived classes either have to use that base class method or be modified to override it in an appropriate way.

                  Overuse of inheritance is one of the ways to shoot yourself in the foot with C++. If you want to overload (apply same operator to multiple types - related or not), then just overload - don't use inheritance.

                  Your own example, above (B defines new type, C defines new operator) doesn't appear to be about inheritance at all - you don't even say what language you are talking about, and elsewhere you say (contradicting the article author) that this "expression problem" only applies to dynamic languages ... you seem to be all over the map on this one!

                  • layer8 13 hours ago ago

                    I’m sorry, but the expression problem is a fairly well-known problem, and you don’t seem to be familiar with it, so are drawing the wrong conclusions.

                    The problem statement is: You have code (e.g. a library) that supports polymorphic operations on a range of types. It is called the “expression problem” because the prototypical example is to have a type representing an expression (like an arithmetic expression) that can have various forms (subtypes, not necessarily in the OOP sense), and there are operations like evaluating the expression or printing the expression, which of course need to be implemented for all those subtypes (you need to know how to print a constant, how to print a binary expression, and so on).

                    There is code that can take an arbitrary expression whose exact subtype will only be known at runtime, and which invokes one of the available operations on the expression object.

                    Now, you want to be able to write secondary libraries (or programs) that extend the first library by extending the set of subtypes and/or the set of operations, but without modifying the first library (and without duplicating code from the first library). With OOP inheritance, adding new subtypes is easy. With the visitor pattern (or equivalent runtime case-distinction mechanisms like pattern matching in functional languages), adding new operations is easy. But the combination of both is difficult, if not unsolvable.

                    Overloading doesn’t apply, because overloading would dispatch on the static type of the expression variable, but the use case is that this is an expression parsed from a textual representation at runtime (e.g. in the context of an interpreter or compiler, or maybe some data format parser), so the variable holding the expression object has a generic type and could be holding any expression.
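
                    For illustration, here is a rough Rust sketch of that situation (names made up): the caller only ever sees a generic expression type, so the concrete type is known only at runtime and dispatch has to happen dynamically:

                        trait Expr {
                            fn eval(&self) -> f64;
                        }

                        struct Constant(f64);
                        impl Expr for Constant {
                            fn eval(&self) -> f64 { self.0 }
                        }

                        struct Plus(Box<dyn Expr>, Box<dyn Expr>);
                        impl Expr for Plus {
                            fn eval(&self) -> f64 { self.0.eval() + self.1.eval() }
                        }

                        // A parser returns *some* expression; the caller only sees `dyn Expr`,
                        // so overload resolution on a static type cannot pick an implementation.
                        fn parse(src: &str) -> Box<dyn Expr> {
                            // Parsing elided: this could be a Constant, a Plus, or a type added elsewhere.
                            Box::new(Constant(src.trim().parse().unwrap_or(0.0)))
                        }

                        fn main() {
                            let e: Box<dyn Expr> = Box::new(Plus(parse("40"), Box::new(Constant(2.0))));
                            println!("{}", e.eval()); // dispatched on the runtime types
                        }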

                    • HarHarVeryFunny 3 hours ago ago

                      I'm just going by the article this thread is responding to. Perhaps the author's C++ example that he led with was meant to be a strawman "look how bad C++ is (when we use it this way)"?

                      If the problem statement is just how to add operators and types to a dynamically typed expression (in some arbitrary language), then in addition to implementing any new operator overloads, there is the question of how to dispatch, which would most obviously be done by mapping (operator, operand type) pairs to the registered type-specific operator implementation (aka overload). Again, not sure what the big deal is here.

              • Kranar 17 hours ago ago

                Virtual methods and overloading are not the same thing.

                You are likely mixing up the term overload with the term override.

                • HarHarVeryFunny 16 hours ago ago

                  Apples and oranges.

                  Mechanism vs functionality.

    • kqr a day ago ago

      Huh, this is a very neat way to put it. Something about it seems... too neat? But I'm the wrong person to poke holes in it. Can I see other people debate it civilly?

      • magnio 21 hours ago ago

        My first exposure to the expression problem is the book Crafting Interpreters, Section 5.3.1,[1] which is very similar to the GP comment. The subsection is a short few paragraphs, with one or two tables for illustration, and it was one of the most eureka moments of my programming career. Ever since then, I see this trade-off everywhere, and I get reminded that engineering is all about finding the best trade-off and that the holy-grail of a perfect language or design is a waste of time.

        [1]: https://craftinginterpreters.com/representing-code.html#the-...

    • moron4hire 21 hours ago ago

      I think the article as it stands makes that all pretty clear. But the point of the article is more that the language selection will determine how difficult it is to do that if you don't have control over the original definition of the operations or types.

    • moelf 21 hours ago ago

      this is counting it as if O only takes one argument, which is single dispatch, which OOP covers already (the `self` is the single argument).

      In practice, Multiple Dispatch shines when you have 1) more than one argument type (duh) 2) higher order `O` operation [1]

      [1]: think of a numerical routine that calls eigen or something, and you want eigen to exploit the matrix's type, such as symmetry or upper-triangular, and this is encoded as a matrix type

      • layer8 18 hours ago ago

        In the expression problem, the other argument is the operation to be invoked, which can vary at runtime as well. OOP doesn’t cover that.

    • naasking 19 hours ago ago

      > One can, of course, also build an actual table indexed on operation and type as a data structure in just about any language and use it, and in a language like C++ the result may well be quite a bit nicer than using a class hierarchy.

      This requires unsafe casting and thus doesn't actually solve the expression problem, which is about how to do this without compromising safety. Your approach is what the solution to the expression problem in Haskell effectively does at runtime though, via type classes and the dictionary-passing translation that they undergo during compilation, but it's all type safe.

    • HarHarVeryFunny 21 hours ago ago

      Overloading operators like "+" to work on new types, and maybe mixed types, requires each desired combination of operator and operand types to be implemented (as one can easily do in a language like C++, and presumably also in more modern languages). If there is any commonality between types as to how these operators are meant to work, then generics such as C++'s templates can also be used to reduce the number of implementations to those that are actually distinct.

      I don't understand what problem the author is trying to solve here - maybe it's language specific? More related to dynamic typing and efficient dispatch?

      In any case, operator overloading, while it can make for cute-to-read code, isn't really a good idea for code readability and maintenance.

      • moelf 20 hours ago ago

        > operator overloading, while it can make for cute-to-read code, isn't really a good idea for code readability and maintenance

        so should we write `add_int_int(1,1)` and `add_int_double(1, 1.0)` and `add_double_double(1.0, 1.0)`?...

        • HarHarVeryFunny 19 hours ago ago

          The expectation is that arithmetic operators are performing the corresponding arithmetic operations, so overloading for different arithmetic types (int, float, complex) doesn't produce any surprises or need to lookup operator definitions.

          Applying arithmetic operators to non-arithmetic types starts to become a problem, even if some cases (set union/difference, string catenation) are natural...

          The same goes for other operators... overloading '[]' indexing for maps as well as arrays seems natural, but would be highly confusing if you overloaded it for something unrelated to an index-like concept.

          • JadeNB 19 hours ago ago

            I think a problem here is that everyone has a different idea of when you hit the boundary between obvious/non-surprising and confusing, so you can't just say that overloading is OK as long as it is unsurprising.

            • HarHarVeryFunny 18 hours ago ago

              Just because people have differing opinions (some well founded, some not!) doesn't mean that defining best practices doesn't make sense. Code readability and lack of surprise objectively are good things, and certainly important for large enterprise projects.

              I think there are two alternate guidelines for use of operator overloading that would make sense.

              1) Just don't overload operators at all for your own types! Leave that to the standard libraries and types that they define.

              OR

              2) Overload, but keep to the semantics of the operators as used by the language's built-in types (e.g. use arithmetic operators for arithmetic operations, comparison operators for ordering operations, etc). If your project was using a style guideline like this, then violations would be caught by code review (maybe automated by AI in the future?).

      • Kranar 21 hours ago ago

        >I don't understand what problem the author is trying to solve here - maybe it's language specific? More related to dynamic typing and efficient dispatch?

        The expression problem only arises in statically typed programming languages, it does not exist in dynamically typed programming languages.

        Operator overloading has nothing to do with the problem and cannot be used to resolve it. Operators are nothing more than a one-off syntax so we can use familiar childhood notation like a + b, etc... they are not particularly meaningful. The ability to overload operators or functions in general is also irrelevant since such overloads are resolved statically at compile time.

        • munificent 20 hours ago ago

          > The expression problem only arises in statically typed programming languages, it does not exist in dynamically typed programming languages.

          Wadler's list of requirements for solving the expression problem include "with static checking that all cases are covered", so in one sense, yes, dynamic languages don't have this "problem". But in another sense, dynamic languages simply have no chance to solve the problem because they don't statically check anything.

          It is true that it's much easier to do multiple dispatch and open-ended extension in dynamic languages. That's a nice benefit of them. But you do sacrifice all of the safety, code navigation, and performance of static types in order to get that.

          The problem Wadler is trying to solve is "how can we have this sort of open-ended extension in both directions without giving up static safety?"

          • adgjlsfhk1 10 hours ago ago

            This isn't true. Julia manages to maintain good performance with multiple dispatch through lots of type inference in the compiler (and carefully written code in the language to preserve "type stability")

        • kibwen 20 hours ago ago

          > does not exist in dynamically typed programming languages

          The "problem" exists for dynamically-typed languages as well. If you define a new class in Python, you still need to teach existing code how to operate on it (though you might be using inheritance to automatically derive some of those implementations, but inheritance obviously isn't limited to dynamic languages).

        • HarHarVeryFunny 19 hours ago ago

          > Operating overloading has nothing to do with the problem

          You've got T types (some new), and O operators (some new) and want to implement all operators for all types ... This is the exact definition of operator overloading.

          There is no magic (other than inheritance or generics, if applicable) that will remove the need to individually implement all those O x T cases, and while that is obviously necessary, it is also all that you need to do.

          If you are not talking about operator overloading - supporting the same operator for multiple different custom types, then what are you talking about ?!

  • mkleczek a day ago ago

  • nine_k 21 hours ago ago

    In short, the solution is to define an interface / typeclass / trait / protocol that factors out the particular kind of operation, and separates it from any specific class or data type. Basically it lets you tag any particular data representation with a trait, and then offer an implementation of that trait, without touching any pre-existing code related to that data representation.

    In languages that do not allow that (e.g. Java), one has to implement it by hand, using a Visitor pattern. Instead of relying on the language to do the dynamic dispatch, one has to explicitly add a visiting method for another data type.

    To me, the turn towards interfaces / typeclasses / traits / protocols is one of the most notable bits of progress in the newer crop of programming languages (and, importantly, their standard libraries).
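
    For reference, a rough sketch of that hand-written visitor approach, written in Rust to stay consistent with the other examples in this thread (the names are made up):

        trait Expr {
            fn accept(&self, v: &mut dyn ExprVisitor);
        }

        trait ExprVisitor {
            fn visit_constant(&mut self, value: f64);
            fn visit_plus(&mut self, lhs: &dyn Expr, rhs: &dyn Expr);
        }

        struct Constant(f64);
        impl Expr for Constant {
            fn accept(&self, v: &mut dyn ExprVisitor) { v.visit_constant(self.0); }
        }

        struct Plus(Box<dyn Expr>, Box<dyn Expr>);
        impl Expr for Plus {
            fn accept(&self, v: &mut dyn ExprVisitor) { v.visit_plus(self.0.as_ref(), self.1.as_ref()); }
        }

        // A new operation is just a new visitor; no existing type has to change...
        struct Stringifier { out: String }
        impl ExprVisitor for Stringifier {
            fn visit_constant(&mut self, value: f64) { self.out.push_str(&value.to_string()); }
            fn visit_plus(&mut self, lhs: &dyn Expr, rhs: &dyn Expr) {
                lhs.accept(self);
                self.out.push_str(" + ");
                rhs.accept(self);
            }
        }
        // ...but a new expression type means a new method on ExprVisitor, which breaks
        // every existing visitor.

        fn main() {
            let e = Plus(Box::new(Constant(1.0)), Box::new(Constant(2.0)));
            let mut s = Stringifier { out: String::new() };
            e.accept(&mut s);
            println!("{}", s.out); // "1 + 2"
        }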

  • mccoyb 14 hours ago ago

    Does Clojure solve this to a satisfying degree? One gives up static typing to embrace Clojure’s solutions.

    On the other hand, Rust / Haskell provide partial solutions with static typing — but run into annoying boilerplate related to the orphan rule.

    I’m not sure if there’s a true solution, given the level of thinking which has gone into these two forks in the road.

    • mrkeen 8 hours ago ago

      I was also confused by the claim that Clojure solves it and Haskell does not.

      > run into annoying boilerplate related to the orphan rule

      I always forget this, but it's not an orphan if you put the instance in the same module as the class definition (as opposed to bundling it with a newtype elsewhere). Class definitions and instances would go in Evaluatable and Stringable.

  • dang 16 hours ago ago

    Related. Others?

    The Expression Problem and its solutions - https://news.ycombinator.com/item?id=11683379 - May 2016 (49 comments)

  • zelphirkalt 15 hours ago ago

    I think one of the solutions is the data oriented approach that is already shown in SICP, where you have a global lookup table, in which you look up by type and operation. You then add modules, which is done by registering new type-operation pairs in the table, if I remember correctly.

  • mdaniel 21 hours ago ago

    SableCC used that visitor pattern for its outputs, and then immediately had an "empty implementation" of the interface to allow folks to subclass the parts that were relevant https://github.com/SableCC/sablecc/blob/sablecc-4-beta.4/src... https://github.com/SableCC/sablecc/blob/sablecc-4-beta.4/src... https://github.com/SableCC/sablecc/blob/sablecc-4-beta.4/src...

    I think Antlr 4 does similar, I just haven't used it in as much anger because its runtime always jammed me up https://github.com/antlr/antlr4/blob/4.13.2/doc/listeners.md...

    I thought that IntelliJ did similarly but I am misremembering since they generate recursive descent style https://github.com/JetBrains/Grammar-Kit/blob/v2023.3/HOWTO....

  • monocularvision 14 hours ago ago

    A pretty decent solution for the Expression Problem using Swift: https://www.youtube.com/watch?v=EsanJ7_U89A

    We use this pretty effectively in our iOS app to model and implement Effect types in a functional core / imperative shell architecture.

  • DarkSucker 21 hours ago ago

    The article's `expression problem matrix` section states that the goal is to make it `easy to add ops` and `easy to add types`. My learning of Rust so far indicates Rust satisfies both: traits solve the `ops` side (you define a trait for each op you want to support), and Rust's implementations (impl) solve the problem of adding types. Of course, for each new {op,type} combination, one must write the code, but Rust allows you to do that with its trait and generic systems. Am I missing something important?

    • wavemode 20 hours ago ago

      When people talk about the "expression problem", what they're describing is the fact that, (in your example) if you add a new method to the trait, you have to go around and implement that new method on every type that implements the trait.

      This is in contrast to if you had used an enum (sum type) instead, wherein adding a new operation is easy and can be done in a single place. But then in exchange, adding new variants requires going around and updating every existing pattern match to support the new variant.
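
      A quick sketch of the enum (sum type) side of that trade-off, with made-up names: a new operation is one new function, but a new variant forces every existing match to change:

          enum Expr {
              Constant(i32),
              BinaryPlus(Box<Expr>, Box<Expr>),
              // Adding a FunctionCall variant here would force changes to eval(), stringify(), ...
          }

          fn eval(e: &Expr) -> i32 {
              match e {
                  Expr::Constant(v) => *v,
                  Expr::BinaryPlus(l, r) => eval(l) + eval(r),
              }
          }

          // A new operation needs no changes to existing code: just another function with a match.
          fn stringify(e: &Expr) -> String {
              match e {
                  Expr::Constant(v) => v.to_string(),
                  Expr::BinaryPlus(l, r) => format!("{} + {}", stringify(l), stringify(r)),
              }
          }

          fn main() {
              let e = Expr::BinaryPlus(Box::new(Expr::Constant(1)), Box::new(Expr::Constant(2)));
              println!("{} = {}", stringify(&e), eval(&e)); // "1 + 2 = 3"
          }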

      • DarkSucker 20 hours ago ago

        Thanks. I wasn't thinking of enums. To the extent one designs a trait to use an enum type (or the enum to satisfy the trait), one wins. But it seems impossible to avoid code to handle all future {type, op} combinations. The nice thing I've seen with Rust is the ability to add to what's been done before without breaking what already works. I'm thinking of "orphan rules" here.

      • TuringTest 19 hours ago ago

        I'm thinking, didn't inheritance and abstract methods work to solve the problem?

        I know inheritance has its own severe problems, but adding a generic abstract method at the base class could create reusable code that can be accessed by any new class that inherits from it.

        P.S. ah ok, it's mentioned in the article at the Visitor section.

        • zelphirkalt 15 hours ago ago

          I think the problem is that at the base class you don't necessarily know how to handle things that are encountered at the concrete class level, or you don't want to put the logic for all implementations into your base class. That would defeat the purpose of your abstract class and the hierarchy.

          • TuringTest 4 hours ago ago

            If you have to handle specific behavior for the new method in an existing type, of course you will need to add a new implementation of the method for that type.

            As I understand the Expression problem, the limitation of programming languages is that they force you to modify the previous existing types even if the new behavior is a generic method that works the same for all types, so it should be enough to define it once for all classes. A virtual method in the base class should be able to do that.

            • zelphirkalt an hour ago ago

              That only moves the problem to the base class. In this case there is no difference in effort between modifying the base class to foresee all cases that could happen with the derived classes and adapting all the derived classes. In fact, you might even be typing more, because you might need some more "if"s in that base class, since you don't know which specific implementation you are dealing with when you implement the behavior in the base class. You still have to deal with the same number of cases. And it is still bad when you need to extend something done by a library that you are not maintaining.

              This does not solve the issue at its core, I think.

      • jauntywundrkind 18 hours ago ago

        This seems like it's easy to work around by not modifying the existing trait but by defining a new trait with your new method, and impl'ing it atop the old trait.

        That seems like a pretty ok 90% solution, and in a lot of ways a cleaner and more well-defined way to grow your types anyhow.
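
        Roughly, that workaround would look like the following sketch (made-up names): a blanket impl gives the new operation to every existing and future implementor of the old trait:

            trait Evaluate {
                fn evaluate(&self) -> i32;
            }

            trait Describe {
                fn describe(&self) -> String;
            }

            // Every existing (and future) Evaluate implementor gets Describe for free.
            impl<T: Evaluate> Describe for T {
                fn describe(&self) -> String {
                    format!("evaluates to {}", self.evaluate())
                }
            }

            struct Constant(i32);
            impl Evaluate for Constant {
                fn evaluate(&self) -> i32 { self.0 }
            }

            fn main() {
                println!("{}", Constant(7).describe()); // "evaluates to 7"
            }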

  • almostgotcaught 15 hours ago ago

    ITT: lots of people who think they understand the expression problem but don't actually understand the expression problem.

  • jauntywundrkind a day ago ago

    Rust doesn't merely have a way around the Expression Problem, the entire language & its set of libraries is built constructively / additively bottom-up atop very dumb types, with operations added later! Traits and impls invert the responsibility for defining operations: they are still static definitions & static types, but late-created / created-elsewhere, so a type aggregates more and more impls than it was initially defined with. An inversion of control of typing. Dynamic in feel, but still compile-time static types.

    It's refreshing to see this very clear problem statement, which feels like an ideal antithesis that reveals the hidden glory of Rust's traits/impls:

    > It turns out that adding new operations isn't as easy as adding new types. We'd have to change the Expr interface, and consequently change every existing expression type to support the new method(s). If we don't control the original code or it's hard to change it for other reasons, we're in trouble.

    > In other words, we'd have to violate the venerable open-closed principle, one of the main principles of object-oriented design, defined as:

    > software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification

    Apologies for the self link, but this came up very recently in a sub thread about Rust & I'm so glad to close the loop & find a name for this. I said there & I'll say again, I think this is one of Rust's great superpowers, that we are not trapped by the design parameters of the libraries we consume, that Rust allows us to layer in more and more onto our types, as we please. An amazing superpower of Rust that imo gets way too little attention, whereas the borrow-checking tends to soak up all the attention/fixation for the language. https://news.ycombinator.com/item?id=45041286#45045202

    There's a whole chapter of the Rust book on Rust & OOP, btw, which provides a side-ways look at how Rust & OOP and the Expression Problem relate. https://doc.rust-lang.org/book/ch18-00-oop.html

    C# and others have extension methods, where you can add operations. But most of the language & its use is pretty conventional OOP; it's a feature, not the core. Mentioned in the thread above, there's a claim that Standard ML functors have similarities to Rust: I haven't investigated yet but that's piqued my interest & I'm eager to find out at some point!

    • Kranar 21 hours ago ago

      Nothing you mention is related to this article and neither Rust nor C# solves the expression problem.

      The expression problem is about being able to extend both data types (new cases) and operations (new functions) without modifying existing code while preserving static type safety.

      C#'s extension methods can't be virtual, so you can't use them to actually add any new operations which can be dispatched on. They're just syntactic sugar for adding static methods which in non-OOP languages can be accomplished by declaring a free function.

      Rust traits let you add new types (just impl the Trait), but they don't let you add new functions, so they don't do anything to solve the problem.

      • justinpombrio 21 hours ago ago

        Rust's traits _do_ solve the expression problem.

        Each data type is a `struct`. Each operation is a trait. You `impl` each trait on each struct.

        This works even if you're using a library that has declared `struct A` and `struct B` and `trait F` and `trait G`, and you want to add both a `struct C` and a `trait H`, and fill out the whole 3x3 grid without modifying the library.

        The library says:

            struct A { ... }
            struct B { ... }
        
            trait F { ... }
            impl F for A { ... }
            impl F for B { ... }
        
            trait G { ... }
            impl G for A { ... }
            impl G for B { ... }
        
            fn some_function<T: F + G>(data: T) { ... }
        
        Your code says:

            use library::{A, B, F, G};
        
            struct C { ... }
            impl F for C { ... }
            impl G for C { ... }
        
            trait H { ... }
            impl H for A { ... }
            impl H for B { ... }
            impl H for C { ... }
        
            fn another_function<T: F + G + H>(data: T);
        
        Now `library::some_function()` can be called with an A, B, or C, even though C was defined by the user of the library. And `another_function()` can be called with A, B, or C, even though A was defined by the library.

        • Kranar 20 hours ago ago

          While this has nothing to do with the expression problem, it's worth noting that in any case your solution does not work in general.

          Rust does let you impl a trait as long as either the trait or the type is inside your crate, so your example strictly speaking works, but it does not let you impl a trait when both the type and the trait come from outside your crate. This is known as the orphan rule:

          https://doc.rust-lang.org/reference/items/implementations.ht...

          As the article points out, the expression problem itself is pretty simple to solve for code that you control, it's when writing modular software that can vary independently that gives rise to issues.

          • ceronman 19 hours ago ago

            Why would the orphan rule be a problem here?

            The orphan rule only disallows impls if both the trait and the type are defined outside the crate.

            But in this example if you are adding a new type (struct) or a new operation (trait), well this new item should be in your crate, so all the impls that follow are allowed.

            • Kranar 16 hours ago ago

              It's not a problem here as it has nothing to do with this to begin with. I am pointing out a limitation in a feature that the author has presented, but that feature does not resolve anything about the topic being discussed.

              The goal isn't to allow a new type to work for an existing implementation of a function nor is it to take an existing type and write a new function that works with it. In the proposed solution you have `some_function` and the author claims that this solves the expression problem because you can take a new type C and pass it into some_function. Pretty much every language has a way to define new types and pass them into existing functions.

              The goal is to allow for that new type, C, to have its own implementation of `some_function` that is particular to C, as well as the ability to write new functions that can be specialized for existing types. In particular we want calls to `some_function` through an interface to call C's specific implementation when the runtime type of an object resolves to C, and calls whatever other implementations exist when called through another interface.

              The author's solution doesn't do that, it literally does nothing that you can't do in pretty much any other language.

        • wavemode 20 hours ago ago

          That's not the expression problem. The expression problem is the question of what is required to add new methods to an existing trait. In your example, it requires modifying the existing impls for all types implementing the trait.

          What you've done is different, you've simply added a new trait entirely. That's not unique to Rust, you can add new interfaces in any language...

          • ceronman 18 hours ago ago

            > That's not the expression problem.

            To me it looks like this is exactly the expression problem. The expression problem is not about adding methods to a trait, it's about adding an operation to a set of types. In this case every operation is defined by a single trait.

            The idea behind the expression problem is to be able to define either a new operation or a new type in such a way that the code stays nicely together. Rust's trait system accomplishes this beautifully.

            > That's not unique to Rust, you can add new interfaces in any language...

            Many languages have interfaces, but most of them don't allow you to implement them for an arbitrary type that you have not defined. For example, in Java you can create an interface called `PrettyPrintable`, but you can't implement it for the `ArrayList` type from the standard library. In Rust you can do this kind of thing.
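
            For example, a small sketch with a hypothetical trait name: a locally defined trait implemented for a standard-library type, which the orphan rule allows because the trait is local:

                trait PrettyPrintable {
                    fn pretty(&self) -> String;
                }

                // Vec<i32> comes from the standard library, yet we can implement our trait for it.
                impl PrettyPrintable for Vec<i32> {
                    fn pretty(&self) -> String {
                        self.iter().map(|n| n.to_string()).collect::<Vec<_>>().join(", ")
                    }
                }

                fn main() {
                    println!("[{}]", vec![1, 2, 3].pretty()); // prints "[1, 2, 3]"
                }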

          • kbolino 20 hours ago ago

            There is something in Rust that can help, though. You can define multiple impls for different bounds.

            Supposing we started off with a trait for expression nodes:

              pub trait ExprNode { fn eval(&self, ctx: &Context) -> Value; }
            
            Now, this library has gone out into the wild and been implemented, so we can't add new methods to ExprNode (ignoring default implementations, which don't help solve the problem). However, we can define a new trait:

              pub trait CanValidate { fn validate(&self) -> Result<(), ValidationError>; }
            
            And now we get to what is (somewhat) unique to Rust: you can define different method sets based upon different generic bounds. Suppose we have a parent Expr struct which encapsulates the root node and the context:

              pub struct Expr<N> { root: N, ctx: Context }
            
            We would probably already have this impl:

              impl<N: ExprNode> Expr<N> {
                pub fn eval(&self) -> Value { self.root.eval(&self.ctx) }
              }
            
            Now we just need to add this impl:

              impl<N: CanValidate + ExprNode> Expr<N> {
                pub fn validate(&self) -> Result<(), ValidationError> { self.root.validate() }
              }
            
            Of course, this is a trivial example (and doesn't even require the intersection bound), but it does illustrate how the expression problem can be solved. The problem this strategy creates, though, is combinatorial explosion. When we just have two traits, it's not a big deal. When we have several traits, and useful operations start to require various combinations of them (other than just Original + New), the number of impls and test cases starts to grow rapidly.

        • ceronman 18 hours ago ago

          I don't know why you're getting downvoted, but you are right. Rust's type system solves this in a very nice way. Maybe to clarify, we can show how to do the exact same example shown with Clojure multi-methods, but in Rust:

              struct Constant { value: i32 }
              struct BinaryPlus { lhs: i32, rhs: i32 }
              
              trait Evaluate {
                  fn evaluate(&self) -> i32;
              }
              
              impl Evaluate for Constant {
                  fn evaluate(&self) -> i32 { self.value }
              }
              
              impl Evaluate for BinaryPlus {
                  fn evaluate(&self) -> i32 { self.lhs + self.rhs }
              }
              
              // Adding a new operation is easy. Let's add stringify:
              
              trait Stringify {
                  fn stringify(&self) -> String;
              }
              
              impl Stringify for Constant {
                  fn stringify(&self) -> String { format!("{}", self.value) }
              }
              
              impl Stringify for BinaryPlus {
                  fn stringify(&self) -> String { format!("{} + {}", self.lhs, self.rhs) }
              }
              
              // How about adding new types? Suppose we want to add FunctionCall
              
              struct FunctionCall { name: String, arguments: Vec<i32> }
              
              impl Evaluate for FunctionCall {
                  fn evaluate(&self) -> i32 { todo!() }
              }
              
              impl Stringify for FunctionCall {
                  fn stringify(&self) -> String { todo!() }
              }

          • mrkeen 8 hours ago ago

            The only thing missing here is separation of files.

            Assuming the whole Stringify section goes into a new file (likewise with FunctionCall) then I agree that this solves the expression problem.

      • adastra22 21 hours ago ago

        Traits have methods. How is that not adding new functions?

        • Kranar 20 hours ago ago

          Classes in C++ have methods too.

          The problem is that you can't add new methods to an existing class.

          • mrkeen 8 hours ago ago

            So hypothetically, if you could add new methods to an existing class, it would solve the problem?

            • Kranar an hour ago ago

              New virtual methods, yes.

          • kbolino 20 hours ago ago

            More specifically, you cannot add new abstract (aka "pure virtual") methods to an existing class/interface/trait when that class/interface/trait is already extended/implemented by code you don't control.

          • bitwize 20 hours ago ago

            Rust lets you combine multiple traits per type.

            • Kranar 20 hours ago ago

              C++ lets you inherit from multiple classes as well. I don't see how this has anything to do with being able to add new methods to existing types.

              • tomjakubowski 20 hours ago ago

                There is an important difference: in Rust you can write a new trait, with new methods, and impl that trait for a type without having to change the existing structs/enums which comprise that type at all. You can also do this (with some restrictions, "the orphan rule") for types defined in another library.

                With C++ classes, you have to be able to modify the class definition to extend it with new methods (or with a new ancestor to multiply inherit from).

              • adastra22 18 hours ago ago

                You can add traits (with their methods) to existing types, without modification.

              • bitwize 19 hours ago ago

                C++ classes are types. Rust traits are applied to types.

        • naasking 19 hours ago ago

          > Traits have methods. How is that not adding new functions?

          If you add a method to a trait, every implementation of that trait has to be modified in the original source code. The point of the expression problem is that you want to be able to add new operations in a way that is type safe and checked for completeness, even if you don't have access to the original source code.

          Rust can probably do this, because its traits are equivalent to Haskell type classes and those have a solution to the expression problem, but it's not trivial. See the link in the article.

          • adastra22 18 hours ago ago

            In rust you would add a NEW trait, which is brought in scope by those who need it. Or add a new method to an existing trait, with a default implementation.

            The problem you describe makes no sense. It sounds like, for example, wanting to add a new variant to an enum while also not wanting to modify match statements which now fail exhaustiveness checking. That’s a direct contradiction.

            The only sensible charitable interpretation I can come up with is that you want to add new methods to a type without having to update any existing type definitions. This is what traits do.
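
            For reference, a minimal sketch of the "default implementation" route (made-up names). Note that the added method still lives in the trait's defining crate, which is the point of contention below:

                trait Expr {
                    fn eval(&self) -> i32;

                    // Added later with a default body: existing impls keep compiling,
                    // but per-type behaviour still requires editing those impls.
                    fn stringify(&self) -> String {
                        format!("{}", self.eval())
                    }
                }

                struct Constant(i32);
                impl Expr for Constant {
                    fn eval(&self) -> i32 { self.0 }
                }

                fn main() {
                    println!("{}", Constant(3).stringify()); // "3"
                }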

            • naasking 17 hours ago ago

              > In rust you would add a NEW trait, which is brought in scope by those who need it.

              No, that's not sufficient if done naively as I think you're describing. You seem to be missing the context of this discussion as described by the article. Other code already depends on the current compiled abstraction, and you want to extend it in various ways, safely, so that existing code continues to work without recompilation, but you can extend the abstraction that code is already safely handling with new types and new operations without modifying the original source code.

              If you want to add a new node type to an AST, with algebraic data types you would have to modify the original source code of the original functions that match on those types, so types are closed to extension. You can leave the type open to extension via traits, but now the operations are closed to extension without modifying the original source code.

              > It sounds like, for example, wanting to add a new variant to an enum while also not wanting to modify match statements which now fail exhaustive testing. That’s a direct contradiction.

              No it's not, if enums and match statements were safely open to extension rather than closed. That's exactly what would solve the expression problem. This can and has been done [1] as the article described, via multimethods or via lifting data constructors to the type class level. It's obtuse and awkward, but in theory it doesn't have to be.

              The ultimate goal is that the new type and/or pattern matching cases have to be provided together to cover all of the missing combinations, but importantly "together" means by the time the final program is assembled and not in the source code that defined the original types or operations.

              [1] open pattern matching and open algebraic data types have also been done in research languages

              • adastra22 15 hours ago ago

                > No, that's not sufficient if done naively as I think you're describing. You seem to be missing the context of this discussion as described by the article. Other code already depends on the current compiled abstraction, and you want to extend it in various ways, safely, so that existing code continues to work without recompilation, but you can extend the abstraction that code is already safely handling with new types and new operations without modifying the original source code.

                That is exactly what a new trait would accomplish. I remain confused as to the distinction you are making.

                • naasking 13 hours ago ago

                  Read the Haskell-specific follow-up which I mentioned; type classes have a correspondence with traits, so the problems with this approach are the same:

                  https://eli.thegreenplace.net/2018/more-thoughts-on-the-expr...

                  • adastra22 10 hours ago ago

                    Unfortunately I don’t speak Haskell. As none of the terminology used by Haskell folks aligns with languages I do know, I really can’t make heads or tails of that article.

                    • internet_points 8 hours ago ago

                      There's data definitions:

                          data Constant           = Constant Double deriving (Show)
                      
                      is a bit like `#[derive(Debug)] struct Constant(f64);`, ie. it's just a wrapper around a double and you can print it.

                      And there's typeclasses. Show is a typeclass. You can think of them as interfaces, so `instance DB Sqlite where open dsn = ...` is a bit like saying Sqlite is an implementation (instance) of the DB interface (typeclass). The typeclass itself could be defined like `class DB db where open :: String -> IO db`, meaning the interface requires a function taking a string and having IO access (and of course you can require more than one function in your interface/typeclass).

                      The article also uses typeclasses with parameters. Parameters (and functions and variables) are written lowercase, while classes and constructors and such are capitalized, so

                          class (Expr e) => Stringify e where
                            stringify :: e -> String
                      
                          instance Stringify Constant where
                            stringify (Constant x) = show x
                      
                      means there's an interface Stringify that's parametrized over some type e (and that e in turn has to be in the Expr typeclass).

                      • adastra22 7 hours ago ago

                        Thank you. Now I understand the article. And the problem identified for type classes isn’t an issue in Rust. You’d return a ‘dyn Trait’ and be done. It means the object would have to be heap allocated, but that is already a given because you are wanting to define a function that can work with any type including future defined types. I guess Haskell doesn’t support the equivalent of trait objects for type classes?

                        I’m not trying to prove that rust is better or something. People who do that are annoying. It’s just weird to me that this is being presented as a fundamental and largely unsolved challenge when there is a simple solution at the heart of a widely deployed and well known language, which in turn stole it from elsewhere.

                        • naasking an hour ago ago

                          dyn traits have quite a few restrictions if I'm reading the docs right. Like, dispatchable functions can't even have type parameters [1]. I wouldn't exactly call that "solved", although maybe a language with traits and dyn traits that used GC wouldn't have such onerous restrictions.

                          [1] https://doc.rust-lang.org/reference/items/traits.html#dyn-co...

                    • gf000 7 hours ago ago

                      I tried to translate the article to Rust, but it seems that the article's gotcha in Haskell is not necessarily an issue in Rust if we return `dyn OurTrait`, so now I'm even more confused. If anyone could take a look at what I'm missing, that would be welcome:

                      ------------

                      Say, you want to write a simple language interpreter.

                      You have an `Expr` enum (sum type), with say, a `Constant(f64)` and a `BinaryPlus(Expression, Expression)`.

                      You can easily add a new function expecting an `Expr` indeed, but if you were to add a new variant to the `Expr` enum, you would have to go through the code and change every use site.

                      You can solve the issue by simply making a `struct Constant` and a `struct BinaryPlus`. Now you can just define a new trait for both of them, and you can use that `dyn trait` in your code -- you can add new functions and also new types without code change at use site!

                      So what's the issue?

                      In Haskell, a logic like

                      ```

                      func example(runtime_val: f64) -> Expr {

                        if (runtime_val is someRuntimeCheck())
                          return Constant(..)
                      
                        else
                          return BinaryPlus(.., ..)
                      }

                      ```

                      can't compile in itself as `Expr` is a type class (=trait). Basically, in this mode Haskell expects a concrete implementation (we actually get the exact same behavior with `impl Expr` in Rust), but here is my confusion: this can be circumvented in Rust with dyn traits.

                      Here is my (very ugly due to just hacking something together while pleasing the borrow checker) code showing it in Rust: https://play.rust-lang.org/?version=stable&mode=debug&editio...

                      • adastra22 7 hours ago ago

                        Your confusion is my confusion: Rust supports this.