Elvish – Scripting language and interactive shell

(github.com)

79 points | by kartikarti 6 days ago

58 comments

  • Arcuru 2 days ago

    I've been using fish for many years now though I keep trying all these new shells.

    Ultimately I've found that for my interactive shell I just want something widely supported and with easy syntax for `if` and `for` loops for short multi-line commands. For anything longer than that I just reach for real Python using either the `sh` or `plumbum` package.

    I don't need the extra features very often, so I just run things in full Python where I'm already comfortable.

    I've tried oils/ysh, elvish, xonsh, nushell, and while they are _fine_ I don't want to learn a different language that's not quite Python and not quite shell.

    • xk3 a day ago

      I'm also a fish fan and largely agree--but if you are forced to use Windows and you don't care for WSL, nushell is likely your best option. It's pretty good! Almost feels like Unix but you're in Windows. I don't think fish shell will be ported over to Windows anytime soon...

  • photonthug 3 days ago

    While removing the weird stuff behind daily bash annoyances is interesting, I'm not necessarily looking to replace it with brand new but also pretty random weird stuff. Adding new rules isn't the same as adding structure. The documentation is also frequently strange in a way that makes it hard to digest. From https://elv.sh/learn/first-commands.html#external-commands

    > While Elvish provides a lot of useful functionalities as builtin commands, it can’t do everything. This is where external commands come in, which are separate programs installed on your machine. Many useful programs come in the form of external commands, and there is no limit on what they can do. Here are just a few examples: Git provides the git command to manage code repositories

    At first I thought, wait, is this a shell or not, do I have to write code or something to get access to normal external commands? But no, this is more like going to a car dealership and having the salesman say "Hey thanks for coming by, a car is a mechanical device consisting of metal parts and rubber parts for the purpose of taking you where you need to go! Now that we're on the same page about that, money is a thing made of paper for the purposes of .."

    Docs are hard; once or twice is fine, but lots of parts are like this and I gave up reading. Not sure if it's AI generated, but if the project is doing that then it should stop, and if it's not doing that it should consider starting to.

    • nneonneo 3 days ago

      I mean, you are literally reading the first chapter of the tutorial for beginners (“Beginner's Guide to Elvish is for you if you haven’t used shells a lot or want to brush up on the basics”).

      They have a separate set of docs for people who do have some experience with other shells (https://elv.sh/learn/); you may find the quick tour more suitable for your speed: https://elv.sh/learn/tour.html

      • photonthug 3 days ago

        I did browse around, that's the page I got the first part of my comment from. Modules are one example of something that sounds probably good (https://elv.sh/ref/language.html#modules ). Good stuff is really weakened though by the many random changes that seem to go from arbitrary to.. also arbitrary, while destroying any chance of readability, backwards compatibility, or interoperability. Why?

        > Line continuation in Elvish uses ^ instead of \

        > Bash: echo *.[ch] vs Elvish: echo *.?[set:ch]

        One more example, guess what this does: `echo &sep=',' foo bar`. Is it bash, elvish? Some combination of the two with markdown? Legal in all three? Elvish certainly cleans up conditionals and stuff, but you probably shouldn't introduce new things with exactly the same name unless you've created a genuine superset/dialect where the older version still works without a rewrite. Namespace it as elvish.echo or use your module system. Shadows aren't friendly; this is equivalent to the guy who monkey-patches sys.stderr=sys.stdout to work around their one-off problem.

        • layer8 2 days ago

          Elvish is designed for use on Windows as well, where \ is the directory separator and it wouldn't be uncommon for one to occur at the end of a command line, so that can't be used for line continuation. Windows (and DOS?) batch files use ^ for line continuation, so that's probably where it was adopted from.

  • rendaw 2 days ago

    I went down this route based on HN recommendations, with some people calling it stable and well documented.

    There are TODOs all over the documentation! There are no background task tools for scripting, and in interactive use background tasks are barely supported - an issue about background tasks has people going roughly "nobody needs to do tasks in parallel, that was only important when people were working on mainframes". The shell hooks have (undocumented) weird restrictions. Lazy iteration of lists is only supported by using functions with callbacks. Stable = development appears stopped.

    This is half baked and dead. For my new computer I really really wanted a lightweight new shell with orthogonal syntax thought out from the ground up and not glued together over 4 decades, and this seemed like the closest option! But this isn't it.

    • hnlmorg 2 days ago

      There’s also a lot of good design that’s gone into Elvish. And I don’t think it’s fair to call it “dead” when the maintainers of Elvish are active both on GitHub and here on HN (and probably other places too).

      However if you’re looking for an alternative then there’s:

      - Murex (disclaimer: I’m one of the maintainers), which does support background processes and has extensive documentation. https://murex.rocks

      - Nushell: I’m not personally a fan of its design choices, but it has a large following of people who really enjoy it, so it might appeal to you too.

      As for Elvish, I do encourage others to give it a go themselves. It’s really well thought out and what might be a deal breaker for some people isn’t for others.

      • rendaw 2 days ago

        Elvish had some very cool ideas, which is why I tried it out! Like the built in script checker! But it also has a lot of very basic issues that have been open for years, and TODOs in the documentation as I mentioned. People are going to read your message and put N hours into it and get burned, and I think this is a fair warning.

        Nushell also had very minimal background task support, so I rejected that. They explicitly say use some other program for background tasks in their docs.

        I actually looked at Murex after seeing it in previous threads, but I bounced for some reason... I just took another look though, skipping the tutorial, and I see you have `bg` and `fg` support! But does `bg` return the `fid`? Can you use those in scripts, or are they hobbled the same way bg/fg are in bash?

        It's been a good 4-5 months since I went down this rabbit hole, but IIRC the basic things I wanted to do and got blocked in multiple shells were:

        - System-wide interactive-use config file, I use Nixos and manage my system config using that

        - Background task support - I need to start an ssh tcp proxy, pipe a command over it, then kill ssh once the command is done (all in a script).

        - Post-command hook, to send a notification when a long command finishes

        - Async iteration of command output, i.e. streaming events with swaymsg subscribe and running a command when certain events occur

        - Value/call arity safety - i.e. a clear distinction between a single value and multiple values that doesn't rely on stringification hacks. I.e. in `command $x` `command` should always have one argument, regardless of the contents of `x`, and making that plural should be explicit.

        And then other standard evaluation criteria, like I looked at xonsh but it seemed like a massive hack, despite handling a lot of the above.

        • hnlmorg 2 days ago

          > does `bg` return the `fid`

          There are two kinds of process IDs in Murex: FID (function IDs) and PID (process IDs).

          Forking is expensive in POSIX and has a number of drawbacks, such as the inability to share scoped variables without resorting to environment variables. So a FID is basically a PID but managed inside the scope of Murex's runtime. You can manage FIDs in much the same way as you can manage PIDs, albeit using Murex builtins rather than coreutils (though PID management tools in Bash are technically builtins rather than coreutils too).

          What this means in practice is you can have entire blocks of code pushed into the background, eg

              » GLOBAL.name = "rendaw"
              » bg { sleep 5; echo "Hello $name" }; echo "not bg"
              not bg
              Hello rendaw
          
          You can see the FID as well as the job ID the usual way, via `jobs`

              » jobs
              JobID  FunctionID  State      Background  Process  Parameters
              %1     2109        Executing  true        exec     sleep 5
          
          ...and you can kill that entire `bg` block too

              fid-kill 2109
          
          But you'd also see any non-builtins in `ps` too:

              » ps aux | grep sleep
              hnlmorg   72749   0.0  0.0 410743712   1728 s012  S+    4:24p.m.   0:00.00 /usr/bin/grep --color=auto sleep
              hnlmorg   72665   0.0  0.0 410593056    432 s012  S+    4:23p.m.   0:00.00 /bin/sleep 5
          
          
          > Can you use those in scripts, or are they hobbled the same way bg/fg are in bash?

          While the above seems very complicated, the advantage is that `bg` and `fg` become much more script friendly.

          > - System-wide config file, I use Nixos and manage my system config using that

          This isn't Murex's default behaviour but you could easily alter that with environment variables: https://murex.rocks/user-guide/profile.html#overriding-the-d...

          The latest version of Murex (v7.0.x), which is due to be released in the next few days, makes this even easier with a $MUREX_CONFIG_DIR var that can be used instead of multiple specific ones.

          > - Background task support - I need to start an ssh tcp proxy, pipe a command over it, then kill ssh once the command is done (all in a script).

          Murex has another layer of support for piping in addition to those defined in POSIX, which are basically channels in the programming language sense. In Murex they're called "Murex Named Pipes", but the only reason for that is that they can be used as glue for traditional POSIX pipes too. This is one area where the documentation could use a little TLC: https://dev.murex.rocks/commands/pipe.html

          > - Post-command hook, to send a notification when a long command finishes

          There are two different events you can hook into here:

          - onPrompt: https://dev.murex.rocks/events/onprompt.html

          This is similar to Bash et al prompt hooks

          - onCommandCompletion: https://dev.murex.rocks/events/oncommandcompletion.html

          This hooks into any command name that's executed. It runs the command in a new TTY and buffers the command's output. So, for example, if you want a command like `git` to automatically perform a task if `git push` fails with a specific error message, then you can do that with onCommandCompletion.

          > - Async iteration of command output, i.e. streaming events with swaymsg subscribe and running a command when certain events occur

          I'd need to understand this problem a little more. The channels / Murex Named Pipes above might work here. As might onCommandCompletion.

          > - Value/call arity safety - i.e. a clear distinction between a single value and multiple values that doesn't rely on stringification hacks. I.e. in `command $x` `command` should always have one argument, regardless of the contents of `x`, and making that plural should be explicit.

          This one is easy: scalars are always $ prefixed whereas arrays are @ prefixed. So take the following example:

              array = %[ a b c ]
              
              » echo $array
              ["a","b","c"]  # a single parameter representation of the array
          
              » echo @array
              a b c          # the array expanded as values
          
          -----

          This is quite a lengthy post but I hope it helps answer a few questions.

          • rendaw a day ago

            Thanks for the answer! I think we may be talking past each other a bit, but it's good to get confirmation on a lot of those!

            AFAICT named pipes have nothing to do with ssh tcp proxies, or at least that bit is tangential to the key point - the key point I was making is that ssh is running in the background while I'm running another command (with no relation between them, as far as the shell is concerned).

            You didn't answer my question about if `bg` returns a `fid` or not, and the documentation doesn't answer this either, nor did you say if those could be used in scripts... but it sounds like you're saying I have to parse `jobs` to get the `fid` of the command I just launched?

            > Async iteration

            This isn't about objects but about how operators evaluate. Maybe "streams" or "async streams" would be a better description? Actually, maybe this is where the named pipes you mentioned would be useful?

                while true; do echo hi; sleep 1; done | while read line; do echo $line 2; done
            
            in bash, prints "hi 2" once a second. That is, the body of the while loop is executed asynchronously with the pre-pipe command.

            AFAICT elvish doesn't have a `read` command, and `for` waits for the argument to complete before executing the body at all. This is a pretty common pattern, so I was surprised when there's no mention of it in the elvish docs. I don't like `read` in bash since it's yet another completely different syntax, but maybe Murex has something similar?

            TBH I just looked at the Murex docs again and I'm lost. Where's the reference? There's the "user guide" which contains the "user guide" (again) and also the "beginners guide" which to me are synonyms, "cheat sheet" and "read more" which both appear to be a bunch of snippets (again synonyms?), "operators and tokens" which are a reference of just a subset of the language, etc etc. I couldn't find anything on loops, which I'd expect to be a section somewhere. Clicking on "read / write a named pipe" in "builtin commands" somehow teleports me to an identically named page in a completely different "operators and tokens" section.

            In the end the read/write pipes page only shows examples of reading/writing pipes to other pipes, not using them in a for loop or anything. One example appears to use some unique syntax in `a` to send array elements to the pipe, but I couldn't find anything about that in the `a` section when I finally found the `a` section.

            Also are named pipes always global? Why can't they be local variables like other structures?

            It's great you have so much documentation, but I think it actually turned me away - it seems bloated with every bit of information split into multiple pieces and scattered around. Simplifying, consolidating, and reorganizing it from the top might help.

            > Arity safety

            Thanks! Yeah, I wasn't talking about arrays here, but if e.g. `x` in my example contains space-separated "these three words", will `command` receive 3 arguments or 1. In bash, `command` would get 3 arguments, if you don't invoke the variable quote hack. I just tried this out though, and it gets 1 argument, so great!

            • hnlmorg a day ago

              > You didn't answer my question about if `bg` returns a `fid` or not, and the documentation doesn't answer this either, nor did you say if those could be used in scripts... but it sounds like you're saying I have to parse `jobs` to get the `fid` of the command I just launched?

              Everything in the documentation can be used in scripts.

              `bg` doesn't write anything to stdout. The documentation does actually answer that if you look at the Usage section. In there it doesn't list <stdout>. eg compare `bg` to other builtins and you'll see the ones that do write to stdout have `-> <stdout>` in the usage. Also none of the examples show `bg` writing anything to stdout.

              You wouldn't want `bg` to write the PID to stdout anyway, because then it would be harder to use in scripts: you'd need to remember to pipe stdout to null or risk contaminating your script output with lots of random numbers.

              However the good news is you can still grab the PID for any forked process using <pid:variable_name>. eg

                  » bg { sleep <pid:MOD.sleep_pid> 99999; echo "Sleep finished" }
                  » kill $MOD.sleep_pid
                  Sleep finished
              
              ($MOD is required because `bg` creates a new scope, so any local variables created in there wouldn't be visible outside of the `bg` block. MOD sets the scope to module level; GLOBAL and ENV are also supported.)

              It's also worth reiterating my previous comment that `bg` blocks are not separate UNIX processes but instead just different threads of the existing Murex process. In fact all Murex builtins are executed as threads instead of processes. This is done both for performance reasons (UNIX processes are slow, threads are fast...relatively speaking) and because you cannot easily share data between processes. So if `bg` was a process, it would be impossible to grab the PID of `sleep` and then share it with the main process without then having to write complex and slow IPCs between each UNIX process.
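
              As a rough illustration of that model (plain Go, not Murex's actual code): a "builtin" can just be a goroutine inside the shell's own process and read shared state directly, whereas an external command is a separate forked process that can only receive data via arguments, environment variables, or pipes.

                  // Illustrative sketch only, not Murex internals.
                  package main

                  import (
                      "fmt"
                      "os"
                      "os/exec"
                      "sync"
                  )

                  func main() {
                      shared := "in-process state" // visible to anything in the same process

                      var wg sync.WaitGroup
                      wg.Add(1)
                      go func() { // a "builtin" pushed into the background, like `bg { ... }`
                          defer wg.Done()
                          fmt.Println("builtin sees:", shared) // no IPC required
                      }()

                      // An external command is a real fork/exec, so data has to cross the
                      // process boundary via argv, the environment, or pipes.
                      cmd := exec.Command("echo", "external command")
                      cmd.Stdout = os.Stdout
                      _ = cmd.Run()

                      wg.Wait()
                  }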

              > That is, the body of the while loop is executed asynchronously with the pre-pipe command.

              Are you just talking about each command in the pipe executing concurrently? That's the default in Murex. For example, the following would only work if each command is working as a stream:

                  tail -f example.log | grep something | sed -e s/foo/bar/ | tee -a output.log
              
              The loop example could be achieved via `foreach` too. eg

                  while { true } { echo hi; sleep 1 } | foreach line { echo "$line 2" }
              
              The only time `foreach` wouldn't stream is with data-types that aren't streamable, such as JSON arrays. But that's documented in the JSON docs (https://murex.rocks/types/json.html#tips-when-writing-json-i...)

              > TBH I just looked at the Murex docs again and I'm lost. Where's the reference? There's the "user guide" which contains the "user guide" (again) and also the "beginners guide" which to me are synonyms, "cheat sheet" and "read more" which both appear to be a bunch of snippets (again synonyms?), "operators and tokens" which are a reference of just a subset of the language, etc etc. I couldn't find anything on loops, which I'd expect to be a section somewhere. Clicking on "read / write a named pipe" in "builtin commands" somehow teleports me to an identically named page in a completely different "operators and tokens" section.

              I'd generally refer people to the language tour to begin with: https://dev.murex.rocks/tour.html It's pretty visible on the landing page, but it sounds like we could do more to make it visible elsewhere on the site. I can take that feedback and work to make the tour more prominent in the menus too (the pages you described do also link to the language tour, but clearly not in a visible enough way).

              > Also are named pipes always global? Why can't they be local variables like other structures?

              The only reason for that is that named pipes were one of the original constructs in Murex. They came before modules, scoping, and so forth. It would be possible to make them scopeable too, but careful thought needs to be given to how to achieve that in a backwards-compatible yet intuitive way. And there have been other requests raised by users and the other contributors that have taken priority.

              > It's great you have so much documentation, but I think it actually turned me away - it seems bloated with every bit of information split into multiple pieces and scattered around. Simplifying, consolidating, and reorganizing it from the top might help.

              We'd welcome some recommendations here. Organising documentation is really hard. Even more so when the people who organise it aren't the same people who are expected to depend upon it.

              > Thanks! Yeah, I wasn't talking about arrays here, but if e.g. `x` in my example contains space-separated "these three words", will `command` receive 3 arguments or 1. In bash, `command` would get 3 arguments, if you don't invoke the variable quote hack. I just tried this out though, and it gets 1 argument, so great!

              I know you weren't talking about arrays, but the next question people ask is "how do I now expand one variable to be multiple parameters?"

      • em-bee a day ago

        could you go into more detail how (and maybe why) murex and elvish differ?

        also, i'd like to know more about how the job control works. that's one of the pain points in elvish, but both are written in go, so maybe there are some ideas that elvish could copy.

        • hnlmorg 15 hours ago

          Murex and Elvish share a lot of similar design goals. The author of Elvish has done a lot of great talks about his approach to Elvish development and you can tell a lot of care has gone into its design.

          With Murex, I initially took a "let's just experiment until I find something that works" approach, with no fear of writing ugly proof-of-concept code. This allowed me to build a lot of stuff very quickly, and originally it was written in a self-hosted git repository to solve my own problems. But as the project evolved I realised there was some good stuff in there that's worth sharing. The downside to this approach is that there is some ugliness to its design that has lasted even to the latest version due to Murex's compatibility promise. However, the latest version of Murex does provide an internally versioned runtime, which means scripts can now pin to a specific version of Murex and not worry about gradual changes over time (even if "gradual over time" in this context literally means "years of compatibility" even before the versioned runtime).

          This means that Murex and Elvish might feel like very different shells despite being conceptually quite similar.

          I'm a little reluctant to give specific areas where the two shells diverge because both are under active development and thus moving targets. So what might be true today might not be true tomorrow. However, I will say the syntax for each does vary significantly despite being superficially similar.

          As for job control, this was part of Murex's early design because it's a feature I used heavily at the time. So the concept of background and foreground processes is woven throughout all of the core runtime. Like with Elvish, Murex doesn't create new UNIX processes for builtins. And with commands that are forked processes, Murex doesn't hand over complete ownership of the TTY to them, so that Murex can still catch the signals. The reason for the latter is that Murex can then add additional hooks to job control, such as returning a list of open files any stopped processes have opened, and how far through reading those files it is. So Murex has needed to re-implement some of the job control logic that would normally be handled by the POSIX kernel. This does result in a lot of additional code, and thus places for things to go wrong. On balance, I think I made the right tradeoff for Murex. However, if I were to write an entirely new shell from the ground up, I'd probably not do it this way again.

          • em-bee 10 hours ago

            i really like your approach to job control. my hope is that elvish can implement something similar. i am hopeful in that you already managed to overcome the challenges go introduces here, so the elvish devs can potentially take advantage of that.

            • hnlmorg 6 hours ago

              The limitations here aren’t due to Go. You can set a forked process's pgid and ctty, which are the two key pieces you need to define to “correctly” support job control.

              And in fact Go actually makes it very easy to both catch job control signals raised by the kernel and set those aforementioned parameters when calling the fork syscall.
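
              For illustration only (a generic sketch, not the Murex code), the signal-catching side of that is just:

                  // Sketch: a shell catching job-control signals from the kernel.
                  package main

                  import (
                      "fmt"
                      "os"
                      "os/signal"
                      "syscall"
                  )

                  func main() {
                      sigs := make(chan os.Signal, 1)
                      signal.Notify(sigs, syscall.SIGTSTP, syscall.SIGCONT, syscall.SIGCHLD)
                      for s := range sigs {
                          fmt.Println("job control signal:", s) // suspend / resume / reap here
                      }
                  }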

              The real problem here is that we don’t actually want POSIX-compliant job control, because that would mean builtins inside hot paths would perform significantly worse and we'd lose the ability to easily and efficiently share data between commands, such as type annotations, localised variables, etc.

              The lack of type annotations is a particularly hard problem to solve and also the main reason to use an alternative shell like Murex or Elvish. In fact I’d say having type annotations work across commands is more important than job control.

              So the end result is having to replicate a lot of what you would normally get for free in POSIX kernels, except this time running inside your shell. In places you’re basically writing kludge after kludge. But whenever I despair about the ugliness of the code I’ve written, I remind myself that this is all running on a 40-year-old emulation of mechanical teletypes. So the whole stack is already one giant hot ball of kludges.

              • em-bee 5 hours ago

                the challenges in go is a reference to a discussion in the elvish chat where one participant claimed that go's os.StartProcess() API makes it impossible to implement unix job control with 100% fidelity.

                i don't actually know what the issue there is, and maybe there is a way to avoid using os.StartProcess but the point is that murex is not implementing POSIX compliant job control. and that is one way to get around any issues that may exist.

                and now having learned how murex handles job control i am happy that elvish avoided implementing POSIX job control so far because this allows rethinking how to approach this.

                > this is all running on a 40-year-old emulation of mechanical teletypes

                isn't that the real issue right there?

                i have been wondering if it is possible to get rid of that emulation layer and provide a richer way for programs to interact with the user.

                we'll never be able to get rid of the emulation completely, but i wonder if the position in the stack can be moved.

                right now it is:

                    GUI
                    GUI application that emulates a 40 yr old terminal
                    shell
                    programs running in the shell
                
                how about:

                    GUI
                    GUI application for commandline programs that provides a rich interface
                    modern shell that runs on that interface
                    modern programs running in the shell
                    or terminal emulation for legacy programs that need it
                       legacy programs running with an emulation layer.
                
                the emulation layer could be started by the shell as needed.

                to get that emulation layer we only need to port something like tmux onto that new api. there is also a layer that implements job-control for shells that don't support it: https://github.com/yshui/job-security so this can be done without having to reimplement the emulation yet again

                • hnlmorg 3 hours ago

                  > the challenges in go is a reference to a discussion in the elvish chat where one participant claimed that go's os.StartProcess() API makes it impossible to implement unix job control with 100% fidelity.

                  That's not true. os.StartProcess() takes a pointer to https://pkg.go.dev/os#ProcAttr which then takes a pointer to https://pkg.go.dev/syscall#SysProcAttr

                  If I recall correctly, either Setctty or Foreground needs to be set to true (I forget which offhand, possibly Foreground, but browsing StartProcess()'s source should reveal that).

                  I don't actually do that in Murex, because I want to add additional hooks to SIGTSTP and I can't do that if I hand ownership of the TTY to the child process.

                  https://github.com/lmorg/murex/blob/master/lang/exec_unix.go...

                  But that means that some tools like Helix then break job control in Murex because they don't think Murex supports job control (due to processes being marked non-traditionally). You can see higher up in that file above where I need to force Murex to take ownership of the TTY again as part of the process clean up. (line 28 onwards)

                  Processes invoked from a shell should also be part of a process group:

                  https://github.com/lmorg/murex/blob/master/lang/exec_unix.go...

                  You also need to set the shell to be a process session leader:

                  https://github.com/lmorg/murex/blob/master/shell/session/ses...

                  All of this is UNIX (inc Linux) specific so you can see compiler directives at the top of those files to exclude WASM, Windows, and Plan 9 builds. I don't even try to emulate job control on those platforms because it's too much effort to write, test, and maintain.
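
                  To make the above concrete, here's a rough standard-library-only sketch of the "textbook" path (illustrative only; as noted above, Murex deliberately does not hand the TTY over like this, and the command path is just a placeholder):

                      // Sketch of classic job-control attributes via os.StartProcess (Unix only).
                      package main

                      import (
                          "os"
                          "syscall"
                      )

                      func main() {
                          tty := int(os.Stdin.Fd())

                          // The shell puts itself into its own process group; session leadership
                          // is normally arranged when the shell is started on the terminal.
                          _ = syscall.Setpgid(0, 0)

                          attr := &os.ProcAttr{
                              Files: []*os.File{os.Stdin, os.Stdout, os.Stderr},
                              Sys: &syscall.SysProcAttr{
                                  Setpgid:    true, // child gets its own process group
                                  Foreground: true, // make that group the TTY's foreground group
                                  Ctty:       tty,  // fd of the controlling terminal
                              },
                          }
                          proc, err := os.StartProcess("/bin/sleep", []string{"sleep", "5"}, attr)
                          if err != nil {
                              panic(err)
                          }
                          _, _ = proc.Wait()
                          // Once the child exits or stops, the shell has to take the TTY back
                          // (TIOCSPGRP), which is the cleanup step mentioned above.
                      }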

                  > i have been wondering if it is possible to get rid of that emulation layer and provide a richer way for programs to interact with the user.

                  Funny enough, this is something I'm experimenting with in my terminal emulator, though it's very alpha at the moment: https://github.com/lmorg/Ttyphoon

                  > to get that emulation layer we only need to port something like tmux onto that new api

                  My terminal emulator does exactly this. It uses tmux control mode so that tmux manages the TTY sessions and the terminal emulator handles the rendering.

                  > https://github.com/yshui/job-security so this can be done without having to reimplement the emulation yet again

                  The problem with a 3rd party tool for job control is that it doesn't work with shell builtins. And the massive value add for shells like Murex and Elvish is their builtins. This is another reason why I didn't want POSIX job control in Murex: I wanted to keep my shell builtins powerful but also allow them to support job control in a way that feels native and transparent to the casual user.

      • ghthor 2 days ago

        There is ysh

    • rahen 2 days ago

      If by any chance you're an Emacs user, check out Eshell. It blends Elisp macros with shell commands, and since it keeps the buffer model, you can use all the usual Emacs tools for searching, sorting, and more. It's a unique shell with some learning curve, but it's mature and powerful.

      https://www.youtube.com/watch?v=9xLeqwl_7n0

    • onli 2 days ago

      Oil shell (now oils) was too close to bash for your goal?

      • chubot 2 days ago

        FWIW Oils has two modes, and I wrote new landing pages for them recently:

        Nine Reasons to Use OSH - https://oils.pub/osh.html - it runs existing shell scripts, ...

        What is YSH? - https://oils.pub/ysh.html - It's the ultimate glue language, like shell + Python + JSON + YAML, seamlessly put together

      • rendaw 2 days ago

        I think I didn't look at it initially because it was too close to bash, and then by the time I burned out on fully reimagined shells I fell back to zsh which was the shell I knew supported post-command hooks. Definitely not a final decision, but it might be a while before I try new shells again...

    • rendaw 2 days ago

      Arrg, s/lazy/async/.

      Just to add some further qualification, I was fully prepared to learn something from the ground up, throw away all my preconceptions, and give some weirdness a try - including no string interpolation. I wanted to 100% replace bash, both as a shell and for scripting everywhere. I was exactly Elvish's target user.

    • em-bee 2 days ago

      it's still under development, but it most certainly isn't dead. it's stable in that development is not disruptive. i use it as a daily driver.

      you are right about lack of support for job control, it's annoying. but my understanding is that the problem seems to be a difficulty in implementing job control with go. when people say nobody needs parallel tasks that doesn't make sense because you can run jobs in the background. you just have to do it explicitly before starting, and you can't switch back and forth. yes, that's a problem, and for me it is one of the most annoying missing features. but it comes up seldom enough that it doesn't disrupt daily use for me. which is to show that the things i need for daily use are all there.

    • nerdponx 2 days ago

      The big one for me is no string interpolation, as a deliberate design choice.

      • em-bee 2 days ago

        what can string interpolation do that i can't also do by sandwiching a variable between strings: 'string1'$var'string2'?

        string interpolation is useful where concatenating strings requires an operator, but i don't see the benefit otherwise.

        for more complex examples i can use printf, or someone could write a function that does string interpolation. since there is no need to fork, that should not be that expensive

        • layer8 2 days ago

          I mostly agree, but in Bash for example "string1${var}string2" is guaranteed to be a single argument, which 'string1'$var'string2' isn't (when $var contains whitespace). So it entails certain other language design choices.

          • em-bee 2 days ago

            in bash yes, so that makes sense. thanks for pointing that out. in elvish $var is guaranteed to remain one string, and so 'string1'$var'string2' is always going to remain one argument too.

  • dang 2 days ago

    Related. Others?

    Elvish, expressive programming language and a versatile interactive shell - https://news.ycombinator.com/item?id=40316010 - May 2024 (114 comments)

    Elvish Scripting Case Studies - https://news.ycombinator.com/item?id=39549902 - Feb 2024 (1 comment)

    Elvish is a friendly interactive shell and an expressive programming language - https://news.ycombinator.com/item?id=24422491 - Sept 2020 (49 comments)

    Elvish: a shell with some unique semantics - https://news.ycombinator.com/item?id=17987258 - Sept 2018 (1 comment)

    Elvish 0.11 released - https://news.ycombinator.com/item?id=16174559 - Jan 2018 (1 comment)

    Elvish: friendly and expressive shell for Linux, macOS and BSDs - https://news.ycombinator.com/item?id=14698187 - July 2017 (86 comments)

    Elvish – An experimental Unix shell in Go - https://news.ycombinator.com/item?id=8090534 - July 2014 (75 comments)

  • linsomniac 2 days ago

    I've been eyeing a "better shell" for a while, but I've just decided that with a couple of zsh plugins I'm probably happiest. As the meme says, "Change my mind".

    I've been using fish for the last year or more, and I like some of the "batteries included", particularly the predicting of the command you want to run. But fish is too much like bash in syntax, meaning that I just think of it like bash until I have to type "(foo)" instead of "$(foo)", or "end" instead of "fi". The zsh plugins for doing command predicting and a fancy prompt seem to get me all the fish benefits with none of the rough spots. And, frankly, the changes fish makes don't seem to have any benefit (what is the benefit of "end" over "fi"?).

    Even xonsh (I'm a huge Python fan) doesn't really have enough pull for me to stick with it. Oils, nu, elvish, they all have some benefits for scripting, but I can't see myself switching to them for interactive use.

    It's kind of feeling like zsh is "good enough" with no real downsides. Maybe this is mostly that I've been using sh/ksh/bash/zsh for 40 years; some of these other shells might be easier to switch to if you lack the muscle memory?

    • 3PS 2 days ago

      > But fish is too much like bash in syntax, meaning that I just think of it like bash until I have to type "(foo)" instead of "$(foo)", or "end" instead of "fi"

      Note that fish does also support bash's "$(foo)" syntax and has for a few years now.

      • em-bee 2 days ago

        supporting more and more bashisms is what makes fish less attractive for me. i used fish for years. $(foo) in bash forks a subshell. in fish it doesn't. i am not a fan of supporting different syntaxes to do the same thing. if they had implemented $() to fork a subshell, that might have made some sense, but otherwise it is just redundant. learning to use () instead of $() or `` really isn't hard. so why?

        • linsomniac a day ago

          > really isn't hard. so why?

          Fair question. For me, it's extra friction whenever I copy a shell snippet that includes these non-fishisms, or when I'm running things between my workstation and the nearly 200 machines I manage, and I don't want to force my coworkers to have fish as the default root shell, or have to remember to "sudo --shell" or set up aliases. Well, plus, I'm still not entirely sold on fish, so I haven't wanted to set it up on my whole fleet.

          I just recently switched my cordless tool ecosystem at home for DIY work. There's something about having tools that I'll reach for because they're a joy to work with, rather than avoiding picking them up because of rough edges.

      • linsomniac 2 days ago

        Ooh, good to know!

    • tasuki 2 days ago

      What plugins? And where is your zsh config?

      (Ftr, I've been using zsh for maybe 5-8 years, managed to avoid oh-my-zsh, and only use 'zsh-autosuggestions' and 'zsh-syntax-highlighting' plugins. I've customised a theme to suit me, but barely know anything about zsh to be honest...)

      • linsomniac 2 days ago

        Well, pretty much what you said: syntax-highlighting, oh-my-zsh, git, command-not-found, autosuggestions, atuin, zoxide, and zsh-vi-mode. These days, I'm looking for as little stuff as possible that I have to maintain myself.

  • account-5 2 days ago

    Ever since I 'discovered' Nushell I've noticed a lot of new shells appearing on HN.

    The thing I like about Nushell is it does away with some of the things that I found hard with bash, and makes data formats a first-class citizen (something I enjoyed about powershell).

    I think if you like Lisp, elvish would be ideal, but for me the seeming lack of built-in data parsing (I've not done a deep dive on the docs) is a no.

    • srott 2 days ago

      Elvish was a bit slow for me; nush is nice but I found out I can do most of the tasks using yq and jc more intuitively.

  • gatane 2 days ago

    I just use the rc shell from Plan9 (ported to Linux[1]) nowadays; it is simpler than bash and whatever other shells are out there.

    You want functions? For loops? Lists? They got them.

    [1] https://github.com/rakitzis/rc

  • xiaq a day ago

    Elvish author here, seems like I missed the annual posting of Elvish to HN this time :)

  • IshKebab 3 days ago

    Looks nice. Obviously way better than Bash, but there are a few options that are way better than Bash, so I feel like it should spend some time convincing me why I should use this over e.g. Nushell.

    Anyone have any experience of both?

    • sidkshatriya 3 days ago

      nushell vs Elvish

      The Nushell and Elvish scripting languages are similar in many ways. I personally find the "shell" experience better in Nushell than Elvish.

      Nushell

      - Bigger community and more contributors

      - Bigger feature set than Elvish

      - Built in Rust (Yay :-)! )

      Elvish

      - Mostly developed by one person

      - Built in golang

      - Amazing documentation and general attention to detail

      - Fewer features than Nushell

      - Feels more stable, polished, and complete than Nushell. Your script written today is more likely to work unaltered in Elvish a year down the line. However, this is just an impression; Nushell must have settled down since I last looked at it.

      For "one off" scripts I prefer Elvish.

      I would recommend both projects. They are excellent. Elvish feels less ambitious, which is precisely why I like it for writing scripts. It does fewer things and I think does them better.

      Nushell feels like what a future scripting language and shell might be. It feels more futuristic than Elvish. But as mentioned earlier both languages have a lot of similarities.

      • graemep 3 days ago

        > Built in Rust

        > Built in golang

        Does that matter?

        If you intend to be a contributor, of course the chosen language matters, but only a very small proportion of users will be contributors.

        • dijit 3 days ago

          There are quirks specific to languages.

          Rust tends to be marginally faster and compiles to smaller binaries.

          Go projects tend to hit maturity faster and develop quicker.

          It's a relevant factor to quickly stereotype certain characteristics of development, but it's not anywhere close to important.

        • IshKebab 2 days ago

          I don't think it matters whether it's Rust or Go especially, for an end user tool. But it definitely matters if it's Rust/Go compared to something else like C or Python.

          The language choice has certain implications and I would say Rust & Go have fairly similar implications: it's going to be pretty fast and robust, and it'll have a static binary that makes it easy to install. Implications for other languages:

          C: probably going to have to compile this from source using some janky autotools bullshit. It'll be fast but segfault if you look at it funny.

          Python: probably very slow and fragile, a nightmare to install (less bad since UV exists I guess), and there's a good chance it's My First Project and consequently not well designed.

          • graemep 2 days ago

            Not even that matters to me: I will install from repos. It might make packagers' lives a bit more difficult in some cases but they are probably very familiar with that.

            I have not really had problems with installing C (on the rare occasions I have compiled anything of any complexity) or Python applications. Xonsh is supposed to be pretty good and written in Python, and most existing shells (bash, zsh, csh etc.) are written in C.

            Amusing aside: I use fish, and until I decided to fact-check before adding it to the list of shells written in C, I did not realise it was written in Rust.

    • Levitating 3 days ago

      What about fish? I've enjoyed using it for years.

      There are a few obvious features missing in fish, like backgrounding an alias or an equivalent to set -e; other than that I have no complaints.

      The first thing I do on any machine is install fish.

      • sidkshatriya 3 days ago

        fish is amazing. I use it as my primary shell.

        But for writing scripts I would reach for Elvish/Nushell. More powerful.

  • atiq-ca 3 days ago

    Looks interesting! Does it have OOP features, kinda like powershell does?

  • baobun 3 days ago

    Anyone here using elvish on the regular? Anecdotes please!

    • sidkshatriya 3 days ago

      I don't use Elvish daily (I use fish) but writing scripts in Elvish is a great experience. The elvish executable can serve as an LSP server and that makes writing Elvish scripts a bit easier.

      I don't care much for the Elvish shell experience, rather I like the Elvish scripting language. The documentation is top notch and the language evolves slowly and feels stable.

      • einpoklum 3 days ago

        > I don't care much for the Elvish shell experience, rather I like the Elvish scripting language.

        It's a shell; aren't those two things supposed to be basically the same? Or do you mean the interaction with the terminal/command-line?

        • sidkshatriya 3 days ago

          The shell prompt is also a small interface. How your shell responds to tab autocomplete, provides suggestions etc. can be quite helpful. Here I just like the way fish suggests filenames, provides an underline for filenames that exist and so on.

          The language is what you write in an $EDITOR. Here Elvish scripts can be nice, succinct and powerful. I like how I don't have to worry about strange "bashisms" like argument quoting etc. Everything feels consistent.

  • TrnsltLife a day ago

    Here I was hoping for a scripting language in Sindarin or Quenya. So disappointed.