I have a Python script which builds and statically links my toolbox (fish, neovim, tmux, rg/fd/sd, etc.) into a self-contained --prefix which can be rsynced to any machine.
It has an activate script which sets PATH, XDG_CONFIG_HOME, XDG_DATA_HOME, and friends. This way everything runs out of that single dir and doesn’t pollute the remote.
My ssh RemoteCommand then just checks for and calls the activate script if it exists. I get dropped into a nice shell with all my config and tools wherever I go, without disturbing others’ configs or system packages.
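For the curious, a minimal sketch of what such an activate script might look like; the prefix path, the exact variables, and the bundled shell here are my assumptions, not the poster's actual setup:

    #!/bin/sh
    # activate: run everything out of a single rsync'd prefix
    PREFIX="$HOME/.toolbox"               # assumed location of the synced dir
    export PATH="$PREFIX/bin:$PATH"
    export XDG_CONFIG_HOME="$PREFIX/config"
    export XDG_DATA_HOME="$PREFIX/data"
    export XDG_STATE_HOME="$PREFIX/state"
    exec "$PREFIX/bin/fish" -l            # drop into the bundled shell

On the ssh side, a RemoteCommand entry in ~/.ssh/config (together with RequestTTY yes) can exec that script if it exists and fall back to the normal login shell otherwise.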
Is this available somewhere? I'm curious to see how this works.
I often need to log in to colleagues' machines at work, but I find that their settings are not what I am familiar with. So I wrote an SSH wrapper in POSIX shell which tars dotfiles into a base64 string, passes it to SSH, and decodes and sets them up in a temporary directory on the remote. They are automatically removed when the session ends.
Supported: .profile, .vimrc, .bashrc, .tmux.conf, etc.
This idea comes from kyrat[1]; passing files via a base64 string is a really cool approach.
[1]: https://github.com/fsquillace/kyrat/
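For anyone curious about the mechanics, here is a rough sketch of the shape such a wrapper can take. It's my own guess, not the actual script; the file list is an example and it assumes a POSIX-ish remote login shell with bash installed:

    #!/bin/sh
    # pack the dotfiles locally and carry them inside the ssh command itself
    payload=$(tar czf - -C "$HOME" .profile .bashrc .vimrc .tmux.conf 2>/dev/null | base64 | tr -d '\n')

    remote_cmd='
      tmp=$(mktemp -d)
      printf "%s" "$DOTS" | base64 -d | tar xzf - -C "$tmp"
      trap "rm -rf $tmp" EXIT
      HOME=$tmp bash -l
    '
    exec ssh -t "$@" "DOTS='$payload'; $remote_cmd"

Pointing HOME at the temp dir is what makes the remote bash pick up the copied .profile/.bashrc, and the EXIT trap is what removes everything when you log out.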
I came across something similar a few months ago. I pieced together a working hybrid by patching parts from an older release into the latest version. I never worked out whether the latest version failed because of something in my environment or not, but I'm on a Mac fwiw.
https://github.com/cdown/sshrc
Ok, but what if your colleague does not have Vim installed?
Wouldn't it make more sense to have a tool that brings files over to the local computer, starts Vim on them, and then copies them back?
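For what it's worth, vim's bundled netrw plugin can already do something along those lines; the host and path here are placeholders:

    # netrw fetches the file over scp, opens it locally, and writes it back on :w
    vim scp://user@host//etc/hosts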
We usually work on VMs built from a daily ISO. For example, I would compile and upload a Java program to the frontend team member's VM, and type "srt" for "systemctl restart tomcat."
I can’t recall encountering a system in the last 15 years that didn’t have vim (or at least vi for esoteric things) on it.
That starts to sound like using VS Code in remote mode.
Emacs in tramp mode.
How about mounting your dotfiles directory (~/.config) or even your entire home directory on the remote system using SSHFS or NFS? I'm sure somebody would have tried it or some project may already exist. Any idea why that isn't as prevalent as copying your dotfiles over?
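One way it can work without giving the remote standing access is a reverse tunnel opened by the same session; the port, names, and paths here are made up for illustration:

    # open the session with a reverse tunnel back to the local sshd
    ssh -R 2222:localhost:22 user@remote
    # then, on the remote (needs sshfs installed there), mount the local config dir
    sshfs -p 2222 me@localhost:.config ~/local-config

It still means authenticating from the remote box back to your own machine, which is part of why people reach for plain file copies instead.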
That requires the remote machine to be configured to SSH into your local machine. In the scenario where OP's project is useful (SSH to foreign machines) I might not want that.
On the other hand, if the remote machine is mine, it will have my config anyway.
I’m trying to imagine why sshfs mounting the less-capable remote onto the workstation would be blocked.
This would enable a lot of attacks.
Could you elaborate?
I don't know, I just use the standard setup on my machine or on a remote. Why bother customizing it all the time when you then can't work without the customizations?
Overriding the HOME variable is neat! Makes things much easier.
I think this will copy your 9 GB Mozilla cache directory as well? Still, one-liners like this are all you need lol
My Mozilla cache would be under ~/.mozilla/firefox. Is the nightly version moving to ~/.config?
The reason I say "would be" is that I disable the disk cache, among other things handled by Arkenfox [1]
[1] - https://github.com/arkenfox/user.js
What does config have to do with the one liner?
It prevents some data from ending up in ~/.mozilla. We don't sync what doesn't exist.
Any sufficiently-advanced automated rsync would have a filter for caches.
Except only ssh is filtered. Just commenting on what I see, not what should be.
For sure, you need to exclude whatever "dotfiles" you don't want copied (or explicitly copy the ones you want), particularly caches and other giant hidden things.
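e.g. something along these lines; the file list is just an example and the remote name is a placeholder:

    # rsync filter rules are first-match-wins: whitelist what you want, drop the rest
    rsync -a \
        --include='.bashrc' --include='.vimrc' --include='.tmux.conf' \
        --include='.config/***' --exclude='*' \
        "$HOME"/ user@remote: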
I use something similar.
It's surprising to me how many projects can be replaced with just a line or two of shell script. This project is a slightly more sophisticated shell script that exposes a friendlier UI, but I don't see why it's needed when the alternative is much simpler, considering the target audience.
This reminds me - in a previous company I worked at, we had a bunch of old firewalls and switches that ran SSH servers without support for modern key exchange algorithms, etc.
One of the engineers wrote a shell alias called “shitssh”, which would call ssh with the right options to allow the old crufty crypto algorithms to be used. This alias got passed down to new members of the team like a family heirloom.
Is this similar to sshrc?
https://github.com/cdown/sshrc
Maybe also kind of related: xxh
https://github.com/xxh/xxh
chezmoi has similar functionality, but it does install a binary on the target machine:
https://www.chezmoi.io/reference/commands/ssh/
I didn't look closely at the project, but why take the extra step of base64? I do this all the time with tar by itself and it's wire-proof enough to work fine.
Using base64 allows injecting the tar stream inside the same ssh/bash session, which later becomes interactive. That can save a second or two on slow(-ish) internet. Something along the lines of:
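(A sketch of the idea; host and file names are placeholders, and it assumes the remote base64 accepts -d.)

    ssh -t user@host "mkdir -p /tmp/dots &&
      echo '$(tar czf - -C ~ .vimrc .bashrc | base64)' | base64 -d | tar xzf - -C /tmp/dots &&
      bash --rcfile /tmp/dots/.bashrc -i"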
Although, I don't think TFA does that.
I have a dotfiles git repo that symlinks my dotfiles. Then I can either pull the repo down on the remote machine or rsync it. I'm not sure why I would pick this over a git repo with a dotfiles.sh script:
https://erock-git-dotfiles.pgs.sh/tree/main/item/dotfiles.sh...
This is for when you have to ssh into some machine that's not yours, in order to do debugging or troubleshooting -- and you need your precious dotfiles while you're in there, but it wouldn't be nice to scatter your config and leave it as a surprise for the next person.
This installs into temp dirs and cleans it all up when you disconnect.
Personally, my old-man solution to this problem is different: always roll with defaults even if you don't like them, and don't use aliases. Not for everyone, but I can ssh into any random box and not be flailing about.
Even with OP's neat solution, it's not really going to work when you have to go through a jump box, or have to connect with a serial connection or some enterprise audit loggable ssh wrapper, etc
There's definitely something to be said for speaking the common tongue, and being able to use the defaults when it's necessary. I have some nice customisations, but I make a point of not becoming dependent on them because I'm so often not in my own environment.
On the other hand, your comment has me wondering if ssh-agent could be abused to drag your config along between jump hosts and enterprise nonsense, like it does forwarding of keys.
Why would you want to ssh into a machine that's not yours? That's a violation of the Computer Fraud and Abuse Act, up to 10 years in prison!
When you have permission to do so, it isn’t.
For kitty users, see also https://sw.kovidgoyal.net/kitty/kittens/ssh/
I wonder why dotfiles have to be on remote machines?
e.g. I type an alias, the ssh client expands it on my local machine and sends complex commands to the remote. Could this be possible?
I suppose a special shell could make it work.
> I wonder why dotfiles have to be on remote machines?
Because the processes that use them run on the remote machines.
> I type an alias, the ssh client expands it on my local machine and sends complex commands to the remote.
This is not how SSH works. It merely takes your keystrokes and sends them to the remote machine, where bash/whatever reads and processes them.
Of course, you can have it work the way you imagine; it's just that it'd require a very special shell on your local machine, and a whole RAT client on the remote machine, which your special shell would need to be intimately aware of. E.g. TAB-completion of files would involve asking the remote machine to send the dir contents to your shell, and if your alias includes a process substitution... where should that process run?
> I suppose a special shell could make it work.
Working on it! :)
Remote machines usually don’t need to know your keystrokes or handle your line editing, either. There’s a lot of latency to cut out, local customization to preserve, and protocol simplification to be had.
More like shit toilet paper. Name like findtherapist.com
time to call the it team at work (on the phone) to ask them to add a new item to the software allowlist
Be careful: this will force your defaults over system defaults, possibly overriding compliance or security settings. There are a few places I noticed where well-placed malware could hop in, etc.
It's not bad software, but it's also not mature. I'm currently on a phone and on vacation, so this is the extent of my review. Maybe I'll circle back around with some PRs next week.
Imagine somebody having AWS keys in their .bash_profile, then installing this thing, and ending up spraying copies of said keys all over some random systems. (facepalm)
i was merely joking about the name apparently being intended to be pronounced in a rather juvenile manner
It's not obvious, but the shitt-p is borrowed from an anime character. So it should be pronounced like sheet-p: https://ipa-reader.com/?text=%C9%95it%CB%90opi%CB%90