Only tangentially related, but: what is the appeal of TUI's? I don't really understand.
The advantages of CLI's are (IMO) that they compose well and can be used in scripts. With TUI's, it seems that you just get a very low fidelity version of a browser UI?
The advantage of TUIs is that you get a low-fidelity browser UI that doesn't need to be exposed to the internet, that can be run remotely via SSH, that doesn't ship you megabytes of JavaScript, and that works equally well on everyone's machine.
They are usually faster to create and pretty much cross-platform. They should also work great with screen readers, though that is only an assumption.
A TUI also means that I do not have to memorize an infinite number of command-line parameters.
I really like well-made TUIs.
Practically? The best keyboard-driven programs are (incidentally) TUIs.
For some reason, expressive keyboard-driven interfaces aren't as popular in GUI interfaces.
Apart from the apparent comparative ease of creation relative to GUIs (I suspect Electron apps may be easier than TUIs), I think the main benefits from a user perspective come down to cultural factors & convention:
- TUIs tend to be faster & easier to use for CLI users than GUI apps: you get the discoverability of a GUI without the bloated extras you don't need, the mouse-heavy interaction patterns, or the latency.
- keybindings are consistent & predictable across apps: once you know one, you're comfortable everywhere. GUI apps are highly inconsistent here, if they even have keybindings.
- the more limited widget options bring more consistency - GUI widgets can be all sorts of unpredictable and exotic.
- anecdotally they just seem higher quality
For one thing, you don't need to run them in a browser.
Look up k9s, it's a great example. But as sibling comments say, it's all keyboard driven and most actions are single keypresses.
I had the same doubt. With CLIs you can make your own custom shortcuts, and LLMs can use them to get things done for you as well. With TUIs, I think these are either hobby projects or meant for people who are obsessed with speed.
Though the speed impact is also something I am uncertain about. Comparing Vim with IDEs, there will surely be a few things that are faster in Vim, but a decent number of things can be done faster in an IDE as well, so I can't comment on the overall speed gains.
TUIs are fine if you've got a bunch of pets or cattle you admin over SSH.
TUIs can be self-explanatory if designed well. Ideally the same tool would have a CLI mode with JSON(L)-formatted output, launched with a flag like --json, so that it can be composed (Unix-like) with other CLI commands, with jq etc., and also be usable by LLM agents. This is what I do in a TUI/CLI tool I've been building.
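A sketch of the kind of pipeline this enables (the tool name and its --json flag here are hypothetical stand-ins for whatever tool adopts the pattern):

    # List resources as JSONL, filter with jq, then feed the names back in.
    mytool list --json \
      | jq -r 'select(.state == "running") | .name' \
      | xargs -n1 mytool describe --json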
I recently started using k9s after using kubectl for a while. It's just faster and more convenient. A well made TUI also offers a bit more discoverability than a CLI. If you know exactly what you're looking for the CLI is fine, but if you need to explore a little bit, a TUI is better.
Many tools offer both a CLI and a TUI interface. A TUI is especially useful at scale, when you need to deal with a large number of resources efficiently or get a good overview of the whole environment faster - e.g. *top, k9s, Midnight Commander, etc.
Before Windows / GUIs, everything was a TUI. Some of those applications were kept around for a long time even when Windows was mainstream, because they were faster. If you've ever seen an employee (or co-worker) work in one of those applications you'll see it. They can zip through screens much quicker than someone doing point and click work.
It's truly an amazing sight. Our payroll system was all text-based screens; I had a question, and the clerk ripped through like 10 screens to get the information I needed. We're talking 200ms human-reaction speed through each screen.
I also worked with a mythical 10x developer, and he knew all the Visual Studio keyboard shortcuts. It was just like watching that payroll clerk (well, almost - we had under-specced machines, and Visual Studio got very slow and bloated post-2008). I don't think I ever saw him touch the mouse.
Faster and easier to use. I love Lazygit, for example. It's the fastest way to use git (other than directly as a CLI, of course, but if you want some graphical info, Lazygit is great).
You also get a very slimmed-down interface that is usually way faster to load. One of the reasons I love HN is that it is super snappy to load and isn't riddled with dependencies that take forever to load and display. Snappy UIs are always a breath of fresh air.
> Snappy UIs are always a breath of fresh air.
UIs used to be more responsive on slower hardware; if they took longer than the human reaction time, it was considered unacceptable.
Somewhere along the line we gave up, and instead we spend our time making skeleton loading animations as enticing as possible to try to stop the user from leaving, rather than speeding things up.
In addition to what other commenters said - TUIs can be installed on a server and used over SSH
Well, CLI and web UIs can also be used remotely. (Arguably even X11 apps can.)
Even with compression on, running most apps (like a web browser) over X11 forwarding is slow to the point of being almost unusable.
However, running web apps over port forwarding is pretty decent. VS Code and pgAdmin have desktop-like performance running in the browser, SSH port-forwarded from a remote server.
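The forwarding itself is a one-liner (host and ports here are examples):

    # Forward local port 8080 to port 8080 on the remote box, then
    # browse http://localhost:8080 as if the app were running locally.
    ssh -L 8080:localhost:8080 user@remote-server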
The appeal is I can use it with just a terminal connection to the server
I couldn't get this to run successfully.
More broadly, I have concerns about introducing a middleware layer over AWS infrastructure. A misinterpreted command or bug could lead to serious consequences. The risk feels different from something like k9s, since AWS resources frequently include stateful databases, production workloads, and infrastructure that's far more difficult to restore.
I appreciate the effort that went into this project and can see the appeal of a better CLI experience. But personally, I'd be hesitant to use this even for read-only operations. The direct AWS cli/console at least eliminates a potential failure point.
Curious if others have thoughts on the risk/benefit tradeoff here.
This was my first thought too. We already have terraform for repeatable, source controlled service provisioning and we have the relatively straightforward aws cli for ad hoc management. I don’t know that I really need another layer, and it feels quite risky.
cdk bro
Terraform CDK is just a layer on top of Terraform to avoid writing HCL/JSON.
It's also deprecated by HashiCorp now.
CDK on AWS itself uses CFN, which is a dog's breakfast and offers no visibility into what's happening under the covers.
Just write HCL (or JSON, Jsonnet, etc.) in the first place.
Am I the only person who despises CDK? Why would I use a cloud-specific language instead of something like OpenTofu?
I thought that was deprecated?
cdktf is, not AWS CDK. The former allows you to use Terraform without HCL, the latter is a generator for CloudFormation.
The AWS APIs are quite stable and usually do exactly one thing. It’s hard to really see much risk. The worst case seems to be that the API returns a new enum value and the code misinterprets it rather than showing an error message.
The read-only hesitation seems overcautious. If you’re genuinely using it read-only, what’s the failure mode? The tool crashes or returns bad data - same risks as the AWS CLI or console.
The “middleware layer” concern doesn’t hold up. This is just a better interface for exploring AWS resources, same as k9s is for Kubernetes. If you trust k9s (which clearly works, given how widely it’s used), the same logic applies here.
If you’re enforcing infrastructure changes through IaC, having a visual way to explore your AWS resources makes sense. The AWS console is clunky for this.
> what’s the failure mode?
The tool misrepresents what is in AWS, and you make a decision based on the bad info.
FWIW I agree with you it doesn’t seem that bad, but this is what came to mind when I read GPs comment
Fair. The best use might be to double-check in the proper UI before making any big decisions, and just use it as a general monitor.
I mean, sure… but to me that is as likely as the official UI misrepresenting the info.
All the use cases that popped into my head when I saw this were around how nice it would be to be able to quickly see what was really happening without trying to flop between logs and the AWS console. That's really how I use k9s and wouldn't be able to stand k8s without it. I almost never make any changes from inside k9s. But yeah... I could see using this with a role that only has Read permissions on everything.
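A sketch of how that could be wired up (the role and profile names are made up; ReadOnlyAccess is the AWS-managed policy):

    # Attach the AWS-managed read-only policy to a dedicated role...
    aws iam attach-role-policy \
      --role-name taws-readonly \
      --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess

    # ...and run the TUI under a profile that assumes that role.
    AWS_PROFILE=taws-readonly taws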
I guess it's the kind of thing where you want an almost Terraform-like "plan" that it prints out before it does anything, and then a very literal execution engine that is incapable of doing anything that isn't in the plan.
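Terraform's own plan-file workflow is the model here: apply can execute only what the saved, reviewed plan contains.

    terraform plan -out=tfplan   # compute and save the plan
    terraform show tfplan        # human review step
    terraform apply tfplan       # executes the saved plan and nothing else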
Should have a Price Of Current Changes menu bar item! So you can see if your changes cost $.01 or $10,001.
If only Amazon made it so simple
That's how they get you, lol.
Somehow every 15-line shell script I write now turns into a 50 kLOC Bun CLI or TUI app. Apparently there are many such cases.
Different use cases. I want aws-cli for scripting, repeated cases, and embedding those executions for very specific results. I want this for exploration and ad-hoc reviews.
Nobody is taking away the cli tool and you don't have to use this. There's no "turns into" here.
Oh I think you misinterpreted my comment! I am very much a fan of this, wasn't throwing shade. I am just remarking on how my side-project scope today dwarfs my side-project scope of a year or two ago.
Terminal Electron.
Looks great! If you have multiple AWS accounts in your org, you probably want to use something like aws-sso-util to populate your profiles so you can quickly swap between them.
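Roughly like this (the region and profile name are examples; check aws-sso-util's docs for the exact flags):

    # Generate a profile for every account/role your SSO user can reach.
    aws-sso-util configure populate --region us-east-1

    # Then swap accounts with the standard profile mechanisms.
    AWS_PROFILE=my-dev-account aws sts get-caller-identity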
Embarrassingly dumb question: if you’re one of the few users who don’t run a dark background terminal … how well do these TUI render (in a light background)?
Not a dumb question at all. I grew up using actual green screen terminals, and the advent of high-resolution colour monitors and applications with dark text on a white background felt like a blessing. I truly do not understand the regression to dark mode. It's eyestrain hell for me.
Unfortunately, I was unable to test in my light-background terminal, since the application crashes on startup.
If I'm working in a dark room, then light mode is eye strain hell. With dark mode, the minimum brightness I can achieve is about 100x lower than with light mode.
OLED monitors will bring green screen terminals back in style quite soon (with occasional orange and red highlights for that Hollywood haxx0r UX effect)
The worst is when you're in dark mode and suddenly open a website or PDF that's pure white. Instant flashbang.
I thank Apple every day for adding dark mode to the native PDF viewer for this exact reason.
I thought the title meant the AWS UI was “terminal”, which I would be on board with
> // TODO: Handle credential_source, role_arn, source_profile, sso_*, etc.
So it does not support any meaningful multi-account login (SSO, org role assumption, etc.) and requires AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY. That's a no-no from a security POV for anything in production, so I'm not sure what the meaningful way to use it is.
You or the developer could piggyback on "aws configure export-credentials --profile profile-name --format process" to support any authentication that the CLI supports.
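A sketch of what that looks like in ~/.aws/config (profile names and SSO values are examples):

    [profile sso-dev]
    sso_start_url = https://example.awsapps.com/start
    sso_region = us-east-1
    sso_account_id = 111111111111
    sso_role_name = ReadOnlyAccess

    # A bridge profile: tools that only understand static keys read this one,
    # and the CLI performs the SSO flow behind the scenes.
    [profile sso-dev-bridge]
    credential_process = aws configure export-credentials --profile sso-dev --format process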
I also care about the security part, but this is just the beginning :) New features will be added iteratively based on community requests, and it seems there are plenty of good requirements in this HN thread. Thanks!
Yeah, without SSO support this is a no-go for me too.
Looks very nice! Need to test if it supports AWS_ENDPOINT_URL so it works with LocalStack.
Will be available soon ;) https://github.com/huseyinbabal/taws/issues/18
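For reference, the usual LocalStack setup, assuming the tool ends up honoring the SDK-standard variable (the dummy keys and port are LocalStack defaults):

    export AWS_ENDPOINT_URL=http://localhost:4566   # LocalStack edge port
    export AWS_ACCESS_KEY_ID=test                   # LocalStack accepts dummy keys
    export AWS_SECRET_ACCESS_KEY=test
    export AWS_DEFAULT_REGION=us-east-1
    taws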
Crashes on first use. Not a good way to go viral.
There was a resource-handling problem, but it is fixed in 1.0.1, which you can try.
I wish more TUI designers would spend some time playing with Hercules and experiencing the old "mainframe" way of arranging interfaces. Those guys really knew what they were doing.
I would like to know more about this. I love the resurgence of TUI apps but there is a samey-ness to them.
https://www.prince-webdesign.nl/tk5
Anything in particular you liked about them?
They are like web forms. Fill in everything, then hit send.
Fixed positions, shortcuts, tab indexing, and an order that is usually smartly laid out. Zero latency. Very possible to learn how forms are organized and enter data with muscle memory. No stealing focus when you don't expect it.
Optimized for power users, which is something of a lost art nowadays. GUIs were good for discoverability for a while, but increasingly I think they are neither great for power users nor for novices - just annoying and janky.
I remember airport hostesses using it to get your boarding pass from the mainframe: it took them 5 seconds and a few keystrokes, like 3 letters of my name, to get the job done. When they switched to web UIs some years back, I vividly remember seeing them, 4 at a time on the same screen, trying to figure out what was going on. It took them 15 minutes and a phone call to get the boarding pass ready. I feel sad when I think about this.
Were these 3270 or ANSI terminals?
GUIs are for distracting otherwise uninterested users into doing what you want them to do.
Nice! A while back I had started something similar for Azure, but it never really got traction (nor was it nearly as polished as this!). It's a rough proof of concept, but maybe it'll be useful to Azure users:
https://github.com/brendank310/aztui
Seems like everyone is interested in Rust, but yours was written in Go.
Why does the implementation language of a TUI matter?
Interesting, looks like k9s... but for AWS
That was my first thought too, it looks like it was directly inspired by k9s according to the bottom of the readme.
Great TUI app. Kudos, and health to your hands ("ellerinize sağlık")!
I run a neocloud and our entire UX is TUI-based, somewhat like this but obviously simpler. The customer feedback has been extremely positive, and it's great to see projects like this.
ssh admin.hotaisle.app
Oh this looks really interesting as well.
Can you tell me more about what you mean by "neocloud", and where exactly you are hosting the servers? (Do you colocate, resell dedicated servers, or use the major cloud providers?)
This is my first time hearing the term neocloud. It seems focused on AI, but I am gonna be honest, that is a con in my book and not a pro (I like Hetzner and compute-oriented cloud providers).
Please share more about neoclouds, and whether they could perhaps be expanded beyond the AI use case, which is all I see when I search the term.
Neocloud has come to refer to a new class of GPU-focused cloud providers. Sure, most of our customers use us for AI purposes, but it is really open to anything GPU-related.
We buy, deploy, and manage our own hardware. On top of that, we've built our own automation for provisioning. For example, K8s assumes that an OS is installed; we're operating at a layer below that, which enables the machine to boot and be configured on demand. This also includes DCIM and networking automation.
We colocate in a datacenter (Switch).
Rackspace called; they want their business model back. :P
imitation is the sincerest form of flattery. the rackspace folks did a great job.
I’m not sure these Neoclouds have Rackspace’s Fanatical Support, though.
We're developers ourselves, so we're treating everyone as we'd want to be treated.
This is sometimes called bare metal as a service.
Ironic is an open source project in this space if people are curious what this looks like.
We built our own Ironic. Instead of a ton of services and configuration, we just have a single Go binary. Our source of truth is built on top of NetBox. We integrate Stripe for billing. We're adding features as customers ask for them.
While it is a lot of moving parts to coordinate, I'm not sure I agree about the complexity...
https://docs.openstack.org/ironic/latest/_images/graphviz-21...
> seems like its focused on AI but I am gonna be honest that is a con in my book and not a pro
A service you have no use for or interest in is “a con in your book”, what?
How much of this was made with LLM?
Why does it matter?
Because when a project is done in 10 minutes by an LLM, it will be abandoned in a week.
When a person does it intentionally and spends a month or two, they are far more likely to support it, as they created the project with some intention in the first place.
With LLMs this is not the case.
Why are you entitled to ongoing support of a free tool?
How long are you entitled to such support?
What does “support” mean to you, exactly?
If the tool works for you already, why do you need support for it?
A bug from slop could cost $10K
So could a bug introduced by a human being. What's the difference?
The human
Accountability is the difference.
An LLM is just an agent. The principal is held accountable. There’s nothing really all that novel here from a liability perspective.
That was my point exactly. I just didn’t write it as precisely as you.
Then I don’t understand. My point was that it doesn’t matter whether the machine or the human actually wrote the code; liability for any injury ultimately remains with the human that put the agent to work. Similarly, if a developer at a company wrote code that injured you, and she wrote that code at the direction of the company, you don’t sue the developer, you sue the company.
How exactly do end users hold AWS devs / AWS LLMs accountable?
How much would a bug from a human cost?
I'd be willing to bet the classes of bugs introduced would be different for humans vs LLMs. You'd probably see fewer low-level bugs (such as off-by-one errors), but more cases where the business logic or other higher-level concerns are incorrect.
Please don't use or suggest using homebrew as a Linux installation solution. It's better to simply point at the binaries directly.
Why?
Is it the best out there? No. But it does work, and it provides me with updates for my tools.
Random curl scripts don't auto-update.
Me downloading executables and dropping them in /bin, /sbin, /usr/bin or wherever I'm supposed to drop them [0] also isn't secure.
[0] https://news.ycombinator.com/item?id=46487921
Also, I find it is usually better to follow up with something like:
'It's better to use Y instead of X BECAUSE of reasons O, P, Q, R & S' vs making a blanket statement like 'Don't use X, use this other insecure solution instead', as that way I get to learn something too.
I use mise to update binaries, especially TUIs that are not in the Arch repos. It supports several backends, from Cargo crates to GitHub releases to uv for Python, and so on.
So one doesn't really need Homebrew, which treats Linux as a third-class citizen (with the second class empty).
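For example (the package choices are illustrative):

    mise use -g cargo:zellij      # build from crates.io
    mise use -g ubi:sharkdp/bat   # prebuilt GitHub release via the ubi backend
    mise use -g pipx:httpie       # Python tools via pipx
    mise upgrade                  # update everything mise manages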
Also, don't use Homebrew on macOS, because it screws around in /usr/local and still hasn't worked out how root is supposed to work.
Use MacPorts: it's tidy, installs into /opt/local, works with Apple's frameworks and language configuration (for Python, Java, etc.), builds from upstream sources + patches, has variants to add/remove features, and supports "port select" to have multiple versions installed in parallel.
Just a better solution all around.
Nice, download a random binary off the internet and give it your AWS credentials.
Please people, inspect the source to your tools, or don't use them on production accounts.
How did you install homebrew?
It comes with my distro (Bazzite). It's a preferred solution for that distro in particular because it is convenient on immutable Linux.
https://docs.bazzite.gg/Installing_and_Managing_Software/
> Please people, inspect the source to your tools, or don't use them on production accounts.
This is not realistic. Approximately nobody installing the AWS CLI has reviewed its code.
The official AWS CLI from AWS is a bit different from a "random binary off the internet", no?
What's the problem with Homebrew?
> It's better to simply point at the binaries directly.
Binaries aren't signed at all; they can be malicious and do dangerous things.
Especially if it's using curl | bash to install them.
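If you must fetch raw binaries, at least verify a published checksum (the URLs and filenames here are illustrative):

    curl -fsSLO https://example.com/releases/tool-v1.0.1-linux-amd64.tar.gz
    curl -fsSLO https://example.com/releases/tool-v1.0.1-checksums.txt
    # Fails loudly if the tarball doesn't match the published digest.
    sha256sum -c --ignore-missing tool-v1.0.1-checksums.txt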
Are you using Homebrew on Linux? Genuinely curious - I never met a Linux user doing that.
Brew actually works very nicely on Linux and is a useful way to get package management of CLI tools/libraries at the user level.
It's also widely accepted as one of the tools of choice for package persistence on immutable distros (distrobox/toolbox is also another approach):
https://docs.projectbluefin.io/bluefin-dx/
Also, for example I use it for package management for KASM workspaces:
https://gist.github.com/jgbrwn/28645fcf4ac5a4176f715a6f9b170...
Linuxbrew is absolutely fantastic. No need to mess with apt repositories and can keep custom binaries separate from the os. Almost everything is there, and it just works.
At least one other person also does:
> as long as I have a basic Linux environment, Homebrew, and Steam
https://xeiaso.net/blog/2025/yotld/ (A year of the Linux Desktop)
I guess some post-macOS users might bring it with them when moving. If it works :shrug:
I had some issues with brew breaking my system and pkg-config.
It is a bit hard to know what the issue is here.
But on average brew is much safer than downloading a binary from the ether, where we don't know what it does.
I see more tools using the curl | bash install pattern as well, which is insecure and very vulnerable to tampering.
It looks like the best way to install these tools is to build them yourself, i.e. make install, etc.
>the best way to install these tools is to build it yourself, i.e. make install, etc.
And you're fully auditing the source code before you run make, right? I don't know anyone who does, but you're handing over just as much control as with curl|bash from the developer's site, or brew install; you're just adding more steps...
> And you're fully auditing the source code before you run make.
I mean, you can?
But that is the whole point of the source being available: it is easier to audit than binaries.
Even with brew, the brew maintainers have already audited the code, and the source it installs - even installing with --HEAD - is hosted on brew's CDN.
What’s the issue with homebrew?
It's specifically a macOS workaround package manager. There are better/cleaner ways to do it on Linux.
I love Debian's stability, but I rely on Homebrew (instead of apt) to get more recent releases of software. Overall it works swimmingly!
Unless you have an immutable Linux, where Homebrew is a preferred method of CLI tool installation.
https://docs.bazzite.gg/Installing_and_Managing_Software/
Linux is just a kernel, not everyone agrees on what is “better” and “cleaner” to use with it!
What's wrong with Brew?
As a user of immutable Linux (bazzite), I suggest speaking for yourself and not for others.
On my platform, Homebrew is a preferred method for installing CLI tools. I also personally happen to like it better on Linux than Mac (it seems faster/better).
https://docs.bazzite.gg/Installing_and_Managing_Software/
brew is for users of non-Arch distros who want to experience what using Arch feels like.
looks good. definitely will try
wow, that looks like k9s for aws. That's awesome
https://github.com/huseyinbabal/taws?tab=readme-ov-file#ackn...
Yea, let me just give my company AWS account credentials to this program made by some random dude on the internet.
If you have permanent credentials then you are already in great danger. You should be using temporary credentials with something like Granted.
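With Granted, the flow is roughly this (the profile name is an example):

    assume dev-readonly            # exports short-lived credentials into this shell
    aws sts get-caller-identity    # confirm which role you're acting as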
So stealing temporary credentials is fine, right?
Yeah just delay them by ~15 minutes :)
Nice idea, but I won't trust a tool whose first commit was 11 hours ago.
The crazier part is that a Reddit post on r/aws was made by someone releasing a $3-a-month closed-source version of this. It received a lot of traction, but also a bit of flak for being closed source, and it was made 3 hours before the first commit here. This guy 100% took the idea and the open-source parts and recreated it to post here. Look at the README and compare them; it is almost a 1:1 copy of the other. This dude is hella sketch. And if this is getting traction, we are cooked as developers.
That someone would be you (I saw that Reddit post: https://www.reddit.com/r/aws/comments/1q3ik9z/i_made_a_termi...). I'm not sure I would describe the collective response as having "a lot of traction"; most respondents panned both the price and the closed-source nature of the offering.
What you're learning here is that there's not really a viable market for simple, easily replicable tools. People simply won't pay for them when they can spin up a Claude session, build one in a few hours (often unattended!), and post it to GitHub.
Real profit lies in real value. In tooling, value lies in time or money saved, plus some sort of moat that others cannot easily cross. Lick your wounds and keep innovating!
Please don't open-source your code if you're going to call people "hella sketch" for deriving from it. Did he violate your license? Attack that action, not the person doing open source.
To add since the poster is being confusing: this is the GitHub repo for their project: https://github.com/fells-code/seamless-glance-distro
It is indeed not open source, as the repo only has a README and a download script. The "open source" they are referring to, I think, is the similar README convention.
Which makes this comment they made on Reddit especially odd: https://www.reddit.com/r/aws/comments/1q3ik9z/comment/nxpq7t...
> And the folder structure is almost an exact mirror of mine
Even though Rust has conventions for organizing source code, a similar folder structure is unlikely, particularly since the original code is not public, so it would have to be one hell of a coincidence. (The funniest potential explanation would be that both people used the same LLMs to code the TUI app.)
“Someone”
It looks like the first commit was just a squash and merge; I probably would never trust a public commit history as some kind of source of truth anyway. I'm curious what your issue is?
> I probably would never trust a public commit history as some kind of source of truth
What _would_ you trust as a source of truth for source code if not a public commit log? I agree that a squash commit’s timestamp in particular ought not be taken as authoritative for all of the changes in the commit, but commit history in general feels like the highest quality data most projects will ever have.
Until you realize it’s trivial for an LLM to fabricate it in about a minute
I really hate it when cryptocurrency has valid applications, but in this case you're looking for a public, adversarial, append-only log system, which is what a blockchain is.
This guy stole this idea and basically the whole code base from another developer and ran it through an LLM to recreate it.
I think you're vastly overestimating how difficult this type of application would be for an LLM. There's no need to steal another code base… isn't yours closed source, anyway?
You could probably get 90% of the way there with a prompt that literally just says:
> Create a TUI application for exploring deployed AWS resources. Write it in Rust using the most popular TUI library.
https://www.reddit.com/r/aws/comments/1q3ik9z/i_made_a_termi...
I didn’t take code or reverse-engineer anything from that Reddit project, and I wasn’t aware of it when I started.
I’ve been a long-term k9s user, and the motivation was simply: “I wish I had something like k9s, but for AWS.” That’s a common and reasonable source of inspiration.
A terminal UI for AWS is a broad, well-explored idea. Similar concepts don’t imply copied code. In this case, even the UIs are clearly different—the interaction model and layout are not the same.
The implementation, architecture, and UX decisions are my own, and the full commit history is public for anyone who wants to review how it evolved.
If there’s a specific piece of code you believe was copied, I’m happy to look at it. Otherwise, it’s worth checking what someone actually built before making accusations based on surface-level assumptions.
It’s pretty clear it was your post/project you reference, but how do you know he got inspiration from you? Did OP post on your Reddit post, confirming they were even aware of it?
Creating a tool via an LLM based on a similar idea isn't quite stealing.
Making those accusations while hiding the fact that the “other developer” was you is extremely disingenuous.
Claude Code can do this natively, without a custom implementation.
Just need to pay monthly for Claude and run software that's propped up by a VC-funded bubble. Due for enshittification, if not shuttering.
Hardly the same.