I've been planning to build something like this for a while now (just for myself). Love the planning workflow, will likely steal that idea.
But code review is more than just reviewing diffs. I need to test the code by actually building and running it. How does that critical step fit into this workflow? If the async runner stops after it finishes writing code, do I then need to download the PR to my machine, install dependencies, etc. to test it? That's a major flow blocker for me and defeats the entire purpose of such a tool.
I was planning to build always-on devcontainers on a bare-metal server. So after Claude Code does its thing, I have a live, running version of my app to test alongside the diffs. Sort of like Netlify/Vercel branch deploys, but with a full-stack container.
Claude Code also works far better in an agentic loop when it can self-heal by running tests, executing one-off terminal commands, tailing logs, and querying the database. I need to do this anyway. For me, a mobile async coding workflow needs to have a container running with a mobile-friendly SSH terminal, database viewer, logs viewer, lightweight editor with live preview, and a test runner. Diffs just don't cut it for me.
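For anyone picturing what that branch-preview idea might look like in practice, here is a minimal sketch using the Docker SDK for Python. It is purely illustrative: the image tag, internal port, and per-branch database URL are assumptions, not anything from Async's actual setup.

```python
# Rough sketch: spin up a per-branch preview container after the agent pushes,
# so the running app can be tested next to the diff. Assumes an image has
# already been built and tagged with the branch name; names, ports, and env
# vars below are placeholders.
import docker

def start_branch_preview(branch: str, host_port: int) -> str:
    client = docker.from_env()
    name = f"preview-{branch}"

    # Tear down any stale preview for this branch before starting a new one.
    try:
        client.containers.get(name).remove(force=True)
    except docker.errors.NotFound:
        pass

    container = client.containers.run(
        image=f"myapp:{branch}",          # hypothetical per-branch image
        name=name,
        detach=True,
        ports={"3000/tcp": host_port},    # app's internal port -> host port
        environment={"DATABASE_URL": f"postgres://db/{branch}"},  # per-branch DB
    )
    return f"http://localhost:{host_port}  ({container.short_id})"

if __name__ == "__main__":
    print(start_branch_preview("feature-login", 8081))
```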
I do believe that before 2025 is over we will achieve the dream of doing real software engineering on mobile. I was planning to build it myself anyway.
Completely agreed. The first version of our app was on mobile. We implemented preview deployment for frontend testing (and we were going to work on backend integration testing next). But yeah, without a reliable way to test and verify changes, I agree it's not a complete solution. We are going to work on that next.
FYI, our initial app demo: https://youtu.be/WzFP3799K2Y?feature=shared
Thumbs up for dark mode. I really want to love this but I can’t get over the idea of paying GCP to have Cloud Run clone my repo over and over again every time I interact with Async. I’m still going to try it, but I think I’d rather rent a VM and just have it be faster. This is coming from someone who deals with big fat monorepos, so maybe it’s not that bad for the average user.
> Show HN: Async – Claude code and Linear and GitHub PRs in one opinionated tool
Sadly, this seems inaccurate. Appears to be Claude Code and GitHub PRs, but not Linear.
It should be Linear, since Linear does an extraordinary number of useful things beyond "issue list".
Since it seems to have nothing to do with Linear, I'm surprised the headline says it's those three things, by trademarked brand name.
Speaking of tracking tasks:
> Tracking sucks. I use Apple Notes with bullet points to track tasks...
Claude Code seems very good at its own "org mode", using a .md file outline and checklists to organize and track progress, as well as keep an easy-to-leverage record.
It is also able to sync the outline-level items with GitHub issues, then plan and maintain checklists under them as it works, including the checklist items in commits and PRs, and even help you commit that roadmap outline snapshot at the same time so you have progress over time as diffs...
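As a concrete illustration of that "outline items -> GitHub issues" sync, here is a minimal sketch. It only shows one plausible shape of the idea: the repo name, token env var, roadmap file path, and label are all assumptions, and the GitHub call is the standard REST endpoint for creating issues.

```python
# Illustrative only: parse top-level checklist items from a roadmap file and
# open a GitHub issue for each unchecked one. Repo, token env var, file path,
# and label are hypothetical.
import os
import re
import requests

REPO = "me/myproject"                      # hypothetical repo
TOKEN = os.environ["GITHUB_TOKEN"]

def unchecked_items(path: str = "ROADMAP.md") -> list[str]:
    with open(path) as f:
        # Matches lines like "- [ ] Add login flow" at the top outline level.
        return re.findall(r"^- \[ \] (.+)$", f.read(), flags=re.MULTILINE)

def open_issue(title: str) -> int:
    resp = requests.post(
        f"https://api.github.com/repos/{REPO}/issues",
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Accept": "application/vnd.github+json"},
        json={"title": title, "labels": ["roadmap"]},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["number"]

if __name__ == "__main__":
    for item in unchecked_items():
        print(f"#{open_issue(item)}  {item}")
```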
Great pitch, you've articulated the pain point super well and I agree with it.
I have personally had no luck with prompting models to ask me clarifying questions. They never seem to think of the key questions; they just ask random shit to "show" that they planned ahead. And they never manage to pause halfway through when it gets tough and ask for further planning.
My question is how well you feel it actually works today with your tool.
Honestly, it's not there yet and I'm iterating to make it better and more consistent. But I've had a few moments where it got the questions and implementation right and it felt magical. So I wanted to share it with more people and see how people like the approach.
I think this is a neat approach. When I interact with AI tooling, such as Claude Code, my general philosophy has been to maintain a strong opinion about what it is that I actually want to build. I usually have some system design done or some picture that I've drawn to make sure that I can keep it straight throughout a given session. Without that core conception of what needs to be done, it's a little too easy for an LLM to run off the rails.
This dialogue-based path is a cool way to interact with an existing codebase (and I'm a big proponent of writing and rewriting). At the very least you're made to actually think through the implications of what needs to be done and how it will play with the rest of the application.
How well do you find that this approach handles the long tail of little things that need to be corrected before finally merging? Does it resolve the fiddly stylistic fixes on its own, or is it more that the UI / PR review approach you've taken is more ergonomic for addressing them?
hey! that's awesome to hear, thanks for the feedback.
we've tried a lot of things to make the code more in line with our paradigms (initially tried a few agents to parse out "project rules" from existing code, then used that in the system prompt), but have found that the agents tend to go off-track regardless. the highest leverage has just been changing the model (Claude writes code a certain way which we tend to prefer, vs GPT, etc) and a few strong system prompts (NEVER WRITE COMMENTS, repeated twice).
so the questions here are less about that and more about overall functional / system requirements, acknowledging that for stylistic things, the user will still have to review.
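For readers curious what the "strong system prompt" trick looks like in code, here is a minimal sketch using the Anthropic Python SDK. The style rules, repeated instruction, and model id are illustrative stand-ins, not Async's actual prompt or configuration.

```python
# Minimal sketch of steering code style via a blunt, repeated system prompt,
# as described in the parent comment. Rules and model id are placeholders.
import anthropic

STYLE_RULES = (
    "You write code for this repository.\n"
    "NEVER WRITE COMMENTS.\n"
    "NEVER WRITE COMMENTS.\n"          # repetition on purpose, per the parent comment
    "Prefer small, pure functions and early returns.\n"
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

reply = client.messages.create(
    model="claude-3-5-sonnet-20241022",   # example model id
    max_tokens=2048,
    system=STYLE_RULES,
    messages=[{"role": "user", "content": "Add pagination to the /tasks endpoint."}],
)
print(reply.content[0].text)
```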
Something I'd consider a game-changer would be making it really easy to kick off multiple Claude instances to tackle a large research task and then to view the results and collect them into a final research document.
IME no matter how well I prompt, a single claude/codex will never get a successful implementation of a significant feature single-shot. However, what does work is having 5 Claudes try it, reading the code and cherry picking the diff segments I like into one franken-spec I give to a final claude instance with essentially just "please implement something like this"
It's super manual and annoying with git worktrees for me, but it sounds like your setup could make it slick.
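A rough sketch of that "N Claudes, same prompt, one worktree each" workflow is below. It assumes the Claude Code CLI's non-interactive `-p` mode; the prompt, branch names, and paths are made up, and cherry-picking the resulting diffs into a franken-spec is still left to the human.

```python
# Sketch: one git worktree per attempt, each running Claude Code headlessly on
# the same task, then printing each attempt's diff for manual cherry-picking.
import subprocess
from concurrent.futures import ThreadPoolExecutor

PROMPT = "Implement rate limiting on the public API; follow existing middleware patterns."
ATTEMPTS = 5

def run_attempt(i: int) -> str:
    path = f"../attempt-{i}"
    branch = f"attempt/{i}"
    subprocess.run(["git", "worktree", "add", path, "-b", branch], check=True)
    subprocess.run(["claude", "-p", PROMPT], cwd=path, check=True)  # assumed headless mode
    # Collect the uncommitted changes so the segments can be compared side by side.
    diff = subprocess.run(["git", "diff"], cwd=path, capture_output=True, text=True)
    return f"=== {branch} ===\n{diff.stdout}"

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=ATTEMPTS) as pool:
        for result in pool.map(run_attempt, range(ATTEMPTS)):
            print(result)
```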
Interesting. So, do you just start multiple instances of Claude Code and ask the same prompt on all of them? Manually cherry picking from 5 different worktrees sounds complicated. Will see what I can do :)
Yeah, exactly, same prompt.
I agree, it's more complex. But I feel like the potential with a Claude Code wrapper is precisely in enabling workflows that are a pain to self-implement but are nonetheless incredibly powerful.
Very cool! I’ve been building an internal tool at work that’s very similar but primarily focused on automatically triaging bugs and tech support issues, with MCP tools to query logs, search for errors in Bugsnag, query the DB, etc. Also using Linear for issue tracking. They’ve been launching some cool stuff for agent integrations.
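To make the triage-tool idea concrete, here is a very rough sketch of that kind of MCP server using the Python MCP SDK's FastMCP helper. The log path, database, and queries are stand-ins for whatever log store and error tracker you actually use, not a real setup.

```python
# Illustrative MCP tools for triage: log search and a DB lookup the agent can
# call while investigating a bug report. Data sources here are placeholders.
import sqlite3
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("triage-tools")

@mcp.tool()
def search_logs(query: str, limit: int = 50) -> list[str]:
    """Return recent log lines containing the query string."""
    with open("/var/log/app.log") as f:                 # placeholder log path
        hits = [line.rstrip() for line in f if query in line]
    return hits[-limit:]

@mcp.tool()
def lookup_user(email: str) -> dict:
    """Fetch a user row so the agent can check account state while triaging."""
    conn = sqlite3.connect("app.db")                     # placeholder database
    row = conn.execute(
        "SELECT id, email, plan, created_at FROM users WHERE email = ?", (email,)
    ).fetchone()
    conn.close()
    return dict(zip(["id", "email", "plan", "created_at"], row)) if row else {}

if __name__ == "__main__":
    mcp.run()   # exposes the tools over stdio for an MCP-capable agent
```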
And sorry I’m a light mode fan
Nice, are you building a Linear app? I saw their recent post about integrating Cursor, Devin, etc. into their platform.
And, light mode? I'm sorry, we can't be friends anymore
yup, was building it as a Linear agent: https://linear.app/developers/agents
I've really been enjoying the mobile coding agent workflow with [Omnara](https://omnara.com/). I'd love to try this as well with a locally hosted version.
you can also give our mobile app a try :)
Looks cool, tbh I think I'd be more interested in just a lightweight local UI to track and monitor Claude Code; I could skip the Linear and GitHub pieces.
I second this. I love the flow you are building but I want this to run locally :)
Thanks for the feedback. Yeah, that is where we are heading, as mentioned in the demo video. We will follow up shortly to release a local tool :)
I love your video, it is very clear. I am building in this space so I am very curious and happy about all the products coming in to help the current tooling gap. What is not clear to me is how Async works: is it all local or a mix of local and cloud? I see "executes in cloud" but then I see a downloadable app.
I see a lot of information on API endpoints in the README. Perhaps that is not so critical to getting started. A `Getting Started` section would help, explaining what the desktop app is and what goes into the cloud.
I have been hosting online sessions for Claude Code. I have 100+ guests for my session this Friday. And after "vibe coding" full time for a few months, I am building https://github.com/brainless/nocodo. It is not ready for actual use and I first want to use it to build itself (well, use the core of it to build the rest of the parts).
To clarify, most of the execution (writing code or researching) happens in the cloud, and we use Firestore as the DB to store tasks. The app (both desktop and mobile) is just an interface to talk to those backends. We are currently working to see if we can bring the majority of the execution local. Hope this makes it a bit clearer.
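For readers trying to picture that split, here is a small sketch of the client side of such an architecture: the app only writes a task document to Firestore and a cloud runner picks it up. The collection name and field names are guesses for illustration, not Async's actual schema.

```python
# Illustrative: a desktop/mobile client enqueues work by writing a Firestore
# document; a cloud worker watches the collection and does the execution.
from google.cloud import firestore

db = firestore.Client()

def submit_task(repo: str, prompt: str, user_id: str) -> str:
    ref = db.collection("tasks").document()
    ref.set({
        "repo": repo,
        "prompt": prompt,
        "user": user_id,
        "status": "queued",                       # the runner updates this as it progresses
        "created_at": firestore.SERVER_TIMESTAMP,
    })
    return ref.id

# A client would call something like:
# task_id = submit_task("org/monorepo", "Fix the flaky signup test", "user_123")
```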
Thanks for the clarification.
Does this mean that my codebase gets cloned somewhere? Is it your compute or mine, with my cloud provider API keys?
If you use the app as is, it will be cloned to our server. If you choose to host your own server, it will be on yours
OK thanks.
Your docs on self-hosting are a bit light. Can you use the mobile app while self-hosting? That would be the main selling point for me.
> Traditional AI coding tools
I love this phrase :)
:)
Super cool. Have been looking for something like this. Nice work!
thank you :) let us know how it feels
I hope it works better than GitHub Copilot Agent.
What's the benefit of cloud hosting it?
The main benefit is that you can issue tasks on mobile. And initially we were just a mobile app. When we decided to build a desktop version, we just reused all the infra we had, then realized that for desktop, cloud isn't necessary. So we are trying to migrate to local now.
thumbs down
:( light mode gangs