Linear sent me down a local-first rabbit hole

(bytemash.net)

376 points | by jcusch 14 hours ago ago

157 comments

  • bob1029 4 hours ago ago

    I'm all-in on SSR. The client shouldn't have any state other than the session token, current URL and DOM.

    Networks and servers will only get faster. The speed of light is constant, but we aren't even using its full potential right now. Hollow core fiber promises upward of a 30% reduction in latency for everyone using the internet (light in solid silica fiber travels at roughly c/1.46; in an air-filled hollow core it travels at nearly c, hence the ~30%). There are RF-based solutions that deliver some of this promise today. Even with a wild RTT of 500ms, an SSR page rendered in 16ms would feel relatively instantaneous next to any of the mainstream web properties online today if delivered on that connection.

    I propose that there is little justification for taking longer than a 60Hz frame to render a client's HTML response on the server. A Zen 5 core can serialize something like 30-40 megabytes of JSON in that timeframe. From the server's perspective, this is all just a really fancy UTF-8 string. You should be measuring this stuff in microseconds, not milliseconds. The transport delay being "high" is not a good excuse to get lazy with CPU time. Using SQLite is the easiest way I've found to get out of millisecond jail. Any hosted SQL provider is like a ball & chain when you want to get under 1ms.
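
    As a rough sketch of that budget (a minimal Node handler with in-process SQLite via better-sqlite3; the schema and timing header are illustrative, not from any particular app):

        import Database from "better-sqlite3"; // in-process SQLite: no network hop
        import http from "node:http";

        const db = new Database("app.db");
        const getIssues = db.prepare("SELECT id, title FROM issues LIMIT 50");

        http.createServer((req, res) => {
          const t0 = process.hrtime.bigint();
          const rows = getIssues.all() as { id: number; title: string }[];
          const body = `<ul>${rows.map((r) => `<li>${r.title}</li>`).join("")}</ul>`;
          const renderMs = Number(process.hrtime.bigint() - t0) / 1e6;
          res.setHeader("Server-Timing", `render;dur=${renderMs}`); // typically well under 1ms
          res.end(body);
        }).listen(8080);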

    There are even browser standards that can mitigate some of the navigation delay concerns:

    https://developer.mozilla.org/en-US/docs/Web/API/Speculation...
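
    For example, a page can feature-detect and inject document rules that let the browser prerender likely navigations; a sketch (the rules JSON follows the MDN-documented format, and support varies by browser):

        // Inject a Speculation Rules definition if the browser supports it.
        if (HTMLScriptElement.supports?.("speculationrules")) {
          const s = document.createElement("script");
          s.type = "speculationrules";
          s.textContent = JSON.stringify({
            prerender: [{ where: { href_matches: "/*" }, eagerness: "moderate" }],
          });
          document.head.append(s);
        }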

    • random3 4 hours ago ago

      > networks and servers will only get faster

      this isn't an argument for SSR. In fact there's hardly a universal argument for SSR. You're thinking of a specific use case where there's more compute capacity on the server, where logic can't easily be split, etc. There are plenty of cases where client-side rendering is faster.

      Rendering logic can be disproportionately complex relative to the data size. Moreover, client resources may actually be larger in aggregate than the server's. If SSR were the only reasonable game in town, we wouldn't have the excitement around WebAssembly.

      Also take a look at the local-computation post https://news.ycombinator.com/item?id=44833834

      The reality is that you can't know which one is better and you should be able to decide at request time.

    • TimTheTinker 4 hours ago ago

      If you could simply drop in a library to any of your existing SSR apps that:

      - is 50kb (gzipped)

      - requires no further changes from you (either now or in the future)

      - enables offline/low bandwidth use of your app with automatic state syncing and zero UX degradation

      would you do it?

      The problem I see with SSR evangelism is that it assumes that compromising that one use case (offline/low bandwidth use of the app) is necessary to achieve developer happiness and a good UX. And in some cases (like this) it goes on to justify that compromise with promises of future network improvements.

      The fact is, a low bandwidth requirement will always be a valuable feature, no matter the context. It's especially valuable to people in third-world countries, in remote locations, or being served by Comcast (note I'm being a little sarcastic with that last one).

      • bob1029 4 hours ago ago

        > - enables offline/low bandwidth use of your app with automatic state syncing and zero UX degradation

        > would you do it?

        No, because the "automatic state syncing and zero UX degradation" is a "draw the rest of the owl" exercise wherein the specific implementation details are omitted. Everything is domain specific when it comes to sync-based latency hiding techniques. SSR is domain agnostic.

        > low bandwidth requirements

        If we are looking at this from a purely information theoretical perspective, the extra 50kb gzipped is starting to feel kind of heavy compared to my ~8kb (plaintext) HTML response. If I am being provided internet via avian species in Africa, I would also prefer the entire webpage be delivered in one simple response body without any dependencies. It is possible to use so little javascript and css that it makes more sense to inline it. SSR enables this because you can simply use multipart form submissions for all of the interactions. The browser already knows how to do this stuff without any help.

        • horsawlarway 3 hours ago ago

          I just want to point out that your argument is now contradictory.

          You're stating that networks and latency will only improve, and that this is a reason to prefer SSR.

          You're also stating that 50kb feels too heavy.

          But at 8kb of SSR'd plaintext, you're ~6 page loads away from breaking even with the 50kb of content that will be cached locally, and you yourself are arguing that the transport for that 50kb is only getting better.

          Basically: you're arguing it's not a problem to duplicate all the data for the layout on every page load because networks are good and getting better. But also arguing that the network isn't good enough to load a local-first layout engine once, even at low multiples of your page size.

          So which is it?

          ---

          Entirely separate of the "rest of the owl" argument, with which I agree.

        • TimTheTinker 3 hours ago ago

          > "automatic state syncing and zero UX degradation" is a "draw the rest of the owl" exercise

          That's an assumption you're making, but that doesn't necessarily have to be true. I offered you what amounts to a magic button (drop this script in, done), not a full implementation exercise.

          If it really were just a matter of dropping a 50kb script in (nothing else) would you do it? Where's the size cutoff between "no" and "yes" for you?

          > Everything is domain specific when it comes to sync-based latency hiding techniques.

          Yes and no. To actually add it to your app right now would most likely require domain-specific techniques. But that doesn't imply that a more general technique won't appear in the future, or that an existing technique can't be sufficiently generalized.

          > the extra 50kb gzipped is starting to feel kind of heavy

          Yeah - but we can reasonably assume it's a one-and-done cached asset that effectively only has to be downloaded once for your app.

    • packetlost 4 hours ago ago

      Latency is additive, so all the copper coax and mux/demux between a sizable chunk of Americans and the rest of the internet means you're looking at a minimum round-trip latency of 30ms even if the server is in the same city. Most users are also on Wi-Fi, which adds an additional mux/demux + rebroadcast step that adds even more. And most people do not have the latest CPU. Not to mention mobile users over LTE.

      Sorry, but this is 100% a case of privileged developers thinking their compute infrastructure situation generalizes: it doesn't, and it's a mistake to take shortcuts that assume it does.

      • markasoftware 4 hours ago ago

        uh have you ever tried pinging a server in your same city? It's usually substantially <30ms. I'm currently staying at a really shitty hotel that has 5mbps wifi, not to mention I'm surrounded by other rooms, and I can still ping 8.8.8.8 in 20ms. From my home internet, which is /not/ fiber, it's 10ms.

  • JusticeJuice 24 minutes ago ago

    I remember being literally 12 when Google Docs launched, featuring real-time sync and a collaborative cursor. I remember thinking that this was how all web experiences would be in the future; at the time 'cloud computing' was the buzzword, and I (incorrectly) thought realtime collaboration was the very definition of cloud computing.

    And then it just... never happened. 20 years went by, and most web products are still CRUD experiences, this site included.

    The funny thing is it feels like it's been on the verge of becoming mainstream for all this time. When meteor.js got popular I was really excited, and then with react surely it was gonna happen - but even now, it's still not the default choice for new software.

    I'm still really excited to see it happen, and I do think it will happen eventually - it's just trickier than it looks, and it's tricky to make the tooling so cheap that it's worth it in all situations.

    • levmiseri 14 minutes ago ago

      I feel the same way. The initial magic of real-timeness felt like a glimpse into a future that... where is it?

      I'm still excited about the prospects of it — shameless plug: actually building a tool with one-of-a-kind messaging experience that's truly real-time in the Google docs collaboration way (no compose box, no send button): https://kraa.io/hackernews

  • aboodman an hour ago ago

    > Using Zero is another option, it has many similarities to Electric, while also directly supporting mutations.

    The core differentiator of Zero is actually query-driven sync. We apparently need to make this clearer.

    You build your app out of queries. You don't have to decide or configure what to sync up front. You can sync as much, or as little as you want, just by deciding which queries to run.

    If Zero does not have the data that it needs on the client, queries automatically fall back to the server. That data is then synced and available for the next query.

    This ends up being really useful for:

    - Any reasonably sized app. You can't sync all data to the client.

    - Fast startup. Most apps have publicly visible views that they want to load fast.

    - Permissions. Zero doesn't require you to express your permissions in some separate system, you just use queries.

    So the experience of using Zero is actually much closer to a reactive db, something like Convex or RethinkDB.

    Except that it uses standard Postgres, and you also get the instant interactions of a sync engine.
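
    A minimal sketch of the idea (illustrative only, not Zero's actual API): the set of queries your app runs is what defines the synced data.

        type Row = Record<string, unknown>;

        class QueryDrivenCache {
          private synced = new Map<string, Row[]>(); // query key -> materialized rows

          async query(key: string, fetchRemote: () => Promise<Row[]>): Promise<Row[]> {
            const local = this.synced.get(key);
            if (local) return local; // instant: served from the local replica

            const rows = await fetchRemote(); // fall back to the server...
            this.synced.set(key, rows);       // ...and sync for subsequent runs
            return rows;
          }
        }

        // Usage: nothing is configured up front; running a query is what
        // causes its rows to be synced.
        // const issues = await cache.query("issues/open", () => api.openIssues());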

  • Cassandra99 11 hours ago ago

    I developed open-source task management software based on CRDTs with a local-first approach. The motivation was that I primarily manage personal tasks without needing collaboration features, and tools like Linear are overly complex for my use case.

    This architecture offers several advantages:

    1. Data is stored locally, resulting in extremely fast software response times

    2. Supports convenient full database export and import

    3. Server-side logic is lightweight, requiring minimal performance overhead and development complexity, with all business logic implemented on the client

    4. Simplified feature development, requiring only local logic operations

    There are also some limitations:

    1. Only suitable for text data storage; object storage services are recommended for images and large files

    2. Synchronization-related code requires extra caution in development, as bugs could have serious consequences

    3. Implementing collaborative features with end-to-end encryption is relatively complex

    The technical architecture is designed as follows:

    1. Built on the Loro CRDT open-source library, allowing me to focus on business logic development

    2. Data processing flow: User operations trigger CRDT model updates, which export JSON state to update the UI. Simultaneously, data is written to the local database and synchronized with the server.

    3. The local storage layer is abstracted through three unified interfaces (list, save, read), using platform-appropriate storage solutions: IndexedDB for browsers, file system for Electron desktop, and Capacitor Filesystem for iOS and Android.

    4. Implemented end-to-end encryption and incremental synchronization. Before syncing, the system calculates differences based on the server and client versions and encrypts the data using AES before uploading. The server maintains a base version with its content and incremental patches between versions. When accumulated patches reach a certain size, the system uploads an encrypted full database as the new base version, keeping subsequent patches lightweight (sketched below).
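
    Roughly, the base-plus-patches scheme in point 4 looks like this (names, signatures, and the threshold are assumptions, not the project's actual code):

        // Server keeps one encrypted base snapshot plus encrypted patches.
        interface ServerState {
          baseVersion: number;
          base: Uint8Array;      // AES-encrypted full snapshot
          patches: Uint8Array[]; // AES-encrypted increments since the base
        }

        const COMPACT_THRESHOLD = 1 << 20; // assumed: compact after ~1 MiB of patches

        async function push(
          server: ServerState,
          diff: Uint8Array, // CRDT export of local ops the server lacks
          fullSnapshot: () => Uint8Array,
          encrypt: (b: Uint8Array) => Promise<Uint8Array>,
        ) {
          server.patches.push(await encrypt(diff));
          const total = server.patches.reduce((n, p) => n + p.length, 0);
          if (total > COMPACT_THRESHOLD) {
            // Upload a fresh encrypted snapshot as the new base so future
            // patches stay small; the old patches can be dropped.
            server.base = await encrypt(fullSnapshot());
            server.baseVersion += server.patches.length;
            server.patches = [];
          }
        }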

    If you're interested in this project, please visit https://github.com/hamsterbase/tasks

  • jeremy_k 5 hours ago ago

    Not a lot of mention of the collaboration aspect that local-first / sync engines enable. I've been building a project using Zero that is meant to replace a Google Sheet a friend of mine uses for his business. He routinely gets on a Google Meet with a client, they both open the Sheet, and they go through the data together.

    Before the emergence of tools like Zero I wouldn't have ever considered attempting to recreate the experience of a Google Sheet in a web app. I've previously built many live updating UIs using web sockets but managing that incoming data and applying it to the right area in the UI is not trivial. Take that and multiply it by 1000 cells in a Sheet (which is the wrong approach anyway, but it's what I knew how to build) and I can only imagine the mess of code.

    Now with Zero, I write a query to select the data and a mutator to change the data, and everything syncs to anyone viewing the page. It is a pleasure to work with, and I enjoy building the application rather than sweating over applying hyper-specific incoming data changes.

  • blixt 10 hours ago ago

    I've been very impressed by Jazz -- it enables great DX (you're mostly writing synchronous, imperative code) and great UX (everything feels instant, you can work offline, etc).

    Main problems I have are related to distribution and longevity -- as the article mentions, it only grows in data (which is not a big deal if most clients don't have to see that), and another thing I think is more important is that it's lacking good solutions for public indexes that change very often (you can in theory have a public readable list of ids). However, I recently spoke with Anselm, who said these things have solutions in the works.

    All in all local-first benefits often come with a lot of costs that are not critical to most use cases (such as the need for much more state). But if Jazz figures out the main weaknesses it has compared to traditional central server solutions, it's basically a very good replacement for something like Firebase's Firestore in just about every regard.

    • ChadNauseam 2 hours ago ago

      Yeah, Jazz is amazing. The DX is unmatched. My issue when I used it was, they mainly supported passkey-based encryption, which was poorly implemented on windows. That made it kind of a non-starter for me, although I'm sure they'll support traditional auth methods soon. But I love that it's end-to-end encrypted and it's super fun to use.

  • petralithic 12 hours ago ago

    ElectricSQL and TanStack DB are great, but I wonder why they focus so much on local first for the web over other platforms, as in, I see mobile being the primary local first use case since you may not always have internet. In contrast, typically if you're using a web browser to any capacity, you'll have internet.

    Also, the former technologies are local first in theory, but without conflict resolution they can break down easily. This is from my experience making mobile apps that need to be local first, which led me to using CRDTs for that use case.

    • jitl 12 hours ago ago

      Because building local first with web technologies is like infinity harder than building local first with native app toolkits.

      A native app is installed and available offline by default. A website needs a bunch of weird shenanigans with a Web App Manifest or a ServiceWorker, which are more like a pile of parts you can maybe use to build offline availability.

      Native apps can just… make files, and read and write files with whatever 30-year-old C code, and the files will be there in your storage. On the web you have to fuck around with IndexedDB (total pain in the ass), localStorage (completely insufficient for any serious scale, will drop concurrent writes), or the Origin Private File System. The user needs to visit regularly (at least once a month?) or Apple will erase all the local browser state. You can use JavaScript, or hit C code with a wrench until it builds for WASM with Emscripten, and even then struggle to make sync C deal with waiting on async web APIs.
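
      For a taste, here's roughly what "just write a file" looks like with OPFS (a sketch; API availability and eviction behavior vary by browser):

          // Origin Private File System: async, origin-scoped, evictable.
          const root = await navigator.storage.getDirectory();
          const handle = await root.getFileHandle("notes.db", { create: true });
          const writable = await handle.createWritable();
          await writable.write(new TextEncoder().encode("hello"));
          await writable.close(); // readers only see the data after close

          // Even then, you must ask nicely for the data to stick around.
          const persisted = await navigator.storage.persist();
          console.log(persisted ? "storage persisted" : "storage may be evicted");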

      Apple has offered CoreData + CloudKit since 2015, a complete first-party solution for local apps that sync, no backend required. I'm not a Google enthusiast; maybe Firebase is their equivalent? Idk.

      • petralithic 3 hours ago ago

        Sure, they're harder to build, but my question is mainly why build them (for the web in particular)? I don't see the benefits for a web app where I'll usually be online, versus a mobile app where I may frequently have internet shortages when out and about.

        I don't think Apple's solution syncs seamlessly; I needed to use CRDTs for that. Seamless sync is still an unsolved problem for both mobile and web.

      • mike_hearn 10 hours ago ago

        Well .... that's all true, until you want to deploy. Historically deploying desktop apps has been a pain in the ass. App stores barely help. That's why devs put up with the web's problems.

        Ad: unless you use Conveyor, my company's product, which makes it as easy as shipping a web app (nearly):

        https://hydraulic.dev/

        You are expected to bring your own runtime. It can ship anything, but has integrated support for Electron and JVM apps; Flutter works too, although Flutter Desktop is a bit weak.

      • agos 11 hours ago ago

        And if you don't like or don't care to learn CoreData? Just jam a SQLite DB into your application and read from it; it's just C. This was already working before Angular or even Backbone.

      • sofixa 6 hours ago ago

        > Because building local first with web technologies is like infinity harder than building local first with native app toolkits.

        You just have to write one for every client, no big deal, right? Just 2-5x the effort (depending on whether you have mobile clients and whether you decide to support Linux too).

        You even say it yourself, you'll have to use Apple's sync and data solutions, and figure it out for Windows, Android and maybe Linux. Should be easy to sync data between the different storage and sync options...

        Oh, and you have to figure out how to build, sign and update for all OSes too. Pay the Apple fee, and deal with whatever Microsoft nonsense is needed to not get your software flagged as malware on installation. It's around a million times easier to develop and deploy a web application, and that's why most developers and companies default to it, unless they have very good reasons not to.

    • swsieber 7 hours ago ago

      I think the current crop of sync engines greatly benefit from being web-first because they are still young and getting lots of updates. And mobile updates are a huge pain compared to webapp updates.

      The PWA capabilities of webapps are pretty OK at this point. You can even drive notifications from the iOS pinned PWA apps, so personally, I get all I need from web apps pretending to be mobile apps.

      • owebmaster 4 hours ago ago

        Yes. PWAs now only need dev adoption. It is up to us to fight the app store monopolies.

    • 946789987649 10 hours ago ago

      In this case it's not about being able to use the product at all, but the joy of using an incredibly fast and responsive product - which is why you want it local-first.

    • owebmaster 8 hours ago ago

      Because web apps run in a web browser, which is the opposite of a local first platform.

      Local-first is actually the default in any native app

  • mentalgear 11 hours ago ago

    Local-First & Sync-Engines are the future. Here's a great filterable datatable overview of the local-first framework landscape: https://www.localfirst.fm/landscape

    My favorite so far is Triplit.dev (which can also be combined with TanStack DB); two more I'd like to explore are PowerSync and NextGraph. Also, the recent LocalFirst Conf has some great videos; I'm currently watching the NextGraph one (https://www.youtube.com/watch?v=gaadDmZWIzE).

    • CodingJeebus an hour ago ago

      How is the database migration support for these tools?

      Needing to support clients that don't phone home for an extended period, and therefore need to be rolled forward from a really old schema state, seems like a major hassle - but maybe I'm missing something. Trying to troubleshoot one-off frontend bugs for a single product user can be a real pain; I'd hate to see what it's like when you have to factor in the state of their schema as well.

    • rogerkirkness 5 hours ago ago

      Reminds me of Meteor back in the day.

      • 8n4vidtmkvmk 4 hours ago ago

        Whatever happened to Meteor? They made it sound so great. What I didn't like was the tight coupling to MongoDB.

        • explorigin 4 hours ago ago

          For me it was the lack of confirmation from the backend. When it was the next big thing, it sent changes to the backend without waiting for a response. This made the interface crazy fast, but I just couldn't take the risk of the FE being out of sync with the backend. I hope they grew out of that model, but I never took it seriously for that one reason.

          • rogerkirkness 3 hours ago ago

            Yeah, I built my first startup on Meteor, and the prototype for my second one, but there were so many weird state bugs after it got more complicated that we eventually had to switch back to normal patterns to scale it.

    • tbeseda 4 hours ago ago

      They're also the past...

    • virgil_disgr4ce 5 hours ago ago

      Thank you for this, I'm going to have to check out Triplit. Have you tried InstantDB? It's the one I've been most interested in trying but haven't yet.

  • sergioisidoro 6 hours ago ago

    I really like Electric's approach, and it has been on my radar for a long time, because it leaves the write complexity to you and your API.

    Most of the solutions with two-way sync I see work great in simple REST and hobby "todo app" projects. Start adding permissions, evolving business logic, migrations, a growing product and such, and I can't see how they hold up for very long.

    Electric gives you the sync for reads with their "views", but all writes still happen normally through your existing API / REST / RPC. That also makes it a really nice tool to adopt in existing projects.
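
    A sketch of that split (the import and callback shapes are assumptions based on Electric's client docs; verify against the real API before use):

        import { ShapeStream, Shape } from "@electric-sql/client"; // assumed import

        // Reads: subscribe to a synced shape of the issues table.
        const stream = new ShapeStream({
          url: "https://electric.example.com/v1/shape", // assumed host
          params: { table: "issues" },
        });
        const shape = new Shape(stream);
        shape.subscribe(({ rows }) => render(rows)); // re-renders as data syncs

        // Writes: plain old API call; the change flows back via the shape.
        async function updateStatus(id: string, status: string) {
          await fetch(`/api/issues/${id}`, {
            method: "PATCH",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ status }),
          });
        }

        declare function render(rows: readonly unknown[]): void;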

  • nicoritschel 5 hours ago ago

    I've been down this rabbit hole as well. Many of the sync projects seem great at first glance (and are very impressive technically), but perhaps a bit idealistic. Reactive queries are fantastic from a DX perspective, but the "real" databases running in the browser, like SQLite or PGlite, store database pages in IndexedDB because there are data longevity issues with OPFS (IIRC Safari aggressively purges it after a week of inactivity). Maybe the solution is just storing caches in the user's home directory with the filesystem API, like a native application.

    Long story short, if the requirements aren't strictly real-time collaborative and online-enabled, I've found rolling something yourself more in the vein of a "fat client" works pretty well too, for a nice performance boost. I generally prefer using IndexedDB directly - well, via Dexie, which has reactive query support.
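
    The Dexie version of that is pleasantly small; a sketch (the schema is illustrative; liveQuery is Dexie's reactive query primitive):

        import Dexie, { liveQuery, type Table } from "dexie";

        interface Task { id?: number; title: string; done: 0 | 1 } // 0/1: IndexedDB can't index booleans

        class AppDb extends Dexie {
          tasks!: Table<Task, number>;
          constructor() {
            super("app-db");
            this.version(1).stores({ tasks: "++id, done" }); // ++id = auto PK, done = index
          }
        }

        const db = new AppDb();

        // Reactive read: re-emits whenever the tasks table changes.
        liveQuery(() => db.tasks.where("done").equals(0).toArray())
          .subscribe((open) => console.log("open tasks:", open.length));

        // Local-first write: instant, no network involved.
        await db.tasks.add({ title: "try Dexie", done: 0 });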

  • CafeRacer 6 hours ago ago

    We're using dexie+rxjs. A killer combination.

    Described here https://blog-doe.pages.dev/p/my-front-end-state-management-a...

    I've already made improvements to that approach. Decoupling the backend and frontend actually feels like reducing complexity.

    • floydnoel 6 hours ago ago

      Are you using the cloud sync with Dexie? I built an app on it, but it seems to have a hard time switching from local to cloud mode and vice versa. I'm not sure they ever thought people would want to - but why make cloud setup calls for users who didn't want it?

      • CafeRacer 4 hours ago ago

        Nope, locally. Roughly I'm doing something like this: https://gist.github.com/vladmiller/0be83755e65cf5bd942ffba22...

        The example is a bit rough, but it roughly shows how we're using it. I've built a custom sync API that accepts in the request body a list of <object_id>:<short_hash> pairs and returns a kind of JSON list in the format:

        <id>:<hash>:<json_object>
        <id>:<hash>:<json_object>

        The API compares what the client knows vs. the current state and only returns the objects that were updated/created, plus, separately, the objects that were removed. Not ideal for large collections (but then again, why did I store 50mb of historical data on the client in the first place? :D)
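
        The server-side diff is essentially this (a sketch with assumed names):

            type Hash = string;

            function diffSync(
              clientKnows: Record<string, Hash>, // id -> short hash from the client
              serverState: Map<string, { hash: Hash; obj: unknown }>,
            ) {
              const changed: { id: string; hash: Hash; obj: unknown }[] = [];
              for (const [id, { hash, obj }] of serverState) {
                if (clientKnows[id] !== hash) changed.push({ id, hash, obj }); // new or updated
              }
              const removed = Object.keys(clientKnows)
                .filter((id) => !serverState.has(id));
              return { changed, removed };
            }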

  • thruflo 5 hours ago ago

    > Electric’s approach is compelling given it works with existing Postgres databases. However, one gap remains to fill: how to handle mutations?

    Just to note that, with TanStack DB, Electric now has first-class support for local writes / write-path sync using transactional optimistic mutations:

    https://electric-sql.com/blog/2025/07/29/local-first-sync-wi...

  • mkarliner 12 hours ago ago

    Meteor was/is a very similar technology. And I did some fairly major projects with it.

    • mentalgear 11 hours ago ago

      Meteor was amazing, I don't understand why it never got sustainable traction.

      • hobofan 7 hours ago ago

        I think this blog post may provide some insight: https://medium.com/@sachagreif/an-open-letter-to-the-new-own...

        Roughly: Meteor required too much vertical integration at each part of the stack to survive the rapidly changing landscape at the time. On top of that, a lot of the team's focus shifted to Apollo (which, at least from a commercial point of view, seems to have been a good decision).

      • h4ch1 9 hours ago ago

        Seems like Meteor is still actively developed and is framework agnostic! https://github.com/meteor/meteor

      • thrown-0825 9 hours ago ago

        Tight coupling to MongoDB, a fragmented ecosystem / packages, and React came out soon after and kind of stole its lunch money.

        It also had some pretty serious performance bottlenecks, especially when observing large tables for changes that need to be synced to subscribing clients.

        I agree though, it was a great framework for its day. Auth bootstrapping in particular was absolutely painless.

      • dustingetz 9 hours ago ago

        A non-relational, document-oriented pubsub architecture based on MongoDB, good for not much more than chat apps. For toy apps (in 2012-2016) - use Firebase (also for chat apps); for crud-spectrum and enterprise apps - use SQL. And then React happened and consumed the entire spectrum of frontend architectures, bringing us to GraphQL, which didn't, but the hype wave left little oxygen for anything else. (Even if it had, Meteor was still not better.)

  • 10us 8 hours ago ago

    Man, why aren't CouchDB / PouchDB listed? They still work like a charm!

  • minikomi 10 hours ago ago

    My kingdom for a team organised by org mode files in a git repo

  • sturza 7 hours ago ago

    Local-first buys you instant UX by moving state to the client, and then makes everything else a little harder

    • CharlieDigital 7 hours ago ago

          > instant UX
      
      I do not get the hype. At all.

      "Local first" and "instant UX" are the least of my concerns when it comes to project management. "Easy to find things" and "good visibility" are far more important. Such a weird thing to index on.

      I might interact with the project management tool a few times a day. If I'm so frequently accessing it as an IC or an EM that "instant UX" becomes a selling point, then I'm doing something wrong with my day.

      • virgil_disgr4ce 5 hours ago ago

        UI performance is "a weird thing to index on"?

        • CharlieDigital 5 hours ago ago

          Yes? If that's the primary selling point for a project manager versus being just a really damn good project manager with good visibility?

          I've never used a project manager and thought to myself "I want to switch because this is too slow". Even Jira. But I have thought to myself "It's too difficult to build a good workflow with this tool" or "It's too much work to surface good visibility".

          This is not a first-person shooter. I don't care if it's 8ms vs 50ms or even 200ms; I want a product that indexes on being really great at visibility.

          It's like indexing your buying decision for a minivan on whether it can do the quarter mile at 110MPH @ 12 seconds. Sure, I need enough power and acceleration, but just about any minivan on the market is going to do an acceptable and safe speed and if I'm shopping for a minivan, its 1/4 mile time is very low on the list. It's a minivan; how often am I drag racing in it? The buyer of the minivan has a purpose for buying the minivan (safety, comfort, space, cost, fuel economy, etc.) and trap speed is probably not one of them.

          It's a task manager. Repeat that and see how silly it sounds to sweat a few ms interaction speed for a thing you should be touching only a few times a day max. I'm buying the tool that has the best visibility and requires the least amount of interaction from me to get the information I need.

          • asoneth 4 hours ago ago

            > any minivan on the market is going to do an acceptable and safe speed

            Growing up, my folks had an old Winnebago van that took 2+ minutes to hit 60mph, which made highway merges a white-knuckle affair, especially uphill. Performance was a criterion they considered when buying their next minivan. Modern minivans all have acceptable acceleration -- it's still important, it's just no longer something you need to think about.

            However, not all modern interfaces provide an acceptable response time, so it's absolutely a valid criteria.

            As an example, we switched to a SaaS version of Jira recently and things became about an order of magnitude slower. Performing a search now takes >2000ms, opening a filter dropdown takes ~1500ms, filtering the dropdown contents takes another ~1500ms. The performance makes using it a qualitatively different experience. Whereas people used to make edits live during meetings I've noticed more people just jotting changes down in notebooks or Excel spreadsheets to (hopefully remember to) make the updates after the meeting. Those who do still update it live during meetings often voice frustration or sometimes unintentionally perform an operation twice because there was no feedback that it worked the first time.

            Going from ~2000ms to ~200ms per UI operation is an enormous improvement. But past that point there are diminishing returns: from ~200ms to ~20ms is less necessary unless it's a game or drawing tool, and going from 20ms to 2ms is typically overoptimization.

          • tuckerman 2 hours ago ago

            I think there is a mismatch between most commenters on HN and who is making purchasing decisions for something like Linear: it would be the PGM/TPM org or leadership pushing it, and they touch the tool a lot more often. Even if a small speed-up ultimately doesn't make a difference in productivity, the perceived snappiness makes it feel "better/more modern" than what they currently have.

            That said, I really enjoy Linear (it reminds me a lot of buganizer at Google). The speed isn't something I notice much at all, it's more the workflow/features/feel.

          • ajoseps 4 hours ago ago

            I mostly agree with you on this, but JIRA tends to push the envelope in terms of unresponsiveness in its UX. As an IC I only really use it to create/update/search tickets, but I find myself waiting half a second to a couple of seconds for certain flows, especially when finding old tickets.

            Not quite the same as responsiveness, but editing text fields in JIRA has a tendency of not saving in-progress work if you accidentally escape out. Also, hyperlinking between the visual and text modes is pretty annoying, since you can easily forget which mode you're in.

            Honestly as I type these out there are more and more frustrations I can think of with JIRA. Will we ever move away? Not anytime soon. It integrates with everything and that’s hard to replace.

            It’s still frustrating though.

    • JamesSwift 6 hours ago ago

      I'd say you're underreporting how much harder everything else becomes, but yes, definitely agreed

    • captainregex 7 hours ago ago

      This is such a clean and articulate way of putting it. The discussion around here the last few days about local-first and the role it's going to play has been phenomenal and really genuine.

  • terencege 11 hours ago ago

    I'm also building a local-first editor and rolling my own CRDTs. There are enormous challenges in making it work. For example, for the storage size issue mentioned in the blog, I ended up using yjs's approach, which only increases the clock for insertions; for deletions it removes the content and retains only the deleted item ids, which can be efficiently compressed since most ids are contiguous.
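
    The compression win is easy to see in a sketch (illustrative, not yjs's actual encoding): contiguous ids collapse into (start, length) runs.

        // Run-length encode a sorted list of item ids.
        function compressIds(sorted: number[]): [start: number, len: number][] {
          const runs: [number, number][] = [];
          for (const id of sorted) {
            const last = runs[runs.length - 1];
            if (last && id === last[0] + last[1]) last[1]++; // extends the current run
            else runs.push([id, 1]); // starts a new run
          }
          return runs;
        }

        // compressIds([4, 5, 6, 7, 100, 101]) => [[4, 4], [100, 2]]
        // A thousand consecutive deletions collapse into a single pair.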

    • jddj 11 hours ago ago

      In case you missed it and it's relevant, there was an automerge v3 announcement posted the other day here which claimed some nice compression numbers as well

      • terencege 11 hours ago ago

        As far as I know, automerge uses a DAG history log and garbage-collects by comparing the version clock heads of the two clients. That's different from yjs. I haven't followed their compression approach in v3 yet; I'll check when I have time.

  • antgiant 7 hours ago ago

    I’ve been working on a small browser app that is local first and have been trying to figure out how to pair it with static hosting. It feels like this should be possible but so far the tooling all seems stuck in the mindset of having a server somewhere.

    My use case is scoring live events that may or may not have Internet connection. So normal usage is a single person but sometimes it would be nice to allow for multi person scoring without relying on centralized infrastructure.

    • chr15m 5 hours ago ago

      I was in the same boat and I found Nostr is a perfect fit. You can write a 100% client side no-server app and persist your data to relays.

      Here's the app I built if you want to try it out: https://github.com/chr15m/watch-later

    • swsieber 7 hours ago ago

      Honestly, having used InstantDB (one of the providers listed in their post), I think it'd be a pretty nice fit.

      I've been writing a budget app for my wife and me, and I've made it 100% free with 3rd party hosting:

      * InstantDB free tier allows 1 dev. That's the remote sync.

      * Netlify for the static hosting

      * Free private GitLab CI/CD for running some email notification polling; basically a poor man's hosted cron.

      • antgiant 6 hours ago ago

        I may end up doing that, but I really wish there was a true p2p option that doesn’t have me relying on someone not rug pulling their free tier sync server.

        • swsieber 5 hours ago ago

          Yeah... true p2p is pretty hard though, to the point that even stuff like WebRTC requires external servers to set up the data sync portion. It would be nice to develop something that worked at that layer though.

          IIUC, InstantDB is open source with a Docker container you can run yourself, but at this point it's designed to run in a more cloud-like environment than I'd like. Last time I checked there was at least one open PR to make it easier to run in a different environment, but I haven't checked in recently.

  • b_e_n_t_o_n 10 hours ago ago

    Local first is super interesting and absolutely needed - I think most of the bugs I run into with web apps have to do with sync, exacerbated by poor internet connectivity. The local properties don't interest me as much as request ordering and explicit transactions. You aren't guaranteed that requests resolve in order, which can result in a lot of inconsistencies. These local-first sync abstractions are a bit like bringing a bazooka to a water gun fight - it would be interesting to see some halfway approaches to this problem.
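
    One halfway approach is keeping plain requests but serializing mutations per entity, so responses can't apply out of order; a sketch (illustrative, not a library):

        // Chain each mutation for a given key onto the previous one.
        const queues = new Map<string, Promise<unknown>>();

        function enqueue<T>(key: string, op: () => Promise<T>): Promise<T> {
          const prev = queues.get(key) ?? Promise.resolve();
          const next = prev.then(op, op); // run even if the previous op failed
          queues.set(key, next);
          return next;
        }

        // enqueue("issue:42", () => api.setStatus(42, "done"));
        // enqueue("issue:42", () => api.setTitle(42, "ship it")); // runs second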

  • preaching5271 10 hours ago ago

    Automerge + Keyhive is the future https://www.inkandswitch.com/project/keyhive/

  • madisvain 10 hours ago ago

    Local first is amazing. I have been building a local-first invoicing application, Upcount, since 2020: https://www.upcount.app/.

    First I used PouchDB which is also awesome https://pouchdb.com/ but now switched to SQLite and Turso https://turso.tech/ which seems to fit my needs much better.

  • qweiopqweiop 9 hours ago ago

    It's starting to feel to me like a lot of tech is just converging on other platforms' solutions. This, for example, sounds incredibly similar to how a mobile app works (on the surface). Of course it goes the other way too, with mobile tech taking declarative UIs from the web.

  • rylan-talerico 6 hours ago ago

    I'm a big fan of local-first. InstantDB has productized it – worth looking into if you're interested in taking a local-first approach.

  • mizzao 7 hours ago ago

    Is this technical architecture so different from Meteor back in the day? Just curious for those who have a deeper understanding.

  • incorrecthorse 12 hours ago ago

    > For the uninitiated, Linear is a project management tool that feels impossibly fast. Click an issue, it opens instantly. Update a status and watch in a second browser, it updates almost as fast as the source. No loading states, no page refreshes - just instant interactions.

    How garbage the web has become when a low-latency click action qualifies as "impossibly fast". This is ridiculous.

    • mossTechnician 6 hours ago ago

      Hacker News comment sections are the only part of the internet that still feel "impossibly fast" to me. Even on Android, thousands of comments can scroll as fast as the OS permits, and the DOM is so simple that I've reopened day-old tabs to discover the page is still loaded. Even projects like Mastodon and Lemmy, which aren't beholden to modern web standards, have shifted to significant client-side scripting that lacks the finesse to appear performant.

      • bombcar 3 hours ago ago

        The modern web browser trick of "you haven't looked at this tab in an hour, so we killed/unloaded it" is infuriating.

        • ndileas 3 hours ago ago

          To be fair... Lots of people just never close their tabs. So there's very real resource limitations. I've seen my partner's phone with a few hundred tabs open.

        • freedomben an hour ago ago

          I would like this feature if I had more control over it. The worst part is clicking a tab that was unloaded and having it make a new (fresh) web request when I don't want it to.

        • cobbal 2 hours ago ago

          In firefox it's possible to disable it: https://firefox-source-docs.mozilla.org/browser/tabunloader/ . Enabled is probably the reasonable default for it though.

        • aidenn0 an hour ago ago

          Particularly if you have maybe 40 tabs open and 128GB of ram.

    • o_m 7 hours ago ago

      Back in 2018 I worked for a client that required we used Jira. It was so slow that the project manager set everything up in Excel during our planning meetings. After the meeting she would manually transfer it to Jira. She spent most of her time doing this. Each click in the interface took multiple seconds to respond, so it was impossible to get into a flow.

      • ben_w 7 hours ago ago

        Hm. While I'm not even remotely excited by Jira (or any other PM software), I've never noticed it being that bad. Annoying? Absolutely! But not that painfully slow.

        Were some extras installed? Or is this one of those tools that needs a highly performant network?

        • icedchai 7 hours ago ago

          The problem with Jira (and other tools) is it inevitably gets too many customizations: extra fields, plugins, mandatory workflows, etc. Instead of becoming a tool to manage work, it starts getting in the way of real work and becomes work itself.

        • davorak 4 hours ago ago

          > I've never noticed it being that bad. Annoying? Absolutely! But not that painfully slow.

          I have only seen a few self-hosted Jira instances, but all of them were mind-numbingly slow.

          Jira Cloud, on the other hand, is faster now than what I remember from 2018. I still call it painful any time I'm trying to be quick about something; most of the time, though, it's only annoying.

        • davey48016 7 hours ago ago

          I've seen on-prem Jira at large companies get that slow. I'm not sure if it's the plugins or just the company being stingy on hardware.

          • mingus88 6 hours ago ago

            Yeah it’s probably both. Underfunded IT department, probably one or two people who aren’t allowed to say no.

          • ben_w 7 hours ago ago

            I can easily believe either, but I am still curious what the failure mode(s) is (/are).

            • bombcar 3 hours ago ago

              Underconfigured hardware and neglected old installations are the ones I've encountered.

              Large numbers of custom workflows and rules can do it, too, but most have been the first.

        • cloverich 3 hours ago ago

          It is faster than it was back then - I've been using it for 10+ years. Hating every moment of it. But it is definitely better than it was.

      • integralid 5 hours ago ago

        At this point I think I would try to automate this pointless time sink with a script and jira API.

        • victorbjorklund 5 hours ago ago

          100%. Their API isn't even bad. I made a script to pull lots of statistics and stuff from Jira.

      • impulsivepuppet 3 hours ago ago

        Looking at software development today, it's as if the pioneers failed to pass the torch on to the next generation of developers.

        While I see strict safety/reliability/maintainability concerns as a net positive for the ecosystem, I also find that we are dragged down by deprecated concepts at every step of the way.

        There's an ever-growing disconnect. On one side we have the ways hardware offers to achieve top performance, be it specialized instruction sets or a completely different type of chip, such as TPUs and the like. On the other side live the denizens of the peak of software architecture, to whom all of that sounds like wizard talk. Time and time again, what is lauded as convention over configuration ironically becomes the maintenance nightmare it tries to solve, as these conventions come with configurations for systems that do not actually exist. All the while, these conventions breed a generation of people who are not capable of understanding the underlying contracts and constraints within systems, myself included. It became clear that, for example, there isn't much sense in learning a SQL engine's specifics when your job forces you to use Hibernate, which puts a lot of intellectual strain into following OOP, a movement characterized by deliberately departing from performance in favor of being more intuitive, at least in theory.

        As limited as my years of experience are, I can't help but feel complacent in the status quo unless I take deliberate action to continuously deepen my knowledge and work on my social skills, to gain whatever agency and proficiency I can get my hands on.

      • esafak 7 hours ago ago

        Stockholm syndrome

    • andy99 9 hours ago ago

      I also winced at "impossibly fast" and realize that it must refer to some technical perspective that is lost on most users. I'm not a frontend dev; I use Linear, and I'd say I didn't notice the speed - it seems to work about the same as any other web app. I don't doubt it's got cool optimizations, but I think they're lost on most people who use it. (I don't mean to say optimization isn't cool.)

      • brailsafe 2 hours ago ago

        > I'd say I didn't notice speed, it seems to work about the same as any other web app. I don't doubt it's got cool optimizations, but I think they're lost on most people that use it.

        We almost forgot that's the point. Speed is good design, the absence of something being in the way. You notice a janky cross platform app, bad electron implementation, or SharePoint, because of how much speed has been taken away instead of how much has been preserved.

        It's not the whole of good design though, just a pretty fundamental part.

        Sports cars can go fast even though they totally don't need to, their owners aren't necessarily taking them to the track, but if they step on it, they go, it's power.

      • wooque 8 hours ago ago

        Seconded. I use Linear as well and I didn't notice anything close to "impossibly fast"; it's faster than Jira for sure, but nothing spectacular.

        • dijit 8 hours ago ago

          If you get used to Jira, especially Ubisoft's internally hosted Jira (which was on an oversubscribed 10-year-old server that was constantly thrashing, hosted half a world away) ... well, it's easy for things to feel "impossibly fast".

          In fact, at the Better Software Conference this year there were people discussing the fact that if you care about performance, people think your software didn't actually do the work, because they're not used to useful things being snappy.

    • jitl 12 hours ago ago

      A web request to a data center, even with a very fast backend server, will struggle to beat 8ms (120Hz display) or even 16ms (60Hz display), the budget for painting a navigation on the next frame. You need the data local to the device, and ideally already in memory, to hit 8ms navigation.

      • ahofmann 12 hours ago ago

        This misses the point, or at least other numbers matter more than yours.

        In 2005 we wrote entire games for browsers without any frontend framework (jQuery wasn't invented yet) and managed to generate responses in under 80 ms in PHP. Most users had their first bytes in 200 ms and it felt instant to them, because browsers are incredibly fast, when treated right.

        So the internet was indeed much faster then than it is now. Just look at GitHub. They used to be fast. Now they've rewritten their frontend in React and it feels sluggish and slow.

        • seanw265 5 hours ago ago

          > Now they rewrite their frontend in react and it feels sluggish and slow.

          I find this is a common sentiment, but is there any evidence that React itself is actually the culprit of GH's supposed slowdown? GH has updated its architecture many times over, and its scale has increased by orders of magnitude - it quite literally serves over a billion git repos.

          Not to mention that the implementation details of any React application can make or break its performance.

          Modern web tech often becomes a scapegoat, but the web today enables experiences that were simply impossible in the pre-framework era. Whatever frustrations we have with GitHub’s UI, they don’t automatically indict the tools it’s built with.

          • Izkata 4 hours ago ago

            It's more of a "holding it wrong" situation with the data stores used with React, rather than with React itself: updated data gets accessed too high in the tree, causing large chunks of the page to be unnecessarily re-rendered.

            This was actually the recommended way to do it for years, with the atom/molecule/organism/section/page style of organizing React components intentionally moving data access up the tree into organisms and higher. I don't know what the current recommendations are.

          • do_not_redeem 4 hours ago ago

            I don't see how GH's backend serving a billion repos would affect the speed of their frontend JavaScript. React is well known to be slow, but if you need numbers, you can look at js-framework-benchmark and see how many React results are orange and red.

            https://github.com/krausest/js-framework-benchmark

            • seanw265 4 hours ago ago

              Sure, React has overhead. No one disputes that. But pointing to a few red squares on a synthetic benchmark doesn’t explain the actual user experience on GitHub today. Their entire stack has evolved, and any number of architectural choices along the way could impact perceived performance.

              Used properly, React’s overhead isn’t significant enough on its own to cause noticeable latency.

        • Zanfa 11 hours ago ago

          > Now they rewrite their frontend in react and it feels sluggish and slow.

          And they decided to drop legacy features such as <a> tags, and broke browser navigation in their new code viewer. Right-clicking a file to open it in a new tab doesn't work.

        • DanielHB 11 hours ago ago

          Unless you are running some really complicated globally distributed backend, your roundtrip will always be higher than 80ms for all users outside your immediate geographical area. And the techniques to "fix" this usually only mitigate the problem in read scenarios.

          The techniques Linear uses are not so much about backend performance and can be applicable for any client-server setup really. Not a JS/web specific problem.

          • ahofmann 11 hours ago ago

            My take is that a performant backend gets you so much runway that you can reduce a lot of complexity in the frontend. And yes, sometimes that means having globally distributed databases.

            But the industry is going the other way: building frontends that try to hide slow backends, and in doing so handling so much state (and visual fluff) that they get fatter and slower every day.

            • jakelazaroff 6 hours ago ago

              This is an absolutely bonkers tradeoff to me. Globally distributed databases are either 1. a very complex infrastructure problem (especially if you need multiple writable databases), or 2. lock you into a vendor's proprietary solution (like Cloudflare D1).

              All to avoid writing a bit of JavaScript.

          • porker 7 hours ago ago

            > Unless you are running some really complicated globally distributed backend your roundtrip will always be higher than 80ms for all users outside your immediate geographical area.

            Many of us don't have to worry about this. My entire country is within 25ms RTT of an in-country server. I can include a dozen more countries within an 80ms RTT. Lots of businesses focus just on their country and that's profitable enough, so for them they never have to think about higher RTTs.

            • nasretdinov 3 hours ago ago

              If you put your server e.g. in Czechia you can provide ~20ms latency for the whole of Europe :)

          • imiric 10 hours ago ago

            > Unless you are running some really complicated globally distributed backend your roundtrip will always be higher than 80ms for all users outside your immediate geographical area.

            The bottleneck is not the roundtrip time. It is the bloated and inefficient frontend frameworks, and the insane architectures built around them.

            Here's the creator of Datastar demonstrating a WebGL app being updated at 144FPS from the server: https://www.youtube.com/watch?v=0K71AyAF6E4&t=848

            This is not magic. It's using standard web technologies (SSE), and a fast and efficient event processing system (NATS), all in a fraction of the size and complexity of modern web frameworks and stacks.

            Sure, we can say that this is an ideal scenario, that the server is geographically close and that we can't escape the rules of physics, but there's a world of difference between a web UI updating at even 200ms, and the abysmal state of most modern web apps. The UX can be vastly improved by addressing the source of the bottleneck, starting by rethinking how web apps are built and deployed from first principles, which is what Datastar does.
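
            The server side of that pattern is tiny; a minimal SSE sketch with Node's http module (endpoint and tick rate assumed, no framework):

                import http from "node:http";

                http.createServer((req, res) => {
                  res.writeHead(200, {
                    "Content-Type": "text/event-stream",
                    "Cache-Control": "no-cache",
                  });
                  // Push a UI patch every tick; a real app would
                  // push on actual events (e.g. from NATS).
                  const timer = setInterval(() => {
                    res.write(`data: ${JSON.stringify({ t: Date.now() })}\n\n`);
                  }, 16);
                  req.on("close", () => clearInterval(timer));
                }).listen(3000);

                // Browser side:
                // new EventSource("/").onmessage =
                //   (e) => patchUi(JSON.parse(e.data));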

            • mike_hearn 7 hours ago ago

              To see this first hand try this website if you're in Europe (maybe it's also fast in the US, not sure):

              https://www.jpro.one/?

              The entire thing is a JavaFX app (i.e. desktop app), streaming DOM diffs to the browser to render its UI. Every click is processed server side (scrolling is client side). Yet it's actually one of the faster websites out there, at least for me. It looks and feels like a really fast and modern website, and the only time you know it's not the same thing is if you go offline or have bad connectivity.

              If you have enough knowledge to efficiently use your database, like by using pipelining and stored procedures with DB enforced security, you can even let users run the whole GUI locally if they want to, and just have it do the underlying queries over the internet. So you get the best of both worlds.

              There was a discussion yesterday on HN about the DOM and how it'd be possible to do better, but the blog post didn't propose anything concrete beyond simplifying and splitting layout out from styling in CSS. The nice thing about JavaFX is it's basically that post-DOM vision. You get a "DOM" of scene graph nodes that correspond to real UI elements you care about instead of a pile of divs, it's reactive in the Vue sense (you can bind any attribute to a lazily computed reactive expression or collection), it has CSS but a simplified version that fixes a lot of the problems with web CSS and so on and so forth.

              • electroly 4 hours ago ago

                > Every click is processed server side

                On this site, every mouse move and scroll is sent to the server. This is an incredibly chatty site - like, way more than it needs to be to accomplish this. Check the websocket messages in Dev Tools and wave the mouse around. I suspect that could be improved to avoid constantly transmitting data while the user is reading. If/when mobile is supported, this behavior will be murder for battery life.

              • recursion 7 hours ago ago

                At least for me this site is completely broken on mobile. I'm not saying it's not possible to write sites for mobile using this tech... But it's not a great advert at all.

                • monooso 6 hours ago ago

                  Hardly a surprise, given that:

                  > The entire thing is a JavaFX app (i.e. desktop app)

                  Besides, this discussion is not about whether or not a site is mobile-friendly.

              • integralid 5 hours ago ago

                >the only time you know it's not the same thing is if you go offline or have bad connectivity.

                So, like most of the non-first world? Hell, I'm in a smaller town/village next to my capital city for a month and internet connection is unreliable.

                Having said that, the website was usable for me - I wouldn't say it's noticeably fast, but it wasn't slow either.

                • smj-edison an hour ago ago

                  I feel like it depends a lot on what kind of website you're using. Note taking app? Definitely should work offline. CRUD interface? You already need to be constantly online, since every operation needs to talk to the server.

              • dminik 7 hours ago ago

                I'm not impressed. On mobile, the docs are completely broken and unreadable. Visiting a different docs subpage breaks the back button.

                Firefox mobile seems to think the entire page is a link. This means I can't highlight text for instance.

                Clicking on things feels sluggish. The responses are fast, but still perceptible. Do we really need a delay for opening a hamburger menu?

        • SJC_Hacker 2 hours ago ago

          React isn't the problem. You can write a very fast interface in React. It's (usually) too many calls to the backend that slow everything to a crawl

      • dustingetz 10 hours ago ago

        Actually, if you live near a city the edge network is a 6ms RTT ping away - that's 3ms each direction. So if, e.g., a virtual-scroll frontend is windowing over a server-side array retained in memory, you can get there and back over a websocket, inclusive of the windowing, streaming records in and out of the DOM at the edges of the viewport, and paint the frame, all within the 8ms 120Hz frame budget - with the device idle and only the visible resultset in client memory. That's 120Hz networking. Even if you don't live near a city, you can probably still hit 60Hz. It is not 2005 anymore. We have massively multiplayer video games and competitive multiplayer shooters, and we can render them in the cloud now. Linear is office software, not e-sports; we're not running it on the subway or in Africa. And AI happens in the cloud - Linear's website lead text is about agents.

        • Joeri 7 hours ago ago

          Those are theoretical numbers for a small elite. Real world numbers for most of the planet are orders of magnitude worse.

          • dustingetz 7 hours ago ago

            those are my actual numbers from my house in the Philadelphia suburbs right now, 80 miles away from the EWR data center outside NYC. Feel free to double them; you're still inside the 60Hz frame budget with better-than-e-sports latency

            edit: I am 80 miles from EWR not 200

            • dpflug 5 hours ago ago

              Like they said, for a small elite. If you don't see yourself as such, adjust your view.

              • dustingetz 4 hours ago ago

                what is your ping to fly.io right now?

                • electroly 3 hours ago ago

                  90ms for me. My fiber connection is excellent and there is no jitter--fly.io's nearest POP is just far away. You mentioned game streaming so I'll mention that GeForce Now's nearest data center is 30ms away (which is actually fine). Who is getting 6ms RTT to a data center from their house, even in the USA?

                  More relevantly... who wants to architect a web app to have tight latency requirements like this, when you could simply not do that? GeForce Now does it because there's no other way. As a web developer you have options.

                • hansvm 4 hours ago ago

                  Mine's 167-481ms (high jitter). It's the best internet I can get right now, a few suburbs south of San Francisco. Comcast was okayish, lower mean latency, but it had enough other problems that T-Mobile home internet was a small improvement.

      • andrepd 6 hours ago ago

        What does web request latency have to do with it? News articles or simple forms take 5 seconds to load. Why? This is not bounded by latency.

      • delusional 9 hours ago ago

        I can't help but feel this is missing the point. Next-refresh click latency is a fantastic ideal; we're just not even close to it.

        For me, on the web today, the click feedback for a large website like YouTube is 2 seconds for first change and 4 seconds for content display. 4000 milliseconds. I'm not even on some bad connection in Africa. This is a gigabit connection with 12ms of latency according to fast.com.

        If you can bring that down to even 200ms, that'll feel comparatively instantaneous for me. When the whole internet feels like that, we can talk about taking it down to 16ms.

    • jallmann 9 hours ago ago

      Linear is actually so slow for me that I dread having to go into it and do stuff. I don’t care if the ticket takes 500ms to load, just give me the ticket and not a fake blinking cursor for 10 seconds or random refreshes while it (slowly) tries to re-sync.

      Everything I read about Linear screams over-engineering to me. It is just a ticket tracker, and a rather painful one to use at that.

      This seems to be endemic to the space though, e.g. Asana tried to invent their own language at one point.

      • presentation 7 hours ago ago

        Yeah their startup times aren't great. They're making a trade-off by loading a ton of data up front, though to be fair a lot of the local-first web tooling didn't really exist when they were founded. The nascent Zero Sync framework's example project is literally a Linear clone that they use as their actual bug tracker; it loads way faster and has similarly snappy performance, so it seems clear that it can be done better.

        That said at this point Linear has more strengths than just interaction speed, mainly around well thought out integrations.

        • 8n4vidtmkvmk 5 hours ago ago

          Maybe it doesn't scale well then? I synced my Linear with GitHub. It has a few thousand issues. Lightning fast. Perhaps you guys have way more issues?

      • adregan 5 hours ago ago

        I hate to be a hacker news poster who responds to a positive post with negativity, but I was also surprised at the praise in the article.

        I don’t find Linear to be all that quick, but apparently macOS thinks it’s a resource hog (or it has memory leaks). I leave Linear open and it perpetually has a banner that tells me it was killed and restarted because it was using too much memory. That likely colors my experience.

    • wim 7 hours ago ago

      Funny how reasonable performance is now treated as some impossible lost art on the web sometimes.

      I posted a little clip [1] of development on a multiplayer IDE for tasks/notes (local-first+e2ee), and a lot of people asked if it was native, rust, GPU rendered or similar. But it's just web tech.

      The only "secret ingredients" here are using plain ES6 (no frameworks/libs), having data local-first with background sync, and using a worker for off-UI-thread tasks. Fast web apps are totally doable on the modern web, and sync engines are a big part of it.

      [1] https://x.com/wcools/status/1900188438755733857
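
      The worker part is just the standard Worker API. A minimal sketch, with hypothetical file names and search function:

          // main.ts - keep the UI thread free by shipping heavy work to a worker.
          const worker = new Worker(new URL("./search.worker.ts", import.meta.url), {
            type: "module",
          });
          worker.postMessage({ query: "meeting notes" });
          worker.onmessage = (e: MessageEvent) => {
            renderResults(e.data); // back on the UI thread, just paint
          };
          declare function renderResults(results: unknown): void; // hypothetical

          // search.worker.ts - runs entirely off the UI thread.
          self.onmessage = (e: MessageEvent) => {
            const results = expensiveSearch(e.data.query); // blocking here is fine
            self.postMessage(results);
          };
          declare function expensiveSearch(query: string): unknown; // hypothetical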

    • lwansbrough 11 hours ago ago

      Trite remark. The author was referring to behaviour that has nothing to do with “how the web has become.”

      It is specifically to do with behaviour that is enabled by using shared resources (like IndexedDB across multiple tabs), which is not simple HTML.

      To do something similar over the network, you have until the next frame deadline. That's 8-16ms RTT: 4ms out and 4ms back at best, with a 0ms budget for processing. Good luck!

    • fleabitdev 10 hours ago ago

      I was also surprised to read this, because Linear has always felt a little sluggish to me.

      I just profiled it to double-check. On an M4 MacBook Pro, clicking between the "Inbox" and "My issues" tabs takes about 100ms to 150ms. Opening an issue, or navigating from an issue back to the list of issues, takes about 80ms. Each navigation includes one function call which blocks the main thread for 50ms - perhaps a React rendering function?

      Linear has done very good work to optimise away network activity, but their performance bottleneck has now moved elsewhere. They've already made impressive improvements over the status quo (about 500ms to 1500ms for most dynamic content), so it would be great to see them close that last gap and achieve single-frame responsiveness.
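
      If you want to check your own app the same way, the Long Tasks API reports any main-thread block over 50ms without needing a full profile. A minimal sketch (supported in Chromium-based browsers, as far as I know):

          // Log every main-thread block longer than 50ms (a "long task").
          const observer = new PerformanceObserver((list) => {
            for (const entry of list.getEntries()) {
              console.log(`long task: ${Math.round(entry.duration)}ms at ${Math.round(entry.startTime)}ms`);
            }
          });
          observer.observe({ entryTypes: ["longtask"] });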

      • m-s-y 7 hours ago ago

        150ms is sluggish? 4000ms is normal?

        The comments are absolutely wild in here with respect to expectations.

        • layer8 7 hours ago ago

          150 ms is definitely on the “not instantaneous” side: https://ux.stackexchange.com/a/42688

          The stated 500 ms to 1500 ms are unfortunately quite frequent in practice.

          • fleabitdev 6 hours ago ago

            Interesting fact: the 50ms to 100ms grace period only works at the very beginning of a user interaction. You get that grace period when the user clicks a button, but when they're typing in text, continually scrolling, clicking to interrupt an animation, or moving the mouse to trigger a hover event, it's better to provide a next-frame response.

            This means that it's safe for background work to block a web browser's main thread for up to 50ms, as long as you use CSS for all of your animations and hover effects, and stop launching new background tasks while the user is interacting with the document. https://web.dev/articles/optimize-long-tasks
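
            The pattern from the linked article is just to chunk the work and yield between chunks. A minimal sketch:

                // Hand control back to the event loop so pending input
                // handlers can run between chunks of background work.
                function yieldToMain(): Promise<void> {
                  return new Promise((resolve) => setTimeout(resolve, 0));
                }

                async function runBackgroundTasks(tasks: Array<() => void>): Promise<void> {
                  for (const task of tasks) {
                    task();              // each chunk should stay well under 50ms
                    await yieldToMain(); // clicks and keypresses get serviced here
                  }
                }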

          • 8n4vidtmkvmk 4 hours ago ago

            I think under 400ms is fast enough for loading a new page or dialog. For loading search suggestions or opening a date picker or similar, probably not.

    • zwnow 8 hours ago ago

      Web applications have become too big and heavy. Corps want to control everything. A simple example: a note taking app which apparently also has to sync across devices. They are going to store every note you take on their servers, and who knows if they really delete your deleted notes. They'll also track how often you visited your notes, for whatever reason. It wouldn't surprise me if the app also required geolocation and the like. Mix that with lots of users and you get loading times unheard of with small-scale apps. Web apps should scale down, but like with everything we need more, more, more, bigger, better, faster.

      • zem 3 hours ago ago

        > a simple note taking app which apparently also has to sync throughout devices

        that is the entire point of the app, surely! whether or not the actual implementation is bad, syncing across devices is what users want in a note taking app for the most part.

    • tomwphillips 3 hours ago ago

      Indeed. I have been using it for 5-6 months in a new job and I haven't noticed it being faster than the typical web app.

      If anything it is slow because it is a pain to navigate. I have browser bookmarks for my most frequented pages.

    • OJFord 4 hours ago ago

      I don't know if 'the web' in general is fair; here the obvious comparison is Jira, which is dog slow & clunky.

    • captainregex 7 hours ago ago

      One of my day-to-day responsibilities involves using a portal tied to MSFT Dynamics on the back end, and it is the laggiest and most terrible experience ever. We used to have Java apps that ran locally, then moved to this in the name of cloud migration, and it feels like it was designed by someone whose product knowledge was limited to the first 2/5 lessons of a free Coursera (RIP) module.

    • presentation 7 hours ago ago

      Since it’s so easy then I’m rooting for you to make some millions with performant replacements for other business tools, should be a piece of cake

    • andrepd 6 hours ago ago

      It is definitely ridiculous. And it's not just a nitpick: it's ludicrous how sloooow and laggy typing text in a monstrosity like Jira is, or just reading through an average news site. Makes everything feel like a slog.

  • croes 10 hours ago ago

    But how is conflicting data handled?

    For instance, one user closes something while another aborts the same thing.

  • Gravityloss 13 hours ago ago

    Some problem on the site. Too much traffic?

        Secure Connection Failed
        An error occurred during a connection to bytemash.net. PR_END_OF_FILE_ERROR
        Error code: PR_END_OF_FILE_ERROR
    • jcusch 13 hours ago ago

      It looks like I was missing a www subdomain CNAME for the underlying github pages site. I think it's fixed now.
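
      For anyone else hitting this on GitHub Pages, the missing record looks roughly like this (the username is a placeholder):

          www.bytemash.net.  CNAME  <github-username>.github.io.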

      • Gravityloss 12 hours ago ago

        I still see the same error

        • Gravityloss 11 hours ago ago

          Ok, it works, problem was probably on my end.

  • yanis_t 10 hours ago ago

    I don't get it. You still have to sync the state one way or another; the network latency is still there.

    • Aldipower 10 hours ago ago

      Me neither. Considering we are talking about collaborative network applications, you are losing the single source of truth (the server database) with the local-first approach. And it just adds so much more complexity. Also, as your app grows, you probably end up implementing the business logic twice: on the server and locally. I really do not get it.

      • jitl 8 hours ago ago

        You can use the same business logic code on both the client and server.

        With the Linear approach, the server remains the source of truth.
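
        A sketch of what that sharing can look like (names are hypothetical; this isn't Linear's actual code):

            // shared/mutations.ts - imported by both the client and the server.
            export interface Issue {
              id: string;
              status: "open" | "closed";
              closedAt?: number;
            }

            export function closeIssue(issue: Issue, now: number): Issue {
              if (issue.status === "closed") return issue; // idempotent
              return { ...issue, status: "closed", closedAt: now };
            }

            // The client applies closeIssue() optimistically to its local copy;
            // the server applies the same function to the authoritative copy,
            // so the two results can't diverge.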

    • WickyNilliams 9 hours ago ago

      The latency is off the critical path with local-first. You still sync changes over the network, sure, but your local mutations are stored directly and immediately in a local DB.
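
      Roughly, in sketch form (localDb and syncQueue are hypothetical stand-ins for whatever storage and sync engine you use):

          // The click handler never waits on the network.
          async function updateTitle(issueId: string, title: string): Promise<void> {
            await localDb.issues.update(issueId, { title });     // instant local write
            syncQueue.enqueue({ op: "update", issueId, title }); // pushed in the background
            render(); // UI reads from the local DB, so it updates immediately
          }
          declare const localDb: { issues: { update(id: string, patch: object): Promise<void> } };
          declare const syncQueue: { enqueue(change: object): void };
          declare function render(): void;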

    • croes 10 hours ago ago

      But the user gets instant results

  • tommoor 7 hours ago ago

    If you want to work on Linear's sync infrastructure or product – we're hiring. The day-to-day DX is incredible.

    • theappsecguy 6 hours ago ago

      You should put pay bands on job listings to save everyone time and sanity.

    • marmalar 2 hours ago ago

      I'm curious what the WLB is like.

  • mbaranturkmen 12 hours ago ago

    How is this approach better than using react-query with persisted storage, periodically syncing local storage with the server? Perhaps I am missing something.
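
    (For reference, the setup I mean looks roughly like this, assuming the documented @tanstack/react-query-persist-client API:)

        import { QueryClient } from "@tanstack/react-query";
        import { persistQueryClient } from "@tanstack/react-query-persist-client";
        import { createSyncStoragePersister } from "@tanstack/query-sync-storage-persister";

        const queryClient = new QueryClient({
          defaultOptions: { queries: { gcTime: 24 * 60 * 60 * 1000 } }, // keep cache for a day
        });

        // Mirror the query cache into localStorage so reloads start warm;
        // queries then refetch and reconcile against the server as usual.
        persistQueryClient({
          queryClient,
          persister: createSyncStoragePersister({ storage: window.localStorage }),
        });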

    • petralithic 12 hours ago ago

      That approach is precisely what the new TanStack DB does, which, if you don't know already, has the same creator as React Query. The former extends the latter's principles to syncing via ElectricSQL; the two organizations have a partnership with each other.