The Github website is slow everywhere. It is truly a piece of shit software, both in terms of performance and in UX/UI and everything in between.
It's a product of many cooks and their brilliant ideas and KPIs, a social network for devs and code being the most "brilliant" of them all. For day-to-day dev operations it is something so mediocre that even Gitlab looks like the gold standard compared to Github.
And no, the problem is not "Rails" or [ insert any other tech BS to deflect the real problems ].
The problem is they abandoned rails for react. The old SSR GitHub experience was very good. You could review massive PRs on any machine before they made the move.
I’m pretty sure they used to do syntax highlighting on the server before and it was fast. Now they send down unhighlighted text that seems to choke the browser with anything but the smallest diffs.
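For what it's worth, here is a minimal sketch of what "highlight on the server" can look like, so the client receives ready-made HTML instead of raw text plus a client-side highlighter. This assumes highlight.js and Express; the route and file loading are made up for illustration and this is not GitHub's actual pipeline.

```ts
import express from "express";
import hljs from "highlight.js";
import { readFileSync } from "node:fs";

const app = express();

app.get("/blob/:name", (req, res) => {
  // Placeholder repo access; a real service would sanitize the path and stream large files.
  const source = readFileSync(req.params.name, "utf8");
  // Highlighting happens once, on the server; the browser only has to render markup.
  const { value } = hljs.highlight(source, { language: "typescript" });
  res.send(`<pre><code class="hljs">${value}</code></pre>`);
});

app.listen(3000);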
They're not even saving any money. Syntax highlighting is a trivial workload, whereas the average SPA spends a lot of time in pointless roundtrips that have the server send more data down the pipe than the SSR equivalent.
That's a good question; without looking into any of the code, I'd say bandwidth cost goes higher when moving away from server-side rendering, since you have to send the code for client-side rendering to each client that connects.
I guess if you say "we've made the UX worse" instead of "we've reduced costs but made the UX worse" to shareholders, they think of cost savings regardless.
React is always the problem, as using it in a performant way requires you to basically eject from using it, relying on it only for syncing state, like video key frames.
There are of course performant react apps out there. What Steve did with tldraw is amazing.
However, the vast majority of the apps out there are garbage since the framework itself is terribly inefficient.
We're not specifically blaming React. We're blaming their approach to React/SPA and how it caused a massive degradation compared to Github's Rails-based UX.
Github's code view page has been unreasonably slow for the last several years ever since they migrated away from Rails for no apparent reason.
I'm pretty sure that if they rendered/updated the same insane amount of nodes with some other technology, for example PJAX like they used to do, performance would not be better
why do you say "easily"? it took them considerable effort to make that atrocity, I'm pretty sure. The fact that tens of people worked on this and yet this is the result is way more telling of the team and company culture than it is of the specific tool.
The front-end is usually just a thin layer on top of a database, sometimes with backend services (queues/processing). Having a bad language on the front-end actually helped. You don't want to write code because the language is bad, so you write less code, and less code means fewer bugs. You had to be invested to increase the lines of code. It's like the hard chair of programming languages. If you don't want programmers to dwell there, bring in the hard chairs.
I went several years without having to interact with Github, I came back to it this year and it was truly shocking how bad it's gotten.
I had to alter basically every aspect of how I interact with it because of how fucking slow it is! I still can't shake the sense that it's about to go down or that I've done something wrong every time I click something and nothing happens for several seconds.
I once worked with a CTO who was under the impression that Rails was a major reason why the legacy app was so slow and demanded a rewrite in Java (that's what he knew). It deflected from the real reasons: completely incompetent product managers, management, and inexperienced developers. The tech stack wouldn't matter if the product was being managed for over a decade by idiots.
I only used Phabricator on a side project that other devs, with Meta history, had set up. And although not a heavy user, I rather liked it for being very basic, which I thought was a very good thing.
My memory is fuzzy but I think it was on phab that I discovered and loved to use stacked merges. This is where you have a merge request into another open merge request etc. Super useful. Miss that in the git world.
Clicking around GitHub and checking the network panel, it seems to load plenty of server-rendered HTML. Some views seem to use React within the page, but it doesn't appear to be a React SPA.
Maybe I should have said pre-react. I don't know what GitHub did specifically, but several years ago it used to be reasonably fast and relatively pleasant to use. It regressed a lot over the last few years, seemingly correlated with attempts at interactive features.
Github's Primer design system was wonderful when it was a pure CSS system that could be used with any framework. Sadly, M$ killed that and made the new Primer design system consumable only as shitty React components.
Embedding gists without fully implementing dark or light mode annoys me. The theming is there, but it always has the theme set to light with no way to override the value.
There's things I don't like about it, and there are a looot of long-standing open issues, but I think GitLab is definitely better than GitHub in a number of ways. My org uses both (and also Azure DevOps, joy) and my team expects that the trend will be migrating from GitLab to GitHub. There are a bunch of things for me to grieve in that, much to my own surprise.
We've been using a self-hosted GitLab for about 6 years now. The only thing that is really awful is if your MR gets too big. Then GitLab will simply stop showing any code changes above a certain line threshold.
I rarely interact with projects hosted on it; I'm always getting lost in unintuitive menus, for example: click on the tiny sidebar button > Plan > Issues just to open the bug tracker. The website also used to be bog slow compared to GitHub, but thanks to Microsoft the gap has been closing.
It's probably not related to the speed and I am not entirely certain how Github stores the repository but I noticed Gitlab does something weird to the bare repository so it's not directly usable as a bare repository.
Gitea is an example I like because it stores the repository as a bare repository, the same as if I did git clone --bare. I bring it up because when I stopped running Gitea, I could easily go into the data, back up all the repositories, and easily reuse them somewhere else.
Yes, I came here to say this exact thing. Also github search sucks bad as well as the way it shows diffs. My current client has just moved from bitbucket to GH and all the devs are up in arms.
For example, commandline search and tarball/zipball retrieval from the website, e.g., github.com, raw.githubusercontent.com and codeload.github.com, are not slow for me, certainly not any slower than Gitlab
I do not use a browser nor do I use the git software
As with any website, the HTML provides a guide to the location (URI) of resources some httpd is serving. Generally, I am not after Javascript, CSS, or other non-substantive "resources". I only want the HTML or JSON and any substantive resources pointed to therein
I use the Github website as I would any software mirror/repository
I'm not interested in images (mascots or other garbage) or executing code (gratuitous Javascript) when using the Github website, I'm interested in reading and downloading source code
For my sins I occasionally create large PRs (> 1,000 files) in GitHub, and teammates (who mostly all use Chrome) will sometimes say "I'll approve once it loads for me..."
You certainly can build slow apps with React, it doesn't make building slow things that hard. But honestly, React primitives (component mounting/unmounting, rendering, virtual DOM diffing, etc.) just aren't that slow/inefficient and using React in a fairly naive way isn't half-bad for data-heavy apps.
I actually have been trying to figure out how to make my React application (unreleased) less laggy in Safari, where it lags more than it does in Firefox/Chrome, and it seems like it is related to all the damn DOM elements. This sucks. Virtualizing viewports adds loads of complexity and breaks some built-in browser features, so I generally prefer not to do it. But, at least in my case, Safari seems to struggle with doing certain layout operations with a shit load of elements more than Chrome and Firefox do.
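To illustrate the trade-off, here's a hand-rolled sketch of viewport virtualization, assuming fixed-height rows (real libraries also handle variable heights, focus, and accessibility). Only rows inside the scroll window get DOM nodes, which keeps the element count down, but it already breaks Ctrl+F and native anchor scrolling.

```tsx
import { useState } from "react";

const ROW = 28;   // px, assumed fixed row height
const VIEW = 600; // px, visible window height

export function VirtualList({ lines }: { lines: string[] }) {
  const [scrollTop, setScrollTop] = useState(0);
  const first = Math.floor(scrollTop / ROW);
  const count = Math.ceil(VIEW / ROW) + 1;
  const visible = lines.slice(first, first + count);

  return (
    <div
      style={{ height: VIEW, overflowY: "auto" }}
      onScroll={(e) => setScrollTop(e.currentTarget.scrollTop)}
    >
      {/* The spacer keeps the scrollbar proportional to the full list. */}
      <div style={{ height: lines.length * ROW, position: "relative" }}>
        {visible.map((text, i) => (
          <div key={first + i} style={{ position: "absolute", top: (first + i) * ROW, height: ROW }}>
            {text}
          </div>
        ))}
      </div>
    </div>
  );
}
```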
> You certainly can build slow apps with React, it doesn't make building slow things that hard.
By all means. It sometimes feels like React is more the symptom than the actual issue, though.
Personally I generally just like having less code; generally makes for fewer footguns. But that's an incredibly hard sell in general (and of course not the entire story).
JS is the logical place to start with all the virtualization and fanciness.
But CSS has bitten me with heavy pages (causing a few seconds of lag that even devtools debugging/logging didn't point towards). We know wildcard selectors can impact performance, but in my case there were many open-ended selectors like `:not(.what) .ever`, where the `:not()` not being attached to anything made it act like a wildcard with conditions. Using `:has()` will do the same with additional overhead. Safari was the worst at handling large pages and these types of selectors, and I noticed more sluggishness 2-3 years ago.
`:not(.what) .ever` should be fairly fast, unless you have lots of `class="ever"` elements. Not ideal, but not as bad as e.g. `.ever :not(.what)` would be. `:has()` is just inherently slow if there's a significant amount of elements to search, even though browsers have some caching and tricks.
Normally, you should be able to debug selector matching performance (and in general, see how much style computation costs you), so it's a bit weird if you have phantom multi-second delays.
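A rough console experiment for getting a feel for this on a big page (hedged: querySelectorAll isn't the same code path the style engine uses for matching, but it gives a ballpark; the class names are just the ones from the example above):

```ts
const selectors = [".ever", ":not(.what) .ever", ".ever :not(.what)", "div:has(.what)"];
for (const sel of selectors) {
  const t0 = performance.now();
  const n = document.querySelectorAll(sel).length;
  console.log(`${sel}: ${n} matches in ${(performance.now() - t0).toFixed(1)} ms`);
}
```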
Confirmation of priors is a powerful drug. And performance engineering is really hard and often lives at a different layer of the stack than the one you know.
It's just easier to blame the tools (or companies!) you already hate.
Thanks, that is definitely a good sign - given the rendering engine monopoly of Chrome and its derivatives and the lack of great momentum behind Firefox adoption, we need Apple to actively keep Safari not just viable but great, even if only on macOS/iOS.
"Upgrade solution from .NET Framework 4.8 => .NET 8"
"Rename 'CustomerEmailAddress' to 'CustomerEmail'"
"Upgrade 3rd party API from v3 to v4"
I genuinely don't get this notion of a "max # of files in a PR". It all comes off to me as post hoc justification of really shitty technology decisions at GitHub.
I pretty frequently have conversations with other engineers where I point out that a piece of code makes an assumption that mostly holds true, but doesn't always hold true. Hence, a user visible bug.
The usual response is something like "if you're correct, wouldn't that mean there are hundreds of cases where this needs to be fixed to resolve this bug?". The answer obviously being yes. Incoming 100+ file PR to resolve this issue. I have no other ideas for how someone is supposed to resolve an issue in this scenario
Ideally, you automate a check like that. Because the answer turns out to actually be "humans are profoundly bad at that kind of pattern recognition."
A computer will be able to tell that the 497th has a misspelled `CusomerEmail` or that change 829 is a regexp failure that trimmed the boolean "CustomerEmailAddressed" to "CustomerEmailed" with 100% reliability; humans, not so much.
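A sketch of the kind of check meant here; the rename pair, the diff range, and the fuzzy pattern are assumptions built around the CustomerEmail example, not a real tool. It flags added lines containing something that is almost, but not exactly, the new identifier.

```ts
import { execSync } from "node:child_process";

const NEW = "CustomerEmail";
// Loose pattern: identifiers that share most of the new name's letters,
// so it catches "CusomerEmail" and "CustomerEmailed" alike.
const NEARBY = /[A-Za-z]*Cus[a-z]*mer[A-Za-z]*Email[A-Za-z]*/g;

const diff = execSync("git diff main...HEAD", { encoding: "utf8" });

for (const line of diff.split("\n")) {
  if (!line.startsWith("+")) continue; // only look at added lines
  for (const token of line.match(NEARBY) ?? []) {
    if (token !== NEW) console.log(`suspicious: ${token}  in  ${line.trim()}`);
  }
}
```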
You're not just reviewing the individual lines, but also the context and which files are impacted. And automating that part would still mean reviewing the automation and the 1000+ changes to move to it.
Sure 1000+ changes kills the soul, we're not good at that, but sometimes there's just no other decent choice.
Oh, certainly, I didn't mean that you had to avoid using your IDE to autorename the variable yourself (to avoid the boolean issue) and diffing the results against those of the PR.
Or that you had to avoid Ctrl+F "CustomerEmail" and seeing whether you had 1000 matches, matching the number of changed files, or only 999 due to some typo.
Or using the web interface to filter by file type to batch your reviews.
Or...
Just that in none of those cases there is anything close to our memory/attention capacity.
I envy your IDE being able to do a rename of that scale.
I work in a large C++ codebase and a rename like that will actually just crash my vscode instance straight-up.
(There are good automated tools that make it straightforward to script up a repository-wide mutation like this however. But they still generate PRs that require human review; in the case of the one I used, it'd break the PR up into tranches of 50-ish files per tranche and then hunt down individuals with authority to review the root directory of the tranche and assign it to them. Quite useful!)
I've always thought those kinds of large-scale search-and-replace diffs should not generally be expected to be reviewed. If a review is 1000's of lines of identical changes (or newly-vendored code), literally nobody is actually reading it, even if they are somehow able to convince themselves that they are.
I would rather just see the steps you ran to generate the diff and review that instead.
A very simple example: migrating from JavaEE to JakartaEE. Every single Java source file has to have the imports changed from "javax." to "jakarta.", which can easily be thousands of files. It's also easy to review (and any file which missed that change will fail when compiling on the CI).
If the project you're working on vendors dependencies it's pretty easy to end up with that many files being changed when adding or updating, even when trying to make as narrow updates as possible in one PR.
Can’t speak for the person above, but we keep a lot of configuration files in git and could easily write a thousand new configs in a single PR, or add a new key to all the configs, for example.
How long until those improvements reach users? I assume it requires an OS update or does Safari use something similar to Firefox and Chrome for faster updates?
I had to download STP for a specific case I don't even remember. Ever since, I get frequent OS Update notifications with new STP versions. It updates without a full system update, which means no rebooting necessary. About as easy as any other software typically does it, only this goes through the OS's updater, so it does take those extra steps instead of clicking the update->relaunch button.
GitHub moved to a JavaScript rendering mode almost as soon as Microsoft bought it. Previously, I had been able to browse it with JavaScript disabled on my 2011 Mac Mini which Apple stopped allowing upgrades on past macOS 10.13. So even if I enable JavaScript, I can no longer browse GitHub, because they didn't bother to make their build compatible with browser versions as old as mine.
It's hard to know which member of the duopoly is more guilty for breaking GitHub for me, but I find that blaming both often guarantees success.
I could like, buy a new computer and stuff. But you know, the whole Turing complete thing feels like a lie in the age of planned obsolescence. So web standards are too.
Firefox 115 is the last version that runs on 10.12, 10.13, and 10.14 (also Windows 7 and 8). At this point 115 is 2 years old and GitHub is only tested on bleeding edge browsers, apparently.
So GitHub is usable, but there are a number of UI layout issues, and searching within a file is sometimes a mess (e.g., highlighting the wrong text, rendering text incorrectly, etc.; maybe that's true for all browsers, and you're better off viewing a file as text in raw mode).
Does that change anything compared to running an old version of Safari that's also extremely slow because of a bug to boot? Modern websites are broken on abandoned browsers anyway, but if you install a less-abandoned browser you'll have a better experience on average.
Firefox doesn't work on Windows 7 anymore but installing Firefox is still a hell of a lot better than sticking to IE.
Planned obsolescence is some of it, some of it is abstractions making it easier for more people to make software (at the cost of using significantly more compute) and Moore’s law being able to support those abstraction layers. Just imagine if every piece of software had to be written in C, the world would look a whole lot different.
I also think we’ve gone a bit too far into abstraction land, but hey, that’s where we are and it’s unlikely we are going back.
Turing completeness is almost an unrelated concept in all of this if you ask me, and if anything it's that completeness that has driven higher and higher memory and compute requirements.
> on my 2011 Mac Mini which Apple stopped allowing upgrades on past macOS 10.13
I know some people feel like Apple is aggressive in this respect, but that's an 8 year old version of a browser. That's like taking off all of the locks on your house, leaving the doors and windows open all while expecting your house to never have uninvited guests.
But Apple is also the one locking Safari to the OS, IE style. Having to buy a new machine to get the latest and secure version of a browser is a pretty heavy requirement.
or use a supported OS (linux, or hilariously probably Windows), or install a still-suppored browser (I'd guess Firefox likely still runs latest on there).
I'd put it on the end user for not updating software on 15 y/o hardware and still expecting the outside world to interact cleanly.
It's a matter of expectations; many laptops that old still work decently enough with a refreshed battery. Funnily enough, Win10 was released 10 years ago, and one can still get support for it for at least another 3 years, until 2028, even on a consumer license.
Will modern versions of those other browsers still work on an 8 year old OS, or has it been updated where it is no longer compatible? So much effort has been put into hardware rendering, and the mechanisms for the browser to interact with that hardware has changed within those OS versions. Forcing the user to download an older compatible version of the browser to work with the older OS is also tossing away potential security fixes.
> That's like taking off all of the locks on your house, leaving the doors and windows open all while expecting your house to never have uninvited guests.
Depending on where you live (or what websites you visit) it's not unreasonable.
Can someone who's worked in an org this large help me understand how this happens? They surely do testing against major browsers and saw the performance issues before releasing. Is there really someone who gave the green light?
The way it works in tech today is that there are three groups:
- Project managers putting constant pressure on developers to deliver as fast as possible. It doesn't even matter if velocity will be lost in the future, or if the company might lose customers, or even if it breaks the law.
- Developers pushing back on things that can backfire and burning political capital and causing constant burnout. And when things DO backfire, the developer is to blame for letting it happen and not having pushed it more in the first place.
- Developers who learned that the only way to win is by not giving a single fuck, and just trucking on through the tasks without much thought.
This might sound highly cynical, but unfortunately this is what it has become.
Developers are way too isolated from the end result, and accountability is non-existent for PMs who isolate devs from the result, because "isolating developers" is seen as their only job.
EDIT: This is a cultural problem that can't be solved by individual contributors or by middle management without raising hell and putting a target on their backs. Only cultural change enforced by C-Levels is able to change this, but this is *not* in the interest of most CEOs or CTOs.
What's that, a Github employee? Not really, I'm at a YC startup.
But I guess the problem is that every single development position has been converging into this.
The only times in my career as a developer where I was 100% happy was when there was no career PM. Sales, customers, end-users, an engineering manager, another manager, a business owner, a random employee, some rando right out the street... All of those were way better product owners than career PMs in my 25 years of experience.
This is not exactly about competence of the category, it's just about what fits and doesn't. Software development ONLY work when there is a balance of power. PMs have leverage that developers rarely have.
I come from Electrical Engineering. Engineering requires responsibility, but responsibility requires the ability to say "no". PMs, when part of a multi-disciplinary team, make this borderline impossible, and make "being an engineer" synonymous with putting a target on your back.
If the PM is also an ex-developer and has both product management and development skills, this happens a lot less. When the PM knows the engineering complexity and code-debt cost of shipping a feature, they can self-triage with that additional information and choose not to send it to the developers, or consult with dev and scale it back to something more manageable.
It's these professional PMs who have done nothing other than project management or PMP, and who don't have an understanding of the long-term dev cost of features, that cause these systemic issues.
What about PMs that were developers but were awful at it and just played the politics game to get that promotion and never have to see code ever again?
I worked with a few of those where it was horrible, because they were incompetent and unwilling to work to improve across all disciplines. But that says more about the individuals.
IMO "Knowing enough to do damage" is the worst possible situation.
A regular user who's a domain expert is 100x a better PO.
I'm still a big believer in "separation of powers" a la Scrum.
There should be a "Product Owner" that can be anyone really, and on the other side there is a self-managed development team that doesn't include this participant. This gives the team leverage to do things their way and act as a real engineering team.
The reason scrum was killed is because of PMs trying to get themselves into those teams and hijacking the process. Developers hated "PM-based scrum", which is not really scrum at all.
It's pretty much the same in every tech firm. When I worked at Facebook this same dynamic was playing out really badly. Amazon on the other hand had somewhat greater resilience against it due to a much tighter feedback loop with the c-suite.
The primary goal in deciding upon a tech stack is how easily the organization can hire/fire the people who write the code. The larger an organization becomes the more true this becomes. There are more developers writing React than Rails.
Don't listen to the opinions of the developers writing this code. Listen to the opinions of the people making these tech stack decisions.
Everything else is a distant second, which is why you get shitty performance, developers who cannot measure things. It also explains why when you ask the developers about any of this you get bizarre cognitive complexity for answers. The developers, in most cases, know what they need to do to be hired and cannot work outside those lanes and yet simultaneously have an awareness of various limitations of what they release. They know the result is slow, likely has accessibility problems, and scales poorly, and so on but their primary concern is retaining employment.
> Good developers looks at "what is the best and simplest (KISS) tool for this?"
Good ol’ SSR - but eventually users and PMs start requesting features that can only be implemented with an SPA system, and I (begrudgingly) accept their arguments.
In my role (of many) as technical architect for my org, and as an act of resistance (and possibly to intentionally sabotage LLMs taking over), I opted for hybrid SSR + Svelte - it’s working well for us.
The end-user experience is not of any concern in modern tech. None at all. The only thing that matters is engagement hacking and middle managers desperately trying to look like they're doing anything with any value or meaning at all.
As someone who has worked in and with large orgs, the better question is "why does this always happen?". In large organizations "ownership" of a product becomes more nebulous from a product and code standpoint due to churn and a focus on short-sighted goals.
If you put a lot of momentum behind a product with that mentality you get features piled on tech debt, no one gets enthusiastic about paying that down because it was done by some prior team you have no understanding of and it gets in the way of what management wants, which is more features so they can get bonuses.
Speaking up about it gets you shouted down and thrown on a performance improvement plan because you aren't aligned with your capitalist masters.
At this point "ownership" is just a buzzword thrown around by management types that has no meaning.
If a developer has to put up a fight in order to push back against the irresponsibility of a non-technical person, they by definition don't have ownership.
I've seen shops where ownership is used as a cudgel to punish unruly developers. If the task isn't done as specified and on time, the developer is faulted for not taking ownership, but that "ownership" is meaningless, as you note, because it does not extend to pushing back against irresponsible or unreasonable demands.
> because you aren't aligned with your capitalist masters.
Is it your theory that working on large projects was better when you had communist masters? That seems inconsistent with everything we know, e.g. quotas enforced by mass murder.
My guess is that it's more about organizations (your first paragraph) and less about capitalism (your last paragraph).
That the optimization pressure imposed by "capitalist masters" can lead to perverse outcomes does not imply that the optimization pressure imposed by communist ones doesn't, surely?
For instance, the GP could be a proponent of self-management, and the statement would be coherent (an indictment of leaders within capitalism) without supposing anything about communism.
Yet another new account that has only a single comment replying to me. I've noticed this is a pattern.
At any rate your point doesn't make any sense. The same point indicts all leaders, it has nothing to do with capitalism. It's like saying something indicts a specific race of people when it applies to all people equally.
The short answer is: no, they don't. Google Cloud relied upon some Googlers happening to be Firefox users. We definitely didn't have a "machine farm" of computers running relevant OS and browser versions to test the UI against (that exists in Google for some teams and some projects, but it's not an "every project must have one" kind of resource). When a major performance regression was introduced (in Firefox only) in a UI my team was responsible for once, we had a ticket filed that was about as low-priority as you can file a ticket. The solution? Mozilla patched their rendering engine two minor versions later and the problem went away.
I put more than zero effort into fixing it, but tl;dr I had to chase the problem all the way to debugging the browser rendering engine itself via a build-from-source, and since nobody had set one of those up for the team and it was the first time I was doing it myself, I didn't get very far; Google's own in-house security got in the way of installing the relevant components to make it happen, I had to understand how to build Firefox from source in the first place, my personal machine was slow for the task (most of Google's builds are farm-based; compilation happens on servers and is cached, not on local machines).
I simply ran out of time; Mozilla fixed the issue before I could. And, absolutely, I don't expect it would have been promotion-notable that I'd pursued the issue (especially since the solution of "procrastinating until the other company fixes it" would have cost the company 0 eng-hours).
I can't speak for GitHub / Microsoft, but Google nominally supports the N (I think N=2) most recent browser versions for Safari, Edge, Chrome, Firefox, but "supports" can, indeed, mean "if Firefox pushes a change that breaks our UI... Well, you've got three other browsers you could use instead. At least." And of course, yes, issues with Chrome performance end up high priority because they interfere with the average in-house developer experience.
Maybe somewhat off topic, but my GitHub app on iPhone hasn't been updating the feed for a few months now; I relogged earlier but it's still the same.
Edit: could it have something to do with Lockdown Mode, or whatever it's called now?
This thread has really opened my eyes to how much the world hates react developers, I am one.
Unrealistic timelines, implementing what should be backend logic in the frontend: there's a bunch of ways SPAs tend to be a trap. Was React a bad idea? Can anyone point to a single well-made React app?
Been a huge React/SPA fan for many years, until the realization began to creep up on me that building them was actually harder than building C++ MFC desktop applications (which I did back in the 2000's). Declarative markup was supposed to reduce cognitive load, but it now feels like the interplay between the declarative part of UI development (component markup) and the procedural part (event handling and state) has slowly morphed into something more complex than simply developing the UI procedurally.
Back in the day (I was a junior dev) this was easier than grappling with React hooks today:
Over the last couple of years I built a largish JavaFX app, and this was the entire way I structured it. A little tedious, but if I have a state management issue it's just logic on my side and not ten layers of abstraction.
That's a good way to put it. A bit tedious, but the mental model is relatively simple. Reading and writing code feels a little closer to riding a bicycle than operating a Rube-Goldberg device.
It's one of those cases where it only gets noticed when it's bad. It's also easy to hate on web technologies since everyone gets to use them every day (larger user base). But most important of all, it makes people feel good about themselves, hating on a technology used a lot of the time by people who are just starting out with programming. Gatekeeping at its best.
The hate is more geared towards SPAs in general, but there are some shining examples that show that a well-made React/Angular/whatever app can have great UX - Clockify being one of them.
I don’t think the culprit apps would have substantially better UX if they were rendered on the server, because these issues tend to be a consequence of devs being pressured to rapidly release new features without regard to quality.
And to be fair, the problems that Facebook had when they introduced React are not common problems at all.
As an aside, I was an employee around then and I vividly remember that the next half there was a topline goal to improve web speed. Hmmmm, I wonder what could have happened?
> And to be fair, the problems that Facebook had when they introduced React are not common problems at all.
That’s one of my favorites. The exact bug they described during the React launch presentation, the one that React was supposed to help fix with unidirectional dataflow. You know, the one where unread message badges were showing up inconsistently in different places in the UI. They never managed to fix that bug in the 10 years between React being announced and when I eventually left Facebook for good.
After having worked on React for a while, I can tell you that the problem remains between monitor and chair.
React can have all the niceties and optimization in the world, but that fails when its users insist on using it incorrectly, building huge tangled messy components and then wondering why a click takes 1.3 seconds to deliver feedback.
As someone who's worked with React professionally for years, it's honestly shocking how few React developers really understand memoization and when it needs to be used
IMO it's the MAIN thing to understand about React—how it renders.
Regardless, now I'm the one with egg on my face since the new compiler promises to eventually remove the need for manual memoization almost entirely. The "almost" still fills me with fear
Is there a good article or something you could point us younglings to? I get that in react almost everything is reactive by default unlike other frameworks. I tend to add useCallback and memo to everything nowadays.
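Not an article, but the core of the pattern fits in a few lines. A minimal sketch, assuming React 18-style function components; the memoization only pays off when the props really are stable.

```tsx
import { memo, useCallback, useState } from "react";

const Row = memo(function Row({ item, onPick }: { item: string; onPick: (i: string) => void }) {
  // Only re-renders when `item` or `onPick` change identity.
  return <li onClick={() => onPick(item)}>{item}</li>;
});

export function List({ items }: { items: string[] }) {
  const [query, setQuery] = useState("");
  // Stable identity across renders; without useCallback, a fresh function per
  // keystroke would defeat the memo() above and every Row would re-render.
  const onPick = useCallback((i: string) => console.log("picked", i), []);

  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ul>
        {items
          .filter((i) => i.includes(query))
          .map((i) => (
            <Row key={i} item={i} onPick={onPick} />
          ))}
      </ul>
    </>
  );
}
```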
The problem is that React doesn't have a pit of success. Because it's poorly conceived, poorly designed and poorly written software made by people more interested in getting the word "homomorphism" onto their CV than solving real problems. You knew when they started using terms like "monad" and "functor" in order to attach a click handler to a button that something had gone badly wrong.
In this very thread there's some asshole using the word "memoization" when "caching" would have been fine.
React is a terrible idea. Everything about it is garbage. The api. State. How they do rendering. The “vdom”. It’s unnecessarily complicated and Byzantine. Like it was designed by someone trying to bill a large company many hours.
Svelte is ok. It could have been great but the api for their version of observables is a disaster (which I hope they eventually fix). Sveltekit is half baked and convoluted and I strongly advise not touching it.
React is a good idea compared to having to do SPA's without it. Try doing a SPA with only jQuery.
VDOM is also a good idea that simplifies the mental model tremendously. Of course these days we can do better than a VDOM. Svelte in fact doesn't use a VDOM. You can say that VDOM is a terrible idea in comparison with Svelte, but that's just anachronistic.
Yes, SPAs are inherently a very niche concept that has been applied to too many things for the wrong reasons.
On react, it's funny that sites where the frontend part is really crucial tend to move away from generic frameworks and do really custom stuff to optimize. I'm thinking about Notion, or Google Sheets, or Figma, where the web interface is everything and pretty early on they just bypass the frontend stacks generally used by the industry.
I don't hate React developers. I hate developers who build consumer facing software and use top of the line hardware and networks to test it while being ignorant to the fact that most of their users will be using their products on 8+ year old consumer grade hardware over spotty 3G
React feels like magic the first time you try it, especially if you don't have any experience with JSX. Then you need to prop drill and you regret everything.
The main problem is that it tries to do away with a view model layer so you can get the data and render it directly in the components, but that makes managing multiple components from a high level perspective literally impossible. Instead of one view model, you end up with 50 React-esque utilities to achieve the same result.
It's also misapplied here. If anything, it appears from the changes being made to WebKit that the issue is detailed interactions with DOM change logic and with CSS, not JavaScript. JavaScript may tickle the issue, but that's like blaming the mouse for allowing you to click on a button that has expensive operations attached to it.
I've definitely managed to make a page that uses almost no JavaScript and is dog-slow on Firefox (until Mozilla updated the rendering engine) just by building a table out of flexboxes. There's plenty of places for browsers to chug and die in the increasingly-complicated standard they adhere to.
Slack on my machine is currently taking ~1GiB of memory and 3% cpu to do nothing.
My irc client is taking 60MiB of memory and 0.01% cpu. My IRC client is responsive and faster, it has more configurable notification settings. I like the irc client more.
> Bandcamp
I just went to the bandcamp page and it indeed loaded very quickly. As far as I can tell, there's no react in use anywhere so I guess that's why.
On my machine Slack is taking 100MB of memory and 0.1% CPU to do nothing. Maybe we are using different Slack or one of us is lying about the "doing nothing" thing.
It's possible I'm wrong about bandcamp using react but your guess is far from reality as well – react itself does not prevent or discourage loading pages very quickly.
Ah, since Atlassian has been increasingly messing with Trello over the past couple years, it has really gone to shit. I currently have a Firefox profile dedicated solely to it, using >2 gigs of memory and about 1/3 of an M1 core. It has cumulatively used about a day's worth of CPU time since I booted 6 days ago. In contrast, the profile dedicated to Slack is using 750 MB and has burned about 27 minutes of CPU time.
Isn't the most common complaint against Slack that it's not optimized enough for what it does? That's how I read the rants against its Electron app, and people are already choosing the Electron app over using it straight in the browser (as they'd do for Gmail or Calendar, for instance).
Slack is one of the most slick and pleasant pieces of software to use. Its big success, as well as the fact that its acquisition cost was one of the largest software deals ever, speaks for itself: it's certainly a fine piece of software made by fine engineers who used React and Electron with a certain amount of dignity. People who rant about tools like React or Electron affecting their performance just don't want to understand that it's the organisation and people behind the tools who are responsible for their performance.
Slack is the best of a bunch of trash options. That doesn't make it good. I shouldn't be able to accidentally select every widget in an app as though it were text. But with Electron apps, that's just normal.
Slack puts a nicer shade of lipstick on the pig than Teams does, but the lips still belong to the same thing.
> I shouldn't be able to accidentally select every widget in an app as though it were text.
I absolutely should. I hate how many applications have a UI that won't let me copy-paste an error message to search for, much less a menu item; who could possibly have thought that was a good idea?
I'd make an argument about the need for Slack to go beyond.
As you point out, it's wildly successful and is the backbone of many groups' internal communication. Many companies would just stop working without Slack; that's a testament to the current team's efforts, but something that critical would also merit better performance.
I'd make the comparison with Figma, which went the extra mile to bring a level of smoothness that just wouldn't be there otherwise.
Discord is well-known to be very buggy, e.g. the search function. Spotify is also very slow with thousands of placeholder skeletons. Remember that Spotify once had a very fast native player.
> Spotify is also very slow with thousands of placeholder skeletons. Remember that Spotify once had a very fast native player.
Are you under the impression that the placeholder skeletons are there and slow because of React? How would a UI written in C++ get the data quicker from the back end to replace the skeleton with?
> Are you under the impression that the placeholder skeletons are there and slow because of React? How would a UI written in C++ get the data quicker from the back end to replace the skeleton with?
Regardless of how, the fact remains that the previous implementation of their UI did fetch and render the data from the backend significantly faster than the current React-based one does.
Everything is slower across every single facet of computing. Something is happening. I have a brand new Mac Studio M4 Max with 64gb of ram and every site is still slower than it was on a 2011 Mac Book Pro.
I remember using the internet 15 years ago, and things were definitely slower. I also wasn’t using the internet to run full-blown spreadsheets and design tools back then. My M series Macs are the snappiest computers I’ve used (minus my desktop when it runs Linux, but not Windows).
Web developers should be forced to use hardware that's roughly at the 10th percentile in performance of their user base, not the 90th. Alternatively, make performance a WCAG concern.
Chrome Dev tools, and hopefully others, have a performance monitor option that lets you throttle the CPU and throttle the network. It should be plenty possible to test performance of sites on simulated 10th percentile systems, but this just seems low priority.
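That kind of check can even run headlessly in CI instead of by hand in DevTools. A rough sketch assuming a recent Puppeteer; the URL, throttle factor, and any pass/fail threshold are placeholders.

```ts
import puppeteer, { PredefinedNetworkConditions } from "puppeteer";

const browser = await puppeteer.launch();
const page = await browser.newPage();

// Simulate a low-end machine on a bad connection.
await page.emulateCPUThrottling(6);
await page.emulateNetworkConditions(PredefinedNetworkConditions["Slow 3G"]);

const start = Date.now();
await page.goto("https://example.com/big-diff", { waitUntil: "networkidle0" });
console.log(`loaded in ${Date.now() - start} ms`);

await browser.close();
```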
I don't think this would help; if a site or SPA performs terribly on a high end machine, the only conclusion I can draw is that performance isn't tested or validated at all.
I've read comments online (here on HN) that Github has been rewriting their UI in React and that it's got slower since. I have no knowledge if this is true or not (ie React -> speed direct correlation), and my own projects are small enough not to see any performance impact.
I came across a blog post[1] (HN thread[2]) recently that sheds some light on the issue. The tl;dr is that the PR view can render over 100 000 DOM nodes, many of which are invisible inline SVG nodes, and SPA routing makes navigation a lot slower.
That blog post discovered that hard refreshing the page is faster than GitHub's SPA navigation, which led me to make this browser extension which makes GitHub navigation twice as fast:
I am such a masochist that I actually click those buttons. If it's good, great, if it's shit, I have time to adjust before they foist it upon me anyway
I’m usually a fan of going the SPA route. But for something like version control of a code base, given the mission-critical nature of it, I think it should have fewer frills and serve plain HTML and CSS with optional JS enhancements.
I use Safari in my daily life, and I feel like 90% of the web apps I access are the worst crap in the world. At work, they decided to use Jira. Besides being slow, it consumes up to 2GB of RAM. Two gigabytes of RAM just for tickets? Ridiculous.
The GCP tools are a performance disaster in both Chrome and Safari in my experience. It can be actively painful at times on some screens, like the log viewer.
Something did change with Safari when handling lots of DOM nodes around the last major release of all Apple's operating systems.
I have an ever growing directory listing using SolidJS, and it's up to about 25,000 items. Safari on macOS and iOS two major versions ago actually handled it well. After the last major update, my phone rendered it faster than an M1 MacBook Pro.
I was reminded how fucked the modern web is a couple years ago when I encountered a so-fast-it-felt-like-local-static-html website dashboard that could have been a "web app", but wasn't.
It was being hosted on another continent. It was written in PHP. It was rendering server-side with just some light JS on my end.
When you mention that you're used to rendering HTML on the server side and don't use React on the frontend to do things, modern web people just look at you like you committed a crime or something (VanillaJS! the horror! Those thirty lines of Javascript would be unmaintainable without a deployment tool!!!!).
It's really hard to fight the trend especially in larger orgs.
Wait until you plug it into JIRA, strap copilot and actions on it. Then you can have all flavours of hell at once. Our org has ground to a halt.
A lot of the time we just break the branch permissions on the repo we are using and run release branches without PRs and ignore the entire web interface.
Oh man. Start a manual job, wait for it to appear in the UI. 10 minutes later, it finally appears. Or just refresh the page manually immediately after starting the job, and there it is...
We used to use Bitbucket webhooks to trigger Jenkins jobs. That was almost instant. Now, after migrating to GH Actions, it can take minutes before jobs start on a push, for example...
"Modern" Web UIs to make backpack-portable supercomputers feel slow operating on text files that wouldn't have been challenging to work with by 1990 standards.
The Cloud to make single-digit-seconds operations on a local Raspberry Pi 2 and home Internet take a few minutes.
I experienced the same since I turned on the "new files changed experience". The fun part is that in the first few weeks of the preview it was _worse_ than now. I am truly baffled at the lack of quality on such an important change.
> The solution is a test that fails when Chrome and Safari have substantially different render times.
That test will be disabled for being flaky in under a week because the CI runners have contention with other jobs, causing them to randomly be slower and flake, and the frontend team does not want to waste time investigating flakes.
"Just have dedicated runners with guaranteed CPU performance", but that's the CI platform team's issue, the frontend and testing teams can't fix it, and the CI infra team won't prioritize it for a minimum of 5 years.
Yeah, it is! Even for simple things, like opening a PR and searching in the combo box for the name of the branch to merge into. We only have like 40 branches. It should not freeze the tab for 30 seconds to search a list of 40 items.
Same except 64GB and M3 Max, smh... takes literally minutes to open the "Labels" popup and make a PR... it's completely unacceptable for a product like this...
Unfortunately this is the fate of most modern sites, they start off simple then they start bloating the website with social media and analytics. SV blokes don't care or notice on their $5k+ top of the line laptops but for everyone else it's an issue
It has to present text lists, tables and small icons. Makes mobile Safari crawl to a halt. With multi gigahertz, multi core cpus and hardware accelerated js. It is pathetic.
GitHub has a great GraphQL API but a subpar UI. It's a great fit for Isograph! Anyway, if folks are interested, feel free to check out this conference talk (https://www.youtube.com/watch?v=sf8ac2NtwPY), where I vibe-code an Isograph app that consumes the GitHub API. TLDR, it is a lot easier to replace GitHub than you think, and it would make for a hell of a splashy side project.
Another website that is so slow it's unusable is Stripe.
My CPU goes to 100% and fans roaring every time I load the dashboard and transactions. I can barely click on customers/subscriptions/etc. I can't be the only one...
Glad I’m not the only one experiencing this. The Stripe dashboard constantly freezes up for me; even registering a click takes 10-20 seconds. Often it will just go white. Incredibly annoying.
It's been very clear to me for quite awhile that they have to be doing this to push users to their mobile app, at least on iOS. I used to review PRs on my phone at night, but now I have to use the app because anything over a thousand lines will crash iOS Safari or cause scrolling to misbehave. Reddit has done the same over the years, as have countless other web apps.
This is likely happening in the new Pull Request experience only. If so, it's due to React. This is what happens when you use React for such large pages. "JavaScript is fast!" No, it really isn't. Especially not when you pile abstraction layer on top of abstraction layer on top of abstraction layer on top of abstraction layer.
Isn't it the opposite? No one in this thread is even considering how bad Safari is in terms of performance and support for web standards. There's no one even partially blaming both. Github isn't the best example of a fast website, but if you can run it in Chrome and Firefox, even on rudimentary browsers like Pale Moon (I tested), on decent hardware (even mobile), there's something clearly wrong with Safari.
Safari is behind on web standards, but often those standards are things designed and implemented by the Chrome team and pushed into standards later. It's the Chromification of the web, where the standard is "whatever chrome does". It's much like the era of "Designed For IE" or "Works best in Netscape 2.3", but now there's a thrice-convicted monopolist in de facto control of the standard.
The GitHub website reminds me of the first video in the Clean Coders series, where he points out that eventually devs want a total rewrite to "Fix" all the shortcomings, but GitHub from the perspective of most users had nothing UI wise that needed fixing. We all would have been happy with the UI as is.
Clean Code argues that instead of total rewrites you should focus on gradual improvements, refactoring code so that over time the refactoring pays dividends, without re-living all the bugs you lived through 5 years ago that you don't recall the resolution of. Every rewrite project I've ever worked on, we run into bugs we had already fixed years prior, or the team before me had.
There are times when a total rewrite might be the best and only option, such as deprecated platforms (think of Visual Basic 6 apps that will never get threading).
What frustrates me more is that GitHub used to be open to browse, and the search worked, now in their effort to force you to make an account (I HAVE LIKE TEN ALREADY) and force you to login, they include a few "dark patterns" where parts of search don't work at all.
A rewrite is usually about learning from all the past mistakes and problems and designing your architecture in a way that prevents all the previously known issues. It is an iterative process at the design level. If you end up repeating all the same bugs, it went very wrong from the beginning. So if you don't have the information about all the previous problems, then it is likely a mistake.
It reminds me also of the original head of development of the Safari browser talking about at least the early days of building the browser. They had a rule that no commit of code could cause the browser benchmarks to get slower. And apparently he was maniacal about the rule.
I don’t know if that’s a good or realistic rule for most projects, but I imagine for performant types of applications, that’s exactly what it takes to prevent eventual slowdown.
> And no, the problem is not "Rails"
Whoever had a KPI for improving server performance and decreasing cost got their promotion that quarter, that is for sure.
Servers cost money, the client is free (and pays you sometimes)!
The client costs money too, opportunity cost.
Which, unfortunately, cannot be measured :( so no KPIs. Darn!
It's all fun and games until you cut quality over and over so much that your customers just leave. Ask Chrysler or GE. I mean, they must have saved, what, billions across decades? And for free!
Well... um... not free actually, because those companies have been run into the ground, dragged through hell, revived, and then damned again.
MBAs ruin everything
the problem is developers having fast modern machines.
if they were forced to use slow machines, they would not be able to put out crap like that
While I agree that devs should be testing on slow devices, the particular target audience of the site generally has pretty decent machines (including the two that I have), and there isn't much reason for the site to lag. It's not really doing any 3D rendering with complex shaders or video filtering; it just shows changed lines in two files. That shouldn't lag.
It is garbage even on my extremely high end desktop PC.
I don't think so; I think the problem is their devs work on tiny play-pretend codebases or microservice architectures.
GitHub is big software, but not that big. Huge monorepos and big big diffs grind GitHub to a pulp.
GitHub runs a mostly monolithic architecture
So? You can still do a PR of 1 line and the diff will only show that 1 line.
M4 macbook pro and almost unusably slow
Ah, the wonders of SPAs! I know there are lighter ones, but it is not the first time I'm hearing about things like React being slow. Of course, when one starts to do syntax highlighting on the client side....
Various comments and links throughout the discussion of this post indicate that the problem is a mix of the sheer number of nodes and css. It has nothing to do with React or being a React SPA, which it's also not, unless you have some proof otherwise.
React and the most commonly used patterns inherently promote a way too complex HTML node structure and shitty CSS, especially if stuff like Tailwind is used.
Now you CAN do it so that is not the case, but tbh I have never seen that in the wild.
Lol. If anything, Tailwind isn't shitty CSS (because it's a very limited number of classes) unlike the gazillion one-off classes that CSS-in-JS or even BEM encourages
Edit: here's a good investigation on a real-enough app https://www.developerway.com/posts/tailwind-vs-linaria-perfo...
Yes what's with his comment?
Tailwind is probably one of the best considering you can use Vite to literally strip out all unused css easily.
And I think tailwind v4 does this automatically
> When native software is slow, it's bad software. When web software is slow, react is bad software.
This is such a tired trope.
Its not really a trope, the opposite is much more common. People are much more quick to mindlessly blame SSR for slowness like with ROR or PHP.
The reality is both can be slow, it depends on your data access patterns, network usage, and architecture.
But the other reality is that SPAs and REST APIs just usually have less optimal network usage and much worse data access patterns than traditional DB connected SSR monoliths. Same goes for micro service.
Like, you could design a highly scalable and optimal SPA. Who's doing it? Almost nobody.
No, instead they're making basically one endpoint per DB table, recreating SQL queries in client side memory, duplicating complex business logic on the front and back end, and sending 50 requests to load an dashboard.
React also has a class of problems that doesn't really happen in other types of apps: re-renders.
Even other frameworks like Vue.js, Solid or Svelte don't really suffer from it as much. It simply happens a couple order of magnitudes more often in React than any other framework.
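To make the re-render point concrete, here's a minimal sketch (the component and prop names are made up, not from any real app): typing in the parent's input re-renders the child list on every keystroke unless the child is explicitly memoized, which is the kind of cascade other frameworks mostly avoid by tracking dependencies at a finer grain.

```tsx
import { memo, useState } from "react";

// Without memo(), this re-renders on every keystroke in the parent,
// even though its props never change.
const ExpensiveList = memo(function ExpensiveList({ items }: { items: string[] }) {
  console.log("ExpensiveList rendered");
  return (
    <ul>
      {items.map((item) => (
        <li key={item}>{item}</li>
      ))}
    </ul>
  );
});

export function SearchPage({ items }: { items: string[] }) {
  const [query, setQuery] = useState("");
  // Every setQuery() re-renders SearchPage and, by default, everything below it.
  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ExpensiveList items={items} />
    </>
  );
}
```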
Only in an ideal world where developers actually test the results of their work on their machines. Test for real, with a real DB, not with an empty one or with mocks.
We were very few to rant about it, 1 year ago: https://github.com/orgs/community/discussions/62372
Their "solution" was to enable SSR for us ranters' accounts.
> Server-side rendering (SSR) flag has been enabled for each of you. Can you take a look, click around and let me know if this has resolved some of the usability issues that you've reported here?
The fact that they have this ability / awareness and haven't completely reverted by now is shocking to me.
Honestly that's wild. This should be an option in their settings.
This is actually unreal. Wow.
> The problem is they abandoned rails for react.
Which, it seems, was a result of the M$ acquisition: https://muan.co/posts/javascript
FYI, this page detects a Hacker News referrer and sends you into an infinite loop. You have to open the link via copy-paste.
Firefox has a setting in about:config to only send referrer headers when navigating to links on the same base domain.
network.http.referer.XOriginPolicy = 1
I believe this is enabled by default when using Enhanced Tracking Protection.
Lol I respect that https://muan.co/no-yc/
Based tbh
I love the explanation in the linked site:
> Writing on the internet can be a two-way thing, a learning experience guided by iteration and feedback. I’ve learned some bad habits from Hacker News. I added Caveats sections to articles to make sure that nobody would take my points too broadly. I edited away asides and comments that were fun but would make articles less focused. I came to expect pedantic, judgmental feedback on everything I wrote, regardless of what it was.
https://macwright.com/2022/09/15/hacker-news
Which is true. Pedantry is the lowest form of pseudo-intelligence.
Well yeah, but just imagine how much money they’re saving by delivering a subpar experience!
They're not even saving any money. Syntax highlighting is a trivial workload, whereas the average SPA spends a lot of time in pointless roundtrips that have the server send more data down the pipe than the SSR equivalent.
I'll play devil's advocate - does it save them some storage space or bandwidth in the CDN that delivers Github?
That's a good question. Without looking into any of the code, I'd say bandwidth cost goes higher when moving away from server-side rendering, since you have to send the code for client-side rendering to each client that connects.
Sending data is what's trivial compared to compute… syntax highlighting is not a trivial workload compared to that; you don't know what you're saying.
Or how much money they are capturing in investments or corporate deals because of the tech stack
I guess if you say "we've made the UX worse" instead of "we've reduced costs but made the UX worse" to shareholders, they think of cost savings regardless.
You can easily do this very fast in React if you don't fuck it up. They did fuck it up a bit.
React is always the problem, as using it in a performant way requires you to basically eject from using it, relying on it only for syncing state, like video key frames.
There are of course performant react apps out there. What Steve did with tldraw is amazing.
However, the vast majority of the apps out there are garbage since the framework itself is terribly inefficient.
The problem is that they deprioritized everything for more copilot bullshit.
if you look at the thread, the explanation is not this easy, as much as it's satisfying to blame React (or any other single tech)
Not once have I seen a site go from SSR to SPA and been pleasantly surprised. It always trends towards worse in responsiveness and overall UX.
I’m sure you could make something work better as a SPA, but nobody does.
We're not specifically blaming React; we're blaming their approach to React/SPA and how it caused a massive degradation compared to Github's Rails-based UX.
Github's code view page has been unreasonably slow for the last several years ever since they migrated away from Rails for no apparent reason.
That comment was about overall slowness of the site, not a specific issue on a specific browser.
Available data confirms that SPA tends to perform worse than classic SSR.
I'm pretty sure that if they rendered/updated the same insane amount of nodes with some other technology, for example PJAX like they used to do, performance would not be better
Agree you can shoot yourself in the foot with pretty much any technology. By design, it's much easier to be inefficient with SPA frameworks.
You're right. The technology is not necessarily flawed. It is more about the people who decided to use it and the way in which they used it.
exactly. I don't want to do a "no true scotsman" to defend React, but circumstantial evidence suggests that they wildly misused the tool
A tool that lends itself to misuse so easily is a bad tool, period.
why do you say "easily"? it took them considerable effort to make that atrocity, I'm pretty sure. The fact that tens of people worked on this and yet this is the result is way more telling of the team and company culture than it is of the specific tool.
So PHP <6 was a great language?
The front-end is usually just a thin layer on top of a database, sometimes with backend services (queues/processing). Having a bad language on the front-end actually helped. You don't want to write code because of the bad language, you write less code, less code is less bugs. You had to be invested to increase the lines of code. It's like the hard chair of programming languages. If you don't want programmers to dwell there, bring the hard chairs.
Have you not seen the internet these past decades?
I went several years without having to interact with Github, I came back to it this year and it was truly shocking how bad it's gotten.
I had to alter basically every aspect of how I interact with it because of how fucking slow it is! I still can't shake the sense that it's about to go down or that I've done something wrong every time I click something and nothing happens for several seconds.
I once worked with a CTO who was under the impression that Rails was a major reason why the legacy app was so slow and demanded a rewrite in Java (that's what he knew). It deflected the real reasons - completely incompetent product managers, management and inexperienced developers. The tech stack wouldn't matter if the product was being managed for over a decade by idiots.
After 10 years of using Phabricator at a previous company I am still shocked how bad GitHub is. This the industry standard?!
Too bad Phabricator is maintenance-only now https://en.m.wikipedia.org/wiki/Phabricator
I only used Phabricator on a side project that other devs, with Meta history, had set up. And although not a heavy user, I rather liked it for being very basic, which I thought was a very good thing.
My memory is fuzzy but I think it was on phab that I discovered and loved to use stacked merges. This is where you have a merge request into another open merge request etc. Super useful. Miss that in the git world.
Can't you simply make a PR against the other PR's branch?
Looks like a new community developed fork of Phabricator is up! I've never used it but glad to see the project continues.
https://we.phorge.it/
I tried poking around but it looks like you have to be logged in to view the source, and registration requires manual approval. :/
I assume this is fallout from dealing with LLM content scrapers.
Yes, exactly. Even though you can clone the git repos anonymously, or look at the Github mirror.
https://we.phorge.it/phame/post/view/8/anonymous_cloning_dis... https://we.phorge.it/phame/post/view/9/anonymous_cloning_has...
I, for one, despised phabricator (in comparison to GitHub) when I had to use it last. But that was at least 50% from also having to use svn
It's frustrating, because GitHub used to perform quite well, before it was a single page app.
clicking around GitHub and checking network panel, it seems to load plenty of server rendered HTML. Some views seem to use React within the page, but it doesn't appear to be a React SPA.
Maybe I should have said pre-react. I don't know what GitHub did specifically, but several years ago it used to be reasonably fast and relatively pleasant to use. It regressed a lot over the last few years, seemingly correlated with attempts at interactive features.
It used to be jQuery + PJAX
More recently - and still - it was web components. React is gradually creeping in.
Github's Primer design system was wonderful when it was a pure CSS system that could be used with any framework. Sadly, M$ killed that and made the new Primer design system shitty-React consumption only.
Embedded gists not fully implementing dark or light mode annoys me. The support is there, but the theme is always set to light with no way to override the value.
At the very least, I wish they set it to auto.
Have you used gitlab every day in anger? I don't think you'd feel the same if you have.
There's things I don't like about it, and there are a looot of long-standing open issues, but I think GitLab is definitely better than GitHub in a number of ways. My org uses both (and also Azure DevOps, joy) and my team expects that the trend will be migrating from GitLab to GitHub. There are a bunch of things for me to grieve in that, much to my own surprise.
We use a self-hosted GitLab for about 6 years now. The only thing that is really awful is if your MR gets too big. Then GitLab will simply stop showing any code changes above a certain line threshold.
I both use gitlab, and run our gitlab instance for our company, with as many as 700 users.
I still love it! Works great, makes sense, is fast...
I use GitLab daily and feel like it’s a joy to use. What do you dislike?
I rarely interact with projects hosted on it, and I'm always getting lost in unintuitive menus; for example: click on the tiny sidebar button > Plan > Issues just to open the bug tracker. The website also used to be dog slow compared to GitHub, but thanks to Microsoft the gap has been closing.
Ok so what's a good example?
Gerrit
truly worthy of an acquisition from MS then
It is faster than GitLab, at least to me.
Is your deployment SaaS or running on your company servers?
Gitlab is anything but light; by default it tends to be slow, but it's surprisingly fast with a good server (nothing crazy, but big) and caching.
It's probably not related to the speed, and I am not entirely certain how Github stores the repository, but I noticed Gitlab does something weird to the bare repository so it's not directly usable as a bare repository.
Gitea is an example I like because it stores the repository as a bare repository, the same as if I did git clone --bare. I bring it up because when I stopped running Gitea, I could easily go into the data, back up all the repositories and easily reuse them somewhere else.
GitLab and GitHub both use custom storage backends for git for reasons of scale.
GitLab: https://docs.gitlab.com/administration/gitaly/praefect/
GitHub: https://github.blog/engineering/infrastructure/stretching-sp...
Just gitlab.com.
Yes, I came here to say this exact thing. Also github search sucks bad as well as the way it shows diffs. My current client has just moved from bitbucket to GH and all the devs are up in arms.
Where is it good?
It is quite snappy in my Firefox on Windows.
Never had any issues with it.
The page the person on the issue had loading for 10s takes almost 2s here.
"The Github website is slow everywhere."
Perhaps it depends what software one is using
For example, commandline search and tarball/zipball retrieval from the website, e.g., github.com, raw.githubusercontent.com and codeload.github.com, are not slow for me, certainly not any slower than Gitlab
I do not use a browser nor do I use the git software
The website is fast if you don't use the website
As with any website, the HTML provides a guide to the location (URI) of resources some httpd is serving. Generally, I am not after Javascript, CSS, or other non-substantive "resources". I only want the HTML or JSON and any substantive resources pointed to therein
I use the Github website as I would any software mirror/repository
I'm not interested in images (mascots or other garbage) or executing code (gratuitous Javascript) when using the Github website, I'm interested in reading and downloading source code
Improvements merged within the last two days by the WebKit team: https://github.com/orgs/community/discussions/170922#discuss...
For my sins I occasionally create large PRs (> 1,000 files) in GitHub, and teammates (who mostly all use Chrome) will sometimes say "I'll approve once it loads for me..."
Interesting how _everyone_ here blames JS and React, yet the fixes you linked are about CSS performance.
You certainly can build slow apps with React, it doesn't make building slow things that hard. But honestly, React primitives (component mounting/unmounting, rendering, virtual DOM diffing, etc.) just aren't that slow/inefficient and using React in a fairly naive way isn't half-bad for data-heavy apps.
I actually have been trying to figure out how to make my React application (unreleased) less laggy in Safari, where it performs worse than in Firefox/Chrome, and it seems like it is related to all the damn DOM elements. This sucks. Virtualizing viewports adds loads of complexity and breaks some built-in browser features, so I generally prefer not to do it. But, at least in my case, Safari seems to struggle with doing certain layout operations with a shit load of elements more than Chrome and Firefox do.
> You certainly can build slow apps with React, it doesn't make building slow things that hard.
By all means. It sometimes feels like React is more the symptom than the actual issue, though.
Personally I generally just like having less code; generally makes for fewer footguns. But that's an incredibly hard sell in general (and of course not the entire story).
JS is the logical place to start with all the virtualization and fanciness.
But CSS has bit me with heavy pages (causing a few seconds of lag that even devtools debugging/logging didn't point towards). We know wildcard selectors can impact performance, but in my case there were many open ended selectors like `:not(.what) .ever` where the `:not()` not being attached to anything made it act like a wildcard with conditions. Using `:has()` will do the same with additional overhead. Safari was the worst at handling large pages and these types of selectors and I noticed more sluggishness 2-3 years ago.
`:not(.what) .ever` should be fairly fast, unless you have lots of `class="ever"` elements. Not ideal, but not as bad as e.g. `.ever :not(.what)` would be. `:has()` is just inherently slow if there's a significant amount of elements to search, even though browsers have some caching and tricks.
Normally, you should be able to debug selector matching performance (and in general, see how much style computation costs you), so it's a bit weird if you have phantom multi-second delays.
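For a rough sense of relative selector cost on a heavy page, you can also time querySelectorAll from the console. It's not the same code path as style recalculation, so treat it only as a crude proxy, but pathological selectors usually show up here too. A minimal sketch, using the illustrative selectors from this thread:

```ts
// Crude proxy for selector cost: average querySelectorAll time over a few runs.
function timeSelector(selector: string, runs = 20): number {
  const start = performance.now();
  for (let i = 0; i < runs; i++) {
    document.querySelectorAll(selector);
  }
  return (performance.now() - start) / runs;
}

for (const sel of [":not(.what) .ever", ".ever :not(.what)", "div:has(.ever)"]) {
  console.log(sel, timeSelector(sel).toFixed(2), "ms");
}
```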
These performance problems are new since a rewrite which also added react. Could be just a coincidence, but that is why people blame react.
Confirmation of priors is a powerful drug. And performance engineering is really hard and often lives at a different layer of the stack than the one you know.
It's just easier to blame the tools (or companies!) you already hate.
Thanks, that is definitely a good sign - given the rendering engine monopoly state of Chrome+derivatives and lack of great momentum behind Firefox adoption we need Apple to actively keep Safari not just viable but great even if only on macOS/iOS.
That seems essentially unreviewable. If you can share without violating an NDA, what kind of PR would involve that many files?
"Upgrade solution from .NET Framework 4.8 => .NET 8"
"Rename 'CustomerEmailAddress' to 'CustomerEmail'"
"Upgrade 3rd party API from v3 to v4"
I genuinely don't get this notion of a "max # of files in a PR". It all comes off to me as post hoc justification of really shitty technology decisions at GitHub.
It's not GitHub-specific advice, it's about reviewability of the PR vs. human working memory/maximum attention span.
I pretty frequently have conversations with other engineers where I point out that a piece of code makes an assumption that mostly holds true, but doesn't always hold true. Hence, a user visible bug.
The usual response is something like "if you're correct, wouldn't that mean there are hundreds of cases where this needs to be fixed to resolve this bug?". The answer obviously being yes. Incoming 100+ file PR to resolve this issue. I have no other ideas for how someone is supposed to resolve an issue in this scenario
How much working memory/attention span is required to look through 1000 identical lines "-CustomerEmailAddress +CustomerEmail"?
Ideally, you automate a check like that. Because the answer turns out to actually be "humans are profoundly bad at that kind of pattern recognition."
A computer will be able to tell that the 497th has a misspelled `CusomerEmail` or that change 829 is a regexp failure that trimmed the boolean "CustomerEmailAddressed" to "CustomerEmailed" with 100% reliability; humans, not so much.
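A minimal sketch of that kind of check, assuming the diff arrives as a plain unified diff on stdin and using the hypothetical CustomerEmailAddress -> CustomerEmail rename from this thread; the whole-word regex is what catches the CustomerEmailAddressed case:

```ts
import { readFileSync } from "node:fs";

const OLD_NAME = "CustomerEmailAddress"; // hypothetical rename from this thread
const NEW_NAME = "CustomerEmail";
const rename = (s: string) => s.replace(new RegExp(`\\b${OLD_NAME}\\b`, "g"), NEW_NAME);

const lines = readFileSync(0, "utf8").split("\n");
const removed: string[] = [];
let failures = 0;

for (const line of lines) {
  if (line.startsWith("+++") || line.startsWith("---")) continue; // file headers
  if (line.startsWith("-")) {
    removed.push(line.slice(1));
  } else if (line.startsWith("+")) {
    const before = removed.shift();
    const after = line.slice(1);
    // Every added line must be exactly its removed counterpart with the
    // whole-word rename applied; anything else (typos, over-eager regexp
    // replacements) gets flagged for a human.
    if (before === undefined || rename(before) !== after) {
      failures += 1;
      console.error(`Suspicious change: ${after}`);
    }
  } else {
    removed.length = 0; // context line or hunk header: reset pairing
  }
}

process.exit(failures > 0 ? 1 : 0);
```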
You're not just reviewing the individual lines, but also which context, and which files are impacted. And automating that part would still mean reviewing the automation and the 1000+ changes to move to it.
Sure 1000+ changes kills the soul, we're not good at that, but sometimes there's just no other decent choice.
Oh, certainly, I didn't mean that you couldn't use your IDE to auto-rename the variable yourself (to avoid the boolean issue) and diff the results against those of the PR.
Or that you couldn't Ctrl+F "CustomerEmail" and see whether you have 1000 matches, matching the number of changed files, or only 999 due to some typo.
Or using the web interface to filter by file type to batch your reviews.
Or...
Just that none of those cases comes anywhere close to our memory/attention capacity.
I envy your IDE being able to do a rename of that scale.
I work in a large C++ codebase and a rename like that will actually just crash my vscode instance straight-up.
(There are good automated tools that make it straightforward to script up a repository-wide mutation like this however. But they still generate PRs that require human review; in the case of the one I used, it'd break the PR up into tranches of 50-ish files per tranche and then hunt down individuals with authority to review the root directory of the tranche and assign it to them. Quite useful!)
Yeah VSCode is pretty terrible at refactorings that Jetbrains or Zed will do basically instantly.
We already automate that, it's called a compiler. The human review is just for kicks for this type of thing.
Of course some languages... PHP... aren't so lucky. $customer->cusomerEmail? Good luck dealing with that critical in production, fuckheads!
I've always thought those kinds of large-scale search-and-replace diffs should not generally be expected to be reviewed. If a review is 1000's of lines of identical changes (or newly-vendored code), literally nobody is actually reading it, even if they are somehow able to convince themselves that they are.
I would rather just see the steps you ran to generate the diff and review that instead.
> what kind of PR would involve that many files?
A very simple example: migrating from JavaEE to JakartaEE. Every single Java source file has to have the imports changed from "javax." to "jakarta.", which can easily be thousands of files. It's also easy to review (and any file which missed that change will fail when compiling on the CI).
If the project you're working on vendors dependencies it's pretty easy to end up with that many files being changed when adding or updating, even when trying to make as narrow updates as possible in one PR.
Ones where you have a lot of generated files you commit into Git, and you change the output of the generator tool.
Convert space indents to tabs, as god intended.
as god indented, you mean?
Can't speak for the person above, but we keep a lot of configuration files in git and could easily write a thousand new configs in a single PR, or add a new key to all the configs, for example.
How long until those improvements reach users? I assume it requires an OS update or does Safari use something similar to Firefox and Chrome for faster updates?
There is a developer version you can install. There is a beta, but that overrides your existing Safari, and rollback can be tricky sometimes.
But there is also the Safari Technology Preview, which installs as a separate app, but is also a bit more unstable. Similar to Chrome Canary.
I had to download STP for a specific case I don't even remember. Ever since, I get frequent OS update notifications with new STP versions. It updates without a full system update, which means no rebooting necessary. About as easy as any other software's updates, except it goes through the OS updater, so it takes a few extra steps instead of clicking the update->relaunch button.
STP is a great thing if you wished you had two different Safaris. Profiles just don't work as well as a completely different app.
GitHub moved to a JavaScript rendering mode almost as soon as Microsoft bought it. Previously, I had been able to browse it with JavaScript disabled on my 2011 Mac Mini which Apple stopped allowing upgrades on past macOS 10.13. So even if I enable JavaScript, I can no longer browse GitHub, because they didn't bother to make their build compatible with browser versions as old as mine.
It's hard to know which member of the duopoly is more guilty for breaking GitHub for me, but I find that blaming both often guarantees success.
I could like, buy a new computer and stuff. But you know, the whole Turing complete thing feels like a lie in the age of planned obsolescence. So web standards are too.
> …on my 2011 Mac Mini which Apple stopped allowing upgrades on past macOS 10.13.
In case you're one of today's lucky 10,000, OpenCore Legacy Patcher supports Macs going to back as far as 2007: https://github.com/dortania/OpenCore-Legacy-Patcher
The newer versions of macOS are also slower than the older ones, so that doesn't solve the actual problem.
Couldn’t you install Chrome or Firefox?
Firefox 115 is the last version that runs on 10.12, 10.13, and 10.14 (also Windows 7 and 8). At this point 115 is 2 years old and GitHub is only tested on bleeding edge browsers, apparently.
So GitHub is usable but there are a number of UI layout issues and searching within a file is sometimes a mess (eg, highlighting the wrong text, rendering text incorrectly, etc. maybe that's true for all browsers. you're better off viewing a file as text in raw mode)
Does that change anything compared to running an old version of Safari that's also extremely slow because of a bug to boot? Modern websites are broken on abandoned browsers anyway, but if you install a less-abandoned browser you'll have a better experience on average.
Firefox doesn't work on Windows 7 anymore but installing Firefox is still a hell of a lot better than sticking to IE.
How does Turing completeness feel like a lie?
Planned obsolescence is some of it, some of it is abstractions making it easier for more people to make software (at the cost of using significantly more compute) and Moore’s law being able to support those abstraction layers. Just imagine if every piece of software had to be written in C, the world would look a whole lot different.
I also think we’ve gone a bit too far into abstraction land, but hey, that’s where we are and it’s unlikely we are going back.
Turing completeness is almost an unrelated concept in all of this if you ask me, and if anything it’s because of completeness that has driven higher and higher memory and compute requirements.
Turing completeness never says anything about performance. Hypothetically, sure, you could emulate a newer computer on your current computer.
That implies having infinite memory.
> on my 2011 Mac Mini which Apple stopped allowing upgrades on past macOS 10.13
I know some people feel like Apple is aggressive in this respect, but that's an 8 year old version of a browser. That's like taking off all of the locks on your house, leaving the doors and windows open all while expecting your house to never have uninvited guests.
But Apple is also the one locking Safari to the OS, IE style. Having to buy a new machine to get the latest and secure version of a browser is a pretty heavy requirement.
or use a supported OS (linux, or hilariously probably Windows), or install a still-suppored browser (I'd guess Firefox likely still runs latest on there).
I'd put it on the end user for not updating software on 15 y/o hardware and still expecting the outside world to interact cleanly.
> hilariously probably Windows
That's probably true.
> 15 y/o
It's a matter of expectations; many laptops that old still work decently enough with a refreshed battery. Funnily enough, Win10 was released 15 years ago, and one can still get support for it for at least another 3 years until 2028, even on the customer license.
W10 was released 10 years ago, not 15. https://en.wikipedia.org/wiki/Windows_10_version_history
Sorry, I just pattern-matched on the 2015 release year instead of properly counting.
i mean there are also lots of browser options to be fair.
should they be locking safari to the OS, definitely not. but users can just go download another browser if they are actually concerned.
Will modern versions of those other browsers still work on an 8 year old OS, or has it been updated where it is no longer compatible? So much effort has been put into hardware rendering, and the mechanisms for the browser to interact with that hardware has changed within those OS versions. Forcing the user to download an older compatible version of the browser to work with the older OS is also tossing away potential security fixes.
i mean you're also throwing away security fixes running an OS that out of date, but the person doing this probably doesn't care about security anyway.
> That's like taking off all of the locks on your house, leaving the doors and windows open all while expecting your house to never have uninvited guests.
Depending on where you live (or what websites you visit) it's not unreasonable.
Attacks via ad networks mean that is likely limited more than people would expect.
Just migrate to Forgejo/Codeberg[1][2] or SourceHut[3]. Both are lightning fast compared to GitHub and GitLab.
[1] https://forgejo.org
[2] https://codeberg.org
[3] https://sourcehut.org
Can someone who's worked in an org this large help me understand how this happens? They surely do testing against major browsers and saw the performance issues before releasing. Is there really someone who gave the green light?
The way it works in tech today is that there are three groups:
- Project managers putting constant pressure on developers to deliver as fast as possible. It doesn't even matter if velocity will be lost in the future, or if the company might lose customers, or even if it breaks the law.
- Developers pushing back on things that can backfire and burning political capital and causing constant burnout. And when things DO backfire, the developer is to blame for letting it happen and not having pushed it more in the first place.
- Developers who learned that the only way to win is by not giving a single fuck, and just trucking on through the tasks without much thought.
This might sound highly cynical, but unfortunately this is what it has become.
Developers are way too isolated from the end result, and accountability is non-existent for PMs who isolate devs from the result, because "isolating developers" is seen as their only job.
EDIT: This is a cultural problem that can't be solved by individual contributors or by middle management without raising hell and putting a target on their backs. Only cultural change enforced by C-Levels is able to change this, but this is *not* in the interest of most CEOs or CTOs.
This is shockingly accurate - are you a Hubber? :)
What's that, a Github employee? Not really, I'm in a YC startup.
But I guess the problem is that every single development position has been converging into this.
The only times in my career as a developer where I was 100% happy was when there was no career PM. Sales, customers, end-users, an engineering manager, another manager, a business owner, a random employee, some rando right out the street... All of those were way better product owners than career PMs in my 25 years of experience.
This is not exactly about the competence of the category, it's just about what fits and what doesn't. Software development ONLY works when there is a balance of power. PMs have leverage that developers rarely have.
I come from Electrical Engineering. Engineering requires responsibility, but responsibility requires the ability to say "no". PMs, when part of a multi-disciplinary team, make this borderline impossible, and make "being an engineer" synonymous with putting a target on your back.
If the PM is also an ex-developer and has both product management and development skills, this happens a lot less. When the PM knows the engineering complexity and code-debt cost of shipping a feature, they can self-triage with that additional information and choose not to send it to the developers, or consult with dev and scale it back to something more manageable.
It's these professional PMs who have done nothing other than project management or PMP, and who don't understand the long-term dev cost of features, that cause these systemic issues.
What about PMs that were developers but were awful at it and just played the politics game to get that promotion and never have to see code ever again?
I worked with a few of those where it was horrible, because they were incompetent and unwilling to work to improve across all disciplines. But that says more about the individuals.
IMO "Knowing enough to do damage" is the worst possible situation.
A regular user who's a domain expert is 100x a better PO.
Yep, that works 100%.
I'm still a big believer in "separation of powers" a la Scrum.
There should be a "Product Owner" that can be anyone really, and on the other side there is a self-managed development team that doesn't include this participant. This gives the team leverage to do things their way and act as a real engineering team.
The reason scrum was killed is because of PMs trying to get themselves into those teams and hijacking the process. Developers hated "PM-based scrum", which is not really scrum at all.
It's pretty much the same in every tech firm. When I worked at Facebook this same dynamic was playing out really badly. Amazon on the other hand had somewhat greater resilience against it due to a much tighter feedback loop with the c-suite.
Exactly this.
The primary goal in deciding upon a tech stack is how easily the organization can hire/fire the people who write the code. The larger an organization becomes the more true this becomes. There are more developers writing React than Rails.
Don't listen to the opinions of the developers writing this code. Listen to the opinions of the people making these tech stack decisions.
Everything else is a distant second, which is why you get shitty performance, developers who cannot measure things. It also explains why when you ask the developers about any of this you get bizarre cognitive complexity for answers. The developers, in most cases, know what they need to do to be hired and cannot work outside those lanes and yet simultaneously have an awareness of various limitations of what they release. They know the result is slow, likely has accessibility problems, and scales poorly, and so on but their primary concern is retaining employment.
In the old days we had the saying: "Nobody ever got fired for buying IBM"
Todays version is: "You will get fired unless you use React".
So every site now uses React no matter if the end result is a dog slow Github.
Bad developers look at "what is everybody else using?".
Good developers look at "what is the best and simplest (KISS) tool for this?"
> Good developers look at "what is the best and simplest (KISS) tool for this?"
Good ol’ SSR - but eventually users and PMs start requesting features that can only be implemented with an SPA system, and I (begrudgingly) accept their arguments.
In my role (of many) as technical architect for my org, and as an act of resistance (and possibly to intentionally sabotage LLMs taking over), I opted for hybrid SSR + Svelte - it’s working well for us.
The end-user experience is not of any concern in modern tech. None at all. The only thing that matters is engagement hacking and middle managers desperately trying to look like they're doing anything with any value or meaning at all.
As someone who has worked in and with large orgs, the better question is "why does this always happen?". In large organizations "ownership" of a product becomes more nebulous from a product and code standpoint due to churn and a focus on short-sighted goals.
If you put a lot of momentum behind a product with that mentality you get features piled on tech debt, no one gets enthusiastic about paying that down because it was done by some prior team you have no understanding of and it gets in the way of what management wants, which is more features so they can get bonuses.
Speaking up about it gets you shouted down and thrown on a performance improvement plan because you aren't aligned with your capitalist masters.
At this point "ownership" is just a buzzword thrown around by management types that has no meaning.
If a developer has to put up a fight in order to push back against the irresponsibility of a non-technical person, they by definition don't have ownership.
I've seen shops where ownership is used as a cudgel to punish unruly developers. If the task isn't done as specified and on time, the developer is faulted for not taking ownership, but that "ownership" is meaningless, as you note, because it does not extend to pushing back against irresponsible or unreasonable demands.
> because you aren't aligned with your capitalist masters.
Is it your theory that working on large projects was better when you had communist masters? That seems inconsistent with everything we know, e.g. quotas enforced by mass murder.
My guess is that it's more about organizations (your first paragraph) and less about capitalism (your last paragraph).
That the optimization pressure imposed by "capitalist masters" can lead to perverse outcomes does not imply that the optimization pressure imposed by communist ones doesn't, surely?
For instance, the GP could be a proponent of self-management, and the statement would be coherent (an indictment of leaders within capitalism) without supposing anything about communism.
Yet another new account that has only a single comment replying to me. I've noticed this is a pattern.
At any rate your point doesn't make any sense. The same point indicts all leaders, it has nothing to do with capitalism. It's like saying something indicts a specific race of people when it applies to all people equally.
I've had some experience with Google here.
The short answer is: no, they don't. Google Cloud relied upon some Googlers happening to be Firefox users. We definitely didn't have a "machine farm" of computers running relevant OS and browser versions to test the UI against (that exists in Google for some teams and some projects, but it's not an "every project must have one" kind of resource). When a major performance regression was introduced (in Firefox only) in a UI my team was responsible for once, we had a ticket filed that was about as low-priority as you can file a ticket. The solution? Mozilla patched their rendering engine two minor versions later and the problem went away.
I put more than zero effort into fixing it, but tl;dr I had to chase the problem all the way to debugging the browser rendering engine itself via a build-from-source, and since nobody had set one of those up for the team and it was the first time I was doing it myself, I didn't get very far; Google's own in-house security got in the way of installing the relevant components to make it happen, I had to understand how to build Firefox from source in the first place, my personal machine was slow for the task (most of Google's builds are farm-based; compilation happens on servers and is cached, not on local machines).
I simply ran out of time; Mozilla fixed the issue before I could. And, absolutely, I don't expect it would have been promotion-notable that I'd pursued the issue (especially since the solution of "procrastinating until the other company fixes it" would have cost the company 0 eng-hours).
I can't speak for GitHub / Microsoft, but Google nominally supports the N (I think N=2) most recent browser versions for Safari, Edge, Chrome, Firefox, but "supports" can, indeed, mean "if Firefox pushes a change that breaks our UI... Well, you've got three other browsers you could use instead. At least." And of course, yes, issues with Chrome performance end up high priority because they interfere with the average in-house developer experience.
I cannot fully explain to you how little companies care about quality and performance. Feature-mills are a real place.
The answer is enshittification: https://news.ycombinator.com/item?id=41277484
Maybe somewhat off topic, but the GitHub app on my iPhone hasn't been updating the feed for a few months now; I relogged earlier but it's still the same. Edit: could it have something to do with Lockdown Mode, or whatever it's called now?
This thread has really opened my eyes to how much the world hates react developers, I am one.
Unrealistic timelines, implementing what should be backend logic in the frontend; there are a bunch of ways SPAs tend to be a trap. Was React a bad idea? Can anyone point to a single well-made React app?
Been a huge React/SPA fan for many years, until the realization began to creep on me that building them was actually harder than building C++ MFC desktop applications (which I did back in the 2000's). Declarative markup was supposed to reduce cognitive load, but it now feels like the interplay between the declarative part of UI development (component markup) and the procedural part (event handling and state) has slowly morphed into something more complex than simply developing the UI procedurally.
Back in the day (I was a junior dev) this was easier than grappling with React hooks today:
Last couple years I built a largish javafx app and this was the entire way I structured it. A little tedious, but if I have a state management issue it's just logic on my side and not ten layers of abstraction.
That's a good way to put it. A bit tedious, but the mental model is relatively simple. Reading and writing code feels a little closer to riding a bicycle than operating a Rube-Goldberg device.
what does MFC mean?
The Microsoft Foundation Class library: https://learn.microsoft.com/en-us/cpp/mfc/reference/creating...
It's one of those cases where it only gets noticed when it's bad. It's also easy to hate on web technologies since everyone gets to use them every day (larger user base). But most important of all, it makes people feel good about themselves hating on a technology often used by people who are just starting out with programming. Gatekeeping at its best.
The hate is more geared towards SPAs in general, but there are some shining examples that show that a well-made React/Angular/whatever app can have great UX - Clockify being one of them.
I don’t think the culprit apps would have substantially better UX if they were rendered on the server, because these issues tend to be a consequence of devs being pressured to rapidly release new features without regard to quality.
And to be fair, the problems that Facebook had when they introduced React are not common problems at all.
As an aside, I was an employee around then and I vividly remember that the next half there was a topline goal to improve web speed. Hmmmm, I wonder what could have happened?
> And to be fair, the problems that Facebook had when they introduced React are not common problems at all.
That’s one of my favorites. The exact bug they described during React launch presentation, that React was supposed to help fix with the unidirectional dataflow. You know the one where unread message badges were showing up inconsistently in the UI in different places. They never managed to fix that bug in the 10 years since React was announced and I eventually left Facebook for good.
To be fair, that bug increases engagement so it'll never be fixed. All must kneel before Deltoid/QRT!
That's a good example.
After having worked on React for a while, I can tell you that the problem remains between monitor and chair.
React can have all the niceties and optimization in the world, but that fails when its users insist on using it incorrectly, building huge tangled messy components and then wondering why a click takes 1.3 seconds to deliver feedback.
As someone who's worked with React professionally for years, it's honestly shocking how few React developers really understand memoization and when it needs to be used
IMO it's the MAIN thing to understand about React—how it renders.
Regardless, now I'm the one with egg on my face since the new compiler promises to eventually remove the need for manual memoization almost entirely. The "almost" still fills me with fear
Is there a good article or something you could point us younglings to? I get that in react almost everything is reactive by default unlike other frameworks. I tend to add useCallback and memo to everything nowadays.
That only works if the inputs don’t change often. If they do, it’s actually a performance hit.
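For the question above, the shortest useful illustration I know is the interaction between memo and useCallback (the names below are made up): memo only pays off when props are referentially stable, so an inline callback quietly defeats it, and memoizing values that change on every render just adds bookkeeping cost, which is the point about changing inputs above. A minimal sketch:

```tsx
import { memo, useCallback, useState } from "react";

// memo() only helps if props are referentially stable. An inline arrow
// function is a new object on every render, so it silently defeats the
// memoization of the child.
const Row = memo(function Row({ onSelect }: { onSelect: () => void }) {
  return <button onClick={onSelect}>select</button>;
});

export function Table() {
  const [count, setCount] = useState(0);

  // Stable across renders; without useCallback, every parent render would
  // hand Row a brand-new function and re-render it anyway.
  const handleSelect = useCallback(() => setCount((c) => c + 1), []);

  return (
    <>
      <p>{count} selected</p>
      <Row onSelect={handleSelect} />
    </>
  );
}
```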
The problem is that React doesn't have a pit of success. Because it's poorly conceived, poorly designed and poorly written software made by people more interested in getting the word "homomorphism" onto their CV than solving real problems. You knew when they started using terms like "monad" and "functor" in order to attach a click handler to a button that something had gone badly wrong.
In this very thread there's some asshole using the word "memoization" when "caching" would have been fine.
React was not a bad idea. SPA's tend to be a bad idea. React is just a tool to make SPA's easier to write.
React is a terrible idea. Everything about it is garbage. The api. State. How they do rendering. The “vdom”. It’s unnecessarily complicated and Byzantine. Like it was designed by someone trying to bill a large company many hours.
Svelte is ok. It could have been great but the api for their version of observables is a disaster (which I hope they eventually fix). Sveltekit is half baked and convoluted and I strongly advise not touching it.
React is a good idea compared to having to do SPA's without it. Try doing a SPA with only jQuery.
VDOM is also a good idea that simplifies the mental model tremendously. Of course these days we can do better than a VDOM. Svelte in fact doesn't use a VDOM. You can say that VDOM is a terrible idea in comparison with Svelte, but that's just anachronistic.
but SPA are a terrible idea
Tredict is a webapp written in React that has worked for me for years. It is fast, stable and useful.
The problem isn't React. The problem is KPIs and unrealistic timelines. It is the same as it ever was. Not a fault of React at all.
Yes, SPAs are inherently a very niche concept that has been applied to too many things for the wrong reasons.
On react, it's funny that sites where the frontend part is really crucial tend to move away from generic frameworks and do really custom stuff to optimize. I'm thinking about Notion, or Google Sheets, or Figma, where the web interface is everything and pretty early on they just bypass the frontend stacks generally used by the industry.
I don't hate React developers. I hate developers who build consumer facing software and use top of the line hardware and networks to test it while being ignorant to the fact that most of their users will be using their products on 8+ year old consumer grade hardware over spotty 3G
And then there's the devs and PMs that have an irrational fear of the back button -- enough so that they never ever use it on their SPA.
https://front.com is an example of a React app done right
https://github.com/ethanniser/NextFaster
https://t3.chat/
React feels like magic the first time you try it, especially if you don't have any experience with JSX. Then you need to prop drill and you regret everything.
The main problem is that it tries to do away with a view model layer so you can get the data and render it directly in the components, but that makes managing multiple components from a high level perspective literally impossible. Instead of one view model, you end up with 50 React-esque utilities to achieve the same result.
It's also misapplied here. If anything, it appears from the changes being made to WebKit that the issue is detailed interactions with DOM change logic and with CSS, not JavaScript. JavaScript may tickle the issue, but that's like blaming the mouse for allowing you to click on a button that has expensive operations attached to it.
I've definitely managed to make a page that uses almost no JavaScript and is dog-slow on Firefox (until Mozilla updated the rendering engine) just by building a table out of flexboxes. There's plenty of places for browsers to chug and die in the increasingly-complicated standard they adhere to.
> a single well made react app
What about Slack, the messenger?
Umm, Discord? SoundCloud? Trello? Bandcamp? Spotify?
If I keep going there are actually hundreds and thousands of well-made react apps.
Slack on my machine is currently taking ~1GiB of memory and 3% cpu to do nothing.
My irc client is taking 60MiB of memory and 0.01% cpu. My IRC client is responsive and faster, it has more configurable notification settings. I like the irc client more.
> Bandcamp
I just went to the bandcamp page and it indeed loaded very quickly. As far as I can tell, there's no react in use anywhere so I guess that's why.
What do you mean by bandcamp using react?
On my machine Slack is taking 100MB of memory and 0.1% CPU to do nothing. Maybe we are using different Slack or one of us is lying about the "doing nothing" thing.
It's possible I'm wrong about bandcamp using react but your guess is far from reality as well – react itself does not prevent or discourage loading pages very quickly.
I use localslackirc, so I can be on battery rather long.
> What about Slack, the messenger?
You call it well made? I'm sorry for you, you must really live a really harsh life.
Ah, since Atlassian has been increasingly messing with Trello over the past couple years, it has really gone to shit. I currently have a Firefox profile dedicated solely to it, using >2 gigs of memory and about 1/3 of an M1 core. It has cumulatively used about a day's worth of CPU time in since I booted in 6 days ago. In contrast, the profile dedicated to Slack is using 750 MB and has burned about 27 minutes of CPU time.
Isn't the most common complaint against Slack that it's not optimized enough for what it does ? That's how I read the rants against its electron app, and people are already choosing the electron app against using it straight in the browser (as they'd do for Gmail or Calendar for instance)
Slack is one of the most slick and pleasant pieces of software to use. Its big success, as well as the fact that its acquisition was one of the largest software deals ever, speaks for itself – it's certainly a fine piece of software made by fine engineers who used React and Electron with a certain amount of dignity. People who rant about tools like React or Electron affecting their performance just don't want to understand that it's the organisation and people behind the tools who are responsible for their performance.
Slack is the best of a bunch of trash options. That doesn't make it good. I shouldn't be able to accidentally select every widget in an app as though it were text. But with Electron apps, that's just normal.
Slack puts a nicer shade of lipstick on the pig than Teams does, but the lips still belong to the same thing.
> I shouldn't be able to accidentally select every widget in an app as though it were text.
I absolutely should. I hate how many applications have a UI that won't let me copy-paste an error message to search for, much less a menu item; who could possibly have thought that was a good idea?
I'd make an argument about the need for Slack to go beyond.
As you point out it's wildly successful and is the backbone of many groups internal communication. Many companies would just stop working without Slack, that's a testament to the current team's efforts, but also something that critical would merit better perfs.
I'd make the comparison with Figma, which went the extra mile to bring a level of smoothness that just wouldn't be there otherwise.
> Slack is one of the most slick and pleasant pieces of software to use
I've never heard anyone say that before!
Discord is well-known to be very buggy, e.g. the search function. Spotify is also very slow with thousands of placeholder skeletons. Remember that Spotify once had a very fast native player.
> Spotify is also very slow with thousands of placeholder skeletons. Remember that Spotify once had a very fast native player.
Are you under the impression that the placeholder skeletons are there and slow because of React? How would a UI written in C++ get the data quicker from the back end to replace the skeleton with?
> Are you under the impression that the placeholder skeletons are there and slow because of React? How would a UI written in C++ get the data quicker from the back end to replace the skeleton with?
Regardless of how, the fact remains that the previous implementation of their UI did fetch and render the data from the backend significantly faster than the current React-based one does.
I’ve been using Spotify for 10+ years, and it’s DEFINITELY faster today than it was when I first used it.
Everything is slower across every single facet of computing. Something is happening. I have a brand new Mac Studio M4 Max with 64gb of ram and every site is still slower than it was on a 2011 Mac Book Pro.
I remember using the internet 15 years ago, and things were definitely slower. I also wasn't using the internet to run full-blown spreadsheets and design tools back then. My M series Macs are the snappiest computers I've used (minus my desktop when it runs Linux, but not Windows).
Web developers should be forced to use hardware that's roughly at the 10th percentile in performance of their user base, not the 90th. Alternatively, make performance a WCAG concern.
Chrome Dev tools, and hopefully others, have a performance monitor option that lets you throttle the CPU and throttle the network. It should be plenty possible to test performance of sites on simulated 10th percentile systems, but this just seems low priority.
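The same throttling can also be scripted so it runs in CI rather than relying on someone remembering to flip the DevTools toggle. A rough sketch with Puppeteer, assuming a reasonably recent version that exposes emulateCPUThrottling and PredefinedNetworkConditions (the URL is just an example):

```ts
import puppeteer, { PredefinedNetworkConditions } from "puppeteer";

// Load a page with DevTools-style throttling, roughly simulating a
// low-end machine on a poor connection, and report the load time.
const browser = await puppeteer.launch();
const page = await browser.newPage();

await page.emulateCPUThrottling(6); // ~6x slower CPU
await page.emulateNetworkConditions(PredefinedNetworkConditions["Slow 3G"]);

const start = Date.now();
await page.goto("https://github.com/microsoft/vscode/pulls", { waitUntil: "networkidle0" });
console.log(`Loaded in ${Date.now() - start} ms`);

await browser.close();
```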
I don't think this would help, if a site or SPA performs terribly on a high end machine, the only conclusion I can draw is performance testing isn't tested or validated at all.
I've read comments online (here on HN) that Github has been rewriting their UI in React and that it's got slower since. I have no knowledge if this is true or not (ie React -> speed direct correlation), and my own projects are small enough not to see any performance impact.
Does anyone have concrete information?
I came across a blog post[1] (HN thread[2]) recently that sheds some light on the issue. The tl;dr is that the PR view can render over 100 000 DOM nodes, many of which are invisible inline SVG nodes, and SPA routing makes navigation a lot slower.
[1]: https://yoyo-code.com/why-is-github-ui-getting-so-much-slowe...
[2]: https://news.ycombinator.com/item?id=44799861
That blog post discovered that hard refreshing the page is faster than GitHub's SPA navigation, which led me to make this browser extension which makes GitHub navigation twice as fast:
https://chromewebstore.google.com/detail/make-github-great-a...
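The trick is simple enough to sketch as a userscript-style content script. This is just the idea, not the extension's actual code: intercept same-origin link clicks and force a full page load, bypassing the client-side SPA router.

```ts
document.addEventListener(
  "click",
  (event) => {
    const target = event.target as Element | null;
    const anchor = target?.closest("a[href]");
    if (!(anchor instanceof HTMLAnchorElement)) return;
    if (anchor.origin !== location.origin) return;
    // Let modified clicks and explicit new-tab links behave normally.
    if (event.metaKey || event.ctrlKey || event.shiftKey || anchor.target === "_blank") return;

    event.preventDefault();
    event.stopImmediatePropagation(); // keep the SPA router from handling it
    location.assign(anchor.href);
  },
  true, // capture phase, so this runs before the page's own handlers
);
```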
They just pushed a new redesigned page for pull request diffs- must have bloated the DOM.
I still see a little "try the new experience" link on the PR diff page (top right of page) so the rollout might be gradual. I won't click.
I tried it! I like it generally, but it’s too buggy. The whole diff explodes if you expand to more lines (for example). It’s easy to switch back.
I am such a masochist that I actually click those buttons. If it's good, great, if it's shit, I have time to adjust before they foist it upon me anyway
I prefer to delay the pain as much as possible instead
I am on insider previews and betas for all apps I use. You're not alone.
It's not just Safari; in Firefox it's slow too.
I see loading spinners everywhere, and even the page transitions take ages compared to before.
I am not sure what metric they are using to justify ditching the perfectly working SSR they used before.
I’ve been having issues even in Chrome lately. All three browsers are dying even when the PR isn’t huge.
I’m usually a fan of going the SPA route. But for something like version control of a code base, given the mission-critical nature of it, I think it should have fewer frills and serve plain HTML and CSS with optional JS enhancements.
I use Safari in my daily life, and I feel like 90% of the web apps I access are the worst crap in the world. At work, they decided to use Jira. Besides being slow, it consumes up to 2GB of RAM. Two gigabytes of RAM just for tickets? Ridiculous.
Noticed a similar slowdown when opening the GCP console in Safari. Especially the BigQuery editor. It's completely unusable.
The GCP tools are a performance disaster in both Chrome and Safari in my experience. It can be actively painful at times on some screen like the log viewer.
Something did change with Safari when handling lots of DOM nodes around the last major release of all Apple's operating systems.
I have an ever growing directory listing using SolidJS, and it's up to about 25,000 items. Safari macOS and iOS two major versions ago actually handled it well. After the last major update, my phone rendered it faster than an m1 MacBook Pro.
It truly feels like Jira.
It’s afflicted by the same disease: overuse of JavaScript and the need to give JS developers something to do.
If you actually load up a ~2015 version of Jira on today’s hardware it’s basically instant.
I was reminded how fucked the modern web is a couple years ago when I encountered a so-fast-it-felt-like-local-static-html website dashboard that could have been a "web app", but wasn't.
It was being hosted on another continent. It was written in PHP. It was rendering server-side with just some light JS on my end.
That used to be the norm.
When you mention that you're used to rendering HTML on the server side and don't use React on the frontend to do things, modern web people just look at you like you committed a crime or something (VanillaJS! the horror! Those thirty lines of Javascript would be unmaintainable without a deployment tool!!!!).
It's really hard to fight the trend especially in larger orgs.
Haha I used to explain the complexity of a previous employer's tech stack that way: they had all these devs and they needed to do _something_!
Honestly f them.
GitHub Issues used to be so simple, and now they keep shoving features into it.
Why has no one learned to not become Jira? You gotta say no sometimes.
Wait until you plug it into JIRA and strap Copilot and Actions onto it. Then you can have all flavours of hell at once. Our org has ground to a halt.
A lot of the time we just break the branch permissions on the repo we are using and run release branches without PRs and ignore the entire web interface.
Just because I went to look it up, I thought I'd share. Looks like Atlassian removed the bit from the Terms of Service where you were prohibited from:
> publicly disseminate information regarding the performance of the Cloud Products
https://web.archive.org/web/20210624221204/https://www.atlas...
I didn't buy it or agree to them anyway :)
GitHub web used to be great.
Then some charlatan thought to embrace the React hype and it became terrible to say the least.
As a hater of React, I don’t think React itself is to blame.
Old GitHub was very light on features, whereas the new UIs are way more curated on the surface.
Unfortunately all of this brings in tons of complexity. It doesn't help that there are a lot of junior developers working on it, clearly.
What new features has the new UI brought to justify this complexity and slowness?
I haven't been able to load it yet to actually check out these hip new features, it just crashes my browser, but I'm sure they must be great?
GitHub Actions is such a pain to use just because of how sluggish it feels. I hope they’ll improve performance.
It also continuously fails to scroll down in the log view when watching the output of a CI job live, and has done for years. It's so annoying that I made a userscript to force scrolling: https://github.com/wheybags/userscripts/blob/main/github_act...
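In case it helps anyone, the gist of such a userscript is tiny. Here's a rough sketch of the idea (not my actual script, and the selector is a made-up placeholder since GitHub's log markup changes):

    // Sketch of the idea, not the linked userscript: keep an auto-updating log
    // container pinned to the bottom as new lines stream in.
    const LOG_SELECTOR = ".js-checks-log-display-container"; // hypothetical selector

    function pinToBottom(container: Element): void {
      const observer = new MutationObserver(() => {
        // Only force-scroll if the user is already near the bottom,
        // so scrolling back through older log lines still works.
        const distanceFromBottom =
          container.scrollHeight - container.scrollTop - container.clientHeight;
        if (distanceFromBottom < 200) {
          container.scrollTop = container.scrollHeight;
        }
      });
      observer.observe(container, { childList: true, subtree: true });
    }

    const logContainer = document.querySelector(LOG_SELECTOR);
    if (logContainer) pinToBottom(logContainer);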
I feel you. The UX is a gigantic mess. Navigating between jobs and builds is also a terrible experience.
Oh man. Start a manual job, wait for it to appear in the UI. 10 minutes later, it finally appears. Or just refresh the page manually immediately after starting the job, and there it is...
We used to use Bitbucket webhooks to trigger Jenkins jobs. That was almost instant. Now, after migrating to GitHub Actions, it can take minutes before jobs start on push, for example...
How big are these jobs? I’ve never seen an action take more than 15s to start
> I hope they’ll improve performance.
it's Microsoft, so the answer is: buy a new computer
(which comes with a bundled Windows license)
We are at a point where buying a new computer doesn’t actually help.
yeah I'll keep my M3
Yup. I tried to find something in this 120 KB file today in Safari on an M3: https://github.com/JetBrains/kotlin/blob/master/compiler/fro...
Slow as hell and the Safari search function stopped working. I loaded the same url on Firefox and it was insta-fast.
"Modern" Web UIs to make backpack-portable supercomputers feel slow operating on text files that wouldn't have been challenging to work with by 1990 standards.
The Cloud to make single-digit-seconds operations on a local Raspberry Pi 2 and home Internet take a few minutes.
What a time to be alive.
Good grief, you can't even scroll that thing
Lately, everything Microsoft touches has bad UX.
A regular GitHub annoyance for me is the propensity to lose the browser history for the main repo page.
From a random site, I navigate to a GitHub repo, then to a file in the repo, and hit back: I'm on the random site. Hit forward and I'm on the file; the repo's main page is gone from history.
So annoying.
One of a large handful of issues I've encountered post-React conversion.
I experienced the same since I turned on the "new files changed experience". The fun part is that for the first few weeks of the preview it was _worse_ than it is now. I am truly baffled at the lack of quality on such an important change.
The diff view on large PRs is pretty much unusable on all browsers.
Putting on my eng manager hat: the problem to solve is that this regression went undetected, not that Safari is slow.
The solution is a test that fails when Chrome and Safari have substantially different render times.
> The solution is a test that fails when Chrome and Safari have substantially different render times.
That test will be disabled for being flaky in under a week because the CI runners have contention with other jobs, causing them to randomly be slower and flake, and the frontend team does not want to waste time investigating flakes.
"Just have dedicated runners with guaranteed CPU performance", but that's the CI platform team's issue, the frontend and testing teams can't fix it, and the CI infra team won't prioritize it for a minimum of 5 years.
Related: https://bugs.webkit.org/show_bug.cgi?id=247782
I wondered if it was something new, or just the larger-than-average pull requests I have these days with AI coding.
Good to know others are feeling it too; hopefully it can get resolved soon. In the meantime, I'll try my PR reviews on FF.
Update: Just tested my big PR (+8,661, -1,657) on FF and it worked like a charm!
Yeah, it is! Even for simple things, like opening a PR and searching in the combo box for the name of the branch to merge into. We only have like 40 branches. It should not freeze the tab for 30 seconds to search a list of 40 items.
Ok, so it's not just me. I was just struggling to assign a PR to a couple of colleagues and select a label (on a M2 Pro with 32 GB RAM!)
Same, except with 64GB and an M3 Max, smh... it takes literally minutes to open the "Labels" popup and make a PR... it's completely unacceptable for a product like this...
My browser crashed 10 times today trying to copy code in Safari. It's unbearably bad.
While there may be a weird bug affecting Github, the browser crashing is always a browser bug. Github can't fix Safari.
Correction: just the tab crashed
And this is something browsers don't treat as bugs. You can crash any browser's tab just by exhausting its allocated memory.
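To illustrate: a toy loop like this will eventually take down the tab in any engine, and no browser vendor considers that a browser bug:

    // Toy illustration of the point above: allocate until the tab's memory
    // budget is exhausted. Don't run this anywhere you care about.
    const hog: number[][] = [];
    while (true) {
      hog.push(new Array(1_000_000).fill(Math.random()));
    }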
Unfortunately this is the fate of most modern sites: they start off simple, then they start bloating the site with social media and analytics. SV blokes don't care or notice on their $5k+ top-of-the-line laptops, but for everyone else it's an issue.
This has everything to do with JS devs and everyone converging on this terrible language and ecosystem, and nothing to do with analytics or social media.
Except it's slow on top-of-the-line laptops too, so there's zero excuse.
Just Microsoft sites.
It has to present text lists, tables, and small icons, yet it brings mobile Safari to a crawl. With multi-gigahertz, multi-core CPUs and hardware-accelerated JS. It is pathetic.
GitHub has a great GraphQL API but a subpar UI. It's a great fit for Isograph! Anyway, if folks are interested, feel free to check out this conference talk (https://www.youtube.com/watch?v=sf8ac2NtwPY), where I vibe-code an Isograph app that consumes the GitHub API. TLDR, it is a lot easier to replace GitHub than you think, and it would make for a hell of a splashy side project.
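For anyone who hasn't tried the GraphQL API: one POST to the v4 endpoint gets you most of what a PR page shows. A minimal sketch (the owner, repo, and PR number are placeholders; it assumes a personal access token in GITHUB_TOKEN):

    // Minimal sketch of fetching pull request data from GitHub's GraphQL API.
    // Owner/name/number are placeholders; GITHUB_TOKEN must be a personal access token.
    const query = `
      query {
        repository(owner: "octocat", name: "hello-world") {
          pullRequest(number: 1) {
            title
            additions
            deletions
            files(first: 50) {
              nodes { path additions deletions }
            }
          }
        }
      }`;

    const response = await fetch("https://api.github.com/graphql", {
      method: "POST",
      headers: {
        Authorization: `bearer ${process.env.GITHUB_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ query }),
    });

    console.log(JSON.stringify(await response.json(), null, 2));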
Another website that is so slow it's unusable is Stripe.
My CPU goes to 100% and fans roaring every time I load the dashboard and transactions. I can barely click on customers/subscriptions/etc. I can't be the only one...
Glad I’m not the only one experiencing this. The Stripe dashboard constantly freezes up for me; even registering a click takes 10-20 seconds. Often it will just go white. Incredibly annoying.
It's been very clear to me for quite a while that they have to be doing this to push users to their mobile app, at least on iOS. I used to review PRs on my phone at night, but now I have to use the app because anything over a thousand lines will crash iOS Safari or cause scrolling to misbehave. Reddit has done the same over the years, as have countless other web apps.
You really can't escape the enshittification.
Forget slowness, it basically answers any search with "try another time."
huh? I never felt it to be slow on Safari, using em for years now.
This is likely happening in the new Pull Request experience only. If so, it's due to React. This is what happens when you use React for such large pages. "JavaScript is fast!" No, it really isn't. Especially not when you pile abstraction layer on top of abstraction layer on top of abstraction layer on top of abstraction layer.
fix your wifi
There are even some famous names on those comments, guess it is pretty bad.
Isn't it the opposite? No one in this thread is even considering how bad Safari is in terms of performance and support for web standards. There's no one even partially blaming both. GitHub isn't the best example of a fast website, but if it runs in Chrome and Firefox, and even in rudimentary browsers like Palemoon (I tested) on decent hardware (even mobile), there's something clearly wrong with Safari.
Because there are Apple fanboys everywhere; they'd rather blame the whole web than Apple's shitty software.
Safari is behind on web standards, but often those standards are things designed and implemented by the Chrome team and pushed into standards later. It's the Chromification of the web, where the standard is "whatever chrome does". It's much like the era of "Designed For IE" or "Works best in Netscape 2.3", but now there's a thrice-convicted monopolist in de facto control of the standard.
The GitHub website reminds me of the first video in the Clean Coders series, where he points out that eventually devs want a total rewrite to "fix" all the shortcomings. But from the perspective of most users, GitHub had nothing UI-wise that needed fixing. We all would have been happy with the UI as it was.
Clean Code argues that instead of total rewrites you should focus on gradual improvements: refactor the code so that over time the work pays dividends, without re-living all the bugs you lived through five years ago whose resolutions you no longer recall. On every rewrite project I've ever worked on, we ran into bugs we had already fixed years prior, or that the team before me had.
There are times when a total rewrite might be the best and only option, such as with deprecated platforms (think Visual Basic 6 apps that will never get threading).
What frustrates me more is that GitHub used to be open to browse, and the search worked. Now, in their effort to force you to make an account (I HAVE LIKE TEN ALREADY) and force you to log in, they include a few "dark patterns" where parts of search don't work at all.
A rewrite is usually about learning from all the past mistakes and problems and designing your architecture in a way that prevents all the previously known issues. It is an iterative process at the design level. If you end up repeating all the same bugs, it went very wrong from the beginning. So if you don't have the information about all the previous problems, then a rewrite is likely a mistake.
It also reminds me of the original head of Safari development talking about at least the early days of building the browser. They had a rule that no commit could make the browser benchmarks slower. And apparently he was maniacal about the rule.
I don’t know if that’s a good or realistic rule for most projects, but I imagine for performant types of applications, that’s exactly what it takes to prevent eventual slowdown.