We all dodged a bullet

(xeiaso.net)

721 points | by WhyNotHugo 21 hours ago ago

416 comments

  • anon7000 19 hours ago ago

    The nx supply chain attack via npm was the bullet many companies did not dodge. I mean, all you needed was to have the VS Code nx plugin installed, which always checked for the latest published nx version on npm. And if you had a local session with GitHub (e.g. logged into your company's account via the GH CLI), or some important creds in a .env file… that was exfiltrated.

    This happened even if you had pinned dependencies and were on top of security updates.

    We need some deeper changes in the ecosystem.

    https://github.com/nrwl/nx/security/advisories/GHSA-cxm3-wv7...

    • kardianos 15 hours ago ago

      > We need some deeper changes in the ecosystem.

      I avoid anything to do with NPM, except for the TypeScript compiler, and I'm looking forward to the rewrite in Go so I can remove even that. For this reason.

      As a comparison, Go uses minimal version selection, and it takes great pains never to execute anything you download, even during the compilation stage.

      NPM packages will often have different source than the GitHub repo source. How does anyone even trust the system?

      • strogonoff 3 hours ago ago

        Before we all conclude that supply chain attacks only happen on NPM, last time I used VS Code I discovered that it auto-installed, with no apparent opt-out, Python typing stubs for any package (e.g., Django in my case) from whatever third-party, unofficial PyPI accounts it saw fit. (Yes, this is why it was the last time I used VS Code.)

        The obscurity of languages other than JavaScript will only work as a security measure for so long.

      • homebrewer 13 hours ago ago

        It's already solved by pnpm, which refuses to execute any postinstall scripts except those you whitelist manually. In most projects I don't enable any and everything works fine; in the worst case I had to enable two scripts (out of two dozen or so) that download prebuilt native components, although even those aren't really necessary and could have been handled through other means (as proven by typescript-go, swc, and other projects led by competent maintainers).
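
        For anyone who hasn't used it, this is roughly what that whitelist looks like in package.json (field names per recent pnpm versions; verify against the pnpm docs for your version):

          "pnpm": {
            "onlyBuiltDependencies": ["esbuild", "sharp"]
          }

        Everything else gets installed with its install/postinstall scripts skipped; `pnpm approve-builds` (pnpm 10+) is the interactive way to add entries, and "esbuild"/"sharp" above are just placeholder examples of packages that ship native builds.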

        None of it will help you when you're executing the binaries you built, regardless of which language they were written in.

        • jvanderbot 12 hours ago ago

          I could be wrong, but I believe pnpm would not have helped with the supply chain attack that brings us here. It's simply a problem of deploying new code rapidly and automatically, without verification, to a billion machines at a time.

        • danielheath 6 hours ago ago

          That doesn't help you if anyone on your team installs a vscode plugin which uses npm in the background & executes postinstall scripts.

        • ryukafalz 8 hours ago ago

          > None of it will help you when you're executing the binaries you built

          Lavamoat would, if you get to the point of running your program with lavamoat-node or built with the lavamoat webpack plugin: https://lavamoat.github.io/guides/getting-started/

        • hdjrudni 8 hours ago ago

          > None of it will help you when you're executing the binaries you built, regardless of which language they were written in.

          Sure it would... isn't that the whole point of Deno? The binary can't exfiltrate anything if you don't let it connect to the net.
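
          A rough sketch of what that looks like in practice (the domain and path here are made up):

            # with no flags, file, env, and network access are not granted; Deno prompts or refuses
            deno run main.ts

            # grant only what the program actually needs
            deno run --allow-read=./data --allow-net=api.example.com main.ts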

      • h1fra 3 hours ago ago

        You are lying to yourself. In this attack, nothing was executed by npm; it "just" replaced some global functions. A Go package can't do that, but you can definitely execute malware at runtime anyway. It can also expose new imports that will be imported by mistake when using an IDE.

      • austin-cheney 13 hours ago ago

        I am now using Node's type stripping to run TypeScript natively. It's great and so fast. Even still, I continue to include the TypeScript compiler in my projects so that I can run tsc with the --noEmit option just for the type checking.
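
        Concretely, that workflow is something like the following (the flag is only needed on Node versions where type stripping isn't enabled by default):

          # run the .ts file directly, with types stripped at load time
          node --experimental-strip-types app.ts

          # type-check only, emit nothing
          npx tsc --noEmit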

      • RVuRnvbM2e 14 hours ago ago

        Fucking this.

        I have seen so many takes lamenting how this kind of supply chain attack is such a difficult problem to fix.

        No it really isn't. It's an ecosystem and cultural problem that npm encourages huge dependency trees that make it impractical to review dependency updates so developers just don't.

        • Yoric 5 hours ago ago

          The thing is, having access to such dependencies is also a huge productivity boost. It's not by accident that every single language whose name isn't C or C++ has pretty much moved to this model (or had it way before npm, in the case of Perl or Haskell).

          The alternative is C++, where every project essentially starts by reinventing the wheel, which comes with its own set of vulnerabilities.

          I'm saying this without a clear idea of how to fix this very real problem.

          • fdsfdsfdsaasd 3 hours ago ago

            It's more like capex vs opex. Some languages and frameworks - you have to maintain the same level of effort, just to keep your apps working.

        • WD-42 6 hours ago ago

          I would say Javascript's lack of a standard library is at least in part responsible for encouraging npm use, things just spiraled out of control from there.

          • raffraffraff 5 hours ago ago

            [not a dev] why isn't there the equivalent of "Linux distributions" for npm? I know I know: because developers all need a different set of libs. But if there were thousands of packages required to provide basic "stdlib-like functionality" couldn't there be an npm distribution that you can safely use as a starting point, avoiding importing asinine stuff like 'istrue' (yea I'm kinda joking there). Or is that just what bloated Frameworks all start out as?

        • alehlopeh 13 hours ago ago

          “It’s not difficult to fix, just change the entire culture”

          The difficulty comes in trying to change the entire culture.

          • aspenmayer 11 hours ago ago

            “Doctor, it hurts when I do this!”

            “Stop doing that!”

            “But I wanna!”

        • zer00eyz 8 hours ago ago

          > It's an ecosystem and cultural problem that npm encourages huge dependency trees

          It is an ecosystem and culture that learned nothing from the left-pad debacle. It is an affliction that many organizations face, and it is only going to get worse with the advent of AI-assisted coding (and it does not have to be that way).

          There simply aren't enough adults in the room with the ability to tell the children (or VCs and business people) NO. And getting an "AI" to say no is next to impossible unless you're probing it on a "social issue".

    • captn3m0 17 hours ago ago

      Yeah, editor extensions are both auto-updated and installed in high-risk dev environments. Quite a juicy target, and I am surprised we haven't yet seen large-scale purchases by bad actors similar to browser extensions. However, I remember reading that the VS Code team puts a lot of effort into catching malware. But do all editors with auto-updates, such as Sublime, have such checks?

    • oezi 7 hours ago ago

      The key thing needed is a standard library that covers what these 100,000 tiny one-function libraries (has-ansi, color-name) do.

      • lynnharry 3 hours ago ago

        I checked has-ansi. What's the reason this library exists and is popular? Most of the work is done by the library it imports, ansi-regex; has-ansi just returns ansi-regex.test(string), yet it has 5% of the weekly downloads of ansi-regex. ansi-regex itself is also fewer than 10 lines of code.

        I don't know anything about the npm ecosystem; what's the benefit of importing these libraries compared to including this code in the project?
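
        For a sense of scale, the entire published library is roughly this (paraphrased from memory, not an exact copy of the source):

          import ansiRegex from 'ansi-regex';

          const regex = ansiRegex();

          export default function hasAnsi(string) {
            return regex.test(string);
          }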

        • herewulf 3 hours ago ago

          The benefit is getting your secrets stolen and pointing the blame at someone else? Yeah...

    • zenmac 16 hours ago ago

      I usually make sure all the packages and the db are local, so my dev machine can run in airplane mode, and I only turn the internet on when I git push.
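
      For anyone trying the same setup: npm can also be told to stay off the network and install purely from its local cache, e.g.:

        # fail instead of touching the network
        npm ci --offline

        # use the cache when possible, only hit the registry on a miss
        npm install --prefer-offline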

      • pmontra 7 hours ago ago

        Are all docs local too, like we used to do with man pages and paper reference books, or do you use another system for them? A second computer, a tablet, a phone?

  • whiplash451 26 minutes ago ago

    Not a security expert but I don’t think that requesting a reset of your 2FA credentials is reasonable.

    I would be very worried about my 2FA provider if they asked me to do this.

    And so I would not rate this phishing email a 10/10 at all.

  • jFriedensreich 3 hours ago ago

    That post fails to address the main issue: it's not that we don't have time to vet dependencies, it's that Node.js's security and default package model is absurd, and how we use it is even more so. Even most Deno posts I see use "allow all" out of laziness, which I assume will be copy-pasted by everyone, because it's a major UX pain to get to the right minimal permissions. The only programming model I am aware of that makes it painful enough to use a dependency, encourages hard pinning and vetted dependency distribution, and forces an explicit, minimal, capability-based permission setup is Cloudflare's workerd. You can even set it up to have workers (without changing their code) run fully isolated from the network and only communicate via a policy evaluator for ingress and egress. It is Apache-licensed, so it is beyond me why this is not the default for the use cases it fits.

    • berkes 2 hours ago ago

      Another main issue is how large (deep and wide) this "supply chain" is in some communities. JavaScript and Python are notable for their giant reliance on libs.

      If I compare a typical Rust project with a comparable JavaScript one, the JavaScript project itself often has orders of magnitude more direct dependencies (a wide supply chain?). The Rust tool will have three or four, the JavaScript one over ten, sometimes ten alone just to help with building the TypeScript in dev. This is worsened by the JavaScript dependencies' own deps (and theirs, and theirs, all the way down to is_array or left_pad), easily getting into the hundreds. In Rust, that graph will list maybe ten more, or, with some complex libraries, a total of several tens.

      This attitude difference is also clear in the Python community, where the knee-jerk reaction is to add an import rather than think it through, maybe copy-paste a file, and in any case be very conservative. Do we really need colors in the terminal output? We do? Can we not just create a file with some constants that hold the four ANSI escape codes instead?
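
      Something like this is usually all it takes (JS here, but the same goes for Python):

        // colors.js -- a handful of escape codes instead of a dependency
        export const RED = "\x1b[31m";
        export const GREEN = "\x1b[32m";
        export const YELLOW = "\x1b[33m";
        export const RESET = "\x1b[0m";

        // usage: console.log(`${GREEN}ok${RESET} all tests passed`);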

      I'm trying to argue that there's also an important cultural problem with supply chain attacks to be considered.

    • mb2100 3 hours ago ago

      To be fair, the advantage of Deno here is really the standard library that includes way more functionality than Node.

      But in the end, we should all rely on fewer dependencies. It's certainly the philosophy I'm trying to follow with https://mastrojs.github.io – see e.g. https://jsr.io/@mastrojs/mastro/dependencies

  • mikewarot 19 hours ago ago

    >Saved by procrastination!

    Seriously, this is one of my key survival mechanisms. By the time I became system administrator for a small services company, I had learned to let other people beta test things. We ran Microsoft Office 2000 for 12 years, and saved soooo many upgrade headaches. We had a decade without the need to retrain.

    That, and like others have said... never clicking links in emails.

    • mesofile 17 hours ago ago

      This is how I feel about my Honda, and to some extent, Kubernetes. In the former case I kept a 2006 model in good order for so long that I skipped at least two (automobile) generations' worth of car-to-phone teething problems, and after years of hearing people complain about their woes I've found the experience of connecting my iPhone to my '23 car pretty hassle-free. In the latter, I am finally moving a bunch of workloads out of EC2 after years of nudging from my higher-ups and, while it's still far from a simple matter, I feel like the managed solutions in EKS and GKE have matured and greatly lessen the pain of migrating to K8s. I can only imagine what I would have gotten bogged down with had I promptly acted on my bosses' suggestion to do this six or seven years ago. (I also feel very lucky that the people I work for let me move on these things in my own due time.)

      • cirelli94 5 hours ago ago

        In the meantime you went for years with a car that couldn't connect to your iPhone, so you simply didn't have that feature! There are pros and cons everywhere, but I'm more prone to change often and fix things than to wait for a feature to be stable and do without it in the meantime. Of course, only when I can afford it, e.g. not by changing my car every two years :')

    • nottorp 19 hours ago ago

      Not in the "npm ecosystem". You're hopelessly behind there if you haven't updated in the last 54 seconds.

      • ainiriand 17 hours ago ago

        Well, in this case it makes sense to update fast, doesn't it?

      • ohdeargodno 18 hours ago ago

        Sorry, the "npm ecosystem" command has been deprecated. You can instead use npm environment (or npm under-your-keyboard because we helpfully decided it should autocorrect and be an alias)

        • efilife 17 hours ago ago

          this seems to be a clever joke. sad to see it dead

    • pixl97 13 hours ago ago

      Works great for new exploited packages. Not so great for already compromised software getting hit by a worm.

    • blamestross 18 hours ago ago

      "Just wait 2 weeks to use new versions by default" is an amazing defense method against supply chain attacks.

      • kevinrineer 18 hours ago ago

        It's also a really ineffective defense against 0-days!

        • easterncalculus 12 hours ago ago

          In the context of a single system, there is no such thing as an "effective defense against 0 days" - that's marketing babble. A zero day by definition is an exploit with no defense. That's literally what that means.

          • hdjrudni 7 hours ago ago

            That doesn't sound right.

            > A zero-day exploit is a cyberattack vector that takes advantage of an unknown or unaddressed security flaw in computer software, hardware or firmware. "Zero day" refers to the fact that the software or device vendor has zero days to fix the flaw because malicious actors can already use it to access vulnerable systems.

            If I never install the infected software, I'm not vulnerable, even if no one knows of its existence.

            That said, you could argue that because it's a zero day and no one caught it, it can lie dormant for >2 weeks so your "just wait awhile" strategy might not work if no one catches it in that period.

            But if you're a hacker, sitting on a goldmine of infected computers... do you really want to wait it out to scoop up more victims before activating it? It might be caught.

            • saberience an hour ago ago

              Yeah, but zero-days usually refer to software which is commonly installed, e.g. a zero-day in the version of Windows or macOS that most people are using.

              No one bothers finding 0-days in software which no one has installed.

        • ozim 15 hours ago ago

          If I put my risk management hat on, 0-days in the npm ecosystem are not that much of a problem.

          They stop working before you can use them.

        • blamestross 17 hours ago ago

          Sadly we don't have any defense against 0 days if an emergency patch is indistinguishable from an attack itself.

          A better defense against zero-days would be to delete or quarantine the compromised versions, fail the build, and escalate to a human.

          • minitech 13 hours ago ago

            > Sadly we don't have any defense against 0 days if an emergency patch is indistinguishable from an attack itself.

            Reading the code content of emergency patches should be part of the job. Of course, with better code trust tools (there seem to have been some attempts at that lately, not sure where they’re at), we can delegate that and still do much better than the current state of things.

      • booi 12 hours ago ago

        Is there some sort of easy operational way to do this? There are well-known tech companies that do this internally, but AFAIK this isn't a feature of OSS registries like Verdaccio.
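
        npm itself has a little-known knob that gets you most of the way there: the `before` config only resolves versions that already existed at a given date (and recent pnpm releases have added a cooldown setting along the same lines; check the docs for the exact name):

          # only consider versions published on or before this date
          npm install --before="2025-09-01"

          # or make it the default via .npmrc
          before=2025-09-01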

    • RedShift1 19 hours ago ago

      I'll reply to you tomorrow

      • TYPE_FASTER 19 hours ago ago

        ...by then it might be working again anyway, or the user figured out what they were doing wrong.

        "Hey, is it still broken? No? Great!"

  • sebstefan 19 hours ago ago

    Dodged a bullet indeed

    I find it insane that someone would get access to a package like this, then just push a shitty crypto stealer.

    You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fleshed-out exploit?

    You can exfiltrate API keys, add your SSH public key to the server and then exfiltrate the server's IP address so you can snoop around in there manually; if you're on a dev's machine, maybe the browser's profiles, the session tokens for common shopping websites? My personal desktop has all my cards saved on Amazon. On my work laptop, depending on the period of my life, you could have had access to stuff you wouldn't believe either.

    You don't even need to do anything with those, there's forums to sell that stuff.

    Surely there's an explanation, or is it that all the good cybercriminals have stable high paying jobs in tech, and this is what's left for us?

    • com2kid 19 hours ago ago

      > You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fleshed-out exploit?

      Because of the way this was pulled off, it was going to be found out right away. It wasn't a subtle insertion, it was a complete account takeover. The attacker had only hours before discovery, so the logical thing to do is a hit and run. They asked what the most money is that can be extracted in just a few hours in an automated fashion (no time to investigate targets manually one at a time), and crypto is the obvious answer.

      Unless the back doors were so good they weren't going to be discovered even though half the world would be dissecting the attack code, there was no point in even trying.

      • pluto_modadic 19 hours ago ago

        "found out right away"... by people with time to review security bulletins. There's loads of places I could see this slipping through the cracks for months.

        • andrewstuart2 17 hours ago ago

          I'm assuming they meant the account takeover was likely to be found out right away. You change your password on a major site like that and you're going to get an email about it. Login from a new location also triggers these emails, though I admit I haven't logged onto NPM in quite a long time so I don't know that they do this.

          It might get missed, but I sure notice any time account emails come through even if it's not saying "your password was reset."

        • benoau 18 hours ago ago

          There's probably already hundreds of thousands of Jira tickets to fix it with no sprint assigned....

          • brazzy 13 hours ago ago

            I feel attacked.

            And very, very happy that we're proxying all access to npm through Artifactory, which allowed us to block the affected versions and verify that they were in fact never pulled by any of our builds.

            • Aeolun 8 hours ago ago

              Only problem is the Artifactory instance is on the other side of the world instead of behind the convenient npmjs CDN, so installing packages takes 5x longer...

            • pixl97 13 hours ago ago

              I was about to say: if you're in a company of any size and you're not doing it this way, you're doing it wrong.

        • zahlman 18 hours ago ago

          Yes, but this is an ecosystem large enough to include people who have that time (and inclination and ability); and once they have reported a problem, everyone is on high alert.

          • wongarsu 18 hours ago ago

            If you steal the cookies from dev machines, or steal SSH keys along with a list of recent SSH connections, or do any other credential theft, there are going to be lots of people left impacted. Yes, lots of people reading tech news or security bulletins are going to check if they were compromised and preemptively revoke those credentials. But that's work, meaning even among those informed there will be many who just assume they weren't impacted. Lots of people/organisations are going to be complacent and leave you with valid credentials.

            • ameliaquining 18 hours ago ago

              If a dev doesn't happen to run npm install during the period between when the compromised package gets published and when npm yanks it (which for something this high-profile is generally measured in hours, not days), then they aren't going to be impacted. So an attacker's patience won't be rewarded with many valid credentials.

              • giveita 5 hours ago ago

                Dev, or their IDE, agent, etc.

                • komali2 3 hours ago ago

                  Their build chain, CI environment, server...

                  • ameliaquining 3 hours ago ago

                    npm ci wouldn't trigger this; it doesn't pick up newly published package versions. I suppose if you got a PR from Dependabot updating you to the compromised package, and happened to merge it within the window of vulnerability, then you'd get hit, but that will likewise not affect all that many developers. Or if you'd configured Dependabot to automatically merge all updates without review; I'm not sure how common that is.
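
                    For reference (a generic illustration, nothing specific to any project):

                      # installs exactly what package-lock.json pins; never re-resolves to
                      # newer versions, and fails if package.json and the lockfile disagree
                      npm ci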

            • com2kid 17 hours ago ago

              But that is dumb luck. Release an exploit, hope you can then gain further entry into a system at a company that is both high value and doesn't have any basic security practices in place.

              That could have netted the attacker something much more valuable, but it is pure hit or miss and it requires more skill and patience for a payoff.

              VS blast out some crypto stealing code and grab as many funds as possible before being found out.

              > Lots of people/organisations are going to be complacent and leave you with valid credentials

              You'd get non-root credentials on lots of dev machines, and likely some non-root credentials on prod machines, and possibly root access to some poorly configured machines.

              Two-factor is still in place; you only have whatever creds that npm install was run with. Plenty of the really high-value prod targets may very well be on machines that don't even have publicly routable IPs.

              With a large enough blast radius, this may have worked, but it wouldn't be guaranteed.

        • joshuat 18 hours ago ago

          The window of installation time would be pretty minimal, and the operating window would only be as long as those who deployed while the malicious package was up waited to do another deploy.

      • bobbylarrybobby 18 hours ago ago

        If they'd waited a week before using their ill-gotten credentials to update the packages, would they have been detected in that week?

      • nialv7 13 hours ago ago

        > it was a complete account take over

        Is that so? From the email it looks like they MITM'd the 2FA setup process, so they would have had qix's 2FA secret. They didn't have to immediately start taking over qix's account and lock him out; they should have had all the time they needed to come up with a more sophisticated payload.

      • jowea 13 hours ago ago

        To be fair, this wasn't a super demanding 0-day attack, it was a slightly targeted email phish. Maybe the attacker isn't that sophisticated and just went with what is familiar?

      • nurettin 8 hours ago ago

        > They asked what is the most money that can be extracted in just a few hours in an automated fashion (no time to investigate targets manually one at a time) and crypto is the obvious answer.

        A decade ago my root/123456 ssh password got pwned in 3-4 days. (I was gonna change to certificate!)

        Hetzner alerted me saying that I filled my entire 1TB/mo download quota.

        Apparently, the attacker (automation?) took over and used it to scrape alibaba, or did something with their cloud on port 443. It took a few hours to eat up every last byte. It felt like this was part of a huge operation. They also left a non-functional crypto miner in there that I simply couldn't remove.

        So while they could have cryptolocked it, they just used it for something insidious and left it alone.

    • root_axis 19 hours ago ago

      Stolen cryptocurrency is a sure thing because fraudulent transactions can't be halted, reversed, or otherwise recovered. Things like a random dev's API and SSH keys are close to worthless unless you get extremely lucky, and even then you have to find some way to sell or otherwise make money from those credentials, the proceeds of which will certainly be denominated in cryptocurrency anyway.

      • buu700 18 hours ago ago

        Agreed. I think we're all relieved at the harm that wasn't caused by this, but the attacker was almost certainly more motivated by profit than harm. Having a bunch of credentials stolen en masse would be a pain in the butt for the rest of us, but from the attacker's perspective your SSH key is just more work and opsec risk compared to a clean crypto theft.

        Putting it another way: if I'm a random small-time burglar who happens to find himself in Walter White's vault, I'm stuffing as much cash as I can fit into my bag and ignoring the barrel of methylamine.

      • jimbo808 18 hours ago ago

        And it's probably the lowest risk way to profit from this attack

      • babypuncher 18 hours ago ago

        Ultimately, stolen cryptocurrency doesn't cause real world damage for real people, it just causes a bad day for people who gamble on questionable speculative investments.

        The damage from this hack could have been far worse if it was stealing real money people rely on to feed their kids.

        • aspenmayer 11 hours ago ago

          You have the context sort of wrong. To do a comparable “real money” heist en masse, you would be stealing from the banks or from the customers of one, or via debit or credit cards. It’s real enough money, but those fraudulent transactions would be covered by existing protections, like FDIC insurance or chargebacks. I don’t think anyone could steal much cash from a single heist from a bank or other hard target, so your analogy is confusing. There is no analogous situation in which “real money” could be stolen from customers or financial institutions or the interchange system that would impinge end users. That’s the whole reason people use them. Even in friendly fraud situations, the money isn’t gone, it’s just frozen, so you might have to wait a month or so to get it unfrozen after the FBI et al clear the source of funds.

          Sure, if someone takes my grocery money, that’s a real loss, and that’s why I don’t carry large sums of cash. But that isn’t what happened here.

          Can you explain what you meant so I can understand? I think you had a point, I just don’t think that the risk of the kind of attack in TFA is comparable to someone getting their grocery money stolen, because the financial situation for that individual in-person theft can’t really occur on the same scale as the attack in TFA, and even if it could, that’s kind of on the end user for carrying more cash than they can defend.

          • lmm 7 hours ago ago

            > It’s real enough money, but those fraudulent transactions would be covered by existing protections, like FDIC insurance or chargebacks.

            Not always. Many banks will claim e.g. they don't have to cover losses from someone who opened a phishing email, never mind that the bank themselves sends out equally suspicious "real" emails on the regular.

            Also, even if it's covered, that money comes from somewhere: ultimately out of the pockets of regular folks who were just using their bank accounts, even if the insurance mechanisms mean it's spread out more widely.

            • aspenmayer 6 hours ago ago

              Good points all around. I don’t mean to blame the victim, as they usually don’t know what they don’t know and aren’t party to the fraud, so they couldn’t begin to know, but informed users ought to know the failure modes. Insurance rates are surely a factor in the industry push for KYC, which is mandated federally for good reasons, but in edge cases like loss of funds, the little people are often blamed for being victims by faceless corporations because they aren’t able to say what caused the issue, due to federal regulations against fraud. It’s a conundrum.

    • jeroenhd 18 hours ago ago

      Get in, steal a couple hundred grand, get out, do the exact same thing a few months later. Repeat a few times and you can live worry-free until retirement, if you know how to evade the cops.

      Even if you steal other stuff, you're going to need to turn it all into cryptocurrency anyway, and how much is an AWS key really going to bring in?

      There are criminals that focus on extracting passwords and password manager databases as well, though they often also end up going after cryptocurrency websites.

      There are probably criminals out there biding their time, waiting for the perfect moment to strike, silently infiltrating companies through carefully picked dependencies, but those don't get caught as easily as the ones draining cryptocurrency wallets.

      • spir 11 hours ago ago

        Earlier this year, a crypto app web UI attack stole $1.5 billion.

        A couple hundred grand is not what these attackers are after.

      • zingababba 16 hours ago ago
        • sebstefan 4 hours ago ago

          I think at this point just retire from cybercrime

      • dylan604 15 hours ago ago

        > if you know to evade the cops.

        step 1: live in a place where the cops do not police this type of activity

        step 2: $$$$

      • scubbo 16 hours ago ago

        > do the exact same thing a few months later

        > one-in-a-million opportunity

    • WhyNotHugo 18 hours ago ago

      The pushed payload didn't generate any new traffic. It merely replaced the recipient of a crypto transaction with a different account. It would have been really hard to detect. Exfiltrating API keys would have been picked up a lot faster.

      OTOH, this modus operandi is completely inconsistent with the way they published the injected code: by taking over a developer's account. This was going to be noticed quickly.

      If the payload had been injected in a more subtle way, it might have taken a long time to figure out. Especially with all the Levenshtein logic that might convince a victim they'd somehow screwed up.
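
      To illustrate that trick (a paraphrase of the published analyses, not the actual payload): the injected code kept a list of attacker wallets and, when it intercepted a transaction, swapped in whichever one was closest to the legitimate address by edit distance, so a quick glance at the first and last characters still looks right.

        // sketch: pick the attacker address most similar to the real one
        function levenshtein(a, b) {
          const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
          for (let j = 0; j <= b.length; j++) dp[0][j] = j;
          for (let i = 1; i <= a.length; i++)
            for (let j = 1; j <= b.length; j++)
              dp[i][j] = Math.min(
                dp[i - 1][j] + 1,      // deletion
                dp[i][j - 1] + 1,      // insertion
                dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
              );
          return dp[a.length][b.length];
        }

        const pickLookalike = (legit, candidates) =>
          candidates.reduce((best, addr) =>
            levenshtein(addr, legit) < levenshtein(best, legit) ? addr : best);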

      • SchemaLoad 10 hours ago ago

        Not only that, but it picked an address from a list which had similar starting/ending characters so if you only checked part of the wallet address, you'd still get exploited.

    • boznz 16 hours ago ago

      It is not a one-in-a-million opportunity though. I hate to take this to the next level, but as criminal elements wake up to the fact that a few "geeks" can possibly get them access to millions of dollars, expect much worse to come. As a maintainer of any code that could get bad guys access, I would be seriously considering how well my physical identity is hidden online.

      • SchemaLoad 10 hours ago ago

        This is why banks make you approve transactions on your phone now. The fact that a random NPM package can redirect your money is a massive issue

      • pixl97 13 hours ago ago

        As foretold by the prophet

        https://xkcd.com/538/

      • jongjong 12 hours ago ago

        I just made a very similar comment. Spot on. It's laughable to think that this trivial opportunity that literally any developer could pull off with a couple of thousand dollars is a one-in-a-million. North Korea probably has enough money to buy up a significant percentage of all popular npm dependencies and most people would sell willingly and unwittingly.

        In the case of North Korea, it's really crazy because hackers over there can do this legally in their own country, with the support of their government!

        And most popular npm developers are broke.

        • tonyhart7 7 hours ago ago

          Actually, unless you are a billionaire or a high-profile individual, you wouldn't get targeted. Not because they can't, but because it's not worth it.

          Many state-sponsored attacks are well documented in books people can read; the attackers don't want to leave much of a record because it creates buzz.

    • hombre_fatal 18 hours ago ago

      You give an example of an incredibly targeted attack of snooping around manually on someone's machine so you can exfiltrate yet more sensitive information like credit card numbers (how, and then what?)

      But (1) how do you do that with hundreds or thousands of SSH/API keys and (2) how do you actually make money from it?

      So you get a list of SSH or specific API keys and then write a crawler that can hopefully gather more secrets from them, like credit card details (how would that work btw?) and then what, you google "how to sell credentials" and register on some forum to broker a deal like they do in movies?

      Sure sounds a hell of a lot more complicated and precarious than swapping out crypto addresses in flight.

    • balls187 19 hours ago ago

      > You're a criminal with a one-in-a-million opportunity. Wouldn't you invest an extra week pushing a more fleshed-out exploit?

      The plot of Office Space might offer clues.

      Also isn't it crime 101 that greedy criminals are the ones who are more likely to get caught?

    • alexvitkov 18 hours ago ago

      API/SSH keys can easily be swapped; they're more hassle than they're worth. Be glad they didn't choose to spread the payload of one of the 100 ransomware groups with affiliate programs.

    • thewebguyd 19 hours ago ago

      > My work laptop, depending on the period of my life, you could have had access to stuff you wouldn't believe either.

      What gets me is everyone acknowledges this, yet HN is full of comments ripping on IT teams for the restrictions & EDR put in place on dev laptops.

      We on the ops side have known these risks for years, and that knowledge of those risks is what drives organizational security policies and endpoint configuration.

      Security is hard, and it is very inconvenient, but it's increasingly necessary.

      • dghlsakjg 18 hours ago ago

        I think people rip on EDR and security when 1) they haven't had it explained why it does what it does, or 2) it is process for process's sake.

        To wit: I have an open ticket right now from an automated code review tool that flagged a potential vulnerability. I and two other seniors have confirmed that it is a false alarm so I asked for permission to ignore it by clicking the ignore button in a separate security ticket. They asked for more details to be added to the ticket, except I don’t have permissions to view the ticket. I need to submit another ticket to get permission to view the original ticket to confirm that no less than three senior developers have validated this as a false alarm, which is information that is already on another ticket. This non-issue has been going on for months at this point. The ops person who has asked me to provide more info won’t accept a written explanation via Teams, it has to be added to the ticket.

        Stakeholders will quickly treat your entire security system like a waste of time and resources when they can plainly see that many parts of it are a waste of time and resources.

        The objection isn’t against security. It is against security theater.

        • MichaelZuo 17 hours ago ago

          This sounds sensible for the “ops person”?

          It might not be sensible for the organization as a whole, but there’s no way to determine that conclusively, without going over thousands of different possibilities, edge cases, etc.

          • dghlsakjg 17 hours ago ago

            What about this sounds sensible?

            I have already documented, in writing, in multiple places, that the automated software has raised a false alarm, as well as providing a piece of code demonstrating that the alert was wrong. They are asking me to document it in an additional place that I don't have access to, presumably for perceived security reasons? We already accept that my reasoning around the false alarm is valid; they have just buried a simple resolution beneath completely stupid process. You are going to get false alarms; if it takes months to deal with a single one, the alarm system is going to get ignored, or bypassed. I have a variety of conflicting demands on my attention.

            At the same time, when we came under a coordinated DDOS attack from what was likely a political actor, security didn't notice the millions of requests coming from a country that we have never had a single customer in. Our dev team brought it to their attention where they, again, slowed everything down by insisting on taking part in the mitigation, even though they couldn't figure out how to give themselves permission to access basic things like our logging system. We had to devote one of our on calls to walking them through submitting access tickets, a process presumably put in place by a security team.

            I know what good security looks like, and I respect it. Many people have to deal with bad security on a regular basis, and they should not be shamed for correctly pointing out that it is terrible.

            • MichaelZuo 16 hours ago ago

              If you're sufficiently confident there can be no negative consequences whatsoever… then just email that person's superiors and cc your superiors to guarantee in writing you'll take responsibility?

              The ops person obviously can’t do that on your behalf, at least not in any kind of organizational setup I’ve heard of.

              • dghlsakjg 15 hours ago ago

                As the developer in charge of looking at security alerts for this code base, I already am responsible, which is why I submitted the exemption request in the first place. As it is, this alert has been active for months and no one from security has asked about the alert, just about my exemption request, so clearly the actual fix (disregarding it, or code changes) is less important than the process and the alert itself.

                So the solution to an illogical, kafkaesque security process is to bypass the process entirely via authority?

                You are making my argument for me.

                This is exactly why people don’t take security processes seriously, and fight efforts to add more security processes.

                • MichaelZuo 14 hours ago ago

                  So you agree with me the ops person is behaving sensibly given real life constraints?

                  Edit: I didn’t comment on all those other points, so it seems irrelevant to the one question I asked.

                  • dghlsakjg 13 hours ago ago

                    Absolutely not.

                    Ops are the ones who imposed those constraints. You can't impose absurd constraints and then say you are acting reasonably by abiding by your own absurd constraints.

      • the8472 18 hours ago ago

        At least at $employer a good portion of those systems are intended to stop attacks on management and the average office worker. The process is not geared towards securing dev(arbitrary code execution)-ops(infra creds). They're not even handing out hardware security keys for admin accounts. I use my own, some other devs just use TOTP authenticator apps on their private phones.

        All their EDR crud runs on Windows. As a dev I'm allowed to run WSL, but the tools do not reach inside WSL, so if that gets compromised they would be none the wiser.

        There is some instrumentation for linux servers and cloud machines, but that too is full of blind spots.

        And as a sibling comment says, a lot of the policies are executed without anyone being able to explain their purpose, being able to grant "functionally equivalent security" exceptions or them even making sense in certain contexts. It feels like dealing with mindless automatons, even though humans are involved. For example a thing that happened a while ago: We were using scrypt as KDF, but their scanning flagged it as unknown password encryption and insisted that we should use SHA2 as a modern, secure hashing function. Weeks of long email threads, escalation and several managers suggesting "just change it to satisfy them" followed. That's a clear example of mindless rule-following making a system less secure.

        Blocking remote desktop forwarding of security keys also is a fun one.

      • balls187 19 hours ago ago

        Funny, I read that quote, and assumed it meant something unsavory, and not say, root access to an AWS account.

    • paradite 18 hours ago ago

      Because it's North Korea, and cryptocurrency is the best asset they can get for pragmatic reasons.

      For anything else you need a fiat market, which is hard to deal with remotely.

    • pianopatrick 18 hours ago ago

      Seems possible to me that someone has done an attack exactly like you describe and just was never caught.

    • deepanwadhwa 15 hours ago ago

      What makes you so sure that the exploit is over? Maybe they wanted their secondary exploit to get caught to give everyone a sense of security? Their primary exploit might still be lurking somewhere in the code?

      • pixl97 13 hours ago ago

        Well, because it is really easy to diff an npm package.

        The attacker had access to the user's npm repository only.

    • doubleorseven 19 hours ago ago

      I fell for this malware once. Had the malware on my laptop even with mb in the background. I copy-pasted an address and didn't even check it. My bad indeed. Those guys make a lot of money from these "one shot" moments.

    • jmull 16 hours ago ago

      There's nothing wrong with staying focused (on grabbing the money).

      Your ideas are potentially lucrative over time, but up front they create more work and risk for the attacker.

    • 42lux 18 hours ago ago

      As long as we get lucky nothing is going to change.

    • jongjong 13 hours ago ago

      Maybe their goal was just surviving, not getting rich.

      Also, you underestimate how trivial this 'one-in-a-million opportunity' is; it's definitely not one-in-a-million! Almost anybody with basic coding ability and a few thousand dollars could pull off this hack. There are thousands of libraries which are essentially worthless but have millions of downloads, where the author who maintains them is basically broke and barely uses their npm account anymore. Anybody could just buy those npm accounts under false pretenses for a couple of thousand dollars and then do whatever they want with tens of thousands (or even hundreds of thousands) of compromised servers. The library author is legally within their rights to sell their digital assets, and it's not their business what the acquirer does with them.

    • ignoramous 17 hours ago ago

      > find it insane that someone would get access to a package like this, then just push a shitty crypto stealer

      Consumer financial fraud is quite big and relatively harmless. Industrial espionage, OTOH, can potentially put you in the crosshairs of powerful and/or rogue elements, and so only the big actors get involved, but in a targeted way, preferring not to leave much if any trace of compromise.

    • yieldcrv 18 hours ago ago

      yeah a shitty crypto stealer is more lucrative, more quickly monetized, has less OPSEC issues for the thief if done right, easier to launder

      nobody cares about your trade secrets, or some nation's nuclear program, just take the crypto

    • sim7c00 18 hours ago ago

      one in a million opportunity? the guy registered a domain and sent some emails dude. its cheap as hell

      • heywoods 18 hours ago ago

        Maybe one in a million is hyperbolic but that’s sorta the game with these attacks isn’t it? Registering thousands upon thousands of domains + tens of thousands of emails until you catch something from the proverbial pond.

      • k4rnaj1k 18 hours ago ago

        [dead]

  • mlinksva 17 hours ago ago

    As the post mentions wallets like MetaMask being the targets, AFAIK MetaMask in particular might be one of the best protected (isolated) applications from this kind of attack due to their use of LavaMoat https://x.com/MetaMask/status/1965147403713196304 -- though I'd love to read a detailed analysis of whether they actually are protected. No affiliation with MetaMask, just curious about effectiveness of seemingly little adopted measures (relative to scariness of attacks).

    Added: story dedicated to this topic more or less https://news.ycombinator.com/item?id=45179889

  • _fat_santa 20 hours ago ago

    I know this isn't really possible for smaller guys but larger players (like NPM) really should buy up all the TLD versions of "npm" (that is: npm.io, npm.sh, npm.help, etc). One of the reasons this was so effective is that the attacker managed to snap up "npm.help"

    • quectophoton 19 hours ago ago

      Then you have companies like AWS: they were sending invoices from `no-reply-aws@amazon.com`, but last month they changed it to `no-reply@tax-and-invoicing.us-east-1.amazonaws.com`.

      That looks like a phishing attempt from someone using a random EC2 instance or something, but apparently it's legit. I think. Even the "heads-up" email they sent beforehand looked like phishing, so I was waiting for the actual invoice to see if they really started using that address, but even now I'm not opening these attached PDFs.

      These companies tell customers to be suspicious of phishing attempts, and then they pull these stunts.

      • simoncion 19 hours ago ago

        > These companies tell customers to be suspicious of phishing attempts, and then they pull these stunts.

        Yep. At every BigCo I've worked at, nearly all of the emails from Corporate have been indistinguishable from phishing. Sometimes, they're actual spam!

        Do the executives and directors responsible for sending these messages care? No. They never do, and get super defensive and self-righteous when you show them exactly how their precious emails tick every "This message is phishing!" box in the mandatory annual phishing-detection-and-resistance training.

        • cyphar 18 hours ago ago

          A few years ago our annual corporate phishing training was initiated by an email sent from a random address asking us to log in with our internal credentials on a random website.

          A week later some executive pushing the training emailed the entire company saying that it was unacceptable that nobody from engineering had logged into the training site and spun some story about regulatory requirements. After lots of back and forth they still wouldn't accept that it obviously looked like a phishing email.

          Eventually when we actually did the training, it literally told us to check the From address of emails. I sometimes wonder if it was some weird kind of performance art.

          • ornornor 15 hours ago ago

            It’s all just box ticking and CYA compliance.

            “We got pwned but the entire company went through a certified phishing awareness program and we have a DPI firewall. Nothing more we could have done, we’re not liable.”

            • cyphar 5 hours ago ago

              I agree, but I really wonder where on earth they find these people.

              • simoncion 2 hours ago ago

                If you're talking about the companies who provide the "training", either they're the lowest bidder, closely linked to someone who is buddies with someone important in the company [0], or both.

                [0] ...so the payments serve the social function of enriching your buddy and improving your status in the whole favor economy thing...

          • apple1417 13 hours ago ago

            I once got a "log into phishing training" email which spoofed the company address. No one even saw the email, it instantly hit the spam filter.

            Our infra guy then had to argue with them for quite a while to just email from their own domain, and that no, we weren't going to add their cert to our DNS and let a third party spoof us (or however that works, idk). Absolutely shocking lack of self-awareness.

          • lovich 15 hours ago ago

            If Kevin mitnick shows up or is referenced then I’m pretty sure it’s performance art

            • cyphar 5 hours ago ago

              If only, it would've been an honour to get phished by Mitnick. Rest in peace...

        • Macha 14 hours ago ago

          I remember an email I once got.

          Title: "Expense report overdue - Please fill now"

          Body:

          <empty>

          <Link to a document trying its best to look like Google's attachment icon, but actually a hyperlink to a site that asked me to log in with my corporate credentials>

          ---

          So like, obviously this is a stupid phishing email, right? Especially as at this time, I had not used my corporate card.

          A few weeks later I got the finance team reaching out threatening to cancel my corporate card because I had charges on it with no corresponding expense report filed.

          So on checking the charge history for the corporate card, it was the annual tax payment that all cards are charged in my country every year, and finance should have been well aware of. Of course, then the expense system initially rejected my report because I couldn't provide a receipt, as the card provider automatically deducts this charge with no manual action on the card owner's side...

        • mhh__ 13 hours ago ago

          Yielding to anything you say is a no-no because part of the deal is that you, as a geek, must bend over to their unilateral veto over everything in the company

      • charlieyu1 17 hours ago ago

        I thought facebookmail.com was fake. No, it is actually legit

        • jowea 13 hours ago ago

          Is that for user email? I think that is semi-understandable as Facebook wouldn't want to mix their authority with that of the users, like github.com vs github.io.

          Edit: nvm it seems it's not the case

    • VectorLock 19 hours ago ago

      There are like 1,500 TLDs; some of them are restricted or country-code TLDs, but it makes me wonder how much it would actually cost per year to maintain registration of every non-restricted TLD. I'm sure there's some SaaS company that'll do it.

      • saghm 19 hours ago ago

        OTOH, doesn't ICANN already sometimes restrict who has access to a given TLD? Would it really be that crazy for them to say "maybe we shouldn't let registrars sell npm.<TLD> regardless of the TLD", and likewise for a couple dozen of the most obvious targets (google., amazon., etc.)? No one needs to pay for these domains if no one is selling them in the first place. I don't love the idea of special treatment for giant companies in terms of domains, but we're already kind of there with the whole process they did when initially allowing companies to compete for exclusive access to TLDs, so we might as well use that process for something actually useful (unlike, say, letting companies apply for exclusive ownership of ".music" and have a whole legal process to determine that maybe that isn't actually beneficial for the internet as whole: https://en.wikipedia.org/wiki/.music)

        • VectorLock 19 hours ago ago

          The TLDs run the whole gamut from completely open to almost impossible to get.

        • ohdeargodno 18 hours ago ago

          >maybe we shouldn't let registrars sell npm.<TLD> regardless of the TLD

          Cool, get big enough, become friends with the right people, and you can squat an entire name on the internet. What, you're the Nepalese Party for Marxists, you've existed for 70 years and you want to buy npm.np? Nope, tough luck, some random dude pushes shitty JavaScript packages over there. Sorry for the existing npm.org address too, we're going to expropriate the National Association of Pastoral Musicians. Dare I remind you that the whole left-pad situation was because Kik, the company, stole (with NPM's assistance, because they were big enough and friends with the right people) the kik package?

          At least they're paying dozens of millions to buy a shitty-ass .google that no one cares about, because more and more browsers are hiding the URL bar. I'm glad ICANN can use it to buy drinks and hookers instead of being useful.

    • osmsucks 15 hours ago ago

      There are way too many TLDs for this to be even practical: https://data.iana.org/TLD/tlds-alpha-by-domain.txt

      I agree that especially larger players should be proactive and register all similar-sounding TLDs to mitigate such phishing attacks, but they can't be outright prevented this way.

    • jacobsenscott 17 hours ago ago

      This won't work: npm.*, npmjs.*, npmjs-help.*, npm-help.*, node.*, js.*, npmpackage.*. The list is endless.

      You can't protect against people clicking links in emails in this way. You might say `npmjs-help.ph` is a phishy domain, but npmjs.help is a phishy domain and people clicked it anyway.

      • eddythompson80 16 hours ago ago

        There is also the more recent style of phishing domains that look like healthcare.gov-profile.co/user.

    • karmakaze 13 hours ago ago

      First thing I do is check any domain that I don't recognize as official.

        Domain: NPMJS.HELP (85 similar domains)
        Registrar: Porkbun, LLC (4.84 million domains)
        Query Time: 8 Sep 2025 - 4:14 PM UTC  [1 DAY BACK] [REFRESH]
      
        Registered: 5th September 2025  [4 days back]
        Expiry: 5th September 2026  [11 months, 25 days left]
      
      I'd be suspicious of anything registered with Porkbun, a discount registrar. Registered 4 days ago means it's fake.

      > It sets a deadline a few days in the future. This creates a sense of urgency, and when you combine urgency with being rushed by life, you are much more likely to fall for the phishing link.

      Any time I feel like I'm being rushed, I check deeper. It would help if everyone's official communications only came from the most well known domain (or subdomain).

      • galaxy_gas 8 hours ago ago

        While the rest is reasonable, Porkbun is not a "discount" registrar. They are often more expensive, and in addition to that, they run quite a number of TLDs.

    • IncreasePosts 19 hours ago ago

      That seems like a bad idea compared to just having a canonical domain - people might become used to seeing "npm.<whatever>" and assuming it is legit. And then all it takes is one new TLD where NPM is a little late registering for someone to do something nefarious with the domain.

      • macintux 19 hours ago ago

        Just because you buy them doesn't mean that you have to use them. Squatting on them is no more harmful (except financially) than leaving them available for potentially hostile 3rd parties.

        • IncreasePosts 18 hours ago ago

          Sure, I guess buying up every npm.* you can find and then having a message "never use this, only use npm.com" could work. I thought OP was saying have every npm.* site be a mirror of the canonical site

          • barnas2 18 hours ago ago

            Looks like it costs ~$200,000 to get your own TLD. If a bunch of companies started doing the "register every TLD of our brand", I wonder what the breakeven point would be where just registering a TLD is profitable.

    • ozim 17 hours ago ago

      That’s like insane proportion.

    • joe_the_user 9 hours ago ago

      I don't think that particular measure would help, but NPM are the people who brought us the left-pad crisis, and their Wikipedia page has a long string of security failures mentioned on it. Given this, it seems likely their attitude is "we don't care, we don't have to", and their relative success as the world's largest package manager seems to echo that (not that I have any idea whether they make any money).

    • croemer 17 hours ago ago

      npmjs.help not npm.help - the typo is also in the article.

  • karel-3d 17 hours ago ago

    "there is no way to prevent this", says the only ecosystem where this regularly happens

    • dzogchen 13 hours ago ago

      Exactly! Extremely lazy conclusion.

  • Havoc 20 hours ago ago

    Really feels like these big open packages repos need a better security solution. Or at least a core subset of carefully vetted ones.

    Same issue with python, rust etc. It’s all very trust driven

    • cgh 19 hours ago ago

      Is the fundamental problem with npm still a lack of enforced namespacing?

      In the Java world, I know there’s been griping from mostly juniors re “why isn’t Maven easy like npm?” (I work with some of these people). I point them to this article: https://www.sonatype.com/blog/why-namespacing-matters-in-pub...

      Maven got a lot of things right back in the day. Yes POM files are in xml and we all know xml sucks etc, but aside from that the stodgy focus on robustness and carefully considered change gets more impressive all the time.

      • hyperpape 19 hours ago ago

        Nothing about this attack would be solved by namespacing, but it might have been solved by maven's use of GPG keys.

        • zenmac 16 hours ago ago

          Isn't it time NPM started using that? Why has this taken so long?

    • lpln3452 19 hours ago ago

      In a case like this, the package maintainer's account itself has been hacked, so I'm not sure if that would be meaningful.

      The only solution would be to prevent all releases from being applied immediately.

      • dherls 19 hours ago ago

        A solution could be enforcing hardware keys for 2FA for all maintainers if a package has more than XX thousand weekly downloads.

        No hardware keys, no new releases.

        • ozim 14 hours ago ago

          Passkeys - no need for hardware key.

          They have it implemented.

          I created an NPM account today and added a passkey from my laptop and a hardware key as secondary. As I have it configured, it asked me for it while publishing my test package.

          So the guy either had TOTP or just the pw.

          Seems like it should be easy to implement enforcement.

        • winkelmann 11 hours ago ago

          Crucially, it would have to be set up so they need to use the hardware key when pushing any changes. Just requiring a hardware key as a login method does nothing to protect against token stealing, which I believe is the most common form of supply chain attack right now.

      • dsff3f3f3f 19 hours ago ago

        There needs to be a massive push from the larger important packages to eliminate these idiotic transitive dependencies. Core infrastructure shouldn't rely on trivial packages maintained by a single random person from who knows where that can push updates without review. It's absolutely insane.

    • ozim 15 hours ago ago

      Linux distribution packages are also very trust driven — but you have to earn trust to publish. Then there is a whole system to verify trust. NPM is more like „everything goes”.

      • euLh7SM5HDFY 4 hours ago ago

        The sheer volume is the issue. The recent XZ backdoor shows it can happen to anyone. I am pretty sure JS has the most packages, updates and contributors - and that makes it the best ecosystem to target. The anemic standard library doesn't help of course, but 2FA and package signing are needed for all package repositories, here and now.

      • johnny22 14 hours ago ago

        It wouldn't have solved this, because this publisher would have been trusted.

  • Zak 17 hours ago ago

    > If you were targeted with such a phishing attack, you'd fall for it too and it's a matter of when not if. Anyone who claims they wouldn't is wrong.

    I like to think I wouldn't. I don't put credentials into links from emails that I didn't trigger right then (e.g. password reset emails). That's a security skill everyone should be practicing in 2025.

    • chrismorgan 9 hours ago ago

      Yeah, I feel that bit is just wrong, in three ways for me:

      1. Like you, I never put credentials into links from emails that I didn’t trigger/wasn’t expecting. This is a generally-sensible practise.

      2. Updating 2FA credentials is nonsense. I don’t expect everyone to know this, this is the weakest of the three.

      3. If my credentials don’t autofill due to origin mismatch, I am not filling it manually. Ever. I would instead, if I thought it genuine, go to their actual site and log in there, and then see nothing about what the phish claimed. I’ve heard people talking about companies using multiple origins for their login forms and how having to deal with that undermines this aspect, but for myself I don’t believe I’ve ever seen that, not even once. It’s definitely not common, and origin-locked second factors should make that practice disappear altogether.

      Now these three are not of equal strength. The second requires specific knowledge, and a phish could conceivably use something similar that isn’t such nonsense anyway. The first is a best practice that seems to require some discipline, so although everyone should do it, it is unfortunately not the strongest. But the third? When you’re using a password manager with autofill, that one should be absolutely robust. It protects you! You have to go out of your way to get phished!

    • gcau 15 hours ago ago

      "'such' a phishing attack" makes it sound like a sophisticated, indepth attack, when in reality it's a developer yet again falling for a phishing email that even Sally from finance wouldn't fall for, and although anyone can make mistakes, there is such a thing as negligent, amateur mistakes. It's astonishing to me.

      • greycol 14 hours ago ago

        Every time I bite my tongue (literally, not figuratively) it's also astonishing to me. The last time I did was probably 3 years ago, and it was probably 10 years earlier for the time before that. Would it be fair to call me a negligent eater? Have you ever been walking and tripped over nothing? Humans are fallible, and unless you are in an environment where the productivity loss of a rigorous checklist-and-routine system makes sense, these mistakes happen.

        It would be just as easy to argue that anyone who uses software and hasn't confirmed that its security certifications include whatever processes you imagine avoid the 'human makes one mistake and continues with their normal workflow' error, or that hold updates until evaluated, is negligent.

        • gcau 13 hours ago ago

          Humans are imperfect and anyone can make mistakes, yes. I would argue there's different categories of mistakes though, in terms of potential outcomes and how preventable they are. A maintainer with potentially millions of users falling for a simple phishing email is both preventable and has a very bad potential outcome. I think all parties involved could have done better (the maintainer/npm/the email client/etc) to prevent this.

      • jowea 13 hours ago ago

        I feel that most everyone has some 0.0001% chance of falling for a stupid trick. And at scale, a tiny chance means someone will fall for it.

        • foxglacier 10 hours ago ago

          That's true but it's like saying most everyone has a small chance of crashing their car. Yet when someone crashes their car because they were texting while driving, speeding, or drunk, we justifiably blame them for it instead of calling them unlucky. We can blame them because there are clear rules they are supposed to know for safety when driving, just as there are for electronic security. The rule for avoiding phishing is called "hang up, look up, call back".

          • jowea 2 hours ago ago

            Yeah but society doesn't act as if it's an unthinkable event we never planned for when a car crash happens. Blame someone or don't, but there are going to be emergency responders used to dealing with car crashes coming, because we know that car crashes happen (a lot) and we need to be ready for it.

    • foxglacier 10 hours ago ago

      Yes, that was a bit defeatist about phishing and tolerant of poor security. Anyone employing the "hang up, look up, call back" technique would be safe. It sounds like the author doesn't even know that technique and avoids phishing by using intuition.

      I've had emails like that from various places, probably legitimate, but I absolutely never click the bloody link from an email and enter my credentials into it! That's internet safety 101.

  • benreesman 14 hours ago ago

    Now imagine if someone combined Jia Tan patience with swiss-cheese security like all of our editor plugins and nifty shell user land stuff and all that.

    Developer stuff is arguably the least scrutinized thing that routinely runs as mega root.

    I wish I could say that I audit every elisp, neovim, vscode plugin and every nifty modern replacement for some creaky GNU userland tool. But bat, zoxide, fzf, atuin, starship, viddy, and about 100 more? Nah, I get them from nixpkgs in the best case, and I've piped things to sh.

    Write a better VSCode plugin for some terminal panel LLM gizmo, wait a year or two?

    gg

    • jowea 13 hours ago ago

      Someday, someone, hopefully, will fix xkcd 1200.

  • YeGoblynQueenne 2 hours ago ago

    >> It sets a deadline a few days in the future. This creates a sense of urgency, and when you combine urgency with being rushed by life, you are much more likely to fall for the phishing link.

    Procrastination is a security strategy.

    • kitd an hour ago ago

      When we do the phishing awareness training at $WORK, we are told any sense of urgency is suspicious, especially from an established org. Most would give you at least a month before something as draconian as locking your account.

  • duxup 21 hours ago ago

    Is it possible to do the thing proposed in the email without clicking the link?

    I just try to avoid clicking links in emails generally...

    • loloquwowndueo 20 hours ago ago

      Should be - open another browser window, manually log into npm or whatever, and update your 2FA there.

      Definitely good practice.

      • Dilettante_ 20 hours ago ago

        This is the Way. To minimize attack surface, the senders of authentic messages should straight-up avoid putting links to "do the thing" in the message. Just tell the user to update their credentials via the website.

        • viraptor 19 hours ago ago

          That's what the Australian Tax Office does. Just a plaintext message that's effectively "you've got a new message. Go to the website to read it."

          • duxup 18 hours ago ago

            All my medical places I use do that, with the note that you can also use their app. Good system.

            • foxglacier 10 hours ago ago

              Unfortunately, my doctor's office texts me their bank account number saying "please pay $75 to this account". I told them that's putting people at risk of phishing but they didn't care.

          • amysox 18 hours ago ago

            My doctor's office does the same thing. So do some financial services companies.

        • Roguelazer 15 hours ago ago

          For most users, that'll just result in them going to Google, searching for the name of your business, and then clicking the first link blindly. At that point you're trusting that there's no malicious actors squatting on your business name's keyword -- and if you're at all an interesting target, there's definitely malvertising targeting you.

          The only real solution is to have domain-bound identities like passkeys.

      • hu3 20 hours ago ago

        That's what I always do. Never click these kinds of links in e-mail.

        Always manually open the website.

        This week Oracle Cloud started enforcing 2FA. And surely I didn't click their e-mail link to do that.

      • ares623 12 hours ago ago

        But won’t someone think of the friction? /s

        My theory is that if companies start using that workflow in the future, it’ll become even _easier_ for users to click a random link, because they’d go “wow! That’s so convenient now!”

    • 0cf8612b2e1e 17 hours ago ago

      The Microsoft ecosystem certainly makes this challenging. At work, I get links to Sharepoint hosted things with infinitely long hexadecimal addresses. Otherwise finding resources on Sharepoint is impossible.

    • JohnFen 20 hours ago ago

      > I just try to avoid clicking links in emails generally...

      I don't just generally try, I _never_ click links in emails from companies, period. It's too dangerous and not actually necessary. If a friend sends me a link, I'll confirm it with them directly before using it.

  • sega_sai 18 hours ago ago

    It seems to me that having an email client that simply disables all the links in the email is probably a good idea. Or maybe, there should be explicit white-listing of domains that are allowed to be hyperlinks.

    • SahAssar 17 hours ago ago

      And who would control that whitelist? How would it be any different than the domain system or PKI CA system we have now?

      Do you think there would be the time to properly review applications to get on the whitelist?

      • 0xDEAFBEAD 8 hours ago ago

        Presumably Gmail already has anti-spam features which trigger based on domain name etc.

        They could add anti-phish features which force confirmation before clicking a link to an uncommon domain. Startups could pay a nominal fee to get their domain reviewed and whitelisted.

      • sega_sai 13 hours ago ago

        A user for example. By default nothing would be in the whitelist. Then you would add things to the whitelist manually. Since it's not that frequent this needs to be done, that probably would be a useful extra step to stop phishing.

      • toast0 14 hours ago ago

        In a world where those sending email were consistent, the user could control the whitelist. 'This link is from a domain you've clicked through X times, do you want to click through? Yes / Yes and don't ask again'

        If it's new, you should be more cautious. Except even those companies that should know better need you to link through 7 levels of redirect tracking, and they're always using a new one.

    • 2OEH8eoCRo0 18 hours ago ago

      I've always thought it's insane that anyone on the planet with a connection can drop a clickable link in front of you. Clickable links in email should be considered harmful. Force the user to copy/paste

      URLs are also getting too damn long

      • falcor84 18 hours ago ago

        How would copy-pasting help in this scenario?

  • mdavid626 6 hours ago ago

    How would any normal person know that npmjs.help is phishing, but npmjs.com is valid?

    • DecoySalamander 2 hours ago ago

      It wasn't a "normal person" it was a developer that put this into a README of his package

      > But beyond the technical aspects, there's something more critical: trust and long-term maintenance. I have been active in open source for over a decade, and I'm committed to keeping Chalk maintained. Smaller packages might seem appealing now, but there's no guarantee they will be around for the long term, or that they won't become malicious over time.

      I expect him to know better.

      • mdavid626 an hour ago ago

        Does this mean you verify EVERY domain you use? How to even do that?

        Shouldn’t this be solved some other ways?

        • DecoySalamander 2 minutes ago ago

          I do it by reading the domain name and comparing it to what I expect it to be. It's not hard, and when in doubt I can easily check WHOIS info or search online for references.

          This is also easily avoidable by using a password manager, which will not autofill credentials on a page with the wrong domain.

    • creesch 6 hours ago ago

      To state the obvious, one ends with "help", the other with "com". It effectively is phishing awareness 101 that domains need to match.

      You still don't know then of course. When in doubt you shouldn't do the action that is asked through clicking on links in the mail. Instead go to the domain you know to be legit and execute the action there.

      Having said all that, even the most aware people are only human. So it is always possible to overlook a detail like that.

      • giveita 5 hours ago ago

        Corollary: don't click on any email links. (Most use some dumb domain name that could be phishing.)

  • nedt 3 hours ago ago

    We haven't been saved by procrastination. We literally were saying "oh that's a new version, we are always behind anyway". Of course everything was still checked, but actually having the latest version on packages is almost never needed and we rather update when we have to (because version is old) instead of when there is a new version. Nothing new is that awesome.

  • dang 14 hours ago ago

    Related. Others?

    DuckDB NPM packages 1.3.3 and 1.29.2 compromised with malware - https://news.ycombinator.com/item?id=45179939 - Sept 2025 (209 comments)

    NPM debug and chalk packages compromised - https://news.ycombinator.com/item?id=45169657 - Sept 2025 (719 comments)

  • bob1029 18 hours ago ago

    "Batteries included" ecosystems are the ultimate defense against the dark arts. Your F100 first party vendor might get it wrong every now and then, but they have so much more to lose than a random 3rd party asshole who decides to deploy malicious packages.

    The worst thing I can recall from the enterprisey ecosystems is the log4j exploit, which was easily one of the most attended to security problems I am aware of. Every single beacon was lit for that one. It seems like when an NPM package goes bad, it can take a really long time before someone starts to smell it.

    • ameliaquining 17 hours ago ago

      Log4Shell didn't light up all the beacons because Java is "enterprisey", it was because it was probably the worst security vulnerability in history; not only was the package extremely widely used, the vulnerability existed for nearly a decade and was straightforwardly wormable, so basically everybody running Java code anywhere had to make sure to update and check that they hadn't been compromised. Which is just a big project requiring an all-out response, since it's hard to know where you might have something running. By contrast, this set of backdoors only existed for a few hours, and the scope of the vulnerability is well-understood, so most developers can be pretty sure they weren't impacted and will have quite reasonably forgotten about it by next week. It's getting attention because it's a cautionary tale, not because it's causing a substantial amount of real damage.

      I do think it's worth reducing the number of points of failure in an ecosystem, but relying entirely on a single library that's at risk of stagnating due to eternal backcompat obligations is not the way; see the standard complaints about Python's "dead batteries". The Debian or Stackage model seems like it could be a good one to follow, assuming the existence of funding to do it.

    • SahAssar 17 hours ago ago

      Heartbleed? Solarwinds? Spectre/Meltdown? Stuxnet? Eternal Blue? CVE-2008-0166 (debian predictable private keys)?

    • dghlsakjg 18 hours ago ago

      Solarwinds?

  • leviathant 10 hours ago ago

    I don't know what series of events transpired that resulted in common, slightly irregular use of the word "kindly" by scammers, but I'm glad it happened. Immediate red flag, every time.

  • dsff3f3f3f 20 hours ago ago

    > These kinds of dependencies are everywhere and nobody would even think that they could be harmful.

    Tons of people think these kind of micro dependencies are harmful and many of them have been saying it for years.

    • Groxx 19 hours ago ago

      I'm rather convinced that the next major language-feature wave will be permissions for libraries. It's painfully clear that we're well past the point where it's needed.

      I don't think it'll make things perfect, not by a long shot. But it can make the exploits a lot harder to pull off.

      • gmueckl 19 hours ago ago

        Java went down that road with the applet sandboxing. They thought that this would go well because the JVM can be a perfect gatekeeper on the code that gets to run and can see and stop all calls to forbidden methods.

        It didn't go well. The JVM did its part well, but they couldn't harden the library APIs. They ended up playing whack-a-mole with a steady stream of library bugs in privileged parts of the system libraries that allowed for sandbox escapes.

        • cjalmeida 18 hours ago ago

          It was too complex. Just requiring libraries to be whitelisted before they can make system calls goes a long way toward preventing a whole class of exploits.

          There’s no reason a color parser, or a date library should require network or file system access.

          • 0xDEAFBEAD 8 hours ago ago

            The simplest approach to whitelisting libraries won't work, since the malicious color parser can just call the whitelisted library.

            A different idea: Special stack frames such that while that frame is on the stack, certain syscalls are prohibited. These "sandbox frames" could be enabled by default for most library calls, or even used by developers to handle untrusted user input.
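
            To make the idea a bit more concrete, here is a rough userland sketch in plain JavaScript (the helper and the names in the usage line are hypothetical, and it is leaky: a library that captured `fetch` at import time, or any async code running concurrently, would bypass it), but it shows the shape of "deny this capability while this frame is on the stack":

                function withNetworkDenied(fn) {
                  const realFetch = globalThis.fetch;
                  // While fn is on the stack, any attempt to use fetch throws.
                  globalThis.fetch = () => {
                    throw new Error("network access denied inside sandbox frame");
                  };
                  try {
                    return fn(); // synchronous callers only; awaiting here would leak the override
                  } finally {
                    globalThis.fetch = realFetch;
                  }
                }

                // e.g. let a (hypothetical) color parser run with no way to phone home
                const parsed = withNetworkDenied(() => parseHexColor(untrustedInput));

            Doing this robustly would need language or runtime support, which is exactly the point of the proposal.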

        • mike_hearn 18 hours ago ago

          Yes, but that was with a very ambitious sandbox that included full GUI access. Sandboxing a pure data transformation utility like something that strips ANSI escape codes would have been much easier for it.

      • crazygringo 19 hours ago ago

        Totally agreed, and I'm surprised this idea hasn't become more mainstream yet.

        If a package wants to access the filesystem, shell, OS API's, sockets, etc., those should be permissions you have to explicitly grant in your code.

        • crdrost 18 hours ago ago

          This was one of Doug Crockford's big bugaboos since The Good Parts and JSLint and Yahoo days—the idea that lexical scope aka closures give you an unprecedented ability to actually control I/O because you can say

              function main(io) {
                  const result = somethingThatRequiresHttp(io.fetch);
                  // ...
              }
          
          and as long as you don't put I/O in global scope (i.e. window.fetch) but do an injection into the main entrypoint, that entrypoint gets to control what everyone else can do. I could for example do

            function main(io) {
              const result = something(readonlyFetch(onlyOurAPI(io.fetch)));
            }
            function onlyOurAPI(fetch) {
              return (...args) => {
                // refuse any request that isn't aimed at our own API origin
                const test = /^https:\/\/api\.mydomain\.example\//.exec(args[0]);
                if (test == null) {
                  throw new Error("must only communicate with our API");
                }
                return fetch(...args);
              }
            }
            function readonlyFetch(fetch) { /* similar but allowlist only GET/HEAD methods */ }
          
          I vaguely remember him being really passionate about "JavaScript lets you do this, we should all program in JavaScript" at the time... these days he's much more likely to say "JavaScript doesn't have any way to force you to do this and close off all the exploits from the now-leaked global scope, we should never program in JavaScript."

          Shoutout to Ryan Dahl and Deno, where you write `#!/usr/bin/env -S deno run --allow-net=api.mydomain.example` at the start of your script to accomplish something similar.

          In my amateur programming-conlang hobby that will probably never produce anything joyful to anyone other than me, one of those programming languages has a notion of sending messages to "message-spaces" and I shamelessly steal Doug's idea -- message-spaces have handles that you can use to communicate with them, your I/O is a message sent to your main m-space containing a bunch of handles, you can then pattern-match on that message and make a new handle for a new m-space, provisioned with a pattern-matcher that only listens for, say, HTTP GET/HEAD events directed at the API, and forwards only those to the I/O handle. So then when I give this new handle to someone, they have no way of knowing that it's not fully I/O capable, requests they make to the not-API just sit there blackholed until you get an alert "there are too many unread messages in this m-space" and peek in to see why.

        • mike_hearn 18 hours ago ago

          It's harder than it looks. I wrote an essay exploring why here:

          https://blog.plan99.net/why-not-capability-languages-a8e6cbd...

          • crazygringo 16 hours ago ago

            Thanks, it's great to see all the issues you raise.

            On the other hand, it seems about as hard as I was imagining. I take for granted that it has to be a new language -- you obviously can't add it on top of Python, for example. And obviously it isn't compatible with things like global monkeypatching.

            But if a language's built-in functions are built around the idea from the ground up, it seems entirely feasible. Particularly if you make the limits entirely around permissions around data communication -- with disk, sockets, APIs, hardware like webcams and microphones, and "god" permissions like shell or exec commands -- and not about trying to merely constrain resource usage around things like CPU, memory, etc.

            If a package is blowing up your memory or CPU, you'll catch it quickly and usually the worst it can do is make your service unavailable. The risk to focus on should be exclusively data access+exfiltration and external data modification, as far as I can tell. A package shouldn't be able to wipe your user folder or post program data to a URL at all unless you give it permission. Which means no filesystem or network calls, no shell access, no linked programs in other languages, etc.

          • Groxx 17 hours ago ago

            tbh none of that sounds particularly bad, nor do I think capabilities are necessary (but obviously useful).

            we could literally just take Go and categorize on "imports risky package" and we'd have a better situation than we have now, and it would encourage library design that isolates those risky accesses so people don't worry about them being used. even that much should have been table stakes over a decade ago.

            and like:

            >No language has such an object or such interfaces in its standard library, and in fact “god objects” are viewed as violating good object oriented design.

            sure they do. that's dependency injection, and you'd probably delegate it to a dependency injector (your god object) that resolves permissions. plus go already has an object for it that's passed almost everywhere: context.

            perfect isn't necessary. what we have now very nearly everywhere is the most extreme example of "yolo", almost anything would be an improvement.

            • mike_hearn 16 hours ago ago

              Yes, dependency injection can help although injectors don't have any understanding of whether an object really needs a dependency. But that's not a god object in the sense it's normally meant. For one, it's injecting different objects :)

              • Groxx 11 hours ago ago

                to be clear, I mean that the DI container/whatever is "the god object" - it holds essentially every dependency and every piece of your own code, knows how to construct every single one, and knows what everything needs. it's the biggest and most complicatedly-intertwined thing in pretty much any application, and it works so well that people forget it exists or how it works, and carrying permission-objects through that on a library level would be literally trivial because all of them already do everything needed.

                hence: doesn't sound too bad

                "truly needs": currently, yes. but that seems like a fairly easy thing to address with library packaging systems and a language that supports that. static analysis and language design to support it can cover a lot (e.g. go is limited enough that you can handle some just from scanning imports), and "you can ask for something you don't use, it just means people are less likely to use your library" for the exceptions is hardly a problem compared to our current "you already have every permission and nobody knows it".

                • mike_hearn an hour ago ago

                  Yes, I do agree that integration with DI is one way to make progress on this problem that hasn't been tried before.

          • ryukafalz 16 hours ago ago

            Thanks, this was a good overview of some of the challenges involved with designing a capability language.

            I think I need to read up more on how to deal with (avoiding) changes to your public APIs when doing dependency injection, because that seems like basically what you're doing in a capability-based module system. I feel like there has to be some way to make such a system more ergonomic and make the common case of e.g. "I just want to give this thing the ability to make any HTTP request" easy, while still allowing for flexibility if you want to lock that down more.

            • mike_hearn an hour ago ago

              In Java DI you can add dependencies without changing your public API using field injection. But really there needs to be a language with integrated DI. A lot of the pain of using DI comes from the way it's been strapped on the side.

        • int_19h 13 hours ago ago

          This exact idea has already been mainstream. Both Java and .NET used to have mechanisms like that, e.g.: https://en.wikipedia.org/wiki/Code_Access_Security

      • bunderbunder 19 hours ago ago

        Alternatively, I've long been wondering if automatic package management may have been a mistake. Its primary purpose seems to be to enable this kind of proliferation of micro-dependencies by effectively sweeping the management of these sprawling dependency graphs under the carpet. But the upshot of that is, most changes to your dependency graph, and by extension your primary vector for supply chain attacks, becomes something you're no longer really looking at.

        Versus, when I've worked at places that eschew automatic dependency management, yes, there is some extra work associated with manually managing them. But it's honestly not that much. And in some ways it becomes a boon for maintainability because it encourages keeping your dependency graph pruned. That, in turn, reduces exposure to third-party software vulnerabilities and toil associated with responding to them.

        • JoshTriplett 18 hours ago ago

          Manual dependency management without a package manager does not lead people to do more auditing.

          And at least with a standardized package manager, the packages are in a standard format that makes them easier to analyze, audit, etc.

          • Groxx 18 hours ago ago

            yea, just look at the state of many C projects. it's rather clearly worse in practice in aggregate.

            should it be higher friction than npm? probably yes. a permissions system would inherently add a bit (leftpad includes 27 libraries which require permissions "internet" and "sudo", add? [y/N]) which would help a bit I think.

            but I'm personally more optimistic about structured code and review signing, e.g. like cargo-crev: https://web.crev.dev/rust-reviews/ . there could be a market around "X group reviewed it and said it's fine", instead of the absolute chaos we have now outside of conservative linux distro packagers. there's practically no sharing of "lgtm" / "omfg no" knowledge at the moment, everyone has to do it themselves all the time and not miss anything or suffer the pain, and/or hope they can get the package manager hosts' attention fast enough.

            • bunderbunder 17 hours ago ago

              C has a lot of characteristics beyond simple lack of a standard automatic package manager that complicate the situation.

              The more interesting comparison to me is, for example, my experience on C# projects that do and do not use NuGet. Or even the overall C# ecosystem before and after NuGet got popular. Because then you're getting closer to just comparing life with and without a package manager, without all the extra confounding variables from differing language capabilities, business domains, development cultures, etc.

              • Groxx 17 hours ago ago

                when I was doing C# pre-nuget we had an utterly absurd amount of libraries that nobody had checked and nobody ever upgraded. so... yeah I think it applies there too, at least from my experience.

                I do agree that C is an especially-bad case for additional reasons though, yeah.

                • bunderbunder 15 hours ago ago

                  Gotcha. When I was, we actively curated our dependencies and maintaining them was a regularly scheduled task that one team member in particular was in charge of making sure got done.

                  • Groxx 11 hours ago ago

                    most teams I've been around have zero or one person who handles that (because they're passionate) (this is usually me) - tbh I think that's probably the majority case.

                    exceptions totally exist, I've seen them too. I just don't think they're enough to move the median away from "total chaotic garbage" regardless of the system

          • mikestorrent 18 hours ago ago

            Well, consider that a lot of these functions that were exploited are simple things. We use a library to spare ourselves the drudgery of rewriting them, but now that we have AI, what's it to me if I end up with my own string-colouring functions for output in some file under my own control, vs. bringing in an external dependency that puts me on a permanent upgrade treadmill and opens up the risk of supply chain attacks?

            Leftpad as a library? Let it all burn down; but then, it's Javascript, it's always been on fire.

            • JoshTriplett 5 hours ago ago

              > but now that we have AI, what's it to me if I end up with my own string-colouring functions for output in some file under my own control

              Before AI code generation, we would have called that copy-and-paste, and a code smell compared to proper reuse of a library. It's not any better with AI. That's still code you'd have to maintain, and debug. And duplicated effort from all the other code doing the same thing, and not de-duplicated across the numerous libraries in a dependency tree or on a system, and not benefiting from multiple people collaborating on a common API, and not benefiting from skill transfer across projects...

        • ryandrake 18 hours ago ago

          Unpopular opinion these days, but: It should be painful to pull in a dependency. It should require work. It should require scrutiny, and deep understanding of the code you're pulling in. Adding a dependency is such an important decision that can have far reaching effects over your code: performance, security, privacy, quality/defects. You shouldn't be able to casually do it with a single command line.

          • skydhash 18 hours ago ago

            I wouldn’t go so far as to make it painful. The main issue is transitive dependencies. The tree can be several layers deep.

            In the C world, anything that is not direct is often a very stable library and can be brought in as a peer dep. Breaking changes happen less and you can resolve the tree manually.

            In NPM, there are so many little packages that even renowned packages choose to rely on them for no obvious reason. It’s a severe lack of discipline.

          • heisenbit 18 hours ago ago

            For better or worse it is often less work to create a dependency than to maintain it over its lifetime. Improvements in maintenance also ease creation of new dependencies.

      • mbrevda1 19 hours ago ago

        yup, here is node's docs for it (WIP): https://nodejs.org/api/permissions.html
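
        As a rough illustration (the flag spelling varies between Node versions and the model is still experimental, so treat this as a sketch of the linked docs rather than a recipe):

            # run a script under Node's permission model: only reads under /app/config
            # are allowed; everything else is denied by default
            node --experimental-permission --allow-fs-read=/app/config index.js

            # a dependency that tries fs.writeFileSync() or child_process.spawn()
            # now fails with an ERR_ACCESS_DENIED error instead of silently succeeding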

    • SebastianKra 19 hours ago ago

      Yeah, there's an entire community dedicated to cleaning up the js ecosystem.

      https://e18e.dev/

      Micro-dependencies are not the only thing that went wrong here, but hopefully this is a wakeup call to do some cleaning.

      • skydhash 18 hours ago ago

        Discord server? Is it that much work to create a forum or a mailing list with anonymous access? Especially with a community you can vet that easily?

    • stickfigure 19 hours ago ago

      It wouldn't be a problem if there wasn't a culture of "just upgrade everything all the time" in the javascript ecosystem. We generally don't have this problem with Java libraries, because people pick versions and don't upgrade unless there's good reason.

      • ilvez 18 hours ago ago

        From a maintenance perspective, both never and always seem like extremes though.

        Upgrading only when you've fallen off the train is a serious drawback to moving fast.

        • 0xDEAFBEAD 8 hours ago ago

          Maybe we need two upgrade paths: An expedited auto-upgrade path which requires multi-key signoff from various trusted developers, and a standard upgrade path which is low-pressure.

      • jcelerier 17 hours ago ago

        and then you get Log4Shell

    • anonzzzies 20 hours ago ago

      Yes. It is a bit painful that this is not obvious by now. But I do, every code review, whine about people who just include trivial, outdated, one-function npms :(

    • balder1991 19 hours ago ago

      Working for a bank did make me think much more about all the vulnerabilities that can go into certain tools. The company has a lot of bureaucracy to prevent installing anything or adding external dependencies.

      • benoau 19 hours ago ago

        Working for a fintech and being responsible for the software made me very wary of dependencies and weeding out the deprecated and EOL'd stuff that had somehow already found its way into what was a young project when I joined. Left unrestrained, developers will add anything if it resolves their immediate needs; you could probably spread malware very well just by writing a fake blog advocating a malicious module to solve certain scenarios.

        • esseph 18 hours ago ago

          > Left unrestrained, developers will add anything if it resolves their immediate needs

          Absolutely. A lot of developers work on a large Enterprise app for years and then scoot off to a different project or company.

          What's not fun is being the poor Ops staff that have to deal with supporting the library dependencies, JVM upgrades, etc for decades after.

    • procaryote 18 hours ago ago

      I've nixed javascript in the backend in several places, partly because of the weird culture around dependencies. Having to audit that for compliance, or keeping it actually secure, is a nightmare.

      Nixing javascript in the frontend is a harder sell, sadly

      • christophilus 18 hours ago ago

        What did you switch to instead? I used to be a C# dev, and have done my fair share of Go. Both of those have decent enough standard libraries that I never found myself with a large 3rd party dependency tree.

        Ruby, Python, and Clojure, though? They weren’t any better than my npm projects, being roughly the same order of magnitude. Same seems to be true for Rust.

        • procaryote 17 hours ago ago

          You can get pretty far in python without a lot of dependencies, and the dependencies you do need tend to be more substantial blocks of functionality. Much easier to keep the tree small than npm.

          Same with Java, if you avoid springboot and similar everything frameworks, which admittedly is a bit of an uphill battle given the state of java developers.

          You can of course also keep dependencies small in javascript, but it's a very uphill fight where you'll have just a few options and most people you hire are used to including a library (that includes 10 libraries) to not have to do something like `if (x % 2 == 1)`

          Just started with golang... the language is a bit annoying but the dependency culture seems OK

    • amarant 19 hours ago ago

      Throwback to leftpad!

      Hey that was also on NPM iirc!

    • amysox 18 hours ago ago

      What I'd like to know is why anyone thinks it's a good idea to have this level of granularity in libraries? Seriously? A library that only contains "a utility function that determines if its argument can be used like an array"? That's a lot of overhead in dependency management, which translates into a lot of cognitive load. Sooner or later, something's going to snap...and something did, here.

  • kumavis 12 hours ago ago

    > all the malware did was modify the destination addresses of cryptocurrency payments mediated via online wallets like MetaMask

    A clarification: Despite MetaMask depending on the compromised packages it was not directly affected because: 1) packages were not updated while the compromise was live 2) MetaMask uses LavaMoat for install-time and run-time protections against compromised packages

    However the payload did attempt to compromise other pages that interact with wallets like MetaMask.

    Disclaimer: I worked on LavaMoat

    LavaMoat: https://github.com/lavamoat/lavamoat
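
    For readers unfamiliar with it: LavaMoat generates a per-package policy and then confines each dependency to what that policy grants. Very roughly, an entry is shaped like this (annotated with comments for readability; real policy files are plain JSON, and the exact field names here are from memory, so check the repo above):

        {
          "resources": {
            "some-color-parser": {
              "globals": {},          // no ambient globals like fetch or process
              "builtin": {},          // no Node built-ins like fs or child_process
              "packages": {
                "tiny-helper": true   // the only dependency it may import
              }
            }
          }
        }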

  • duffpkg 11 hours ago ago

    For a very long time I have also used a unique email for each respective service. When I sign up for npm it is something like email_npm@example.com . This makes it very easy to whitelist and also to spot phishing emails, because if an email for npm is coming to mail_cccoffee@example.com it screams that something is wrong. It is not bulletproof by any means, but it is an additional layer that costs me almost nothing and requires effort on the part of attackers.

    • junon 3 hours ago ago

      That's exactly what I do, and have caught quite a lot of other phishing emails this way. They queried my npm email via the public API and sent it there.

  • stevoski 18 hours ago ago

    “We all dodged a massive bullet”

    I don’t think we did. I think it is entirely plausible that more sophisticated attacks ARE getting into the npm ecosystem.

  • nottorp 19 hours ago ago

    > With that in mind, at a glance the idea of changing your two-factor auth credentials "for security reasons" isn't completely unreasonable.

    No?

    How do you change your 2FA? Buy a new phone? A new Yubikey?

    • croemer 17 hours ago ago

      For TOTP it's as simple as scanning a new QR code.

      I agree that rotating 2FA should ring alarm bells as an unusual request. But that requires thinking.

  • fiatpandas 18 hours ago ago

    His email client even puts a green check mark next to the fake NPM email. UX fail.

    • yencabulator 18 hours ago ago

      The claim is valid -- it is legit from npmjs.help

      If you think npmjs.help is something it isn't, that's not something DKIM et al can help with.

      • kccqzy 17 hours ago ago

        Do you remember a few years ago that browsers used to put a lock icon for all HTTPS connections? That lock icon signified that the connection is encrypted alright. To a tech geek that's a valid use of a lock icon. But browsers still removed it because it's a massive UX fail. You have to consider what the lock icon means to people who are minimally tech literate. I understand and have set up DKIM and SPF, but you cannot condense the intended security feature of DKIM/SPF/DMARC into a single icon and expect that to be good UX.

        • yencabulator 17 hours ago ago

          Browsers moved away from the https lock icon after https become very very common. Email hasn't reached a comparable state.

          • kccqzy 17 hours ago ago

            We are talking about a UX failure regarding what a lock icon or a checkmark icon represents. Popularity is irrelevant. It's entirely about the disconnect between what tech geeks think a lock/checkmark icon represents and normal users think it represents.

            • yencabulator 16 hours ago ago

              Instead of ranting, can you say something constructive?

              I can think of 3 paths to improve the situation (assuming that "everyone deploys cryptographic email infrastructure instantly" is not gonna happen).

              1. The email client doesn't indicate DKIM at all. This is strictly worse than today, because then the attack could have claimed to be from npmjs.com.

              2. You only get a checkmark if you have DKIM et al plus you're a "verified domain". This means only big corporations get the checkmark -- I hate this option. It's EV SSL but even worse. And again, unless npmjs.com was a "big corporation" the attacker could have just faked the sender and the user would not notice anything different, since in that world the authentic npmjs.com emails wouldn't have a checkmark either.

              3. The checkmark icon is changed into something else, nothing else happens. But what? "DKIM" isn't the full picture (and would be horribly confusing too). Putting a sunflower there seems a little weird. Do you really apply this much significance to the specific icon?

              The path that HTTPS took just hasn't been repeatable in the email space; the upgrade cycles are much slower, the basic architecture is client->server->server not client->server, and so on.

        • zokier 15 hours ago ago

          > Do you remember a few years ago that browsers used to put a lock icon for all HTTPS connections?

          A few years ago? I have a lock icon right now in my address bar

          • yencabulator 12 hours ago ago

            Chrome removed it, Firefox de-emphasized it by making it grayscale.

      • 17 hours ago ago
        [deleted]
  • zaik 17 hours ago ago

    We need a permission system for packages just like with Android apps. The text coloring package suddenly needs a file access permission for the new version? Seems strange.

    • danenania 8 hours ago ago

      Deno has taken steps in this direction. It’s probably doable for pure js packages, but nearly impossible for packages with native extensions.
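
      For anyone who hasn't seen it, the Deno model looks roughly like this (the host and file names are made up for illustration):

          // status.js: a script that should only ever talk to one host
          const res = await fetch("https://api.example.com/status");
          console.log(res.status);

          // Run with:  deno run --allow-net=api.example.com status.js
          // Reading files, env vars, or spawning processes would each need their own
          // --allow-* flag (or an interactive prompt), so a compromised dependency
          // can't quietly read ~/.npmrc or your SSH keys and ship them off.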

  • Pomelolo an hour ago ago

    Why are furries smart? It's like God telling us "yes, but...."

    > bluesky useooooors >furry >sexual nickname

  • dirkc 17 hours ago ago

    I had a minor scare some time ago with npm. Can't remember the exact details, something like I had a broken symlink in my homedir and nodemon printed an error about the symlink! My first thought was it's a supply chain attack looking for credentials!

    Since then I've done all my dev in an isolated environment like a docker container. I know it's possible to escape the container, but at least that raises the bar to a level I'm comfortable with.

  • PaulHoule 19 hours ago ago

    This has so many dimensions.

    An authentication environment which has gotten so complex we expect to be harassed by messages say "your Plex password might be compromised", "your 2FA is all fucked up", etc.

    And the crypto thing. Xe's sanguine about the impact, I mean, it's just the web3 degens [1] that are victimized; good innocent decent people like us aren't hurt. From the viewpoint of the attacker it is all about the Benjamins and the question is: "does an attack like this make enough money to justify the effort?" If the answer is yes then we'll see more attacks like this.

    There are just all of these things that contribute to the bad environment: the urgent emails from services you barely use, the web3 degens, etc.

    [1] if it's an insult it is one the web3 community slings https://www.webopedia.com/crypto/learn/degen-meaning/

  • padjo 18 hours ago ago

    “A utility function that determines if its argument can be used like an array”

    I see the JavaScript ecosystem hasn’t changed since leftpad then.

    • dmitrygr 17 hours ago ago

      My man, it has...in the worse direction...

  • 1970-01-01 18 hours ago ago

    At this super-wide level of near-miss, you must assume Jia Tan 3.0 will be coming for your supply chains.

  • arunc 5 hours ago ago

    Trying to read this on Brave on Android and I couldn't get past Anubis. Does anyone else observe the same?

  • lxe 17 hours ago ago

    Gmail could have easily placed a red banner like

    > "Warning! This is the first time you have received a message from sender support@npmjs.help. Please be careful with links and attachments, and verify the sender's identity before taking any action."

  • monkpit 12 hours ago ago

    This article makes one faulty assumption that I think is really common - the author says it could be much worse, which implicitly assumes that we have noticed and caught every other time something like this has happened.

    Internally, we only noticed this because it caused a bunch of random junk to get barfed out into some CI logs.

    You really can’t say that nobody has ever done this better. Maybe they just did it so well that nobody noticed.

  • dang 18 hours ago ago

    In case you missed it:

    NPM debug and chalk packages compromised - https://news.ycombinator.com/item?id=45169657 - Sept 2025 (697 comments, including exemplary comments from the project maintainer)

  • btbuildem 16 hours ago ago

    Sometimes I think I'm a stubborn old curmudgeon for staunchly refusing to use node, npm, and the surrounding ecosystem. Pick and choose specific packages if I really have to.

    Then there's days like this.

  • ChuckMcM 19 hours ago ago

    Agree with most that this could have been way way worse. No doubt next time it will be.

    I keep expecting some new company to bring out this revolutionary idea of "On prem: your machine, your libraries, your business."

  • 20 hours ago ago
    [deleted]
  • zamalek 17 hours ago ago

    WebAuthN/fido/passkey should be mandatory to publish a package with >N downloads. Email and TOTP codes can be MITMd.

  • keyle 9 hours ago ago

    "saved by procrastination!" made me smile.

    One of those cases where being offline first, disconnected etc. pays off.

    Don't rush. Work on Hawaiian clock!

  • frabjoused 14 hours ago ago

    I wrote the first commit for slice-ansi in 2015 to solve a baby problem for a cli framework I was building, and worked with Qix a little on the chalk org after it. It's wild looking back and seeing how these things creep in influence over time.

    • junon 3 hours ago ago

      Didn't expect things to grow how they have, that's for sure! Hope you've been well :)

  • umvi 9 hours ago ago

    "A little duplication is better than a little dependency"

    - Golang Proverb (also applies to any other programming language...)

  • giveita 5 hours ago ago

    > One of the important things to take away from this is that every dependency could be malicious. We should take the time to understand the entire dependency tree of our programs, but we aren't given that time. At the end of the day, we still have to ship things.

    That's why you need vuln scanners, and shouldn't upgrade to the latest thing as soon as it's released.
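
    In npm terms, a minimal version of that workflow (assuming a committed lockfile) looks something like:

        # install exactly what package-lock.json pins, never "latest"
        npm ci

        # fail the build on known high/critical advisories
        npm audit --audit-level=high

    Scanners only catch what has already been reported, of course, so this complements pinning and waiting a bit before adopting fresh releases rather than replacing them.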

  • amradio1989 16 hours ago ago

    Great write up. I can understand the indignation at the exploit, but I believe it’s an A+ exploit for the chosen attack vector.

    Not only is it “proof of concept” but it’s a low risk high reward play. It’s brilliant really. Dangerously so.

  • mrbluecoat 19 hours ago ago

    Does the Go ecosystem have a similar security screening process as NPM? This was caught because a company was monitoring a centralized packaging distribution platform, but I worry about all those golang modules spread across GitHub without oversight..

    • quectophoton 18 hours ago ago

      This page has a short explanation of the default way in which Go downloads modules, with links for more details: https://sum.golang.org/

      • mrbluecoat 8 hours ago ago

        Thanks. It took a little more digging from that link but I eventually found https://go.dev/doc/security/vuln/#vulnerability-detection-fo...

        • quectophoton 2 hours ago ago

          Right, my bad, seems like I misunderstood the question. Glad you could still find an answer.

          For more context on why I thought that link would have been helpful: In Go you download dependencies "straight" from the source[1], while in npm and other languages you download dependencies from a completely unrelated registry that can have any random code (i.e. whether the published artifact was built from the alleged source repository is a flip of a coin).

          So not having this kind of third party registry eliminates the point of failure that caused the issue commented in the article. The issue was caught because of a centralized place, yes, but it was also caused because npm dependencies are downloaded from a centralized place and because this centralized place only hosts artifacts unrelated to the source code itself; package authors can `npm publish` artifacts containing the exact source code from their repos if they want though. If.

          With Go, having a mirror of the source code is still third party infra, but is more an optimization than anything else, and checksums are generated based on the source itself[2] (rather than any unrelated artifact). This checksum should match even for people not using any proxy, so if you serve different code to someone, there will be a mismatch between the checksum of the downloaded module and the checksum from the SumDB. This should catch force-pushes done to a git repository version tag, for example.

          Also, Go downloads the minimum version that satisfies packages, so it's less likely that you'll download a (semver) "patch" release that someone pushed hours ago.

          All this makes me both like and dislike how Go handles dependencies.

          [1]: Well, from a mirror, unless you set `GOPROXY=direct`. Reasoning explained in next paragraph.

          [2]: The checksum is calculated from a zip file, but it is generated in a deterministic way, and this checksum is also generated and validated locally when you download dependencies. More info at https://go.dev/ref/mod#zip-files and https://go.dev/ref/mod#go-mod-verify
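
          As a concrete illustration of [2] (module path, version, and hashes below are placeholders, not real entries): each dependency gets two lines in go.sum, and `go mod verify` re-checks what's on disk against them.

              example.com/some/module v1.2.3 h1:BASE64-HASH-OF-MODULE-ZIP=
              example.com/some/module v1.2.3/go.mod h1:BASE64-HASH-OF-GO-MOD=

              $ go mod verify
              all modules verified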

  • pcthrowaway 18 hours ago ago

    Is this not a good use case for AI in your email client (local-only to avoid more opportunities for data to leak)?

    Have the client-embedded AI view the email to determine if it contains a link to a purported service. Remotely verify if the service URL domain is valid, by comparing to the domains known for that service

    If unknown, show the user a suspected phishing message.

    This will occasionally give a false positive when a service changes their sending domain, but the remote domain<->service database can then be updated via an API call as a new `(domain, service)` pair for investigation and possible inclusion.

    I feel like this would mitigate much of the risk of phishing emails slipping past defenses, and mainly just needs 2 or 3 API calls to service once the LLM has extracted the service name from the email.

    • mwkaufma 17 hours ago ago

      No, the solution to a security problem is not to radically increase the vulnerable attack surface.

  • leoc 19 hours ago ago

    > Even then, that wouldn't really stand out to me because I've seen companies use new generic top level domains to separate out things like the blog at .blog or the docs at .guide, not to mention the .new stack.

    This is very much a 'can we please not' situation, isn't it? (Obviously it's not something that the email recipients can (usually) control, so it's not a criticism of them.) It also has to meaningfully increase the chance that someone will eventually forget to renew a domain, too.

    • junon 3 hours ago ago

      Anecdotally throughout all this, contacting npm got me a response from githubsupport.com. I had to double check if that was even real.

    • NoahZuniga 18 hours ago ago

      Facebook sends legit account security emails from facebookmail.com. Horrible.

      • leoc 16 hours ago ago

        For a company that is otherwise quite serious about security nowadays, MS seems to be the champion of this. Say hello to live.com and its friends …

  • dzogchen 13 hours ago ago

    Lazy conclusion. People can and have been validating dependencies. We need an npm proxy with validated dependencies.

  • ivape 20 hours ago ago

    Dat domain name.

    Yeah, stop those cute domain names. I never got the memo on Youtu.be, I just had to “learn” it was okay. Of course people started to let their guard down because dumbasses started to get cute.

    We all did dodge a bullet because we’ve been installing stuff from NPM with reckless abandon for a while.

    Can anyone give me a reason why this wouldn’t happen in other ecosystems like Python, because I really don’t feel comfortable if I’m scared to download the most basic of packages. Everything is trust.

    • 1-more 19 hours ago ago

      of all people my mortgage servicer is the worst about this. Your login is valid on like 3 different top level domains and you get bounced between them when you sign in, eventually going from servicer.com to myservicer.com to servicer.otherthing.com! It's as though they were training you to not care about domain names.

      • wzamqo 19 hours ago ago

        Paying US taxes online is just as bad. The official way to pay tax balances with a debit card online is to use officialpayments[.]com. This is what the IRS advises you to use. Our industry is a clown factory.

        • LorenDB 19 hours ago ago

          Wells Fargo apparently emails from epay@onlinemyaccounts[.]com.

    • jvdvegt 19 hours ago ago

      What about aka.ms, which is a valid domain for Microsoft. Why didn't they use microsoft.com, or windows.com? I always wonder if this aka is short for 'also known as'.

      • dymk 18 hours ago ago

        They use that domain name because it’s used for short links

    • 19 hours ago ago
      [deleted]
  • binarymax 18 hours ago ago

    There's only one thing that would throw me off this email and that is DMARC. But I didn't get the email, so who is to say if I actually would have been caught.

    • junon 16 hours ago ago

          Authentication-Results: aspmx1.migadu.com;
              dkim=pass header.d=smtp.mailtrap.live header.s=rwmt1 header.b=Wrv0sR0r;
              dkim=pass header.d=npmjs.help header.s=rwmt1 header.b=opuoQW+P;
              spf=pass (aspmx1.migadu.com: domain of ndr-cbbfcb00-8c4d-11f0-0040-f184d6629049@mt86.npmjs.help designates 45.158.83.7 as permitted sender) smtp.mailfrom=ndr-cbbfcb00-8c4d-11f0-0040-f184d6629049@mt86.npmjs.help;
              dmarc=pass (policy=none) header.from=npmjs.help
    • vel0city 18 hours ago ago

      This was a domain "legitimately" owned by the adversary. They controlled that DNS. They could set any SPF or DKIM records they wanted. This email probably passed all DMARC checks. From some screenshots, the email client even has a green check probably because it did pass DMARC.

  • lysace 19 hours ago ago

    This reads like a joke that's missing the punchline.

    The post's author's resume section reinforces this feeling:

    I am a skilled force multiplier, acclaimed speaker, artist, and prolific blogger. My writing is widely viewed across 15 time zones and is one of the most viewed software blogs in the world.

    I specialize in helping people realize their latent abilities and help to unblock them when they get stuck. This creates unique value streams and lets me bring others up to my level to help create more senior engineers. I am looking for roles that allow me to build upon existing company cultures and transmute them into new and innovative ways of talking about a product I believe in. I am prioritizing remote work at companies that align with my values of transparency, honesty, equity, and equality.

    If you want someone that is dedicated to their craft, a fearless innovator and a genuine force multiplier, please look no further. I'm more than willing to hear you out.

    • gertop 10 hours ago ago

      That kind of fake self-aggrandizement-delusion-driven storytelling is part of the autistic trans subculture. That particular subculture tends to speak of themselves as goddesses, wizards, or other higher beings. Their websites are usually dark-themed with pastel or neon forecolors and you'll find anime girls inserted every now and then.

      As far as I can tell it isn't a joke per se, but it is tongue-in-cheek and the ego is often very real.

  • mayhemducks 17 hours ago ago

    `Object.getPrototypeOf(obj)[Symbol.iterator] !== undefined`

    There I fixed it. Now I don't even need the package array-ish!

    • junon 16 hours ago ago

      `Symbol` wasn't supported when I wrote `is-arrayish`. Neither were spreads. It was meant to be used with DOM lists or the magical `arguments` variable.
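
      For context, a pre-`Symbol`, ES5-era duck-typing check looks roughly like this (a sketch of the general technique, not the package's actual source):

          function isArrayish(obj) {
            if (!obj || typeof obj === 'string') {
              return false;
            }
            // true for real arrays, and for array-likes such as the `arguments`
            // object or a DOM NodeList, which only expose a numeric length
            return Array.isArray(obj) ||
              (typeof obj.length === 'number' && obj.length >= 0);
          }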

  • m463 14 hours ago ago

    Would recommend less click-baity: "NPM attack - we all dodged a bullet"

  • empathy_m 19 hours ago ago

    How much money did the attackers make?

    • ruuda 17 hours ago ago

      I'm not sure whether the compromised packages were the source of Kiln's API compromise, but it's plausible. It led to theft of $41M worth of SOL. https://cointelegraph.com/news/swissborg-hacked-41m-sol-api-...

      • junon 3 hours ago ago

        These were different. The vulnerable packages wouldn't have caused an API exploit vector except in the most bizarre of edge cases I suppose.

    • zahlman 18 hours ago ago

      According to a crypto tracking site linked indirectly via the other popular submission, about $500 worth of crypto.

    • gazaim 18 hours ago ago

      5 cents of eth and $20 of a meme coin.

  • thefifthsetpin 18 hours ago ago

    Allowing just anybody to rent npmjs.help feels like aiding and abetting.

    • ameliaquining 17 hours ago ago

      Who should have stopped this from happening and how should they have gone about doing so?

  • jacobsenscott 18 hours ago ago

    This phishing email is full of red flags. Here are example red flags from that email:

    - Update your 2FA credentials

    What does that even mean? That's not something that can be updated - that's kind of the point of 2FA.

    - It's been over 12 months since you last 2FA update

    Again - meaningless nonsense. There's no such thing as a 2FA update. Maybe the recipient was thinking "password update" - but updating passwords regularly is also bad practice.

    - "Kindly ask ..."

    It would be very unusual to write like that in a formal security notification.

    - "your credentials will be temporarily locked ..."

    What does "temporarily locked" mean? That's not a thing. Also creating a sense of urgency is a classic phishing technique and a red flag.

    - A link to change your credentials

    A legit security email should never contain a link to change your credentials.

    - It comes from a weird domain - .help

    Any nonstandard domain is a red flag.

    I don't use NPM, and if this actually looks like an email NPM would send, NPM has serious problems. However, security-ignorant companies do send emails like this. That's why the second layer of defense, if you receive an email like this and think it might be real, is to log directly into (in this case) NPM and update your account settings without clicking links in the email.

    NEVER EVER EVER click links in any kind of security alert email.

    I don't blame the people who fell for this, but it is also concerning that there's such limited security awareness/training among people with publish access to such widely used packages.

    • junon 16 hours ago ago

      Hi, said person who clicked on the link here. Been wanting to post something akin to this and was going to save it for the post mortem, but I wanted to address the increase in these sorts of very shout-ey comments directed toward me.

      > What does that even mean? That's not something that can be updated - that's kind of the point of 2FA.

      I didn't sit and read and parse the whole thing. That was mistake one. I have stated elsewhere, I was stressed and in a rush, and was trying to knock things off my list.

      Also, 2FA can of course be updated. npm has had some shifts in how it approaches security over the years, and having worked within that ecosystem for the better part of 10-15 years, this didn't strike me as particularly unheard of on their part. This, especially after the various acquisitions they've had.

      It's no excuse, just a contributing factor.

      > It would be very unusual to write like that in a formal security notification.

      On the contrary, I'd say this is pretty par for the course in corpo-speak. When "kindly" is used incorrectly, that's when it's a red flag for me.

      > What does "temporarily locked" mean? That's not a thing. Also creating a sense of urgency is a classic phishing technique and a red flag.

      Yes, of course it is. I'm well aware of that. Again, this email reached me at the absolute worst time it could have and I made a very human error.

      "Temporarily locked" surprises me that it surprises you. My account was, in fact, temporarily locked while I was trying to regain access to it. Even npm had to manually force a password reset from their end.

      > Any nonstandard domain is a red flag.

      When I contacted npm, support responded from githubsupport.com. When I pay my TV tax here in Germany (a governmental thing), it goes to a completely bizarre, random third party site that took me ages to vet.

      There's no such thing as a "standard" domain anymore with gTLDs, and while I should have vetted this particular one, it didn't stand out as something impossible. In my head, it was their new help support site - just like github.community exists.

      Again - and I guess I have to repeat this until I'm blue in the face - this is not an excuse. Just reasons that contributed to my mistake.

      > NEVER EVER EVER click links in any kind of security alert email.

      I'm aware. I've taught this as the typical security person at my respective companies. I've embodied it, followed it closely for years, etc. I slipped up, and I think I've been more than transparent about that fact.

      I didn't ask for my packages to be downloaded 2.6 billion times per week when I wrote most of these 10 years ago or inherited them more than five years ago. You can argue - rightfully - about my technical failure here of using an outdated form of 2FA. That's on me, and would have protected against this, but to say this doesn't happen to security-savvy individuals is the wrong message here (see: Troy Hunt getting phished).

      Shit happens. It just happened to happen to me, and I happen to have undue control over some stuff that's found its way into most of the javascript world.

      The security lessons and advice are all very sound - I'm glad people are talking about them - but the point I'm trying to make is, that I am a security aware/trained person, I am hyper-vigilant, and I am still a human that made a series of small or lazy mistakes that turned into one huge mistake.

      Thank you for your input, however. I do appreciate that people continue to talk about the security of it all.

      • reyqn 2 hours ago ago

        I think what makes a lot of people talk about it precisely is this:

        "This is a 10/10 phishing email."

        It's not. But it doesn't mean I wouldn't also fall for it because I was tired/in a hurry or whatever else could let me drop my guard.

        Humans are humans.

    • yieldcrv 18 hours ago ago

      full of red flags present in many non phishing emails

      > However security ignorant companies do send emails like this

      exactly

  • Mystery-Machine 20 hours ago ago

    Always use password manager to automatically fill in your credentials. If password manager doesn't find your credentials, check the domain. On top of that, you can always go directly to the website, to make any needed changes there, without following the link.

    • dewey 19 hours ago ago

      Password managers are still too unreliable to auto-fill everywhere all the time, and manually having to copy-paste something from the password manager happens regularly, so it doesn't feel unusual when autofill fails for some reason.

      • zargon 18 hours ago ago

        I put the fault on companies for making their login processes so convoluted. If you take the time to do it, you can usually configure the password manager to work (we shouldn’t have to make the effort). But even if you do, then the company will at some point change something about their login processes and break it.

      • nilslindemann 11 hours ago ago

        Indeed. I have to fill in my TOTP manually on Lichess and on tutanota.com. On proton.me sometimes. On other sites it always works, e.g. GitHub.

    • teekert 20 hours ago ago

      Better yet, use the password manager as the store of the valid domain and click through from there to reach the resource.

    • Analemma_ 20 hours ago ago

      I don't think this really helps. I use Bitwarden and it constantly fails to autofill legitimate websites and makes me go to the app to copy-paste, because companies do all kinds of crap with subdomains, marketing domains, etc. Any safeguard relying on human attention is ultimately susceptible to this; the only true solutions are things like passkeys where human fuckups are impossible by design and they can't give credentials to the wrong place even if they want to.

      Passkeys are disruptive enough that I don't think they need to be mandated for everyone just yet, but I think it might be time for that for people who own critical dependencies.

      • teekert 20 hours ago ago

        It's a PITA, but BitWarden has quite some flexibility in filtering where and what gets autofilled. I agree the defaults are pretty shit and indeed lead to constant copy-pasting. On the other hand, it will offer all my passwords all the time for all my selfhosted stuff on my one server.

    • fragmede 20 hours ago ago

      what do you mean bankofamericaabuse.com isn't a real website!? It's in the email and everything! The nice guy on the phone said it was legit...

    • esseph 19 hours ago ago

      [dead]

  • beefnugs 7 hours ago ago

    We didn't dodge anything; this is just 1 of 1000 publicly found and reported on

  • superkuh 15 hours ago ago

    Wow! This site uses anubis with the meta-refresh-based challenge that doesn't require javascript. So I can actually read the article in my old browser. It's so rare for anubis deployments to be set up with any configuration beyond the defaults. What a delight.

    • aloer 14 hours ago ago

      The blog author is also the creator of Anubis

  • HL33tibCe7 18 hours ago ago

    Most phishing emails are so bad, it’s quite terrifying when you see a convincing one like this.

    Email is such an utter shitfest. Even tech-savvy people fall for phishing emails; what hope do normal people have?

    I recommend people save URLs in their password managers, and get in the habit of auto-filling. That way, you’ll at least notice if you’re trying to log into a malicious site. Unfortunately, it’s not foolproof, because plenty of sites ask you to randomly sign into different URLs. Sigh…

  • darepublic 18 hours ago ago

    > Formatting text with colors for use in the terminal ... > These kinds of dependencies are everywhere and nobody would even think that they could be harmful.

    The first article I ever read discussing the possibility of npm supply chain attacks actually used coloured text in the terminal as the example package to poison. And ever since then I have always associated coloured terminal text with supply chain attacks.

  • scotty79 19 hours ago ago

    Is there a tool that you can put between your npm client and the npm servers that serves package versions that are a month old, and possibly also tracks discovered malware and never serves infected versions?

    • mikebelanger 20 minutes ago ago

      Artifactory works fairly well. Although admittedly, when a user grabs a new dependency, they're downloading from the npmjs registry like anyone else.

      Really, the killer combo would be to have some kind of LLM-based tool that would scan someone's Artifactory. Something smart enough to notice that code changed and that there's code for accessing a crypto wallet, etc. This would be too expensive for npmjs to host for free, but I could see it happening for hosted Artifactory dependencies.
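
      For the first part, once such a proxy repo exists, pointing npm at it is a one-line `.npmrc` change (hostname and repo name here are placeholders):

          registry=https://artifactory.example.com/artifactory/api/npm/npm-virtual/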

    • JackFr 19 hours ago ago

      Artifactory. Nexus. I believe AWS/GCP/Azure have offerings.

      No bank, and almost no large corporations go directly to artifact/package repos. They all host them internally.

    • lovehashbrowns 18 hours ago ago

      I'm looking at Verdaccio currently, since Artifactory is expensive and I think the CE version still only supports C++. Does anyone have any experience with Verdaccio?

    • balder1991 19 hours ago ago

      Something like this? https://jfrog.com/artifactory/

    • singulasar 19 hours ago ago

      the company that first found this vulnerability also has a tool for this https://www.npmjs.com/package/@aikidosec/safe-chain

  • jongjong 13 hours ago ago

    My open source projects were not affected but close call. I was using 2 of the dependencies (as sub-dependencies) but older versions. Seems that my philosophy of minimizing the number of dependencies and looking up dependency authors is paying off.

    I saw this kind of thing coming years ago. I never understood why people were obsessed with using tiny dependencies to save them 4 lines of code. These useless dependencies getting millions of weekly downloads always seemed very suspicious to me.

  • lrvick 14 hours ago ago

    Daily reminder that no one can easily impersonate you if you sign your commits and make it easy to discover and verify your authentic key with keyoxide or similar.
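
    For the commit-signing half, that's two lines of git config (the key ID below is a placeholder):

        git config --global user.signingkey DEADBEEFDEADBEEF
        git config --global commit.gpgsign true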

    • junon 3 hours ago ago

      This wasn't a repository takeover. I do sign my commits.

  • 17 hours ago ago
    [deleted]
  • thrownaway561 15 hours ago ago

    This is the main reason why, if you ever get a password reset email, you ALWAYS go to the site directly and NEVER through the link provided in the email.

  • shadowgovt 16 hours ago ago

    I might need bloggers to not use "web 3" as a term.

    With the way things are going, I can't tell at a glance whether they mean crypto, VR, or AI when they say "web 3."

  • SirFatty 19 hours ago ago

    It's a typical phishing email... and if the author went through any type of cybersecurity training, they would see that the email wasn't that great.

    The sense of urgency is always the red flag.

    • zaphar 19 hours ago ago

      I go through those trainings several times a year. That email is as close to perfect for a phishing email as I've ever seen.

      • kiitos 15 hours ago ago

        the link in the email went to an obviously invalid domain, hovering the mouse cursor over the link in the email would have made this immediately clear, so even clicking that link should have never happened in the first place. red flag 1

        but, ok, you click the link, you get a new tab, and you're asked to fill in your auth credentials. but why? you should already be logged in to that service in your default browser, no? red flag 2

        ok, maybe there is some browser cache issue, whatever, so you trigger your password manager to provide your auth to the website -- but here, every single password manager would immediately notice that the domain in the browser does not match the domain associated with the auth creds, and either refuse to paste the creds thru, or at an absolute minimum throw up a big honkin' alert that something is amiss, which you'd need to explicitly click an "ignore" button to get past. red flag 3

        nobody should be able to publish new versions of widely-used software without some kind of manual review/oversight in the first place, but even ignoring that, if someone does have that power, and they get pwned by an attack like this, with at least 3 clear red flags that they would need to have explicitly ignored/bypassed, then CLEARLY this person cannot keep their current position of authority

        • junon 3 hours ago ago

          > the link in the email went to an obviously invalid domain, hovering the mouse cursor over the link in the email would have made this immediately clear, so even clicking that link should have never happened in the first place. red flag 1

          The link went to the same domain as the From address. The URL scheme was 1:1 identical to the real npm's.

          > but, ok, you click the link, you get a new tab, and you're asked to fill in your auth credentials. but why? you should already be logged in to that service in your default browser, no? red flag 2

          Why wouldn't I be? I don't stay logged into npm at all.

      • 19 hours ago ago
        [deleted]
    • singulasar 19 hours ago ago

      I think it's quite good: there's a sense of urgency, but it's also not "immediately change it!"; they gave more than a day, and stated that it would be a temporary lock. Feels like this one really hit the spot on that aspect.

      You should still never click a link in an email like this, but the urgency factor is well done here

    • small_scombrus 19 hours ago ago

      > the email wasn't that great

      It was obviously good enough.

      Snark aside, you only need to trick one person once and you've won.

  • ChrisArchitect 20 hours ago ago

    Related:

    NPM debug and chalk packages compromised

    https://news.ycombinator.com/item?id=45169657

  • theteapot 13 hours ago ago

    Article:

    > This is frankly a really good phishing email ... This is a 10/10 phishing email ..

    Phishing email:

    > As part of our on going commitment to account security, we are requesting that all users update their Two-Factor-Authentication (2FA) credentials ...

    What does that even mean? What type of 2FA needs updating? One 2FA method supported is OTP. Can't see a reason that would legitimately ever need to be updated, so doesn't really pass the sniff test that every single user would need to "update 2FA".

  • AlienRobot 19 hours ago ago

    Isn't it a bit crazy that phishing e-mails still exist? Like, couldn't this be solved by encrypting something in a header and using a public key in the DNS to decrypt it?

    • mxuribe 18 hours ago ago

      I'm not a top-level expert in cybersecurity nor email infra....but the little that i know has taught me that i merely have to create a similar-looking domain name...

      Let's say there's a company named Awesome...and i register the domain name of AwesomeSupport.com. I could be a total black hat/evil hacker/ne'er-do-well....and this domain may not be infringing on any trademark, etc. And, then i can start using all the encryption you noted...which merely means that *my domain name* (the bad one) is "technically sound"...but of course, all that use of encryption fails to convey that i am not the legitimate Awesome company. So, how is the victim supposed to know which of the domains is legit or not? Especially considering that some departments of the real, legit Awesome company might register their own domain name to use for actual, real reasons - like the marketing department might register MyAwesome.com...for managing customer accounts, etc.

      Is encryption necessary in digital life? Hellz yeah! Does it solve *all issues*? Hellz no! :-)

      • gfody 17 hours ago ago

        an OV cert "solves" this, but you'd still have to bother to check it

        • mxuribe 16 hours ago ago

          True! But, the possibility exists that enough % of victims do not indeed check the OV cert. Also, are we 100% sure that every single legit company that you and I do business with, has an OV cert for their websites?

      • AlienRobot 17 hours ago ago

        This honestly doesn't feel like it should be the case.

        There aren't that many websites. The e-mail provider could have a list of "popular" domains, and the user could have their own list of trusted domains.

        There is all sorts of ways to warn the user about it, e.g. "you have never interacted with this domain before." Even simply showing other e-mails from the same domain would be enough to prevent phishing in some cases.

        There are practical ways to solve this problem. They aren't perfect but they are very feasible.

        • mxuribe 16 hours ago ago

          My previous comments were merely in response to your original comments...so really only to point out that bare use of encryption by itself is not sufficient protection - that's all.

          To your more recent points, i agree that there are several other protections in place...and depending on a number of factors, some folks have more at their disposal, and others might have less...but, still there are mechanisms in place to help - without a doubt. But yet with all these mechanisms in place, people still fall prey to phishing attacks...and sometimes those victims are not lay people, but actual technologists. So, i think the solution(s) to solve this are not so simple, and likely are not only tech-based. ;-)

    • procaryote 18 hours ago ago

      I might be missing the joke, but there are several layers like SPF and DMARC available to only allow your whitelisted servers to send email on the behalf of your domain.

      Wouldn't help in this case where someone bought a domain that looked a tiny bit like the authentic one for a very casual observer.
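
      For illustration, those layers are just plain DNS TXT records (example.com is a placeholder):

          example.com.          TXT  "v=spf1 include:_spf.example.com -all"
          _dmarc.example.com.   TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"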

    • 1970-01-01 18 hours ago ago

      100% solved and has been for a very long time. The PGP/GPG trust chain goes CLUNK CLUNK CLUNK. Everyone shuts it off after a week or so of experimentation.

  • xyst 11 hours ago ago

    wow - people still get fooled by _phishing_ emails? I understand aging boomers with decreasing visual acuity and deteriorating mental state. But the younger generations falling for these spearphishing email attempts is wild.

    We are cooked.

  • cataflam 19 hours ago ago

    Besides the ecosystem issues, for the phishing part, I'll repost what I responded somewhere in the other related post, for awareness

    ---

    I figure you aren't about to get fooled by phishing anytime soon, but based on some of your remarks and remarks of others, a PSA:

    TRUSTING YOUR OWN SENSES to "check" that a domain is right, or an email is right, or the wording has some urgency or whatever is BOUND TO FAIL often enough.

    I don't understand how most of the anti-phishing advice focuses on that, it's useless to borderline counter-productive.

    What really helps against phishing :

    1. NEVER EVER login from an email link. EVER. There are enough legit and phishing emails asking you to do this that it's basically impossible to tell one from the other. The only way to win is to not try.

    2. U2F/Webauthn key as second factor is phishing-proof. TOTP is not.

    That is all there is. Any other method, any other "indicator" helps but is error-prone, which means someone somewhere will get phished eventually. Particularly if stressed, tired, or in a hurry. It just happened to be you this time.
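
    To make point 2 concrete: with WebAuthn, the browser binds the signed assertion to the page's origin, so a look-alike domain simply can't replay it. A rough client-side sketch (the challenge and credential ID would come from the real server; `serverChallenge` and `credentialId` are assumed here):

        // The browser refuses this call unless the page's origin matches rpId,
        // and the signed response embeds that origin, so a credential registered
        // for npmjs.com is useless to npmjs.help.
        const assertion = await navigator.credentials.get({
          publicKey: {
            challenge: serverChallenge,   // random bytes issued by the real server
            rpId: 'npmjs.com',
            allowCredentials: [{ type: 'public-key', id: credentialId }],
            userVerification: 'preferred',
          },
        });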

    • dang 15 hours ago ago

      Please don't copy-paste comments on HN. It strictly lowers the signal/noise ratio.

    • nalllar 19 hours ago ago

      > 1. NEVER EVER login from an email link. EVER. There are enough legit and phishing emails asking you to do this that it's basically impossible to tell one from the other. The only way to win is to not try.

      Sites choosing to replace password login with initiating the login process and then clicking a "magic link" in your email client is awful for developing good habits here, or for giving good general advice. :c

      • kyle-rb 17 hours ago ago

        In that case it's the same as a reset-password flow.

        In both cases it's good advice not to click the link unless you initiated the request. But with the auth token in the link, you don't need to login again, so the advice is still the same: don't login from a link in your email; clicking links is ok.

        • tomsmeding 16 hours ago ago

          Clicking links from an email is still a bad idea in general because of at least two reasons:

          1. If a target website (say important.com) sends poorly-configured CORS headers and has poorly configured cookies (I think), a 3rd-party website is able to send requests to important.com with the cookies of the user, if they're logged in there. This depends on important.com having done something wrong, but the result is as powerful as getting a password from the user. (This is called cross-site request forgery, CSRF.)

          2. They might have a browser zero-day and get code execution access to your machine.

          If you initiated the process that sent that email and the timing matches, and there's no other way than opening the link, that's that. But clicking links in emails is overall risky.
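
          (The usual mitigation for point 1 sits on the site's side, e.g. session cookies that aren't attached to cross-site requests. An Express-style sketch, assuming a response object `res` and a `sessionToken`, not anyone's actual config:)

              // SameSite cookies are not sent on requests initiated by other origins,
              // which blunts the classic CSRF scenario described above.
              res.cookie('session', sessionToken, {
                httpOnly: true,
                secure: true,
                sameSite: 'lax',
              });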

          • johnecheck 16 hours ago ago

            1 is true, but this applies to all websites you visit (and their ads, supply chain, etc). Drawing a security boundary here means never executing attacker-controlled Javascript. Good luck!

            2 is also true. But also, a zero day like that is a massive deal. That's the kind of exploit you can probably sell to some 3 letter agency for a bag. Worry about this if you're an extremely high-value target, the rest of us can sleep easy.

      • kiitos 15 hours ago ago

        how is this any worse than a spear phishing email that gives a login link to a malicious domain that looks the same as the official domain?

    • progval 19 hours ago ago

      > 2. U2F/Webauthn key as second factor is phishing-proof. TOTP is not.

      TOTP doesn't need to be phishing-proof if you use a password manager integrated with the browser, though.

      • shhsshs 19 hours ago ago

        I think it's more appropriate to say TOTP /is (nearly)/ phishing-proof if you use a password manager integrated with the browser (not that it /doesn't need to be/ phishing-proof)

      • ameliaquining 18 hours ago ago

        A browser-integrated password manager is only phishing-proof if it's 100% reliable. If it ever fails to detect a credential field, it trains users that they sometimes need to work around this problem by copy-pasting the credential from the password manager UI, and then phishers can exploit that. AFAIK all existing password manager extensions have this problem, as do all browsers' native password-management features.

        • xboxnolifes 16 hours ago ago

          It doesn't need to be 100% reliable, just reliable enough.

          If certain websites fail to be detected, that's a security issue on those specific websites, as I'll learn which ones tend to fail.

          If they rarely fail to detect in general, it's infrequent enough to be diligent in those specific cases. In my experience with password managers, they rarely fail to detect fields. If anything, they over-detect fields.

          • ameliaquining 10 hours ago ago

            I think this security model requires nontechnical users to be paying more consistent attention than is realistically safe to rely on.

    • macintux 19 hours ago ago

      > 1. NEVER EVER login from an email link.

      I receive Google Doc links periodically via email; fortunately they're almost never important enough for me to actually log in and see what's behind them.

      My point, though, is that there's no real alternative when someone sends you a doc link. Either you follow the link or you have to reach out to them and ask for some alternative distribution channel.

      (Or, I suppose, leave yourself logged into the platform all the time, but I try to avoid being logged into Google.)

      I don't know what to do about that situation in general.

      • zargon 19 hours ago ago

        > leave yourself logged into the platform all the time

        Or only log in when you need to open a google link. Or better yet, use a multi-account container for google.

        • macintux 18 hours ago ago

          Yeah, this should have occurred to me. I guess for me it's alien to think about logging into Google.

        • zahlman 18 hours ago ago

          > Or better yet, use a multi-account container for google.

          Pardon; a what? Got any reference links?

      • tempestn 19 hours ago ago

        Log into Google, then click the link. If you get prompted to log in again, don't.

        • macintux 18 hours ago ago

          Good point, I guess this is the obvious answer.

    • zahlman 18 hours ago ago

      > U2F/Webauthn key as second factor is phishing-proof. TOTP is not.

      Last I checked, we're still in a world where the large majority of people with important online accounts (like, say, at their bank, where they might not have the option to disable online banking entirely) wouldn't be able to tell you what any of those things are, and don't have the option to use anything but SMS-based TOTP for most online services and maybe "app"-based (maybe even a desktop program in rare cases!) TOTP for most of the rest. If they even have 2FA at all.

      • ameliaquining 18 hours ago ago

        This is the point of the "passkey" branding. The idea is to get to the point where these alphabet-soup acronyms are no longer exposed to normal users and instead they're just like "oh, I have to set up a passkey to log into this website", the way they currently understand having to set up a password.

        • zahlman 18 hours ago ago

          Sure. That still doesn't make Yubikey-style physical devices (or desktop keyring systems that work the same way) viable for everyone, everywhere, though.

          • ameliaquining 17 hours ago ago

            Yeah, the pressure needs to be put on vendors to accept passkeys everywhere (and to the extent that there are technical obstacles to this, they need to be aggressively remediated); we're not yet at the point where user education is the bottleneck.

    • giveita 5 hours ago ago

      NPM needs to do better. I almost think there need to be regulations / fines, unfortunately.

      If I sell corn syrup for downstream food consumers and don't lock my factory doors and let whoever walk in, isn't it reckless?

    • nottorp 19 hours ago ago

      Urgency is also either phishing (log in now or we'll lock you out of your account in 24 hours) or marketing (take advantage of this promotion! expires in 24 hours!).

      Just ... don't.

      • bbarnett 18 hours ago ago

        It's funny how it's never "don't" too.

        A guy I knew needed a car, found one, I told him to take it to a mechanic first. Later he said he couldn't, the guy had another offer, so he had to buy it right now!!!, or lose the car.

        He bought, had a bad cylinder.

        False urgency = scam

      • ameliaquining 18 hours ago ago

        I mean, real deadlines do exist. The better heuristic is that, if a message seems to be deliberately trying to spur you into immediate action through fear of missing a deadline, it's probably some kind of trick. In this respect, the phishing message that was used here was brilliantly executed; it calmly, without using panic-inducing language, explains that action is required and that there's a deadline (that doesn't appear artificially short but in fact is coming up soon), in a way quite similar to what a legitimate action-required email would look like. Even a savvy user is likely to think "oh, I didn't realize the deadline was that soon, I must have just not paid attention to the earlier emails about it".

        • nottorp 17 hours ago ago

          With credentials? Aren’t you always forced to refresh them right after a login?

          As in right then, without being given a deadline…

          • ameliaquining 17 hours ago ago

            Yeah, this particular situation's a bit weird because it's asking the user to do something (rotate their 2FA secret) that in real life is not really a thing; I'm not sure what to think of it. But you could imagine something similar like "we want you to set up 2FA for the first time" or "we want you to supply additional personal information that the government has started making us collect", where the site might have to disable some kind of account functionality (though probably not a complete lockout) for users who don't do the thing in time.

    • finaard 17 hours ago ago

      Most mail providers have something like plus addressing. Properly used that already eliminates a lot of phishing attempts: If I get a mail I need to reset something for foobar, but it is not addressed to me-foobar (or me+foobar) I already know it is fraudulent. That covers roughly 99% of phishing attempts for me.

      The rest is handled by preferring plain text over HTML, and, if some moron only sends HTML mail, by carefully dissecting it first. Allowing HTML mail was one of the biggest mistakes we've ever made for email - zero benefits with a huge attack surface.

    • n8cpdx 17 hours ago ago

      I agree that #1 is correct, and I try to practice this; and always for anything security related (update your password, update your 2FA, etc).

      Still, I don’t understand how npmjs.help doesn’t immediately trigger red flags… it’s the perfect stereotype of an obvious scam domain. Maybe falling just short of npmjshelp.nigerianprince.net.

    • JoRyGu 18 hours ago ago

      Is there somewhere you'd recommend that I can read more about the pros/cons of TOTP? These authenticator apps are the most common 2FA second factor that I encounter, so I'd like to have a good source for info to stay safe.

    • x0x0 19 hours ago ago

      I watched a presentation from Stripe internal eng; I forget where it was given.

      An internal engineer there who did a bunch of security work phished like half of her own company (testing, obviously). Her conclusion, in a really well-done talk, was that it's impossible to prevent through human vigilance alone, given her success at a very disciplined, highly security-conscious place.

      The only thing that works is yubikeys which prevent this type of credential + 2fa theft phishing attack.

      edit:

      karla burnette / talk https://www.youtube.com/watch?v=Z20XNp-luNA

    • unethical_ban 19 hours ago ago

      #1 is the real deal. Just like you don't give private info to any caller you aren't expecting. You call them back at a number you know.

      • glial 18 hours ago ago

        I had someone from a bank call me and ask for my SSN to confirm my identity. The caller ended up being legitimate, but I still didn't give it...like, are you kidding me?

        • RussianCow 18 hours ago ago

          This has happened to me more times than I can count, and it's extremely frustrating because it teaches people the wrong lesson. The worst part is they often get defensive when you refuse to cooperate, which just makes the whole thing unnecessarily more stressful.

        • Muromec 16 hours ago ago

          I would be surprised if the database with SSN of all adult americans wasn't out there on the usual data dumps website available for 5 dollars.

    • TZubiri 17 hours ago ago

      Here's the actual root cause of the issue:

      1- As a professional, installing free dependencies to save on working time.

      There's no such thing as a free lunch; you can't have your cake and eat it too. That is, you can't download dependencies that solve your problems without paying, without ads, without propaganda (for example, to lure you into maintaining such projects for THE CAUSE), without vendor lock-in, or without malware.

      It's really silly to want to pile up mountains of super secure technology like webauthn, when the solution is just to stop downloading random code from the internet.

  • torium 19 hours ago ago

    [flagged]

    • dang 15 hours ago ago

      Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

      If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.

  • tomasphan 19 hours ago ago

    The problem here is that a single dev account can make updates to a prod codebase, or in the case of NX a single CI/CD token. Something with 5 Million downloads per week should not be controlled by one token if it takes me 3 approvals to get my $20 lunch reimbursement. At the very least have an LLM review every PR to prod.

  • easterncalculus 11 hours ago ago

    > This post and its online comment sections are blame-free zones

    The author is claiming control over other comment sections? Where is this entitlement coming from? They hide that behind some fictional persona, as if that changes anything.

    The author then proceeds to list several reasons someone would fall for this, carefully ignoring the most important detail of the email: its address. The absolute first step of detecting email phishing is looking at the address.

    Obviously the blame is on NPM for having a system that can be defeated by clicking a bad email, but the JS ecosystem has no interest in doing things right and there's no point in putting our heads in the sand about basic security practices.

    • ViscountPenguin 11 hours ago ago

      > Where is this entitlement coming from? They hide that behind some fictional persona, as if that changes anything.

      I hardly think a polite request is entitlement.

    • giveita 5 hours ago ago

      NPM should promptly email everyone to upgrade their 2FA