79 comments

  • elric a day ago

    This is critical infrastructure, and it gets compromised way too often. There are so many horror stories of NPM (and similar) packages getting filled with malware. You can't rely on people not falling for phishing 100% of the time.

    People who publish software packages tend to be at least somewhat technical people. Can package publishing platforms PLEASE start SIGNING emails. Publish GPG keys (or whatever, I don't care about the technical implementation) and sign every god damned email you send to people who publish stuff on your platform.

    Educate the publishers on this. Get them to distrust any unsigned email, no matter how convincing it looks.

    And while we're at it, it's clear that the current 2FA approach isn't good enough. I don't know how to improve on it, but it's clear that the actions in this example were suspicious: user logs in, changes 2FA settings, immediately adds a new API token, which immediately gets used to publish packages. Maybe there should be a 24 hour period where nothing can be published after changing any form of credentials. Accompanied by a bunch of signed notification emails. Of course that's all moot if the attacker also changes the email address.

    • feross 19 hours ago

      Disclosure: I’m the founder of https://socket.dev

      We analyzed this DuckDB incident today. The attacker phished a maintainer on npmjs.help, proxied the real npm, reset 2FA, then immediately created a new API token and published four malicious versions. A short publish freeze after 2FA or token changes would have broken that chain. Signed emails help, but passkeys plus a publish freeze on auth changes is what would have stopped this specific attack.

      There was a similar npm phishing attack back in July (https://socket.dev/blog/npm-phishing-email-targets-developer...). In that case, signed emails would not have helped. The phish used npmjs.org — a domain npm actually owns — but they never set DMARC there. DMARC is only set on npmjs.com, the domain they send email from. This is an example of the “lack of an affirmative indicator” problem. Humans are bad at noticing something missing. Browsers learned this years ago: instead of showing a lock icon to indicate safety, they flipped it to show warnings only when unsafe. Signed emails have the same issue — users often won’t notice the absence of the right signal. Passkeys and publish freezes solve this by removing the human from the decision point.
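
      If you want to see the "absent indicator" problem for yourself, here is a rough sketch (assuming Node 18+ and its built-in dns module) that checks whether a domain publishes a DMARC policy at all:

        // Sketch: look up a domain's DMARC policy. An empty result is exactly the
        // kind of missing signal that humans reliably fail to notice.
        import { resolveTxt } from "node:dns/promises";

        async function dmarcPolicy(domain: string): Promise<string | null> {
          try {
            const records = await resolveTxt(`_dmarc.${domain}`);
            const flattened = records.map((chunks) => chunks.join(""));
            return flattened.find((r) => r.startsWith("v=DMARC1")) ?? null;
          } catch {
            return null; // NXDOMAIN or no TXT record: no policy at all
          }
        }

        console.log(await dmarcPolicy("npmjs.com")); // a policy string
        console.log(await dmarcPolicy("npmjs.org")); // null if no policy is published, the situation described above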

    • SoftTalker a day ago

      I think you just have to distrust email (or any other "pushed" messages), period. Just don't ever click on a link in an email or a message. Go to the site from your own previously bookmarked shortcut, or type in the URL.

      I got a fraud alert email from my credit card the other day. It included links to view and confirm/deny the suspicious charge. It all looked OK, the email included my name and the last digits of my account number.

      I logged in to the website instead. When I called to follow up I used the phone number printed on my card.

      Turns out it was a legit email, but you can't really know. Most people don't understand public-key signing well enough to rely on trusting only signed emails.

      Also, if you're sending emails like this to your users, stop including links. Instead, give them instructions on what to do on your website or app.

    • parliament32 20 hours ago

      > it's clear that the current 2FA approach isn't good enough. I don't know how to improve on it

      USE PASSKEYS. Passkeys are phishing-resistant MFA, which has been a US govt directive for agencies and suppliers for three years now[1]. There is no excuse for infrastructure as critical as NPM to still be allowing TOTP for MFA.

      [1]https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-0...

    • jonplackett 4 hours ago

      One issue is that many institutions - banks, tech giants - still send ridiculously spammy looking emails asking you to click a link and go verify something.

      All of this trains people into bad habits and makes it more likely they'll fall for a scam, because the pattern has been normalized.

    • neilv 7 hours ago

      > This is critical infrastructure, and it gets compromised way too often.

      Most times that I go to use some JS, Python, or (sometimes) Rust framework, I get a sinking feeling, as I see a huge list of dependencies scroll by.

      I know that it's a big pile of security vulnerabilities and supply-chain attack risk.

      Web development documentation that doesn't start with `npm install` seems rare now.

      Then there's the 'open source' mobile app frameworks that push you to use the framework on your workstation with some vendor's Web platform tightly in the loop, which all your code flows through.

      Children, who don't know how things work, will push any button. But experienced software engineers should understand the technology, the business context, and the real-world threats context, and at least have an uneasy, disapproving feeling every time they work on code like this.

      And in some cases -- maybe in all cases that aren't a fly-by-night, or an investment scam, or a hobby project on scratch equipment -- software engineers should consider pushing back against engaging in irresponsible practices that they know will probably result in compromise.

    • nikcub a day ago

      * passkeys

      * signed packages

      enforce it for the top x thousand most popular packages to start

      some basic hygiene about detecting unique new user login sessions would help as well

    • evantbyrne a day ago

      The email was sent from the 'npmjs dot help' domain. I'm not saying you're wrong, but basic due diligence would have prevented this. If not by email, the maintainer might still have been compromised over text or some other medium. And today, maintainers of larger projects can avoid these problems by not importing and auto-updating a bunch of tiny packages that look like they could have been lifted from Stack Overflow.

    • zokier a day ago

      SPF/DKIM already authenticate the sender. But that doesn't help if the user doesn't check who the email is from, and in that case GPG would not help much either.

    • thayne 17 hours ago

      > Of course that's all moot if the attacker also changes the email address.

      Maybe don't allow changing the email address right after changing 2fa?

      And if the email is changed, send a notification to the original address allowing the owner to dispute the change.

    • progx a day ago

      TRUE! A simple self-defined word in every email and you will immediately see whether the mail is fake or not.

    • ignoramous a day ago

      > Can package publishing platforms PLEASE start SIGNING emails

      I am skeptical this solves phishing rather than adding more woes (would you blindly click on links just because the email was signed?), but if we are going to suggest public-key cryptography, then: npm could let package publishers opt in to releasing only signed packages, and consumers could decide to depend only on signed packages.

      I guess, for attackers, that moves the target from compromising a publisher account to getting hold of the keys, but that's going to be impossible... as private keys never leave the SSM/HSM, right?

      > Get them to distrust any unsigned email, no matter how convincing it looks.

      For shops of any important consequence, email security is table stakes, at this point: https://www.lse.ac.uk/research/research-for-the-world/societ...

    • egorfine a day ago

      > You can't rely on people not falling for phishing 100% of the time

      1. I genuinely don't understand why.

      2. If it is true that people are the failing factor, then nothing is going to help. Hardware keys? No problem, a human will use the hardware key to sign a malicious action.

    • chatmasta 9 hours ago

      DuckDB is not critical infrastructure and I don't even think these billion-download packages are critical infrastructure. In software everything can be rolled back, and that's exactly what happened here. Yes, we were lucky that someone caught this rather sloppy exploit early, and (as you can verify via the wallet addresses) the attacker didn't make any money from it. And it could certainly have been worse.

      But I think calling DuckDB “critical infrastructure” is just a bit conceited. As an industry we really overestimate the importance of our software that can be deleted when it’s broken. We take ourselves way too seriously. In any worst case scenario, a technical problem can be solved with a people solution.

      If you want to talk about critical infrastructure then the xz backdoor was the closest we’ve caught to affecting it. And what came of that backdoor? Nothing significant… I suppose you could say there might be 100 xz-like backdoors lurking in our “critical infrastructure” today, but at least as long as they’re idle, it’s not actually a problem. Maybe one day China will invade Taiwan and we’ll see just how compromised our critical infrastructure has actually been this whole time…

  • diggan a day ago

    So far, it seems to be a bog-standard phishing email, with not much novelty or sophistication; it seems the people running the operation just got very lucky with their victims.

    I'm starting to think we haven't even seen the full scope of it yet: two authors are confirmed as compromised, so there must be 10+ out there we haven't heard of yet?

    • IshKebab a day ago

      Probably the differentiating factor here is that the phishing message was very plausible. Normally they're full of spelling mistakes and unprofessional grammar. The domain was also plausible.

      I think where they got lucky is

      > In hindsight, the fact that his browser did not auto-complete the login should have been a red flag.

      A huge red flag. I wonder if browsers should actually detect when you're manually entering login details for site A into site B, and give you an "are you sure this isn't phishing" warning or something?
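
      Even a crude heuristic along these lines would have fired here. Purely hypothetical sketch (`savedOrigins` is a made-up store mapping a saved credential to the sites it was saved for):

        // Hypothetical check: warn when a password saved for one origin is being
        // typed into a different origin.
        function looksLikePhishing(
          typedSecret: string,
          currentOrigin: string,
          savedOrigins: Map<string, string[]>,
        ): boolean {
          const origins = savedOrigins.get(typedSecret) ?? [];
          return origins.length > 0 && !origins.includes(currentOrigin);
        }

        const saved = new Map([["hunter2", ["https://www.npmjs.com"]]]);
        console.log(looksLikePhishing("hunter2", "https://npmjs.help", saved)); // true -> warn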

      I don't quite understand how the chalk author fell for it though. They said

      > This was mobile, I don't use browser extensions for the password manager there.

      So are there mobile password managers that don't even check the URL? I dunno how that works...

    • skeeter2020 a day ago

      >> So far, it seems to be a bog-standard phishing email

      The fact this is NOT the standard phishing email shows how low the bar is:

      1. the text of the email reads like one you'd get from npm in tone, format, and lack of obvious spelling and grammatical errors. It pushes you to move quicker than you normally might, without triggering the typical suspicions.

      2. the landing domain and website copy seem really close to legit, no obfuscated massive subdomain, no uncanny login screen, etc.

      For all the talk of AI disrupting tech, this is an angle where generative AI can have a massive impact by democratizing the global phishing industry. I do agree with you that there are likely many more authors who have been tricked, and we haven't seen the full fallout.

    • polynomial a day ago

      The article says the victim used 2FA. How did the attacker know their 2FA in order to send them a fake 2FA request?

  • eviks a day ago

    > This website contained a *pixel-perfect copy* of the npmjs.com website.

    Not sure why this emphasis matters: your brain doesn't hold a pixel-perfect image of the website, so you wouldn't know whether it's a perfect replica or not.

    Let the silicon dummies in the password manager do the matching, don't strain your brain with such games outside of entertainment

    • stanac a day ago

      My password manager is a separate app, so I always have to manually copy/paste the credentials. I believed that approach to be more secure; now I see it just trades one attack vector for another.

    • udev4096 20 hours ago

      A MITM proxy can replicate the whole site; it's almost impossible to distinguish from the real one other than by checking the domain.

  • 0xbadcafebee a day ago

    At least the third major compromise in two weeks. (last comment: https://news.ycombinator.com/item?id=45172225) (before that: https://news.ycombinator.com/item?id=45039764)

    Forget about phishing, it's a red herring. The actual solution to this is code signing and artifact signing.

    You keep a private key on your local machine. You sign your code and artifacts with it. You push them. The packages are verified by the end-user with your public key. Even if your NPM account gets taken over, the attacker does not have your private key, so they cannot publish valid packages as you.

    But because these platforms don't enforce code and artifact signing, and their tools aren't verifying those signatures, attackers just have to figure out a way to upload their own poison package (which can happen in multiple ways), and everyone is pwnd. There must be a validated chain of trust from the developer's desktop all the way to the end user. If the end user can't validate the code they were given was signed by the developer's private key, they can't trust it.
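
    The core mechanism is small. A minimal sketch with Node's built-in crypto, showing the idea only (not npm's actual signing scheme):

      // The publisher signs the artifact with a key that never leaves their
      // machine; consumers verify with a public key obtained out of band.
      import { generateKeyPairSync, sign, verify } from "node:crypto";

      const { privateKey, publicKey } = generateKeyPairSync("ed25519");

      const artifact = Buffer.from("contents of some-package-1.2.3.tgz"); // stand-in tarball
      const signature = sign(null, artifact, privateKey); // happens on the developer's machine

      // Happens on the consumer's machine, with the public key fetched via a separate channel:
      console.log(verify(null, artifact, publicKey, signature));                         // true
      console.log(verify(null, Buffer.from("tampered artifact"), publicKey, signature)); // false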

    This is already implemented in many systems. You can go ahead and use GitHub and 1Password to sign all your commits today, and only authorize unsealing of your private key locally when it's needed (git commits, package creation, etc). Then your packages need to be signed too, public keys need to be distributed via multiple paths/mirrors, and tools need to verify signatures. Linux distributions do this, Mac packages do, etc. But it's not implemented/required in all package managers. We need Npm and other packaging tools to require it too.

    After code signing is implemented, then the next thing you want is 1) sign-in heuristics that detect when unusual activity occurs and either notifies users or stops it entirely, 2) mandatory 2FA (with the option for things like passkeys with hardware tokens). This will help resist phishing, but it's no replacement for a secure software supply chain.

    • feross 19 hours ago

      Disclosure: I’m the founder of https://socket.dev

      Strongly agree on artifact signing, but it has to be real end-to-end. If the attacker can trigger your CI to sign with a hot key, you still lose. What helps: 1) require offline or HSM-backed keys with human approval for release signing, 2) enforce that published npm artifacts match a signed Git tag from approved maintainers, 3) block publishes after auth changes until a second maintainer re-authorizes keys. In today’s incident the account was phished and a new token was used to publish a browser-side wallet-drainer. Proper signing plus release approvals would have raised several hard gates.

    • smw 20 hours ago

      "2) mandatory 2FA (with the option for things like passkeys with hardware tokens)."

      No, with the _requirement_ for passkeys or hardware tokens!

  • hiccuphippo a day ago

    Maybe email software should add an option to make links unclickable, or show a box with the clear link (and highlight the domain) before letting the user go through it.

    They already make links go through redirects (to avoid referrer headers?) so it's halfway there. Just make the redirect page show the link and a go button instead of redirecting automatically. And it would fix the annoyance of not being able to see the real domain when you hover over the link.
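
    Even something this simple would help. Sketch (just the URL API plus an imaginary confirmation page):

      // Instead of auto-redirecting, show where the link really goes and require a click.
      function describeDestination(raw: string): string {
        const url = new URL(raw); // throws on malformed links
        return `This link goes to: ${url.hostname}\n${url.href}\n[Continue] [Go back]`;
      }

      console.log(describeDestination("https://npmjs.help/reset-2fa?user=duckdb_admin"));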

    • elric a day ago

      So many legit emails contain links that pass through some kind of URL shortener or tracker (like mailchimp does). People are being actively conditioned to ignore suspicious looking URLs.

  • vitonsky 21 hours ago

    Just for context: the DuckDB team consistently ignores basic security practices.

    The only way to install DuckDB on a laptop is to run

    `curl https://install.duckdb.org | sh`

    I've requested that they deliver the CLI as a standard package; they ignored it. Here is the thread: https://github.com/duckdb/duckdb/issues/17091

    As you can see, this isn't a single slip due to the "human factor"; DuckDB management consistently puts users at risk.

    • throwaway127482 21 hours ago

      Genuine question: why is `curl https://trusted-site.com | sh` a security risk?

      Fundamentally, doesn't the security depend entirely on whether https is working properly? Even the standard package repos are relying on https right?

      Like, I don't see how it's different than going to their website, copying their recommended command to install via a standard repo, then pasting that command into your shell. Either way, you are depending entirely on the legitimacy of their domain right?

    • artemisart 10 hours ago

      Do you know about other security issues? If it's only about curl | sh, it really isn't a problem: if the same website showed you a hash to check the file, the hash would be compromised at the same time as the file, and with a package manager you still end up executing code from an author who is free to download and execute anything else. Most package managers don't add security.

    • 0cf8612b2e1e 20 hours ago

      They also publish binaries on their GitHub if you prefer that.

  • weinzierl a day ago

    Is this related to npm debug and chalk packages being compromised?

    https://www.aikido.dev/blog/npm-debug-and-chalk-packages-com...

    • whizzter a day ago

      Seems to have been targeted by the same phishing campaign.

  • kyle-rb 20 hours ago

    I've been critical of blockchain in the past because of the lack of use cases, but I've gotta say crypto functions pretty well as an underlying bug bounty system. This probably could have been a much more insidious and well hidden attack if there wasn't a quick payoff route to take.

    • tripplyons 20 hours ago

      That argument only really makes sense if you assume the attackers aren't rational actors. If there was a better, more destructive way to profit from this kind of compromise, they would either do it or sell their access to someone who knew how to do it.

    • kyle-rb 18 hours ago

      Ah, apparently other people had thoughts along the same lines: https://news.ycombinator.com/item?id=45183029

  • greatgib 18 hours ago

    What is funny, again, is how many "young developers" poked fun at old-timer package managers like Debian for being so slow to release new versions of packages.

    But no one was ever rooted because of malware that was snuck into an official .deb package.

    That was the concept of "stable" in the good old days, when software was really an "engineering" field.

    • SahAssar 15 hours ago

      > But no one was ever rooted because of malware that was snuck into an official .deb package.

      We got pretty close with the whole xz thing. And people generated predictable keys due to a flaw in a Debian patch to OpenSSL.

      This stuff is hard. I'm not saying that npm is doing well, but it seems like no large ecosystem is doing exceptionally well either.

    • zahlman 6 hours ago

      > But no one was ever rooted because of malware that was snuck into an official .deb package.

      Sure. The tradeoff is that when there's a zero-day, you have to wait for Debian to fix it, or to approve and integrate the dev's fix. Finding malware is one thing; finding unintentional vulns is another.

  • dang 14 hours ago

    Related. Others?

    We all dodged a bullet - https://news.ycombinator.com/item?id=45183029 - Sept 2025 (273 comments)

    NPM debug and chalk packages compromised - https://news.ycombinator.com/item?id=45169657 - Sept 2025 (719 comments)

  • lima 3 hours ago

    Using Security Keys/FIDO2 instead of TOTP codes completely solves trivial phishing attacks like this one.

  • lovehashbrowns a day ago

    I guess it's hands off the npm jar for a week or three 'cause I am expecting a bunch more packages to be affected at this point.

  • theanonymousone a day ago

    Why do these things mostly happen to npm? Why not (as much) to PyPI or Maven? Or do they?

    • zahlman 6 hours ago

      Python has a heavy standard library, and the most popular third-party libraries tend to have simple dependency graphs because they can lean on that standard library so much. Many of them are also maintained under umbrellas such as the Python Software Foundation (for things like `requests`) or the Python Packaging Authority (for build tools etc.). So there are many eyes on everything all the time, those eyes mostly belong to security-conscious people, and they all get to talk to each other quite a bit.

      PyPI also now requires 2FA for everyone and makes other proactive attempts to hunt down malware (https://blog.pypi.org/posts/2023-05-25-securing-pypi-with-2f...) in addition to responding to reports.

      There was still a known compromise recently: https://blog.pypi.org/posts/2025-07-31-incident-report-phish... (`num2words` gets millions of monthly downloads, but still for example two orders of magnitude less than NumPy). Speaking of the communication I mentioned in the first paragraph, one of the first people reporting seeing the phishing email was a CPython core developer.

      Malware also still does get through regularly, in the form of people just uploading it. But there are automated measures against typo-squatting (you can't register a name that's too similar to existing names, or which is otherwise blacklisted) and for most random crap there's usually just no reason anyone would find out about it to install it.

    • johnisgood 21 hours ago

      Or Cargo. I compiled Zed in release mode and it pulled in 2000 dependencies. It does not fill me with confidence.

  • bakugo a day ago

    > According to the npm statistics, nobody has downloaded these packages before they were deprecated

    Is this actually accurate? Packages with weekly downloads in the hundreds of thousands, yet in the 4+ hours that the malicious versions were up for, not a single person updated any of them to the latest patch release?

    • hfmuehleisen a day ago

      DuckDB maintainer here, thanks for flagging this. Indeed the npm stats are delayed. We will know in a day or so what the actual count was. In the meantime, I've removed that statement.

    • feross 19 hours ago

      Disclosure: I’m the founder of https://socket.dev

      npm stats lag. We observed installs while the malicious versions were live for hours before removal. Affected releases we saw: duckdb@1.3.3, @duckdb/duckdb-wasm@1.29.2, @duckdb/node-api@1.3.3, @duckdb/node-bindings@1.3.3. Same payload as yesterday’s Qix compromise. Recommend pinning and avoiding those versions, reviewing diffs, and considering a temporary policy not to auto-adopt fresh patch releases on critical packages until they age.
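
      One way to rough out that "until they age" policy, sketched against the public registry metadata (the `time` field on registry.npmjs.org; the 72-hour window is just an example, not a recommendation):

        // Sketch of an "age gate" for new releases using public registry metadata.
        // Uses the global fetch available in Node 18+. Scoped package names need
        // their slash URL-encoded.
        const MIN_AGE_HOURS = 72; // arbitrary example window

        async function isOldEnough(pkg: string, version: string): Promise<boolean> {
          const res = await fetch(`https://registry.npmjs.org/${pkg}`);
          if (!res.ok) throw new Error(`registry lookup failed for ${pkg}`);
          const meta = (await res.json()) as { time: Record<string, string> };
          const published = meta.time[version];
          if (!published) throw new Error(`${pkg}@${version} not found`);
          const ageHours = (Date.now() - Date.parse(published)) / 36e5;
          return ageHours >= MIN_AGE_HOURS;
        }

        console.log(await isOldEnough("duckdb", "1.3.3")); // e.g. check a version before adopting it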

    • diggan a day ago

      I think that's pretty unlikely. I'm not even a high-profile npm author, and if I publish any npm package it ends up being accessed/downloaded within minutes of first publish, and the same goes for any update after that.

      I also know of projects that read the update feeds and kick off CI jobs whenever any dependency is updated, to automatically test version upgrades; surely at least one dependent of DuckDB is doing something similar.

  • karel-3d 6 hours ago

    npm actually does send emails like these. They are about setting up 2FA, though, and they never have this sense of urgency.

    "Hi, XXXX! It looks like you still do not have two-factor authentication (2FA) enabled on your npm account.

    To enable 2FA, please follow the instructions found here."

  • koakuma-chan a day ago

    Should enforce passkeys not 2FA

    • nodesocket a day ago

      I think just supporting yubikeys is sufficient.

    • cr125rider a day ago

      How is that different?

  • ritcgab a day ago

    For critical infra projects like this, making a release should require at least three signatures from different maintainers. In fact, I am surprised that this is not a common practice.

  • feross 20 hours ago

    Disclosure: I'm the founder of https://socket.dev.

    A few concrete datapoints from our analysis of this incident that may help cut through the hand-waving:

    1. This is the same campaign that hit Qix yesterday (https://socket.dev/blog/npm-author-qix-compromised-in-major-...). The injected payload is byte-for-byte behaviorally identical. It hooks fetch, XMLHttpRequest, and common wallet provider APIs and live-rewrites transaction payloads to attacker addresses across ETH, BTC, SOL, TRX, LTC, BCH. One tell: a bundle of very distinctive regexes for chain address formats, including multiple Solana and Litecoin variants.

    2. Affected versions and timing (UTC) that we verified:

    - duckdb@1.3.3 at 01:13

    - @duckdb/duckdb-wasm@1.29.2 at 01:11

    - @duckdb/node-api@1.3.3 at 01:12

    - @duckdb/node-bindings@1.3.3 at 01:11

    Plus low-reach test shots: prebid@10.9.1, 10.9.2 and @coveops/abi@2.0.1

    3. Payout so far looks small. Tracked wallets sum to roughly $600 across chains. That suggests speed of discovery contained damage, not that the approach is harmless.

    What would actually move the needle:

    === Registry controls ===

    - Make passkeys or FIDO2 mandatory for high-impact publisher accounts. Kill TOTP for those tiers.

    - Block publishing for 24 hours after 2FA reset or factor changes. Also block after adding a new automation token unless it is bound by OIDC provenance.

    - Require signed provenance on upload for popular packages. Verify via Sigstore-style attestations. Reject if there is no matching VCS tag.

    - Quarantine new versions from being treated as “latest” for automation for N hours. Exact-version installs still work. This alone cuts the blast radius of a hijack.

    === Team controls ===

    - Do not copy-paste secrets or 2FA. Use autofill and origin-bound WebAuthn.

    - Require maker-checker on publish for org-owned high-reach packages. CI must only build from a signed tag by an allowed releaser.

    - Pin and lock. Use `npm ci`. Consider an internal proxy that quarantines new upstream versions for review.

    === Detection ===

    - Static heuristics catch this family fast. Wallet address regex clusters and network shims inside non-crypto packages are a huge tell. If your tooling sees that in a data engine or UI lib, fail the build.
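
    To make that concrete, here is a toy version of the heuristic (illustrative patterns only, not our production rules):

      // Toy "wallet regex cluster" check: several distinct chain address patterns
      // showing up in a non-crypto package is a strong tell.
      const CHAIN_ADDRESS_PATTERNS: RegExp[] = [
        /\b0x[a-fA-F0-9]{40}\b/,                  // ETH-style
        /\b(bc1|[13])[a-zA-HJ-NP-Z0-9]{25,39}\b/, // BTC-style
        /\b[1-9A-HJ-NP-Za-km-z]{32,44}\b/,        // Solana-style base58
      ];

      function looksLikeWalletDrainer(source: string): boolean {
        const hits = CHAIN_ADDRESS_PATTERNS.filter((re) => re.test(source)).length;
        return hits >= 2; // multiple chains matched in one file -> fail the build
      }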

    Lastly, yes, training helps, but the durable fix is making the easy path the safe path.

  • ptrl600 a day ago

    Is there a way to configure npm so that it only installs packages that are, like, a week old?

    • feross 19 hours ago

      Disclosure: I’m the founder of https://socket.dev

      A week waiting period would not be enough. On average, npm malware lingers on the registry for 209 days before it's finally reported and removed.

      Source: https://arxiv.org/abs/2005.09535

    • HatchedLake721 21 hours ago

      Don't auto-install the latest versions; pin a version down to the patch level and use package-lock.json.

  • ebfe1 a day ago

    Is it just me who thinks this could have been prevented if npm admins put in some sort of cool-off period, so that new versions of packages only become downloadable "x" hours after being published? That way the maintainer would get notification emails and could react immediately. And if a fix is urgent, perhaps there could be a process for npm admins to approve bypassing the publication cool-off period.

    Disclaimer: I don't know enough about the npm/Node.js community, so I might be completely off the mark here.

    • herpdyderp a day ago

      If I was forced to wait to download my own package updates I would simply stop using npm altogether and use something else.

    • kaelwd 21 hours ago

      npm could also flag releases that don't have a corresponding GitHub tag (for packages that are hosted on GitHub); most of these attacks publish directly to npm without any git changes.
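
      A rough sketch of that check, using the public registry and GitHub APIs (package and repo names are just examples; unauthenticated GitHub calls are rate-limited and only the first page of tags is fetched):

        // Sketch: list npm versions that have no matching git tag on the repo.
        async function versionsWithoutTags(pkg: string, owner: string, repo: string): Promise<string[]> {
          const meta = (await (await fetch(`https://registry.npmjs.org/${pkg}`)).json()) as {
            versions: Record<string, unknown>;
          };
          const tags = (await (
            await fetch(`https://api.github.com/repos/${owner}/${repo}/tags?per_page=100`)
          ).json()) as { name: string }[];
          const tagNames = new Set(tags.map((t) => t.name.replace(/^v/, "")));
          return Object.keys(meta.versions).filter((v) => !tagNames.has(v));
        }

        // Example (the repo name is a guess; adjust for the real package-to-repo mapping):
        console.log(await versionsWithoutTags("duckdb", "duckdb", "duckdb-node"));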

    • robjan a day ago

      They could definitely add a maker-checker process (similar to code review) for new versions and make it a requirement for public projects with x number of downloads per week.

    • hiccuphippo a day ago

      They could force release candidates that the package managers don't automatically update to, but that let researchers analyse the packages before the real release.

  • skylurk a day ago

    I hate the janky password manager browser extensions but at least they make it hard to make this mistake.

    • smw 20 hours ago

      And passkeys or hardware tokens (FIDO/yubikeys) make it impossible

  • hoppp 13 hours ago

    Why the hell do we use npm?

    Every dependency is a backdoor; it only takes a small slip-up to make one malicious.

  • xyst 10 hours ago

    > ... One of the maintainers read through this text and found it somewhat reasonable. He followed the link (now defunct) to a website hosted under the domain npmjs.help. This website contained a pixel-perfect copy of the npmjs.com website. He logged in using the duckdb_admin user and password, followed by 2FA. Again, the user profile, settings etc. were a perfect copy of the npmjs.com website including all user data. As requested by the email, he then re-set the 2FA setup.

    It is absolutely wild that this did not raise _any_ red flags for this person.

    red flag: random reset request for 2FA ???

    red flag: npmjs.help ???

    red flag: username and password not autofilled by the browser ???

    red flag: copy-pasting the u/p combo into the phishing site

    If _developers_ can't even get this right, why do we expect dumb users to get it right? We are so cooked.

  • jeswin 21 hours ago

    Publishing could require clicking an email confirmation link, sent by npm.

    • petcat 21 hours ago

      It's all pointless theater because people want less friction to do what they want, not more. They'll just automate away the friction points like clicking an email confirmation link.

  • udev4096 21 hours ago

    > This website contained a pixel-perfect copy of the npmjs.com website

    This should not be considered high-effort or a sophisticated attack. The attacker probably used a MITM proxy, which can easily replicate every part of your site with very little initial configuration. Evilginx is the most popular one I can think of.

  • cefboud a day ago

    > malicious code to interfere with cryptocoin transactions

    Any idea what the interference was?

  • polynomial a day ago

    Serious question: how did the attacking site (npmjs.help) know the victim's 2FA? I.e., how did they know what phone number to send the 2FA request to?

    • feross 19 hours ago

      It was a relay. The fake site forwarded actions to the real npm, so the legit 2FA challenge was triggered by npm and the victim entered the code into the phishing page. The attacker captured it and completed the session, then added an API token and pushed malware. Passkeys or FIDO2 would have failed here because the credential is bound to the real domain and will not sign for npmjs.help.
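
      The origin binding is enforced by the browser, not the user. Roughly, on the client (simplified WebAuthn call; the challenge really comes from the server):

        // The credential was registered with rpId "npmjs.com". A page served from
        // npmjs.help cannot assert that rpId: the browser rejects the call before
        // any secret is produced, so there is nothing for the proxy to relay.
        const assertion = await navigator.credentials.get({
          publicKey: {
            challenge: new Uint8Array(32), // placeholder; supplied by the server in practice
            rpId: "npmjs.com",
            userVerification: "required",
          },
        });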

    • xx_ns a day ago

      It acted as a proxy for the real npm site, which was the one that sent the 2FA request, and it intercepted the code when the user entered it.

  • mediumsmart a day ago

    Comes with the territory, considering that npm is de facto the number one enshittification dependency by now. But no worries - this will scale beautifully.

    downvotes appreciated but also happy to see one or two urls that would prove me wrong

    • eviks a day ago

      In the spirit of a substantive discussion could you likewise share a couple that would prove you right?

    • hiccuphippo a day ago

      I think the downvotes are because enshittification is a different thing, intentionally done by the developers themselves.

  • arewethereyeta a day ago

    > An attacker published new versions of four of duckdb’s packages that included malicious code to interfere with cryptocoin transactions

    How can anyone publish their packages?

    • OtherShrezzing a day ago

      The attacker emailed a maintainer from a legitimate looking email address. The maintainer clicked the link and reset their credentials on a legitimate looking website. The attacker then signs into the legitimate duckdb account and publishes their new package.

      This is the second high-profile instance of the technique this week.

    • pneff a day ago

      There is a detailed postmortem in the linked ticket explaining exactly how this happened.