NPM debug and chalk packages compromised

(aikido.dev)

1321 points | by universesquid 1 day ago

80 comments

  • junon 1 day ago
    Hi, yep I got pwned. Sorry everyone, very embarrassing.

    More info:

    - https://github.com/chalk/chalk/issues/656

    - https://github.com/debug-js/debug/issues/1005#issuecomment-3...

    Affected packages (at least the ones I know of):

    - ansi-styles@6.2.2

    - debug@4.4.2 (appears to have been yanked as of 8 Sep 18:09 CEST)

    - chalk@5.6.1

    - supports-color@10.2.1

    - strip-ansi@7.1.1

    - ansi-regex@6.2.1

    - wrap-ansi@9.0.1

    - color-convert@3.1.1

    - color-name@2.0.1

    - is-arrayish@0.3.3

    - slice-ansi@7.1.1

    - color@5.0.1

    - color-string@2.1.1

    - simple-swizzle@0.2.3

    - supports-hyperlinks@4.1.1

    - has-ansi@6.0.1

    - chalk-template@1.1.1

    - backslash@0.2.1
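
    A quick way to check whether a project pulled in any of these, sketched in Python (this walks the `packages` map of a v2/v3 `package-lock.json`; the bad versions are the ones listed above):

```python
import json

# Known-bad name@version pairs, taken from the list above
COMPROMISED = {
    ("ansi-styles", "6.2.2"), ("debug", "4.4.2"), ("chalk", "5.6.1"),
    ("supports-color", "10.2.1"), ("strip-ansi", "7.1.1"),
    ("ansi-regex", "6.2.1"), ("wrap-ansi", "9.0.1"),
    ("color-convert", "3.1.1"), ("color-name", "2.0.1"),
    ("is-arrayish", "0.3.3"), ("slice-ansi", "7.1.1"),
    ("color", "5.0.1"), ("color-string", "2.1.1"),
    ("simple-swizzle", "0.2.3"), ("supports-hyperlinks", "4.1.1"),
    ("has-ansi", "6.0.1"), ("chalk-template", "1.1.1"),
    ("backslash", "0.2.1"),
}

def find_compromised(lock: dict) -> list:
    """Return node_modules paths in a v2/v3 lockfile resolving to a bad version."""
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Keys look like "node_modules/chalk" or "node_modules/a/node_modules/b";
        # the root package has the key "".
        name = path.rsplit("node_modules/", 1)[-1]
        if (name, meta.get("version")) in COMPROMISED:
            hits.append(path)
    return hits

if __name__ == "__main__":
    with open("package-lock.json") as f:
        for path in find_compromised(json.load(f)):
            print("possible hit:", path)
```

    Run from a project directory, it prints any dependency paths whose resolved version is known-bad; an empty output just means the lockfile doesn't pin one of these exact versions.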

    It looks and feels a bit like a targeted attack.

    Will try to keep this comment updated as long as I can before the edit expires.

    ---

    Chalk has been published over. The others remain compromised (8 Sep 17:50 CEST).

    NPM has yet to get back to me. My NPM account is entirely unreachable; forgot password system does not work. I have no recourse right now but to wait.

    Email came from support at npmjs dot help.

    Looked legitimate at first glance. Not making excuses, just had a long week and a panicky morning and was just trying to knock something off my list of to-dos. Made the mistake of clicking the link instead of going directly to the site like I normally would (since I was mobile).

    Just NPM is affected. Updates to be posted to the `/debug-js` link above.

    Again, I'm so sorry.

    • 33a 1 day ago
      We also caught this right away at Socket,

      https://socket.dev/blog/npm-author-qix-compromised-in-major-...

      While it sucks that this happened, the good thing is that the ecosystem mobilized quickly. I think these sorts of incidents really show why package scanning is essential for securing open source package repositories.

      • Yoric 1 day ago
        So how do you detect these attacks?
        • 33a 1 day ago
          We use a mix of static analysis and AI. Flagged packages are escalated to a human review team. If we catch a malicious package, we notify our users, block installation and report them to the upstream package registries. Suspected malicious packages that have not yet been reviewed by a human are blocked for our users, but we don't try to get them removed until after they have been triaged by a human.

          In this incident, we detected the packages quickly, reported them, and they were taken down shortly after. Given how high profile the attack was we also published an analysis soon after, as did others in the ecosystem.
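
          The triage flow described above can be sketched roughly like this (purely illustrative Python, not Socket's actual code; all names here are made up):

```python
from dataclasses import dataclass
from enum import Enum, auto

class Verdict(Enum):
    ALLOW = auto()
    BLOCK_PENDING_REVIEW = auto()   # blocked for users, not yet reported upstream
    BLOCK_AND_REPORT = auto()       # human-confirmed: report to the registry

@dataclass
class ScanResult:
    flagged: bool            # static analysis and/or AI raised a signal
    human_confirmed: bool    # a reviewer verified the package is malicious

def triage(result: ScanResult) -> Verdict:
    """Mirror the described flow: flag -> block for users -> escalate to humans."""
    if not result.flagged:
        return Verdict.ALLOW
    if result.human_confirmed:
        return Verdict.BLOCK_AND_REPORT
    return Verdict.BLOCK_PENDING_REVIEW
```

          The key design point is the asymmetry: suspected packages are blocked for customers immediately, but takedown requests to the registry wait for human confirmation to keep false positives from causing churn upstream.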

          We try to be transparent about how Socket works. We've published the details of our systems in several papers, and I've also given a few talks on how our malware scanner works at various conferences:

          * https://arxiv.org/html/2403.12196v2

          * https://www.youtube.com/watch?v=cxJPiMwoIyY

          • ATechGuy 22 hours ago
            You rely on LLMs riddled with hallucinations for malware detection?
            • jmb99 22 hours ago
              I'm not exactly pro-AI, but even I can see that their system clearly works well in this case. If you tune the model to favour false positives, with a (quick) human review step, I can imagine your response time being cut from days to hours (and your customers getting their updates that much faster).
              • ATechGuy 10 hours ago
                You are assuming that they build their own models.
            • Culonavirus 20 hours ago
              He literally said "Flagged packages are escalated to a human review team." in the second sentence. Wtf is the problem here?
              • ATechGuy 10 hours ago
                What about packages that are not "flagged"? There could be hallucinations when deciding to (or not) "flag packages".
                • orbital-decay 10 hours ago
                  >What about packages that are not "flagged"?

                  You can't catch everything with normal static analysis either. LLM just produces some additional signal in this case, false negatives can be tolerated.

                  • ATechGuy 10 hours ago
                    static analysis DOES NOT hallucinate.
                    • Twirrim 9 hours ago
                      So what? They're not replacing standard tooling like static analysis with it. As they mention, it's being used as additional signal alongside static analysis.

                      There are cases an LLM may be able to catch that their static analysis can't currently catch. Should they just completely ignore those scenarios, thereby doing the worst thing by their customers, just to stay purist?

                      What is the worst case scenario that you're envisioning from an LLM hallucinating in this use case? To me the worst case is that it might incorrectly flag a package as malicious, which given they do a human review anyway isn't the end of the world. On the flip side, you've got LLM catching cases not yet recognised by static analysis, that can then be accounted for in the future.

                      If they were just using an LLM, I might share similar concerns, but they're not.

            • wiseowise 18 hours ago
              > We use a mix of static analysis and AI. Flagged packages are escalated to a human review team.

              “Chat, I have reading comprehension problems. How do I fix it?”

            • Mawr 20 hours ago
              "LLM bad"

              Very insightful.

        • veber-alex 1 day ago
          AI based code review with escalation to a human
          • Yoric 1 day ago
            I'm curious :)

            Does the AI detect the obfuscation?

            • 33a 1 day ago
              It's actually pretty easy to detect that something is obfuscated, but it's harder to prove that the obfuscated code is actually harmful. This is why we still have a team of humans review flagged packages before we try to get them taken down, otherwise you would end up with way too many false positives.
              • Yoric 19 hours ago
                Yeah, what I meant is that obfuscation is a strong sign that something needs to be flagged for review. Sadly, there's only a thin line between obfuscation and minification, so I was wondering how many false positives you get.

                Thanks for the links in your other comment, I'll take a look!

            • justusthane 1 day ago
              Probably. It’s trivial to plug some obfuscated code into an LLM and ask it what it does.
              • spartanatreyu 1 day ago
                Yeah, but just imagine how many false positives and false negatives there would be...
      • hsbauauvhabzb 1 day ago
        [flagged]
        • josephg 22 hours ago
          Apparently it found this attack more or less immediately.

          It seems strange to attack a service like this right after it actively helped keep people safe from malware. I'm sure it's not perfect, but it sounds like they deserve to take a victory lap.

          • hsbauauvhabzb 21 hours ago
            I don’t think celebrating a company who has a distinct interest in prolonging a problem while they profit off it is a good thing, no.
            • josephg 20 hours ago
              They're profiting off helping to solve the problem through early warning and detection. And by keeping their customers safe from stuff like this.

              Seems good to me. I want more attention and more tooling around this problem. You seem mad at them for helping solve a real problem?

        • fn-mote 1 day ago
          You could at least offer some kind of substantive criticism of the tool (“socket”).
          • hsbauauvhabzb 1 day ago
            Do I need any? Automated tools cannot prevent malicious code being injected. While they can make attempts to evaluate common heuristics and will catch low-hanging malware, they are not foolproof against highly targeted attacks.

            Either way, the parent post is clearly ambulance chasing rather than having a productive conversation, which should really be about whether or not automatically downloading and executing huge hierarchical trees of code is absolutely fucking crazy, rather than a blatant attempt to make money off an ongoing problem without actually solving anything.

            • 33a 1 day ago
              When we find malware on any registry (npm, rubygems, pypi or otherwise), we immediately report it to the upstream registry and try to get it taken down. This helps reduce the blast radius from incidents like this and mitigates the damage done to the entire ecosystem.

              You can call it ambulance chasing, but I think this is a good thing for the whole software ecosystem if people aren't accidentally bundling cryptostealers in their web apps.

              And regarding not copying massive trees of untrusted dependencies: I am actually all for this! It's better to have fewer dependencies, but this is also not how software works today. Given the imperfect world we have, I think it's better to at least try to do something to detect and block malware than just complain about npm.

              • hsbauauvhabzb 1 day ago
                So instead you prolong the problem while making money? Nice!
                • jondwillis 20 hours ago
                  I’m all for thinking about second, or third, or fourth order effects of behavior, but unless you have proof that Socket is doing something like lobbying that developers keep using NPM against their own best interests, frankly, I don’t know what your point here is.
            • josephg 22 hours ago
              > Do I need any? Automated tools cannot prevent malicious code being injected. While they can make attempts to evaluate common heuristics and will catch low-hanging malware, they are not foolproof against highly targeted attacks.

              So just because a lock isn't 100% effective at keeping out criminals we shouldn't lock our doors?

              • hsbauauvhabzb 21 hours ago
                Im not sure how that relates to the company ambulance chasing on what should be a public service announcement without a shade of advertising.

                That’s like lock companies parading around when their neighbour is murdered during a burglary but they weren’t because they bought a Foobar(tm) lock.

            • LocalH 18 hours ago
              The more tools that exist to help find vulnerabilities, the better, as long as they're not used in a fully automated fashion. Human vetting is vital, but using tools to alert humans to such issues is a boon.
        • hsbauauvhabzb 1 day ago
          For those interested, points associated with this post spiked to at least 4 then dropped back to one. Make of that what you will.
    • winwang 1 day ago
      Just want to agree with everyone who is thanking you for owning up (and so quickly). Got phished once while drunk in college (a long time ago), could have been anyone. NPM being slowish to get back to you is a bit surprising, though. Seems like that would only make attacks more lucrative.
      • internetter 1 day ago
        in general npm does a not-too-great job with these things
        • tripplyons 1 day ago
          Remember, NPM stands for Now Part of Microsoft!

          (Microsoft owns GitHub, which owns NPM.)

          • thayne 1 day ago
            Which means they don't have the excuse of being a volunteer effort to not be on top of this. MS has plenty of resources.
            • dabockster 1 day ago
              If you're running this kind of infrastructure online these days, you have every right to require payment somehow. Don't work for free.
      • sneak 1 day ago
        Can happen to anyone… who doesn’t use password manager autofill and unphishable 2FA like passkeys.

        Most people who get phished aren’t using password managers, or they would notice that the autofill doesn’t work because the domain is wrong.

        Additionally, TOTP 2FA (numeric codes) are phishable; stop using them when U2F/WebAuthn/passkeys are available.

        I have never been phished because I follow best practices. Most people don’t.

        • junon 1 day ago
          I use a password manager. I was mobile, the autofill stuff isn't installed as I don't use it often on my phone.

          In 15 years of maintaining OSS, I've never been pwned, phished, or anything of the sort.

          Thank you for your input :)

          • yawaramin 20 hours ago
            I'm angry about this. Large megacorps with the budget of medium-sized countries allocate the minimum amount of budget to maintain their auth systems and still allow the use of phishable auth methods. If npm disabled passwords and forced people to use passkeys, this huge problem just disappears tomorrow.

            But instead, we're left with this mess where ordinary developers are forced to deal with the consequences of getting phished.

            • hdjrudni 18 hours ago
              Passkeys can be a pain in the ass too. Evidently I set up my Yubikey with Github at some point, which is fine if I'm at my desktop where my key is plugged in, but if I want to sign in on mobile.... now what? I just couldn't log in on mobile for months until I realized there's a button on there somewhere that's like "use different 2fa", but then what was even the point of having a key registered if it can be bypassed?
              • dchest 18 hours ago
                While you can set up passkeys with a YubiKey, the most common intended use case is key pairs that are syncable via your Apple/Google/password manager account. So, once you add a passkey, you'll be able to sign in on mobile with it automatically.
              • yawaramin 10 hours ago
                > but if I want to sign in on mobile.... now what?

                Just set up a new passkey on the mobile device.

              • nialv7 11 hours ago
                You can use YubiKeys for both passkeys and password+2FA; this way you aren't bypassing anything. And btw, you can get USB-C YubiKeys so you can plug one into your phone. If even that's not an option, you can get a USB-C to USB-A adapter.
              • sneak 17 hours ago
                You can use software u2f (iCloud supports this), you don’t need Yubikeys.

                Also, Yubikeys work on phones just fine, via both NFC and USB.

          • ants_everywhere 1 day ago
            sounds like you should use it on your phone then
          • bingabingabinga 1 day ago
            > In 15 years of maintaining OSS, I've never been pwned, phished, or anything of the sort.

            Well, until now.

          • sneak 17 hours ago
            I never copy and paste passwords. Any time you find yourself wanting to do that, alarm bells should be ringing.

            Password managers can’t help you if you don’t use them properly.

            Spotify, as well as other apps, reads (and presumably uploads) your clipboard. Autofill is your primary defense against phishing, as you (and hopefully some others) learned this week.

            • johnisgood 6 hours ago
              Do not give them permission to your clipboard. It is possible today. I copy and paste passwords and I clear the clipboard afterwards, and I do not use junk like Spotify, and were I to use Spotify, it would be through the browser, not the application. Were it the application, it would be firejailed to oblivion.

              It is possible to restrict clipboard access when running applications inside Firejail, i.e. Firejail allows you to restrict access to X11 and Wayland sockets, which prevents the sandboxed application from reading or writing to the system clipboard. See: "--x11=none", "--private=...", "--private-tmp", and so forth. You can run a GUI app with isolated clipboard via "firejail --x11=xvfb app".

              For Wayland, you should block access to the Wayland socket by adding "--blacklist=/run/user/*/wayland-*".

              I do not use autofill on desktop at all. I use it on Android, however.

            • jasode 16 hours ago
              >Autofill is your primary defense against phishing,

              The autofill feature is not 100% reliable for various reasons:

              (1) some companies use different domains that are legitimate but don't exactly match the url in the password manager. Troy Hunt, the security expert who runs https://haveibeenpwned.com/ got tricked because he knew autofill is often blank because of legit different domains[1]. His sophisticated knowledge and heuristics of how autofill is implemented -- actually worked against him.

              (2) autofill doesn't work because of technical bugs in the plugin, HTML elements detection, interaction/incompatibility with new browser versions, etc. It's a common complaint with all password plugins:

              https://www.google.com/search?q=1password+autofill+doesn%27t...

              https://www.1password.community/discussions/1password/1passw...

              https://github.com/bitwarden/clients/issues?q=is%3Aissue%20a...

              ... so in the meantime while the autofill is broken, people have to manually copy-paste the password!

              The real-world experience of flaky and glitchy autofill distorts the mental decision tree.

              Instead of, "hey, the password manager didn't autofill my username/password?!? What's going on--OH SHIT--I'm being phished!" ... it becomes "it didn't autofill in the password (again) so I assume the Rube-Goldberg contraption of pw manager browser plugin + browser version is broken again."

              Consider the irony of how password managers not being perfectly reliable causes sophisticated technical minds to become susceptible to social engineering.

              In other words, password managers inadvertently create a "Normalization of Deviance" : https://en.wikipedia.org/wiki/Normalization_of_deviance

              [1] >Thirdly, the thing that should have saved my bacon was the credentials not auto-filling from 1Password, so why didn't I stop there? Because that's not unusual. There are so many services where you've registered on one domain (and that address is stored in 1Password), then you legitimately log on to a different domain. -- from: https://www.troyhunt.com/a-sneaky-phish-just-grabbed-my-mail...

              • mdaniel 10 hours ago
                I want to live in a world where the 1Password CEO makes a formal apology for this failure, and applies the necessary internal pressure to treat any "autofill does not work" as a P0

                The number of comments in this thread (about a malware attack that happened basically because of 1Password) where people mention their bad experience with 1Password is really stretching the "no such thing as bad publicity" theory

          • typpilol 1 day ago
            I just don't get how you didn't look for an announcement about npm resetting 2fa. Especially when you get a random reset
            • acdha 1 day ago
              Because you’re one person with a job which isn’t security, and the world is full of legitimate warnings from companies telling you that you must do something by an arbitrary deadline?

              They screwed up, but we have thousands of years of evidence that people make mistakes even when they really know better and the best way to prevent that is to remove places where a single person making a mistake causes a disaster.

              On that note, how many of the organizations at risk do you think have contributed a single dollar or developer-hour supporting the projects they trust? Maybe that’s where we should start looking for changes.

        • acdha 23 hours ago
          I also use WebAuthn where possible but wouldn't be so cocky. The most likely reason we haven't been phished is that we haven't been targeted by a sophisticated attacker.

          One side note: most systems make it hard to completely rely on WebAuthn. As long as other options are available, you are likely vulnerable to an attack. It’s often easier than it should be to get a vendor to reset MFA, even for security companies.

          • typpilol 18 hours ago
            But this wasn't even really a spear-phishing attack.

            It was the generic phishing email you see in every single Corp 101 security course.

            • acdha 13 hours ago
              The attacker did have a great domain name choice, didn’t overuse it to the point where it got on spam block lists, and got them at a moment of distraction, so it worked. It’s really easy to look at something in a training exercise and say “who’d fall for that” without thinking about what happens when you’re not at your best in a calm, focused state.

              My main point was simply that the better response isn’t to mock them but to build systems which can’t fail this badly. WebAuthn is great, but you have to go all in if you want to prevent phishing. NPM would also benefit immensely from putting speed bumps and things like code signing requirements in place, but that’s a big usability hit if it’s not carefully implemented.

              • typpilol 4 hours ago
                I wouldn't consider a .help domain to be a great choice.

                I've literally never seen a support email, or any email at all, from a .help domain.

                I'm not mocking them, just trying to understand how so many red flags slipped past:

                - The domain name

                - No auto-fill

                - An unannounced MFA reset

                - Etc...

                My point is that nothing could have saved this person except extreme security measures. There's literally no conclusion here besides:

                1. Lock everything down so hard that it's extremely inconvenient, just to prevent mistakes 99% of people don't make. (How many npm packages have been hijacked vs. the total? Less than 1%.)

                2. This person was always going to be a victim eventually... And that's a hard pill to swallow. For me and the maintainer. Being in network security it's my actual nightmare scenario.

                The only lesson to be learned is you need extreme security measures for even the most experienced of internet users. This wasn't your grandma clicking a link, it's a guy who's been around for decades in the online / coding world.

                It also makes me suspicious, but that's a road I'd rather keep to myself

          • sneak 17 hours ago
            The failure here was that his password manager was not configured and he manually copied and pasted the credentials into the wrong webpage.

            A password manager can’t manage passwords if you don’t configure it and use it.

            • acdha 13 hours ago
              Yes, and we know that’s a thing which people are trained to do by all of the sites which are sloppy about their login forms or host names so we should assume that attackers can trick people into doing it, even many people who think they are too smart for it. Hubris is quite a boon for attackers.
        • grumple 12 hours ago
          You can use password manager autofill and hardware 2fa and still get phished. All it takes is you rushing, not paying attention, clicking on a link, and logging in (been caught by my own security team doing this). Yes, in an ideal world you're going to be 100% perfect. The world is not ideal, unfortunately. I don't have a solution, but demanding humans behave perfectly in order to remain secure is not a reasonable ask.
      • wer232essf 1 day ago
        It’s way too easy to slip up once and end up on the wrong side of a phishing attempt — doesn’t matter if you’re drunk in college, tired after work, or just rushing through a busy day. These attacks are designed to catch people in those exact moments. The important part is how quickly someone owns up to it and takes steps to mitigate the damage, and in that sense I think the OP deserves a lot of credit.

        What does worry me, though, is exactly what you pointed out about NPM’s response time. Given how central NPM packages are to the entire JavaScript ecosystem, you’d expect their security processes to be lightning fast. Every hour of delay can mean thousands (or millions) of downloads happening with potentially compromised code. And as you said, that just increases the incentive for attackers to target maintainers in the first place.

    • hackerindio 1 day ago
      Hey, no problem, man. You do a lot for the community, and it's not all your fault. We learn from our mistakes. I was thinking of having a public fake profile to avoid this type of attack, but I'm not sure how it would work with git's tracking capabilities. Probably keep the real one only internal between you & NPM and have some fake ones open to the public, but I'm not sure, it's just a vague idea. Thanks for taking responsibility and working on fixing it ASAP. God bless you.
      • junon 1 day ago
        Unfortunately wouldn't have helped. They skimmed my npm-only address directly from the public endpoint.
      • Imustaskforhelp 1 day ago
        Wow, that's actually kinda genius, not gonna lie. Honestly, I would love to see some 2FA or some other way to prevent pwning. Maybe sign-in with Google, with all of its flaws, still might make sense given that it effectively brings 2FA.

        But google comes with its own privacy nightmares.

    • Cthulhu_ 1 day ago
      Tbh, it's not your fault per se; everybody can fall for phishing emails. The issue, IMO, lies with npmjs which publishes to everyone all at the same time. A delayed publish that allows parties like Aikido and co to scan for suspicious package uploads first (e.g. big changes in patch releases, obfuscated code, code that intercepts HTTP calls, etc), and a direct flagging system at NPM and / or Github would already be an improvement.
      • junon 1 day ago
        Being able to sign releases would help, too. I would happily have that enabled since I'm always publishing from one place.
        • Yoric 1 day ago
          Wouldn't they have been able to change your key if they had compromised your entire npm account?

          Also, junon.support++ – big thanks for being clear about all this.

          • veber-alex 1 day ago
            Hmm, maybe npm needs to do the same thing the iPhone does now.

            If you change your key you can't use it for like 12 hours or something?

          • junon 1 day ago
            Yes though in theory my public key would have been published elsewhere at least for verification. Valid point though, yes they would have been able to do that.
          • dabockster 1 day ago
            For this kind of infrastructure, some kind of real-world verification may be necessary as well. Like having human-run phone verification (not AI, an actual call center) using information intentionally kept offline, for securing the more widespread and mission-critical packages.

            They can't pwn what they can't find online.

          • jmb99 22 hours ago
            Push to many repos with a brand new key would (should) trigger red flags.
            • Yoric 19 hours ago
              Good point. But how should the red flag materialize?
        • OptionOfT 1 day ago
          Provenance can be added to NPM https://docs.npmjs.com/generating-provenance-statements

          So if the hacker did an npm publish from local it would show up.
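
          A minimal sketch of what that looks like in a GitHub Actions publish workflow (the workflow layout and the NPM_TOKEN secret name here are illustrative, not prescribed; see the npm docs linked above for the real requirements):

```yaml
name: publish
on:
  release:
    types: [published]
permissions:
  id-token: write   # lets npm request an OIDC token to mint the attestation
  contents: read
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          registry-url: https://registry.npmjs.org
      - run: npm publish --provenance --access public
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```

          Because the attestation is tied to the CI run, a local `npm publish` from a stolen session can't produce it, which is the signal consumers would notice.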

        • josephg 22 hours ago
          Yeah; I wish provenance was more widely used. I think about this a lot for mobile apps. If you take an opensource iOS app like signal, you can read the source code on github. But there's actually no guarantee that the code on github corresponds in any way to the app I download from the app store.

          With nodejs packages, I can open up node_modules and read the code. But packages get a chance to run arbitrary code on your computer after installation. By the time you can read the source code, it may be too late.
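
          One partial mitigation for the install-time window, as a sketch (it can break packages whose postinstall step builds native code), is telling npm not to run dependency lifecycle scripts via `.npmrc`:

```ini
# .npmrc: don't run preinstall/install/postinstall scripts from dependencies
ignore-scripts=true
```

          The same thing can be done per invocation with `npm install --ignore-scripts`.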

    • zachrip 1 day ago
      Thanks for sounding the alarm. I've sent an abuse email to porkbun to hopefully get the domain taken down.
      • junon 1 day ago
        Thank you, I appreciate it! I did so as well and even called their support line to have them escalate it. Hopefully they'll treat this as an urgent thing; I'd imagine I'm far from the only one getting these.
        • zachrip 1 day ago
          It's down, so there's some good news. Probably worth submitting to IC3 as well.
    • zachleat 1 day ago
      Yo, someone at npm needs to unpublish simple-swizzle@0.2.3 IMMEDIATELY. It’s still actively compromised.
      • junon 1 day ago
        It's been almost two hours without a single email back from npm. I am sitting here struggling to figure out what to do to fix any of this. The packages that have Sindre as a co-publisher have been published over but even he isn't able to yank the malicious versions AFAIU.

        If there's any ideas on what I should be doing, I'm all ears.

        EDIT: I've heard back, they said they're aware and are on it, but no further details.

        • alper 16 hours ago
          NPM is a GitHub company, and when there was a relatively serious attack on GitHub Actions a while back, there was also pretty much zero response from them.

          Github is SOC2 compliant, but that of course means nothing really.

        • lambda 1 day ago
          They have yanked the bad version of simple-swizzle by now, which was the last of the packages that I was tracking.

          It took them quite a long time to do so.

        • 9dev 1 day ago
          My god. The npm team should urgently review their internal processes. These two hours of neglect will cost a lot of money downstream. At this stage, they're acting nothing short of irresponsibly.
        • dabockster 1 day ago
          I haven't published anything to npm in over a decade. But if you still have access to git, a cli, or a browser where the login is cached and you can access it, you should do so and either take the code down or intentionally sabotage/break it.
    • pryelluw 1 day ago
      Thank you for your service.

      Please take care and see this as things that happen and not your own personal failure.

    • cataflam 1 day ago
      Hey, you're doing an exemplary response, transparent and fast, in what must be a very stressful situation!

      I figure you aren't about to get fooled by phishing anytime soon, but based on some of your remarks and remarks of others, a PSA:

      TRUSTING YOUR OWN SENSES to "check" that a domain is right, or an email is right, or the wording has some urgency or whatever is BOUND TO FAIL often enough.

      I don't understand how most of the anti-phishing advice focuses on that, it's useless to borderline counter-productive.

      What really helps against phishing :

      1. NEVER EVER login from an email link. EVER. There are enough legit and phishing emails asking you to do this that it's basically impossible to tell one from the other. The only way to win is to not try.

      2. U2F/Webauthn key as second factor is phishing-proof. TOTP is not.

      That is all there is. Any other method, any other "indicator" helps but is error-prone, which means someone somewhere will get phished eventually. Particularly if stressed, tired, or in a hurry. It just happened to be you this time.

      Good luck and well done again on the response!

      • graemep 6 hours ago
        > NEVER EVER login from an email link. EVER

        Logging in using one-off email links (instead of username + password) is increasingly common, which means it's sometimes the only option.

        • cataflam 2 hours ago
          In that case:

          1. You just requested it. I'm not saying never to click links in transactional emails you requested; you still need to click those verify-email links.

          2. It replaces entering your password, so you're not entering your password via a link from an email, which is the truly dangerous thing.

        • hirako2000 6 hours ago
          At least you requested that email in order to log in. The chance of a phishing mail arriving at exactly that moment is insignificant. OP is referring to communications that are a one-way street, from the (pseudo) organisation to you.
          • graemep 4 hours ago
            It's a lot lower risk, but it's still not great IMO. Email really isn't designed for this, and it trains people to use links to log in.
            • kngspook 3 hours ago
              Yeah, I hate these. It's also a very non-ergonomic way to sign in. I wish those companies would redirect those efforts to passkeys.
              • hirako2000 1 hour ago
                It's very ergonomic for those who discovered the internet via an iPhone and who think Gmail is email. They can't remember their passwords, and wouldn't know where or how to recover most cryptographic factors. They have an email account they tend to have access to, they use magic links to log in, and they are very happy with that.

                Not promoting the pattern, I also find it worrying the majority of internet users have no basic understanding of authentication and the risk for their digital identity.

      • diggan 1 day ago
        Or you know, get a password manager like the rest of us. If your password manager doesn't show the usual autofill, since the domain is different than it should, take a step back and validate everything before moving on.

        Have the TOTP in the same/another password manager (after considering the tradeoffs) and that can also not be entered unless the domain is right :)

        • SchemaLoad 22 hours ago
          I feel like it's extremely common for the autofill to not work for various reasons even when you aren't being phished. I have to manually select the site to fill fairly often, especially inside apps where the password manager doesn't seem to match the app to the website password.

          Passkeys seem like the best solution here, where you physically cannot fall for a phishing attack.

          • vaylian 18 hours ago
            > I feel like it's extremely common for the autofill to not work for various reasons even when you aren't being phished.

            This is how Troy Hunt got phished. He was already very tired after a long flight, but his internal alarm bells didn't ring loudly enough when the password manager didn't fill in the credentials. He was already used to autofill not always working.

            • junon 7 hours ago
              This is why I haven't bothered with them (the browser extensions; I have used password managers for years and years) and thus why they weren't there to protect against the attack.
          • diggan 15 hours ago
            > I feel like it's extremely common for the autofill to not work for various reasons even when you aren't being phished

            I dunno, it mostly seems to fail when companies change their field names/IDs, or with 3rd-party authentication, where you need to manually add domains. Otherwise my password manager (1Password) works everywhere I have an account, except my previous bank, which was stuck in the 90s and disallowed pasting passwords. If you find that your password manager doesn't work with most websites (since it's "extremely common"), you might want to look into a different one; even the Firefox+Linux combo works extremely well with 1Password. Not affiliated, just a happy user of several years.

            > Passkeys seem like the best solution here where you physically can not fall for a phishing attack.

            Yeah, I've looked into Passkeys but without any migration strategy or import/export support (WIP last time I looked into it), it's not really an alternative just yet, at least for me personally. I have to be 100% sure I can move things when the time ultimately comes for that.

            • kngspook 3 hours ago
              My understanding is that the people behind passkeys are working on an import/export solution. Who knows when it'll happen though.

              For now, when companies let me have multiple passkeys, that's sufficient for me. I put one on my Apple Keychain and one in 1Password.


            • mdaniel 10 hours ago
              I'm glad you've had such good experience with autofill consistently working for you. My experience has been closer to that of the sibling comments: 60/40 so I often just give up and copy-paste. I actually did try jettisoning 1Password for Proton Pass but that was even worse, so I went back

              > without any migration strategy or import/export support

              Since you're already a 1Password user, I wanted to draw your attention to the "Show debugging tools" in the "Settings > Advanced" section. From that point, you can say "Copy Item JSON" and it will give you the details you would want for rescuing the Passkey. Importing it into something else is its own journey that I can't help with

                {
                  "overview": {
                    "passkey": {
                      "credentialId": "...",
                      "rpId": "example.com",
                      "userHandle": "..."
                    },
                    ...
                  },
                  "details": {
                    "passkey": {
                      "type": "webauthn",
                      "createdAt": 175.......,
                      "privateKey": "eyJ...",
                      "userHandle": "..."
                    }
                  }
                }
              I would guess their "op" CLI would allow similar, but I don't have the magic incantation to offer, whereas that Copy JSON is painless
        • cataflam 1 day ago
          I mostly agree and I do use one.

          You only need to read the whole thread, however, to see reasons why this would sometimes not be enough: sometimes the password manager does not auto-fill, so the user can think it's one of those cases; or they're on mobile and don't have the extension there; or...

          As a matter of fact, he does use one, and that didn't save him; see: https://news.ycombinator.com/item?id=45175125

          • eviks 1 day ago
            > sometimes the password manager does not auto-fill

            So pick one that does? That's like its top 2 feature

            > he does use one

            He doesn't, since he has no autofill installed, so he loses the key security + convenience benefit of automatic domain matching.

            • acdha 23 hours ago
              > So pick one that does? That's like its top 2 feature

              Still doesn't work 100% of the time, because half of the companies on earth devote their developer time to breaking 1995-level forms. That's why every popular password manager has a way to fill passwords for other domains, why people learn to use that feature, and why phishers have learned to convince people to use that feature.

              WebAuthn prevents phishing. Password managers reduce it. That's the difference between being bulletproof like Superman and being a guy in a vest.

              • eviks 20 hours ago
                You don't need 100%, just a high enough frequency that you wouldn't get used to dismissing the fail on auto pilot. Perfect shouldn't be the enemy of the good?
              • sunaookami 21 hours ago
                Then good password managers will still show you only the logins for that domain. If the login is on another domain, then you would have saved it anyway when first logging in/registering, and if the site moved, then you can get suspicious and check carefully first.
                • acdha 12 hours ago
                  All password managers allow copy-paste (which is what happened here) and the popular ones all offer you the ability to search and fill passwords from other domains. It's important to understand why they do, because it's also why these attacks continue to work: the user _thinks_ they are working around some kind of IT screwup, and 9 times out of 10 (probably closer to 99 out of 100) that's correct. Every marketing-driven hostname migration, every SSO failure, every front-end developer who breaks autofill, every “security expert” who was an accountant last year saying password managers are a vulnerability helps train users to think that it's not suspicious when you have to search for a different variation of the hostname or copy-paste a password.

                  That's why WebAuthn doesn't allow that as a core protocol feature, preventing both this attack and shifting the cost of unnecessary origin changes back to the company hosting the site. Attacking this guy for making a mistake in a moment of distraction is like prosecuting a soldier who was looking the other way when someone snuck past: wise leaders know that human error happens and structure the system to be robust against a single mistake.

                • voxic11 20 hours ago
                  What are good password managers for chrome and Firefox on Android?
                  • Ghoelian 18 hours ago
                    Personally a big fan of 1Password. On the topic of autofill, the only website it sometimes won't fill is Reddit, which you know, whatever, I never go there anymore anyway.

                    As a developer I also love their ssh and gpg integrations, very handy.

                    I do get it for free from work, but if I had to choose one myself that I'd have to pay for, I'd probably still pick 1Password.

                    • mdaniel 10 hours ago
                      > I do get it for free from work, but if I had to choose one myself that I'd have to pay for, I'd probably still pick 1Password.

                      I wanted to highlight that "getting it for free from work" isn't a sweetheart deal offered just to OP, but a feature of 1Password for Teams, meaning all employees of a business that uses 1Password automatically have a Family license for use at home https://support.1password.com/link-family/

                      And, for clarity, it's merely a financial relationship: the business cannot manage your Family account, cannot see its contents, and if you have a separation event you can retain the Family account forever in a read only capacity or you can take over the payment (or, heh, I presume move to another employer that also uses 1Password) and nothing changes for your home passwords

              • vinterson 20 hours ago
                Given the recent vulnerability of password manager extensions on desktop leaking passwords to malicious sites, I have disabled autofill on desktop... and autofill didn't work for me on ycombinator on mobile... Autofill is too unreliable.
            • voxic11 21 hours ago
              Mobile autofill requires you to make other security compromises.
              • eviks 20 hours ago
                Which ones, and how do they compare to this one?
            • y1n0 1 day ago
              He didn't say it didn't have the autofill feature, he said sometimes it doesn't work. I've experienced this pretty routinely with two different managers.
              • eviks 23 hours ago
                Yes he did, read again

                > I was mobile, the autofill stuff isn't installed

        • FooBarWidget 18 hours ago
          I wish it were that easy. 1Password autofill on Android Chrome broke for me a month ago. Installed all updates, checked settings, still nothing. Back to phishing-prone copy-paste.
    • Goofy_Coyote 23 hours ago
      Absolutely best response here.

      Folks from multi-billion dollar companies with multimillion dollar packages should learn a few things from this response.

    • kidk 1 day ago
      Could happen to any of us. Thanks for reacting so quickly!!
    • SkyPuncher 1 day ago
      The fact that NPM's entire ecosystem relies on this not happening regularly is very scary.

      I'm extremely security conscious, and that phishing email could have easily gotten me. All it takes is one slip-up. Tired, stressed, distracted. Boom, compromised.

    • winterqt 1 day ago
      Thank you for the swift and candid response, this has to suck. :/

      > The author appears to have deleted most of the compromised package before losing access to his account. At the time of writing, the package simple-swizzle is still compromised.

      Is this quote from TFA incorrect, since npm hasn’t yanked anything yet?

      • junon 1 day ago
        The quote was probably added recently. It's not entirely correct, as I have not regained access; nothing happening to the packages is of my own doing.

        npm does appear to have yanked a few, slowly, but I still don't have any insight as to what they're doing exactly.

    • aftbit 1 day ago
      Didn't your password manager notice that npmjs dot help was not a legit domain and avoid auto-filling there?
    • jap 1 day ago
      Could happen to anyone, many thanks for addressing this quickly.
    • jacquesm 1 day ago
      I hate that kind of email even when sent out legitimately. Google does this crap all the time, pretty much conditioning their customers to click those links. And if you're really lucky, it's from some subdomain they never bothered advertising as legit.

      Great of you to own up to it.

      • antod 1 day ago
        Atlassian and MS are terrible for making email notifications that are really hard to distinguish from phishing emails: hard-to-identify, undocumented random domains in long redirect chains, obfuscated links, etc.
      • mikeryan 1 day ago
        I've started ignoring these types of emails, waiting to do any sort of credential reset until I get an alert when I log in (or try to), for just this reason.
    • mkfs 9 hours ago
      The 2FA/TOTP security theater was partly to blame for this.
      • lurker_jMckQT99 8 hours ago
        How so? Has the author mentioned somewhere that he was tricked into providing 2FA codes / had any sort of 2FA enabled at all?
    • BlackjackCF 1 day ago
      Thank you for being quick and upfront about this!
    • g42gregory 1 day ago
      I am not a very sophisticated npm user on macOS, but I installed a bunch of packages for Claude Code development. How do we check if a computer has a problem?

      Do we just run:

          npm list -g   # for global installs
          npm list      # for local installs

      And check if any packages appear that are on the above list?

      Thanks!
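
      If you want to automate the comparison, one rough approach is to scan a project's package-lock.json against the affected versions (sketch only; the version table below is abbreviated from the list at the top of the thread, so compare against the full list):

```javascript
// Sketch: flag known-compromised versions in a package-lock.json.
// The table is abbreviated -- check against the full list above.
const BAD_VERSIONS = {
  'chalk': '5.6.1',
  'debug': '4.4.2',
  'ansi-styles': '6.2.2',
  'strip-ansi': '7.1.1',
  'color-convert': '3.1.1',
};

function findCompromised(lock) {
  const hits = [];
  for (const [path, meta] of Object.entries(lock.packages || {})) {
    // Lockfile keys look like "node_modules/chalk" (possibly nested).
    const name = path.split('node_modules/').pop();
    if (BAD_VERSIONS[name] === meta.version) hits.push(`${name}@${meta.version}`);
  }
  return hits;
}

// Usage (hypothetical):
// const lock = JSON.parse(require('fs').readFileSync('package-lock.json', 'utf8'));
// console.log(findCompromised(lock));
```

      `npm ls <package>` also shows which installed tree paths pull in a given package, which helps once you know a name from the list.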

    • n8m8 11 hours ago
      Thanks for leaving a transparent response with what happened, how you responded, and what you're doing next, and for concisely taking accountability. Great work!
    • rootlocus 1 day ago
      > Made the mistake of clicking the link instead of going directly to the site like I normally would (since I was mobile).

      Does anyone know how this attack works? Is it a CSRF against npmjs.com?

      • junon 1 day ago
        That was the low-tech part of their attack, and was my fault - both for clicking on it and for my phrasing.

        It wasn't a single-click attack, sorry for the confusion. I logged into their fake site with a TOTP code.

        • yard2010 1 day ago
          This is a clear example that this can happen to anyone.

          Sorry for what you're going through.

          • SchemaLoad 22 hours ago
            This is why Passkeys are getting pushed right now. They make it physically impossible to sign in to a phishing site.
      • veber-alex 1 day ago
        Fake site.

        You log in with your credentials; the attacker logs in to the real site.

        You get an SMS with a one-time code from the real site and enter it into the fake site.

        The attacker takes the code and finishes the login to the real site.

      • smeijer87 1 day ago
        Probably just a fake site.
    • baloki 17 hours ago
      Happens to the best of people. Appreciate your fast and open response.
    • AsmodiusVI 1 day ago
      You're doing what you can, it's not easy. Thanks for handling this so well.
    • sidcool 21 hours ago
      Thanks for your response. But this does call for eliminating single points of failure in security.
    • joshmanders 1 day ago
      Insanely well crafted phishing, godspeed man.
      • junon 1 day ago
        Thanks Josh, appreciate it <3
    • nodesocket 1 day ago
      What did the phishing email say that made you click and login?
      • junon 1 day ago
        That it had been more than 12 months since last updating them. npm has done outreach about security changes/enhancements in the past, so this didn't really raise a flag for me.

        Screenshot here: https://imgur.com/a/q8s235k

        • rollcat 1 day ago
          @everyone in the industry, everywhere:

          Urgency is poison.

          Please, please put a foot in the door whenever you see anyone trying to push this kind of sh*t on your users. Make one month's advance notice the gold standard.

          I see this pattern in scam mail (including physical) all the time: stamp an unreasonably short notice and expect the mark to panic. This scam works - and this is why legit companies that try this "in good faith" should be shamed for doing it.

          Actual alerts: just notify. Take immediate, preventive, but non-destructive action, and help the user figure out how to right it - on their own terms.

          • notmyjob 1 day ago
            Agree, but this example wasn't even that aggressive in its urgency, and OP said they were merely ticking things off the to-do list, not feeling alarmed by the urgency. The problem is email as it's used currently. The solution is to not use email.
            • niwtsol 1 day ago
              The email says accounts will start locking Sept 10th and it was sent Sept 8th, so that's a 48-hour window before an account would be locked. That is urgency, IMO.
              • notmyjob 1 day ago
                Fair enough, was just thinking about many low effort scams that have “EMERGENCY!!! ACT NOW!!!” in red boldface. This, by being slightly? less aggressive is actually less likely to trip my “this is phishing” detector. Obviously ymmv.
            • lelanthran 1 day ago
              > The problem is email as it’s used currently. The solution is to not use email.

              No. The problem is unsigned package repositories.

              The solution is to tie a package to an identity using a certificate. The quickest way I can think of would be requiring packages to be linked to a domain, so that the repository can always check incoming changes to packages by verifying the incoming signature against the domain certificate.

              • benchloftbrunch 11 hours ago
                As long as you're OK with self signed certificates or PGP keys, I'd be on board with this.

                I really, really dislike the idea of using TLS certificates as we know them for this purpose, because the certificate authority system is too centralized, hierarchical, and bureaucratic, tightly coupled to the DNS.

                That system is great for the centralized, hierarchical, bureaucratic enterprises who designed it in the 90s, but would be a pain in the ass for a solo developer, especially with the upcoming change to 45 day lifetimes.

                • lelanthran 10 hours ago
                  > As long as you're OK with self signed certificates or PGP keys, I'd be on board with this.

                  I am with PGP but more wary of self-signed certs, though even self-signed certs allow mass revocation of packages when an author's cert is compromised.

              • rollcat 14 hours ago
                > The solution is to tie a package to an identity using a certificate.

                Identity on the Internet is a lie. Nobody knows you're a dog.

                The solution is to make security easy and accessible, so that the user can't be confused into doing the insecure thing.

                • lelanthran 12 hours ago
                  > Identity on the Internet is a lie.

                  What do you think HTTPS is?

                  • mdaniel 10 hours ago
                    Transport Layer Security, and it has nothing to do with identity. Take, for example, the perfectly valid certificate that was issued for npmjs[.]help, which unquestionably does not belong to Microsoft/GitHub. Hell, even the certificate for npmjs.com is 'O=Google Trust Services', which doesn't sound like any of the business entities one would expect to own that cert.
              • cluckindan 1 day ago
                And one pwned domain later, we are back to square one.
                • lelanthran 20 hours ago
                  > And one pwned domain later, we are back in square one.

                  1. It's an extra step: before you pwn the package, you need to pwn a domain.

                  2. When a domain is pwned, the packages it signs can be revoked with a single command.

              • dabockster 1 day ago
                That wouldn't work against a really sophisticated attacker. Especially for something that's clearly being maintained for free by one overworked person in their spare time (yet again).

                You'd need some kind of offline verification method as well for these widely used infrastructure libraries.

                • lelanthran 17 hours ago
                  > That wouldn't work against a really sophisticated attacker.

                  Nothing "really works" against a sophisticated hacker :-/ Doesn't mean that "defense in depth" does not apply.

                  > You'd need some kind of offline verification method as well for these widely used infrastructure libraries.

                  I don't understand why this is an issue, or even what it means: uploading a new package to the repository requires the contributor to be online anyway. The new/updated/replacement package will have to be signed. The signature must be verified by the upload script/handler. The verification can be done using the X509 certificate issued for the domain of the contributor.

                  1. If the contributor cannot afford the few dollars a year for a domain, they are extremely vulnerable to a supply chain attack anyway (e.g. by selling the maintenance of the package to a bad actor), and you shouldn't trust them.

                  2. If the contributor's domain gets compromised you only have to revoke that specific certificate, and all packages signed with that certificate, in the past or in the future, would not be installable.

                  As I have repeatedly said in the past, NPM (and the JS tools development community in general) had no adults in the room during the design phase. Everything about JS stacks feels like it was designed by children who had never programmed in anything else before.

                  It's a total clown show.

                  • benchloftbrunch 11 hours ago
                    > X509 certificate

                    It should be a PGP or SSH key, absolutely not an X509 certificate (unless you allow self signed).

                    Personal identity keys should be fully autonomous and not contingent on the formal recognition of any external authority.

                  • idiotsecant 13 hours ago
                    If only they would have had the benefit of you being around to do all that work with your glorious hindsight.
                    • lelanthran 13 hours ago
                      > If only they would have had the benefit of you being around to do all that work with your glorious hindsight.

                      They didn't need me; plenty of repositories doing signed packages existed well before npm was created.

                      Which is why I likened them to a bunch of kids - they didn't look around at how the existing repos were designed, they just did the first thing that popped into their head.

                      • idiotsecant 12 minutes ago
                        On the other hand, they did the actual work when nobody else did. It's so easy to take potshots when you've never done anything consequential enough for the results to matter as much as they do for npm.
            • naikrovek 1 day ago
              > The solution is to not use email.

              and use what? instant message? few things lack legitimacy more than an instant message asking you to do something.

              Links in email are much more of a problem than email itself. So tempting to click. It's right there, you don't have to dig through bookmarks, you don't have to remember anything, just click. A link is seductive.

              the actual solution is to avoid dependencies whenever possible, so that you can review them when they change. You depend on them. You ARE reviewing them, right? Fewer things to depend on is better than more, and NPM is very much an ecosystem where one is encouraged to depend on others as much as possible.

              • notmyjob 7 hours ago
                Depends what you use it for. I don't think email is a single thing in that regard. For example, I've used it as a backup method for important files and also as a second factor. Those are wholly different things that warrant different solutions. The majority of email volume is not person-to-person communication but part of some corporation's/spammer's/scammer's business model, which at best (like my bank's) is using it to shift liability away from themselves onto consumers, and at worst is attempting to defraud me of all I own. It's still useful in business, maybe, but I'm pretty sure Teams/Slack/... will win eventually.
              • rollcat 14 hours ago
                > the actual solution is to avoid dependencies whenever possible, so that you can review them when they change.

                If you're publishing your software: you can't "not" depend on some essential service like source hosting or library index.

                > You ARE reviewing them, right?

                Werkzeug is 20kloc and is considered "bare bones" of Python's server-side HTTP. If you're going to write a complex Python web app using raw WSGI, you're just going to repeat their every mistake.

                While at it: review Python itself, GCC, glibc, maybe Linux, your CPU? Society depends on trust.

        • SSLy 1 day ago
          Can you post full message headers somewhere? It'd be interesting which MTA was involved in delivery from the sender's side.
          • junon 1 day ago
            • nsdfg 1 day ago
            • SSLy 1 day ago
              let's see the header of interest:

                   Received: from npmjs.help by smtp.mailtrap.live
            • alexellisuk 1 day ago
              How did simply opening this email in something like Gmail or a desktop client result in it being able to compromise NPM packages under your control?

              I'm just curious - and as a word of warning to others so we can learn. I may be missing some details, I've read most of the comments on the page.

              • junon 1 day ago
                I clicked the link like a genius :)
                • osa1 1 day ago
                  I don't understand. The link could've come from anywhere (for example from a HN comment). How does just clicking on it give your package credentials to someone else? Is NPM also at fault here? I'd naively think that this shouldn't be possible.

                  For example, GitHub asks for 2FA when I change certain repo settings (or when deleting a repo etc.) even when I'm logged in. Maybe NPM needs to do the same?

                  • koil 1 day ago
                    As OC mentioned elsewhere, it was a targeted TOTP proxy attack.
                    • hughw 1 day ago
                      So, he clicked the link and then entered his correct TOTP? How would manually typing the URL instead of clicking the link have mitigated this?
                      • Mogzol 1 day ago
                        They wouldn't have manually typed the exact URL from the email, they would have just typed in npmjs.com which would ensure they ended up on the real NPM site. Or even if they did type out the exact URL from the email, it would have made them much more likely to notice that it was not the real NPM URL.
                  • dboreham 1 day ago
                    OP entered their credentials and TOTP code, which the attacker proxied to the real npmjs.com

                    FWIW npmjs does support FIDO2 including hard tokens like Yubikey.

                    They do not force re-auth when issuing an access token with publish rights, which is probably how the attackers compromised the packages. iirc GitHub does force re-auth when you request an access token.

                    • osa1 1 day ago
                      > They do not force re-auth when issuing an access token with publish rights, which is probably how the attackers compromised the packages

                      I'm surprised by this. Yeah, GitHub definitely forces you to re-auth when accessing certain settings.

                • alexellisuk 1 day ago
                  :-( How did the link hijack your password/2fa? Or did you also enter some stuff on the form?
        • bflesch 1 day ago
          Thanks for sharing, I've created an OTX entry for this: https://otx.alienvault.com/pulse/68bf031ee0452072533deee6
        • twoodfin 1 day ago
          Perfect example of why habituating users to renewing credentials (typically password expiration) is a terrible practice.
          • NooneAtAll3 1 day ago
            Is there actual habituation?

            That message feels like it could work as a first-time request as well.

            • twoodfin 1 day ago
              We should be immediately suspicious when we get any solicitation to "renew" something "expired" in a security domain. Swapping un-compromised secrets is essentially always more risky than leaving them be.

              Regardless of whether the real NPM had done this in the past, decades of dumb password expiration policies have trained us that requests like this are to be expected rather than suspected.

            • nicoburns 1 day ago
              If legitimate companies didn't do this, then the email would be suspicious.
          • anonymars 1 day ago
            Frustrating that you're being downvoted

            https://pages.nist.gov/800-63-FAQ/#q-b05

        • IshKebab 1 day ago
          And then what happens when you click the link? Wouldn't your password manager fail to auto fill your details?
          • junon 1 day ago
            This was mobile, I don't use browser extensions for the password manager there.
        • lifeinthevoid 1 day ago
          That green checkmark ... what application is this?
          • junon 1 day ago
            Migadu. The tooltip hovering over it shows:

                dkim=pass header.d=smtp.mailtrap.live header.s=rwmt1 header.b=Wrv0sR0r
          • markasoftware 1 day ago
            Check marks in email clients usually mean DKIM / other domain verification passed. The attacker truly owns npmjs.help, so a checkmark is appropriate.
        • nodesocket 1 day ago
          Yikes, looks legit. Curious: what are the destination addresses? I'd like to monitor them to see how much coin they are stealing.
          • FergusArgyll 1 day ago
            0x66a9893cC07D91D95644AEDD05D03f95e1dBA8Af

            0x10ed43c718714eb63d5aa57b78b54704e256024e

            0x13f4ea83d0bd40e75c8222255bc855a974568dd4

            0x1111111254eeb25477b68fb85ed929f73a960582

            0xd9e1ce17f2641f24ae83637ab66a2cca9c378b9f

            Source: https://github.com/chalk/chalk/issues/656#issuecomment-32670...

            • dbdr 1 day ago
              Next comment:

              > Those are swap contract addresses, not attacker addresses. E.g. 0x66a9893cC07D91D95644AEDD05D03f95e1dBA8Af the Uniswap v4 universal router addr.

              > Every indication so far is that the attacker stole $0 from all of this. Which is a best-case outcome.

              • FergusArgyll 1 day ago
                Oh, that makes much more sense - thanks!
          • mcintyre1994 1 day ago
            There's a lot, looks like they start at line 103 in the gist here: https://gist.github.com/sindresorhus/2b7466b1ec36376b8742dc7...
          • hunter2_ 1 day ago
            In terms of presentation, yes. In terms of substance, short deadlines are often what separate phishing from legitimate requests.
            • mrguyorama 1 day ago
              There are NO reliable indicators, because every single one of these "legit requests don't ..." recommendations has been done by a local bank trying to get its customers to do something.

              My local credit union sent me a "please change your password" email from a completely unassociated email address with a link to the change password portal. I emailed them saying "Hey it looks like someone is phishing" and they said, "nope, we really, intentionally, did this"

              Companies intentionally withhold warning emails as late as possible to cause more people to incur late fees. So everyone is used to "shit, gotta do this now or get screwed"

              You can't hope to have good security when everyone's money is controlled by organizations that actively train people to have bad OPSEC or risk missing rent.

              • cataflam 1 day ago
                > There is NO reliable indicators

                Completely agree. The only reliable way is to never use an email/SMS link to login, ever.

                • hunter2_ 8 hours ago
                  Or go ahead and use them, but abort if your password manager doesn't auto fill. Such abort scenarios include not only a password field without auto fill, but also a total lack of password field (e.g., sites that offer OTP-only authentication), since either way you don't have your password manager vetting the domain.
              • hunter2_ 1 day ago
                I agree: any of the potential indicators of phishing (whether it's poor presentation, incorrect grammar, tight deadlines, unusual "from" addresses, unusual domains in links, etc.) can easily have false positives which unfortunately dull people's senses. That doesn't mean they can't continue to be promulgated as indicators of possible (not definite) phishing, though.

                I used the word "often" rather than "always" for this reason.

    • mfedderly 1 day ago
      I'm sorry that you're having to go through this. Good luck sorting out your account access.

      I actually got hit by something that sounds very similar back in July. I was saved by my DNS settings where "npNjs dot com" wound up on a blocklist. I might be paranoid, but it felt targeted and was of a higher level of believability than I'd seen before.

      I also more recently received another email asking for an academic interview about "understanding why popular packages wouldn't have been published in a while" that felt like elicitation or an attempt to get publishing access.

      Sadly both of the original emails are now deleted so I don't have the exact details anymore, but stay safe out there everyone.

    • HelloWorldH 1 day ago
      Thank god I misspelled "npm run strat"! Might have been owned.
    • svendroevskaeg 18 hours ago
      So by "Just NPM is affected" does that mean yarn is unaffected?
      • junon 16 hours ago
        No, anything that connects to npm as an authoritative source for packages. Yarn, pnpm, and npm clients all do.
    • komali2 1 day ago
      `error-ex` 1.3.3, already removed from npm https://github.com/Qix-/node-error-ex/issues/17
    • tomkarho 1 day ago
      Hang in there buddy. These things happen.
    • senectus1 21 hours ago
      we're only human mate, great job responding to it!

      thanks for your efforts!

    • cyanydeez 1 day ago
      maybe you should work with feross to make a website/API that simply answers "true/false" to "can I safely update my dependencies right now": an out-of-band way to mark compromised packages (the current version, or all versions thereof).
    • sim7c00 1 day ago
      man. anyone and everyone can get phished in a targeted attack. good luck on the cleanup and thanks for being forward about it.

      want to stress it can happen to everyone. no one has perfect opsec or tradecraft as a one-man show. it's simply not possible. only luck gets one through, and that often enough runs out.

    • naikrovek 1 day ago
      mistakes happen. owning them doesn't always happen, so well done.

      phishing is too easy. so easy that I don't think the completely unchecked growth of ecosystems like NPM can continue. metastasis is not healthy. there are too many maintainers writing too many packages that too many others rely on.

    • dboreham 1 day ago
      Sorry to be dumb, but can you expand a bit on "2FA reset email..." so the rest of us know what not to do?
      • junon 1 day ago
        Ignore anything coming from npm you didn't expect. Don't click links, go to the website directly and address it there. That's what I should have done, and didn't because I was in a rush.

        Don't do security things when you're not fully awake, too. Lesson learned.

        The email was a "2FA update" email telling me it's been 12 months since I updated 2FA. That should have been a red flag but I've seen similarly dumb things coming from well-intentioned sites before. Since npm has historically been in contact about new security enhancements, this didn't smell particularly unbelievable to my nose.

        The email went to the npm-specific inbox, which is another way I can verify these things. That address can be queried publicly, but I don't generally count on spammers finding it; they tend to scrape git commit addresses etc.

        The domain name was `npmjs dot help` which obviously should have caught my eye, and would have if I was a bit more awake.

        The actual in-email link matched what I'd expect on npm's actual site, too.

        I'm still trying to work out exactly how they got access. I don't believe they technically got a real 2FA code from the actual site. EDIT: Yeah they did, never mind. It was a TOTP proxy attack, or whatever you'd call it.

        Will post a post-mortem when everything is said and done.

        • mkfs 9 hours ago
          > because I was in a rush

          That's how they get you.

        • dboreham 1 day ago
          I see (I think): they tricked you into entering a TOTP code into their site, which they then proxied to the real site, thereby authenticating as your account. Is that correct?
          • sugarpimpdorsey 1 day ago
            It only proves that TOTP is useless against phishing.
            • goku12 1 day ago
              Every day brings me another reason to ask the question: "Why the hell did they throw away the idea of mutual TLS?" They went on to invent mobile OTP, HOTP, TOTP, and FIDO-U2F, and finally came full circle by reinventing the same concept in a more complex incarnation: Passkeys.
              • tpxl 1 day ago
                Works this way for my government and my bank. I was given a cert matching my real name and the login just asks for my cert and pulls me through (with additional 2FA for the bank). Pretty amazing if you ask me.
                • goku12 1 day ago
                  Which government is this, if I may ask?
                  • SahAssar 1 day ago
                    I'm going to guess estonia which has had this since mid 2000's IIRC.
                    • jve 18 hours ago
                      Latvia has it too. We have ID cards, which are smartcards; we use them to set up an authentication app that lets us authenticate to online services and even do remote transactions like selling a house (that's the extreme case: you need to join a Teams meeting with high-quality video, show your face, and show your ID card along with the digital auth). It's used all over the place: many sites support it, the banks support it, and remote auth scenarios are possible. Just today I was calling mobile operator support and they had to verify me, so after I gave my ID, an auth request popped up from the app asking me to verify my identity to the operator (the app shows who is asking for auth).

                      Authorizations are separated: if a signature must be placed or money sent, you must use a different access code, and the app shows the intent of what you are authorizing. If money is being sent, you see where and how much before you approve the request in the app.

                      But the app is all tied to the digital identity from the ID card in the first place: to set up these strong authentication guarantees you use your ID card. Some time ago we had to use a computer with a smartcard reader to set it up; nowadays I don't know whether it's NFC or something, but the mobile phone can read the ID card.

              • mschuster91 1 day ago
                the UI for client side certificates was shit for years. no one particularly cared. passkeys however are... pretty reasonable.
                • xorcist 1 day ago
                  That's just it. If any of the browser vendors put 1% of the work they spent on renewing their visual identity, remodeling their home page, or inventing yet another menu system into slightly easier to use client certificates (and smart cards) this would have been a solved problem two decades ago. All the pieces are in place, every browser has supported this since the birth of SSL, it's just the user interface bits that are missing.

                  It's nothing short of amazing that nobody worked on this. It's not as if there isn't a need. Everyone with high security requirements (defense, banks etc.) already does this, but with clumsy plugins and (semi-)proprietary software. Instead we get the nth iteration of settings redesigns.

                  • goku12 18 hours ago
                    Bingo! Exactly my point. Thanks!
                • goku12 1 day ago
                  > the UI for client side certificates was shit for years. no one particularly cared.

                  That's exactly what I mean! Who would use it if the UI/UX is terrible? Many Gemini (protocol) browsers like Lagrange have such pleasant UIs for it, though somewhat minimal. With sufficient push, you could have used mutual TLS from even hardware tokens.

                • chuckadams 1 day ago
                  At least on a Mac, you can just double-click a cert file, it'll prompt to install in Keychain, and anything using macOS's TLS implementation will see it.
                  • goku12 19 hours ago
                    And what about the browser? How does it know which client cert (I assume the key is also there) to use for a site? Does it prompt you before proceeding with authentication?
                    • chuckadams 12 hours ago
                      The domains the cert gets presented to is also configured in Keychain, and Safari uses it. Looks like Firefox has its own thing, buried several layers deep in settings. No idea about chrome. It's definitely a process you'd want to script in an installer, nothing you'd want to subject the end user to. So yeah, still pretty crap UX overall.
              • quotemstr 1 day ago
                Because the tech industry egregore is a middling LLM that gets its context window compacted every generation.
            • ksdnjweusdnkl21 1 day ago
              TOTP isn't designed to protect against phishing. It's a defense against weak, leaked, or cracked passwords.
              • Scoundreller 1 day ago
                Lots of junk TOTP apps in app stores.

                Once heard of a user putting in a helpdesk ticket asking why they had to pay for the TOTP app. Then I realized their TOTP seed was probably out in the open.

                I’m sure we can imagine how else this could go badly…

            • dboreham 1 day ago
              Yes. This attack would not have worked if FIDO2 (or the software emulation Passkey) had been used.
          • junon 1 day ago
            Seems so, yes.
        • jvuygbbkuurx 1 day ago
          Did they also phish the login password after clicking the link or did they already have it?
          • junon 1 day ago
            They phished username, password (unique to npm), and a TOTP code.

            They even gave me a new TOTP code to install (lol) and it worked. Showed up in authy fine. Whoever made this put a ton of effort into it.

            • scratchyone 1 day ago
              Damn, that's an impressively well-done attack. Curious, do you use a password manager? If so, did it not autofilling feel like a red flag to you?

              I've always wondered if I ever get phished if I'll notice bc of that or if I'll just go "ugh 1password isn't working, guess i'll paste my password in manually" and end up pwned

              • junon 1 day ago
                I was on mobile, didn't use the autofiller. Also previous experience with the web extensions showed me that they were flakey at best anyway.

                The `.help` TLD should have been the biggest red flag, followed by the 48-hour deadline in the request. I wasn't thinking about things like I normally would this morning and just wanted to get things done today. Been a particularly stressful week, not that it's any excuse.

              • nixosbestos 1 day ago
                I'm thinking on what all the anti-passkey folks have to say right now. Or the "password managers aren't necessary" crowd.
        • tadamcz 1 day ago
          Using a security key as 2FA instead of TOTP would have prevented this attack, right?

          If you maintain popular open source packages for the love of God get yourself a couple of security keys.

          • SahAssar 1 day ago
            Well, that would also require all the services to support webauthn/FIDO, which a lot of them don't. Some who do support it only allow one key or trivial bypass via "security questions".
        • sugarpimpdorsey 1 day ago
          > The domain name was `npmjs dot help` which obviously should have caught my eye, and would have if I was a bit more awake.

          It's a good thing the WebPKI cartel mostly did away with EV certs... these days any old cert whose SAN merely matches the domain gets your browser to give a warm fuzzy "you're secure!"

          • mananaysiempre 1 day ago
            The browsers mostly did away with EV certs[1], against sustained pushback from CAs, because of research invariably showing that the feeling of security is mostly unfounded. (Both because users are garbage at reading security indicators—and unscrupulous companies are eager to take advantage of that, see Cloudflare’s “security of your connection”—and because the legal-name namespace is much more Byzantine and locale-dependent than any layman can parse[2].)

            By contrast, OV certs, which were originally supposed to provide a very similar level of assurance, were done away with by the CAs themselves, who cost-optimized the verification requirements into virtual nonexistence.

            That said, it remains a perpetual struggle to get people to understand the difference between being connected to the legitimate operator of satan.example (something an Internet-wide system mostly can guarantee) and it being wise to transact there (something extensive experience shows it can’t and shouldn’t try to). And if you’re a domain owner, your domain is your identity; pick one and stick to it. Stackoverflow.blog is stupid, don’t be like stackoverflow.blog.

            [1] https://www.troyhunt.com/extended-validation-certificates-ar...

            [2] https://arstechnica.com/information-technology/2017/12/nope-...

            • sugarpimpdorsey 1 day ago
              > That said, it remains a perpetual struggle to get people to understand the difference between being connected to the legitimate operator of satan.example

              That's because the browser implementers gave up on trying to solve the identity problem. It's too difficult they said, we'd rather push other things.

              Google implemented certificate pinning in Chrome for themselves and a few friends, said fuck everyone else, and declared the problem solved. Who cares about everyone else when your own properties are protected and you control the browser?

              Meanwhile the average user has no idea what a certificate does, whether it does or doesn't prove identity.

              No wonder they removed the lock icon from the browser.

              • ameliaquining 20 hours ago
                How would you propose that it should work?
          • Kwpolska 1 day ago
            People never paid attention to the special EV cert markers. And even if they did, what would stop someone from registering a company named "npm, Inc." and buying an EV cert for it? Sure, it’s going to cost some money upfront, but you can make much more by stealing cleptocurrency.
      • diggan 1 day ago
        > so the rest of us know what not to do?

        Can't really tell you what not to do, but if you're not already using a password manager so you can easily avoid phishing scams, I really recommend you to look into starting doing so.

        In the case of this attack, if you had a password manager and ended up on a domain that looks like the real one, but isn't, you'd notice something is amiss when your password manager cannot find any existing passwords for the current website, and then you'd take a really close look at the domain to confirm before moving forward.

        • ziml77 1 day ago
          After nearly being phished once (only having a confirmation email save me) I've taken to being extra vigilant if I don't get a password entry suggestion from my password manager. It means I need to be extremely damn sure I'm on a domain that is controlled by the same entity my account is with. So far I haven't had another incident like that and I hope to keep it that way.
        • withinboredom 1 day ago
          This isn’t exactly true. My password manager fails to recognise the domain I’m on, all the time. I have to go search for it and then copy/paste it in.

          That being said, if you’re making login pages: please, for the love of god, test them with multiple password managers. Oh, and make sure they also work correctly with the browser’s autotranslation. Don’t rely on the label to make form submission decisions ... please.

          • diggan 1 day ago
            > This isn’t exactly true. My password manager fails to recognise the domain I’m on, all the time. I have to go search for it and then copy/paste it in.

            I'd probably go looking for a new password manager if it fails to do one of the basic features they exist for, copy-pasting passwords defeats a lot of the purpose :)

            > That being said, if you’re making login pages

            I think we're doomed on this front already. My previous bank still (in 2025!) only allows 6 numbers as the online portal login password, no letters or special characters allowed, and you cannot paste in the field so no password manager works with their login fields, the future is great :)

            • withinboredom 1 day ago
              > I'd probably go looking for a new password manager if it fails to do one of the basic features they exist for, copy-pasting passwords defeats a lot of the purpose :)

              This isn’t the fault of the password managers themselves, but of devs not putting the right metadata on their login forms, or having the password field appear only after the email address is entered, causing the password input to fail to be filled, etc.

              • sunaookami 21 hours ago
                Then get a good password manager that matches the domain and triple-check if it's a new domain. If your password manager shows you your npm login for npmjs.com and you are suddenly on a new domain and your password manager doesn't show logins, you will notice.
                • Macha 16 hours ago
                  I've noticed failure to fill the right fields (or any fields) on Lastpass, 1Password, Bitwarden and the KeepassXC browser extension.

                  What is your mythical "good password manager"?

                  • diggan 11 hours ago
                    I'm using 1Password+Firefox+Linux, it fails to find the right username+passwords maybe 10% of the time, mostly because services keep using different domains for login than for signup, so it doesn't recognize it's a valid domain.

                    In those cases, I carefully review the new domain, make sure it belongs to the right owner, then add it to the list of domains to accept. Now the account list properly show up in the future too, until they again change it. But it gives me a moment to pause and reflect before just moving past it.

                    I cannot remember any times in the last years where 1Password was 100% unable to fill out the username/password for a website unless the website itself prevented pasting passwords (like my old bank).

                    But even if it fills the wrong fields, it still provides safety as you wouldn't even see the accounts in the list if you're on the wrong domain, so that's your first warning sign.

              • aaronharnly 1 day ago
                or switching to some generic-sounding domain during login
                • sunaookami 21 hours ago
                  Good password managers can match subdomains, substrings, "url starts with", etc. There is no excuse.
    • quotemstr 1 day ago
      Not your fault. Thanks for posting and being proactive about fixing the problem. It could happen to anyone.

      And because it could happen to anyone that we should be doing a better job using AI models for defense. If ordinary people reading a link target URL can see it as suspicious, a model probably can too. We should be plumbing all our emails through privacy-preserving models to detect things like this. The old family of vulnerability scanners isn't working.

  • DDerTyp 1 day ago
    One of the most insidious parts of this malware's payload, which isn't getting enough attention, is how it chooses the replacement wallet address. It doesn't just pick one at random from its list.

    It actually calculates the Levenshtein distance between the legitimate address and every address in its own list. It then selects the attacker's address that is visually most similar to the original one.

    This is a brilliant piece of social engineering baked right into the code. It's designed to specifically defeat the common security habit of only checking the first and last few characters of an address before confirming a transaction.
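
    As a reconstruction of the idea (not the payload's actual code), the selection step looks roughly like this: compute the edit distance from the legitimate address to each attacker-controlled address and keep the closest one.

```javascript
// Reconstruction of the address-swap trick: pick the attacker-controlled
// address that is visually closest to the victim's legitimate address,
// as measured by Levenshtein (edit) distance.

function levenshtein(a, b) {
  // Classic dynamic-programming edit distance.
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

function closestLookalike(legit, attackerAddrs) {
  return attackerAddrs.reduce((best, addr) =>
    levenshtein(legit, addr) < levenshtein(legit, best) ? addr : best
  );
}
```

    The closer the substituted address is to the original, the more likely a "check the first and last few characters" habit passes it.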

    We did a full deobfuscation of the payload and analyzed this specific function. Wrote up the details here for anyone interested: https://jdstaerk.substack.com/p/we-just-found-malicious-code...

    Stay safe!

    • josefbud 1 day ago
      I'm a little confused on one of the excerpts from your article.

      > Our package-lock.json specified the stable version 1.3.2 or newer, so it installed the latest version 1.3.3

      As far as I've always understood, the lockfile always specifies one single, locked version for each dependency, and even provides the URL to the tarball of that version. You can define "x version or newer" in the package.json file, but if it updates to a new patch version it's updating the lockfile with it. The npm docs suggest this is the case as well: https://arc.net/l/quote/cdigautx

      And with that, packages usually shouldn't be getting updated in your CI pipeline.

      Am I mistaken on how npm(/yarn/pnpm) lockfiles work?

      • sigotirandolas 1 day ago
        Not the parent, but by default `npm install` / `yarn install` will update the lock file whenever it no longer satisfies package.json; if you want the lock file strictly respected you must use `npm ci` / `yarn install --frozen-lockfile`.

        In my experience, it's common for CI pipelines to be misconfigured in this way, and for Node developers to misunderstand what the lock file is for.
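
        For reference, the strict variants (assuming npm, yarn classic, and pnpm; yarn berry renamed its flag):

```shell
# Fail the build instead of silently updating the lockfile:
npm ci                          # installs exactly what package-lock.json records
yarn install --frozen-lockfile  # yarn classic (yarn berry: yarn install --immutable)
pnpm install --frozen-lockfile  # pnpm (also pnpm's default when CI is detected)
```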

        • 0cf8612b2e1e 1 day ago
          Not a web guy, but that seems a bonkers default. I would have naively assumed a lockfile would be used unless explicitly ignored.
          • metafunctor 1 day ago
            Welcome to the web side. Everything’s bonkers. Hard-earned software engineering truths get tossed out, because hey, wtf, I’ll just do some stuff and yippee. Feels like everyone’s stuck at year three of software engineering, and every three years the people get swapped out.
            • jiggawatts 1 day ago
              > every three years the people get swapped out

              That's because they are being "replaced", in a sense!

              When an industry doubles every 5 years like web dev was for a long time, that by the mathematical definition means that the average developer has 5 years or less experience. Sure, the old guard eventually get to 10 or 15 years of experience, but they're simply outnumbered by an exponentially growing influx of total neophytes.

              Hence the childish attitude and behaviour with everything to do with JavaScript.

              • metafunctor 1 day ago
                Good point! The web is going through its own endless September.

                And so, it seems, is everything else. Perhaps, this commentary adds no value — just old man yells at cloud stuff.

            • anonymars 22 hours ago
              The web saw "worse is better" and said "hold my beer"
          • Already__Taken 1 day ago
            We didn't get locking until npm v5 (some memory and googling, could be wrong.) And it took a long time to do everything you'd think you want.

            Changing the main command `npm install` after 7 years isn't really "stable". Anyway didn't this replace versions, so locking won't have helped either?

            • minitech 18 hours ago
              You can’t replace existing versions on npm. (But probably more important is what @jffry mentioned – yes, lockfiles include hashes.)
            • jffry 20 hours ago
              > Anyway didn't this replace versions, so locking won't have helped either?

              The lockfile includes a hash of the tarball, doesn't it?

        • DDerTyp 1 day ago
          TIL: I need to fix my CI pipeline. Gonna create a jira ticket I guess…

          Thank you!

          • josefbud 1 day ago
            Sorry, I had assumed this was what you were doing when I wrote my question but I should have specified. And sorry for now making your npm install step twice as long! ;)
            • rimunroe 1 day ago
              npm ci should be much faster in CI as it can install the exact dependency versions directly from the lockfile rather than having to go through the whole dependency resolution algorithm. In CI environments you don't have to wait to delete a potentially large pre-existing node_modules directory since you should be starting fresh each time anyway.
              • josefbud 1 day ago
                I've seen pipelines that cache node modules between runs to save time, but yeah if they're not doing that then you're totally right.
        • josefbud 1 day ago
          Yeah, I think I had made the assumption that they were using `npm ci` / `yarn install --frozen-lockfile` / `pnpm install --frozen-lockfile` in CI because that's technically what you're always supposed to do in CI, but I shouldn't have made that assumption.
      • Mattwmaster58 1 day ago
        As others have noted, npm install can/will change your lockfile as it installs, and one caveat for the clean-install command they provide is that it is SLOW, since it deletes the entire node_modules directory. Lots of people have complained but they have done nothing: https://github.com/npm/cli/issues/564

        The npm team eventually seemed to settle on requiring someone to bring an RFC for this improvement, and the RFC someone did create has, I think, sat neglected in a corner ever since.

        • saghm 1 day ago
          Is there no flag to opt out of this behavior? For Rust, Cargo commands will also do this by default, but they also have `--offline` for not checking online for new versions, `--locked` to require sticking with the exact version of the lockfile even when allowing downloading dependencies online (e.g. if you're building on a machine that's never downloaded dependencies before, so they aren't cached locally, but you still don't want to allow implicit updates), and `--frozen` (which is a shorthand for both `--locked` and `--offline`). I'm honestly on the fence about whether this is even sufficient, since I've worked at multiple places where the CI didn't actually run with `--locked` because whoever configured it didn't realize, and at least once a surprise update to the lockfile in CI ended up causing an issue that took a bit of time to debug before someone realized what was going on.
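
          The Cargo flags mentioned above, for comparison:

```shell
cargo build --locked   # error out if Cargo.lock would need to change
cargo build --offline  # never touch the network
cargo build --frozen   # shorthand for --locked --offline
```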
      • DDerTyp 1 day ago
        You’re right and the excerpt you quoted was poorly worded and confusing. A lockfile is designed to do exactly what you said.

        The package.json specified the range ^1.3.2. If a newer version exists online that still satisfies that range (like 1.3.3 for ^1.3.2), npm install will often fetch the newer version and update your package-lock.json file automatically.

        That’s how I understand it / that’s my current knowledge. Maybe there is someone here who can confirm/deny that. That would be great!
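
        That matches npm's documented caret behavior. A simplified sketch of the rule for majors above zero (ignoring prereleases and the 0.x special cases, so not the full semver spec):

```javascript
// Simplified caret (^) range check: same major version, and at least the
// base version. E.g. ^1.3.2 admits 1.3.3 and 1.4.0, but not 1.3.1 or 2.0.0.
function satisfiesCaret(version, base) {
  const v = version.split('.').map(Number);
  const b = base.split('.').map(Number);
  if (v[0] !== b[0]) return false;  // major must match
  for (let i = 0; i < 3; i++) {
    if (v[i] > b[i]) return true;   // strictly newer component
    if (v[i] < b[i]) return false;  // older than the base
  }
  return true;                      // exactly equal
}
```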

    • __MatrixMan__ 1 day ago
      We should be displaying hashes in a color scheme determined by the hash (foreground/background colors for each character determined by a hash of the hash, salted by that character's index, adjusted to ensure sufficient contrast).

      That way it's much harder to make one hash look like another.

      • 9dev 1 day ago
        As someone with red/green vision deficiency: if you do this, please don’t forget people like me are unable to distinguish many shades of colours, which would be very disadvantageous here!
        • AaronAPU 1 day ago
          It’s not like it would hurt you for there to be supplementary info others can see but you can’t.
          • gblargg 16 hours ago
            I think 9dev was saying that providing only a colorized version might make it unreadable to some people, not merely that they wouldn't benefit from the extra color information.
          • macintux 21 hours ago
            And it's not like it would hurt the developers to be conscious of their choices.
            • zarzavat 18 hours ago
              There's actually nothing the developers can do about this particular issue other than to display all colors and allow colorblind people to see the colors that they can see.
              • __MatrixMan__ 14 hours ago
                It doesn't matter which colors the algorithm chooses so long as background/foreground are very distinguishable to as wide an audience as possible, and prev/next are likely to be distinguishable more often than not.

                That's a lot of flexibility within which to do clever color math which accounts for the types of colorblindness according to their prevalence.

              • bbarnett 17 hours ago
                For the newly made up feature, which doesn't exist yet, but already has an issue?

                Simple. Instead of forcing colour, one could retain a no colour option maybe?

                Done. Solved.

                Everything should have this option. I personally have no colour vision issues, other than I find colour annoying in any output. There's a lot who prefer this too.

        • __MatrixMan__ 1 day ago
          You could still ignore the colors and just read the characters, like people do now, and you could still use whatever color cues you are sensitive to.
      • Spivak 1 day ago
        Not sure why you're being downvoted, OpenSSH implemented randomart which gives you a little ascii "picture" of your key to make it easier for humans to validate. I have no idea if your scheme for producing keyart would work but it sounds like it would make a color "barcode".
        • Macha 16 hours ago
          I have to say the openssh random art has never really helped me - I see each individual example so infrequently, and there's so little detail to remember, that it may as well just be a hash for all the memorability it adds
        • __MatrixMan__ 1 day ago
          If you ignored the characters and just focused on the background colors, yeah I suppose it would look like a barcode. But the way I envision it, each line on the barcode is a character, so it still copy/pastes into notepad as the original text, but it'll copy/paste into word as colored text with colored background.
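          A sketch of that per-character scheme in Node; the value-to-color mapping below is an arbitrary illustration:

```javascript
// Give each hex character of a fingerprint a background color derived from
// its value: it renders as a color "barcode", but still copies as plain text.
function colorize(hash) {
  return [...hash.toLowerCase()].map(ch => {
    const v = parseInt(ch, 16) || 0;  // 0..15 per hex digit
    const r = v * 17;                 // spread 0..255 (arbitrary mapping)
    const g = (v * 97) % 256;
    const b = 255 - v * 17;
    return `\x1b[48;2;${r};${g};${b}m${ch}\x1b[0m`; // 24-bit background
  }).join('');
}

console.log(colorize('deadbeef0123'));
```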
    • bflesch 1 day ago
      Can you attribute this technique to a specific group?
    • 3abiton 14 hours ago
      That moment where you respect the hacker. Still, dark times are encroaching.
    • oasisbob 1 day ago
      > This is a brilliant piece of social engineering baked right into the code. It's designed to specifically defeat the common security habit ...

      I don't agree that the exuberance over the brilliance of this attack is warranted if you give this a moment's thought. The web has been fighting lookalike attacks for decades. This is just a more dynamic version of the same.

      To be honest, this whole post has the ring of AI writing, not careful analysis.

      • NoahZuniga 1 day ago
        > To be honest, this whole post has the ring of AI writing, not careful analysis.

        No it doesn't?

      • withinboredom 1 day ago
        > To be honest, this whole post has the ring of AI writing, not careful analysis.

        It has been what, hours? since the discovery? Are you expecting them to spend time analysing it instead of announcing it?

        Also, nearly everyone has AI editing content these days. It doesn’t mean it wasn’t written by a human.

        • bbarnett 17 hours ago
          Just for a counter, "nearly everyone" seems wildly ambitious.

          I want no part of AI in any form of my communication, and I know many who espouse the same.

          I will certainly agree on "many", but not "nearly everyone".

      • blueflow 1 day ago
        I've been thinking about the use of Levenshtein distance to make hexadecimal strings look similar. Levenshtein might be useful for correcting typos, but not so much when comparing hashes, where people check the start or end sections. Kinda odd.
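        For what it's worth, that's reportedly how the swapper picks its payout address: compute the Levenshtein distance from the victim's address to a list of attacker wallets and use the closest one, so a quick glance at the first/last characters still matches. A sketch with made-up addresses:

```javascript
// Classic two-row Levenshtein distance.
function levenshtein(a, b) {
  let prev = Array.from({ length: b.length + 1 }, (_, j) => j);
  for (let i = 1; i <= a.length; i++) {
    const cur = [i];
    for (let j = 1; j <= b.length; j++) {
      cur[j] = Math.min(
        prev[j] + 1,                                   // deletion
        cur[j - 1] + 1,                                // insertion
        prev[j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
    prev = cur;
  }
  return prev[b.length];
}

// Pick the candidate closest to the target: the lookalike most likely to
// pass a quick "check the first/last characters" habit.
function closest(target, candidates) {
  return candidates.reduce((best, c) =>
    levenshtein(target, c) < levenshtein(target, best) ? c : best);
}

console.log(closest('0xab12cd34', ['0xab99cd34', '0xffffffff']));
```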
  • 0xbadcafebee 1 day ago
    Here we are again. 12 days ago (https://news.ycombinator.com/item?id=45039764) I commented how a similar compromise of Nx was totally preventable.

    Again, this is not the failure of a single person. This is a failure of the software industry. Supply chain attacks have gigantic impacts. Yet these are all solved problems. Somebody has to just implement the standard security measures that prevents these compromises. We're software developers... we're the ones to implement them.

    Every software packaging platform on the planet should already require code signing, artifact signing, user account attacker access detection heuristics, 2FA, etc. If they don't, it's not because they can't, it's because nobody has forced them to.

    These attacks will not stop. With AI (and continuous proof that they work) they will now get worse. Mandate software building codes now.

    • zestyping 2 hours ago
      Interesting. According to https://www.wiz.io/blog/s1ngularity-supply-chain-attack the initial entry point was a "flawed GitHub Actions workflow that allowed code injection through unsanitized pull request titles" — which was detected and mitigated on August 29.

      That was more than ten days ago, and yet major packages were compromised yesterday. How?

    • TheJoeMan 1 day ago
      For a package with thousands of downloads a week, does the publishing pace need to be so fast? New version could be uploaded to NPM, then perhaps a notification email to the maintainer saying it will go live on XX date and click here to cancel?
      • 0xbadcafebee 23 hours ago
        A standard release process for Linux distro packages is 1) submitting a new revision, 2) having it approved by a repository maintainer, 3) it cooks a while in unstable, 4) then in testing, and finally 5) is released as stable. So there's an approval process, a testing phase, and finally a release. And since it's impossible for people to upload a brand new package into a package repository without this process, typosquatting never happens.

        Sadly, programming language package managers have normalized the idea that everyone who uses the package manager should be exposed to every random package and release from random strangers with no moderation. This would be unthinkable for a Linux distribution. (You can of course add 3rd-party Linux package repositories, unstable release branches, etc, which should enforce the same type of rules, but they don't have to)

        Linux distros are still vulnerable to supply chain attacks though. It's very rare but it has happened. So regardless of the release process, you need all the other mitigations to secure the supply chain. And once they're set up it's all pretty automatic and easy (I use them all day at work).

        • bbarnett 17 hours ago
          It's a problem solved decades ago, as you say. Devs, not caring about security or trust, just found it inconvenient.

          This will probably be reined in soon. Many companies I know are backing away from npm/node, and even composer. It's just too risky an ecosystem.

        • papyrus9244 11 hours ago
          And for any Arch users reading this, AUR is the wild west too.
    • const_cast 1 day ago
      A lot of these security measures have trade offs, particularly when we start looking at heuristics or attestation-like controls.

      These can exclude a lot of common systems and software, including automations. If your heuristic is quite naive like "is using Linux" or "is using Firefox" or "has an IP not in the US" you run into huge issues. These sound stupid, because they are, but they're actually pretty common across a lot of software.

      Similar thing with 2FA. SMS isn't very secure, email primes users for phishing, TOTP is good... but it needs to be an open standard, otherwise we're just doing the "exclude users" thing again. TOTP is still phishable, though. Only hardware attestation isn't, but that's a huge red flag and I don't think NPM could do that.

      • rtpg 1 day ago
        I have a hard time arguing that 2FA isn't a massive win in almost every circumstance. Having a "confirm that you have uploaded a new package" step as the default seems good! Someone like npm mandating that a human being press a button with a recaptcha for any package downloaded more than X times per week feels almost mandatory at this point.

        The attacks are still possible, but they're not going to be nearly as easy here.

        • SchemaLoad 22 hours ago
          2FA is a huge benefit over plain passwords. But it wasn't enough here. The package dev had 2FA and it did not help, since they got tricked into logging in to a phishing page which proxied the 2FA code to the real login page.
          • bbarnett 17 hours ago
            Yet the parent suggested confirming each upload prior to publish.

            This attack would 100% have been thwarted when a load of emails appeared asking "publish the package you just uploaded?".

            (if you read the dev's account of this, you'll see this would have worked)

            • mnahkies 15 hours ago
              Another advantage of this would be for CI/CD - MFA can be a pain for this.

              If I could have a publish token / OIDC auth in CI that required an additional manual approval in the web UI before anything was actually published, I could imagine this working well.

              It would help reduce risk from CI system breaches as well.

              There are already "package published" notification emails, it's just at that point it's too late.

              • const_cast 4 hours ago
                Yes, exactly. A lot of these 2FA or attestation schemes break automation, which is really undesirable in this particular scenario. It's tricky.
            • hvb2 13 hours ago
              Assuming you've compromised said developers account, wouldn't you be able to click that publish button too?
    • ropable 22 hours ago
      > Somebody has to just implement the standard security measures that prevents these compromises.

      I don't disagree, but this sentence is doing a lot of heavy lifting. See also "draw the rest of the owl".

      • giveita 21 hours ago
        Part of the owl can be how consumers upgrade: don't grab the latest patches immediately, but still keep things up to date. Secondary sources of information about good versions to upgrade to, and when, would allow time for vulns like this one to be discovered before you upgrade. The assumption is that vulns can be detected before the mass of people install, which I think is true. Then you just need exceptions for critical security fixes.
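        A sketch of that cooldown idea: npm already exposes per-version publish timestamps (the shape returned by `npm view <pkg> time --json`), so a resolver could skip anything younger than N days. The data below is made up:

```javascript
// Given the version -> publish-time map that `npm view <pkg> time --json`
// returns, pick the newest version that has "cooled" for at least `days`.
function newestCooledVersion(times, days, now = Date.now()) {
  const cutoff = now - days * 24 * 60 * 60 * 1000;
  const ok = Object.entries(times)
    .filter(([v]) => v !== 'created' && v !== 'modified') // metadata keys
    .filter(([, t]) => Date.parse(t) <= cutoff)
    .sort(([, a], [, b]) => Date.parse(a) - Date.parse(b));
  return ok.length ? ok[ok.length - 1][0] : undefined;
}

// Made-up example: the 1-day-old patch release is skipped.
const day = 24 * 60 * 60 * 1000;
const times = {
  created:  new Date(Date.now() - 40 * day).toISOString(),
  modified: new Date(Date.now() - 1 * day).toISOString(),
  '4.4.0':  new Date(Date.now() - 30 * day).toISOString(),
  '4.4.1':  new Date(Date.now() - 10 * day).toISOString(),
  '4.4.2':  new Date(Date.now() - 1 * day).toISOString(),
};
console.log(newestCooledVersion(times, 7)); // → 4.4.1
```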
      • sussmannbaka 18 hours ago
        We are engineers. Much like an artist could draw the rest of the owl, it's not an unreasonable ask of a field that each day seems to grow more accustomed to learned helplessness.
    • imiric 1 day ago
      > Somebody has to just implement the standard security measures that prevents these compromises.

      It's not that simple. You can implement the most stringent security measures, and ultimately a human error will compromise the system. A secure system doesn't exist because humans are the weakest link.

      So while we can probably improve some of the processes within npm, phishing attacks like the ones used in this case will always be a vulnerability.

      You're right that AI tools will make these attacks more common. That phishing email was indistinguishable from the real thing. But AI tools can also be used to scan and detect such sophisticated attacks. We can't expect to fight bad actors with superhuman tools at their disposal without using superhuman tools ourselves. Fighting fire with fire is the only reasonable strategy.

    • ivape 1 day ago
      People focus on attacking Windows because there are more Windows users. What if I told you the world now has a lot more people involved in programming with JavaScript and Python?

      You’re right, this will only get a lot worse.

  • cddotdotslash 1 day ago
    NPM deserves some blame here, IMO. Countless third party intel feeds and security startups can apparently detect this malicious activity, yet NPM, the single source of truth for these packages, with access to literally every data event and security signal, can't seem to stop falling victim to this type of attack? It's practically willful ignorance at this point.
    • PokestarFan 1 day ago
      NPM is owned by GitHub and therefore Microsoft, who is too busy putting Copilot into apps that have 0 reason to have any form of generative AI in them
      • Cthulhu_ 1 day ago
        But GitHub does loads of things with security, including reporting compromised NPM packages. I didn't know NPM is owned by Microsoft these days, though. Now that I think about it, Microsoft of all parties should be right on top of this supply chain attack vector - they've been burned hard by security issues for decades, especially in the mid-to-late 90s and early 2000s, when hundreds of millions of devices were connected to the internet but their OS wasn't ready for it yet.
      • bnchrch 1 day ago
        Good god. Not everything has to be about your opinion on AI.
        • PokestarFan 1 day ago
          GitHub was folded into Microsoft's "CoreAI" team. Not very confidence-inspiring.
        • jay_kyburz 1 day ago
          Actually, they could probably use AI to see if each update to a package looks malicious or obfuscated.
      • wutbrodo 1 day ago
        It's not like NPM pre-Microsoft was a paragon of professional management or engineering...
      • txdv 16 hours ago
        Just write a check.md instruction for Copilot to check it for malicious activity, problem solved
      • andix 1 day ago
        Is it really owned and run by Microsoft? I thought they only provide infrastructure, servers and funding.
    • buzuli 1 day ago
      For packages which have multiple maintainers, they should at least offer the option to require another maintainer to approve each publish.
    • twistedpair 1 day ago
      An identical, highly obfuscated (and thus suspicious-looking) payload was inserted into 22+ packages from the same author (many dormant for a while) and published simultaneously.

      What kind of crazy AI could possibly have noticed that on the NPM side?

      This is frustrating as someone that has built/published apps and extensions to other software providers for years and must wait days or weeks for a release to be approved while it's scanned and analyzed.

      For all the security wares that MS and GitHub sell, NPM has seen practically no investment over the years (e.g. just go review the NPM security page... oh, wait, where?).

    • legohead 1 day ago
      I blame the prevalence of package managers in the first place. Never liked 'em, just for this reason. Things were fine before they became mainstream. Another annoying reason is package files that are set to grab the latest version, randomly breaking your environment. This isn't just npm, of course; I hate them all equally.
      • stevenpetryk 1 day ago
        I'm a little confused, is this rage bait or what?

        > Things were fine before they became mainstream

        As in, things were fine before we had commonplace tooling to fetch third party software?

        > package files that are set to grab the latest version

        The three primary Node.js package managers all create a lockfile by default.

        • int_19h 9 hours ago
          > As in, things were fine before we had commonplace tooling to fetch third party software?

          In some ways they were. I remember how much friction it was to take a dependency in your typical desktop C++ or Delphi app in late 90s - early 00s. And because of that, developers would generally be hesitant to add a new dependency without a strong justification, especially so any kind of dependency that comes with its own large dependency tree. Which, in turn, creates incentives for library authors to create fairly large, framework-style libraries. So you end up with an ecosystem where dependencies are much more coarse and there are fewer of them, so dependency graphs are shallow. Whether this is an advantage or a disadvantage in its own right can be debated, but it's definitely less susceptible to this kind of attack because updating dependencies in such a system is also much more involved; it's not something that you do with a single `npm install`.

        • nananana9 19 hours ago
          I mostly share GP's sentiment, although they didn't argue their point very well.

          > As in, things were fine before we had commonplace tooling to fetch third party software?

          Yes. The languages without a dominant package manager (basically C and C++) are the only ones that have self-contained libraries that you can just drag into your source tree.

          This is how you write good libraries - as can be seen by the fact that for many problems, there's a powerful C (or C++, but usually C) library with minimal (and usually optional) dependencies, that is the de-facto standard, and has bindings for most other languages. Think SDL, ffmpeg, libcurl, zlib, libpng/jpeg, FreeType, OpenSSL, etc, etc.

          That's not the case for libraries written in JS, Python, or even other compiled languages like Go and Rust - libraries written in those languages come with a dependency tree, and are never ported to other languages.

    • mrguyorama 1 day ago
      Why would NPM do anything about it? NPM has been a great source of distributing malware for like a decade now, and none of you have stopped using it.

      Why in the world would they NEED to stop? It apparently doesn't harm their "business"

      • pants2 1 day ago
        Dozens of businesses have been built to try fixing the npm security problem. There's clearly money in it, even if MS were to charge an access fee for security features.
  • joaomoreno 1 day ago
    From sindresorhus:

    You can run the following to check if you have the malware in your dependency tree:

    `rg -u --max-columns=80 _0x112fa8`

    Requires ripgrep:

    `brew install ripgrep`

    https://github.com/chalk/chalk/issues/656#issuecomment-32668...

    • cgijoe 1 day ago
      Sorry, I am unfamiliar with ripgrep. Is this simply scanning for the string `_0x112fa8`? Could we do the same thing with normal grep -r?
      • skrebbel 1 day ago
        yes. ripgrep just does it faster, is all.
        • nothrabannosir 1 day ago
          But also respects .gitignore by default so I’m not sure you want to use ripgrep to scan your node_modules
          • Fishkins 1 day ago
            For others who didn't know, the -u flag in the OP's command makes it so ripgrep _will_ search files even if they're gitignored
            • postalcoder 19 hours ago
              -u searches through ignored files

              -uu searches through ignored and hidden files (eg dotfiles)

              -uuu searches through ignored, hidden, and binary files (ie everything)

          • AkshatJ27 1 day ago
            Isn't the intended behaviour of the original comment to check the node_modules folder for the "infected" string?
        • hinkley 1 day ago
          Make it work, make it right, make it fast.

          For security checks, the first 2 out of 3 is just fine.

          • Aeolun 1 day ago
            Sure, but if you can get the last for free, why not?
      • EasyMark 1 day ago
        [flagged]
        • naikrovek 1 day ago
          I feel like you were trying to help here, but anyone can do this for themselves. Providing information in this way sort of indicates that you don't believe that the person you're replying to can do it on their own, and for that reason it's considered rude.
          • tbossanova 1 day ago
            I see what you mean, but I actually think there is a place for copy/pasting AI responses. I think of it as a kind of cache, surely a HN comment being served to n users means less resources used and faster access than if all n did their own AI query. But then of course you don’t get exactly your preference e.g. you might prefer a terser response than what is pasted here. Interesting to see how the etiquette around this plays out over time.
            • vasco 20 hours ago
              If you ever wanted to share an AI response, you probably should share your prompt, not the response. But likely you should not share anything, for the reasons already explained. Your argument about saving energy makes zero sense if you have any understanding of orders of magnitude but I won't share what AI says about it.
          • skygazer 1 day ago
            Also, HN hates machine generated replies, especially the lengthy and overly verbose slop variety -- I think that probably eclipsed any perceived rudeness.
    • yifanl 1 day ago
      Asking people to run random install scripts just feels very out of place given the context.
      • hunter2_ 1 day ago
        I would agree if this were one of those `curl | sh` scenarios, but don't we consider things like `brew` to be sufficiently low-risk, akin to `apt`, `dnf`, and the like?
        • tripplyons 1 day ago
          Anyone can upload an NPM package without much review. For Homebrew, you at least have to submit a pull request.
          • n8m8 11 hours ago
            https://docs.brew.sh/Acceptable-Casks#apps-that-bundle-malwa...

            > Unfortunately, in the world of software there are bad actors that bundle malware with their apps. Even so, Homebrew Cask has long decided it will not be an active gatekeeper (macOS already has one) and users are expected to know about the software they are installing. This means we will not always remove casks that link to these apps, in part because there is no clear line between useful app, potentially unwanted program, and the different shades of malware—what is useful to one user may be seen as malicious by another.

            ---

            So there might be pull requests, but Brew's official stance is that they do not actively moderate casks for malware. I guess there's something built into the macOS packaging step that helps mitigate the risk, but I don't know much about it outside playing w/ app development in Xcode.

          • what 21 hours ago
            Homebrew has been compromised before. To think it’s immune is a bit naive.
            • n8m8 11 hours ago
              Agreed that it's a bit funny given the context and no community-managed package manager should be 100% trusted.

              That said, I think rg is pretty well known to Linux daily-drivers, and they just wanted to share something quickly for power users who want to check their workspaces. Probably better to just instruct n00bs to use grep rather than install a whole CLI tool for searching.

              Come to think of it, I wonder if a 2-phase attack could be planned by an attacker in the future: Inject malware into a package, flood guidance with instructions to install another popular tool that you also recently compromised... lol

            • tripplyons 9 hours ago
              I'm not saying it's immune. I'm saying that NPM doesn't have as many protections, making NPM an easier target.
        • anthk 1 day ago
          APT repos for Debian, Trisquel, Ubuntu... require far more checks and bureaucracy.
          • socalgal2 1 day ago
            I'll bet they don't. There's way too much churn for it all to be checked
            • const_cast 1 day ago
              Churn? On Debian?

              It takes like 2 years to get up to date packages. This isn't NPM.

              • SchemaLoad 22 hours ago
                The xscreensaver dev managed to very easily slip a timebomb into the Debian repos. It wasn't obscured in any way; the repo maintainers just don't review the code. It would be physically impossible for them to review all the changes in all the programs.
            • justusthane 1 day ago
              No, they are extremely well vetted. Have you ever heard of a supply chain attack involving Red Hat, Debian or Ubuntu repos?
        • dmitrygr 1 day ago
          > don't we consider things like `brew` to be sufficiently low-risk,

          Like ... npm?

          • fn-mote 1 day ago
            Nah…

            Everybody knows npm is a gaping security issue waiting to happen. Repeatedly.

            It’s convenient, so it’s popular.

            Many people also don’t vendor their own dependencies, which would slow down the spread at the price of not being instantly up to date.

            • dabockster 1 day ago
              > Many people also don’t vendor their own dependencies, which would slow down the spread at the price of not being instantly up to date.

              npm sold it really hard that you could rely on them and not have to vendor dependencies yourself. If I suggested that a decade ago in Seattle, I would have gotten booed out of the room.

              • marcus_holmes 23 hours ago
                I have repeatedly been met with derision when pointing out what a gaping security nightmare the whole Open Source system is, especially npm and its ilk.

                Yet here we are. And this is going to get massively worse, not better.

                • Intermernet 15 hours ago
                  Nothing specific to open source is to blame in this instance. The author got phished. Open source software often has better code vetting and verification than closed source software. npm, however, does not.
            • johnisgood 6 hours ago
              Convenient, as in the barrier to entry is way too low. I am pretty much against it.
            • albedoa 20 hours ago
              > Nah…

              I mean, I believe you, but the person you are replying to obviously believes that they are similar. Could you explain the significant differences?

          • hunter2_ 1 day ago
            I thought getting code into brew is blocked by some vetting (potentially insufficient, which could be argued for all supply chains), whereas getting code into npm involves no vetting whatsoever.
            • n8m8 11 hours ago
              Went and found the link: https://docs.brew.sh/Acceptable-Casks#apps-that-bundle-malwa...

              > Unfortunately, in the world of software there are bad actors that bundle malware with their apps. Even so, Homebrew Cask has long decided it will not be an active gatekeeper (macOS already has one) and users are expected to know about the software they are installing. This means we will not always remove casks that link to these apps, in part because there is no clear line between useful app, potentially unwanted program, and the different shades of malware—what is useful to one user may be seen as malicious by another.

      • justusthane 1 day ago
        ripgrep is quite well known. It’s not some obscure tool. Brew is a well-established package manager.

        (I get that the same can be said for npm and the packages in question, but I don't really see how the context of the thread matters in this case).

    • koolba 1 day ago
      Try the same recursive grep on ~/.npm to see if you have it cached too. Not just the latest in the current project.
      • tripplyons 1 day ago
        Haven't installed any modules today, but I ran these commands to clear caches for npm and pnpm just to be safe.

        npm cache clean --force

        pnpm cache delete

        • PokestarFan 1 day ago
          You probably want to check before you clear cache
    • dabockster 1 day ago
      Here's something I generated in my coding AI for Powershell:

      `Get-ChildItem -Recurse -File | Select-String -Pattern '_0x112fa8' | ForEach-Object { $_.Line.Substring(0, [Math]::Min(80, $_.Line.Length)) }`

      Breakdown of the Command:

      - Get-ChildItem -Recurse -File: This command retrieves all files (and only files, skipping directories) in the current directory and its subdirectories.

      - Select-String -Pattern '_0x112fa8': This searches for the specified pattern in the files.

      - ForEach-Object { ... }: This processes each match found.

      - Substring(0, [Math]::Min(80, $_.Line.Length)): This limits the output to a maximum of 80 characters per line.

      ---

      Hopefully this should work for Windows devs out there. If not, reply and I'll try to modify it.

      • metaltyphoon 21 hours ago
        Or you can just install ripgrep on windows too and have it check much faster ;)
    • timsh 1 day ago
      If it produces no output, does that mean there's no code that could act in the future? I first acted out of nerves and deleted the whole node_modules and package-lock.json in a couple of freshly opened Astro projects; curious whether I should still consider my web surfing potentially compromised.
      • nosefurhairdo 1 day ago
        The malware introduced here is a crypto address swapper. It's possible that even after deleting node_modules that some malicious code could persist in a browser cache.

        If you have crypto wallets on the potentially compromised machine, or intend to transfer crypto via some web client, proceed with caution.

    • aerodynamic_ 1 day ago
      convenience script that checks through package.json dependency tree + a couple malicious binary patterns:

      https://gist.github.com/edgarpavlovsky/695b896445c19b6f66f14...

    • airtonix 1 day ago
      [dead]
  • simpaticoder 1 day ago
    I've come to the conclusion that avoiding the npm registry is a great benefit. The alternative is to import packages directly from the (git) repository. Apart from being a major vector for supply-chain attacks like this one, the registry also has little or no coupling between a project's source and its published code: the 'npm publish' step pushes local contents into the registry, meaning a malefactor can easily change the code before publishing.
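    npm does support this directly: a dependency can point at a git repo pinned to a commit-ish instead of a registry tarball. A sketch (the commit hash is illustrative):

```json
{
  "dependencies": {
    "chalk": "github:chalk/chalk#0a1b2c3d4e5f"
  }
}
```

    Pinning a full commit hash means the resolved code can't silently change underneath you, at the cost of semver ranges (and, for packages with build steps, the git-dependency bugs discussed further down).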
    • HexDecOctBin 1 day ago
      As a C developer, having been told for a decade that minimising dependencies and vendoring stuff straight from release is obsolete and regressive, and now seeing people have the novel realisation that it's not, is so surreal.

      Although I'll still be told that using single-header libraries and avoiding the C standard library is regressive and obsolete, so gotta wait 10 more years I guess.

      • dpc_01234 1 day ago
        NPM dev gets hacked, packages compromised, it's detected within a couple of hours.

        XZ got hacked, it reached development versions of major distributions undetected, right inside an _ssh_, and it only got detected due to someone luckily noticing and investigated slow ssh connections.

        Still some C devs will think it's a great time to come out and boast about their practices and tooling. :shrug:

        • grayhatter 1 day ago
          xz didn't get hacked (phished).

          For xz, an advanced persistent threat inserted hyper-targeted, self-modifying code into a tarball.

          A single npm dev was "hacked" (phished) by a moderate effort, (presumably drive by) crypto thief.

          I have no idea what you meant by "right inside _ssh_" but I don't think that's a good description of what actually happened in any possible case.

          I'm unlikely to defend C devel practices, but this doesn't feel like an indictment of C; if anything the NPM ecosystem looks worse by this comparison. Especially considering the comment you replied to was advocating for minimizing dependencies: had the distros affected by the xz compromise followed that (instead of patching sshd), they wouldn't have shipped a compromised version.

        • typpilol 1 day ago
          Lol it's so true.. the C smugness is unmatched
      • 1718627440 5 hours ago
        This isn't part of the current discussion, but what is the appeal of single-header libraries?

        Most times they are actually a normal .c/.h combo, but the implementation was moved into the "header" file and is only exposed by defining some macro. When it really is a single file that can be included multiple times, there is still code in it, so it is a header file in name only.

        What is the big deal in actually using the convention as intended and naming the file containing the code *.c? If it is meant to only be included, that can still be done.

        > avoiding the C standard library are regressive and obsolete

        I don't understand this either, since one half of libc is syscall wrappers and the other half is primitives that the compiler will use to replace your hand-rolled versions anyway. But it harms no one, and picking a good "core" library will probably make your code more consistent and readable.

      • dboon 1 day ago
        Yeah lol I’m making a C package manager for exactly this. No transitive dependencies, no binaries served. Just pulling source code, building, and being smart about avoiding rebuilds.
        • eviks 1 day ago
          Being smart about avoiding rebuilds is serving prebuilds
    • aabbccsmith 1 day ago
      npm's recent provenance feature fixes this, and it's pretty easy to set up. It will seriously help prevent things like this from ever happening again, and I'm really glad that big packages are starting to use it.
      • billywhizz 1 day ago
        > When a package in the npm registry has established provenance, it does not guarantee the package has no malicious code. Instead, npm provenance provides a verifiable link to the package's source code and build instructions, which developers can then audit and determine whether to trust it or not
        • OptionOfT 1 day ago
          It prevents publishing a package built from locally modified source code.
    • typpilol 1 day ago
      You can do some weird verify thing on your GitHub builds now when they publish to npm, but I've noticed you can still publish from elsewhere even with it pegged to a build?

      But maybe I'm misunderstanding the feature

    • komali2 1 day ago
      Do you do this in your CI as well? E.g. if you have a server somewhere that most would run `npm install` on builds, you just `git clone` into your node_modules or what?
    • cstrahan 1 day ago
      > The alternative is to import packages directly from the (git) repository.

      That sounds great in theory. In practice, NPM is very, very buggy, and some of those bugs impact pulling deps from git repos. See my issue here: https://github.com/npm/cli/issues/8440

      Here's the history behind that:

      Projects with build steps were silently broken as late as 2020: https://github.com/npm/cli/issues/1865

      Somehow no one thought to test this until 2020, and the entire NPM user base either didn't use the feature, or couldn't be arsed to raise the issue until 2020.

      The problem gets kinda sorta fixed in late 2020: https://github.com/npm/pacote/issues/53

      I say kinda sorta fixed, because somehow they only fixed (part of) the problem when installing package from git non-globally -- `npm install -g whatever` is still completely broken. Again, somehow no one thought to test this, I guess. The issue I opened, which I mentioned at the very beginning of this comment, addresses this bug.

      Now, I say "part of the problem" was fixed because the npm docs blatantly lie to you about how prepack scripts work, which requires a workaround (which, again, only helps when not installing globally -- that's still completely broken); from https://docs.npmjs.com/cli/v8/using-npm/scripts:

          prepack
          
              - Runs BEFORE a tarball is packed (on "npm pack", "npm publish", and when installing a git dependencies).
      
      Yeah, no. That's a lie. The prepack script (which would normally be used for triggering a build, e.g. TypeScript compilation) does not run for dependencies pulled directly from git.

      Speaking of TypeScript, the TypeScript compiler developers ran into this very problem and adopted a workaround: invoke a script from the npm prepare script, which in turn does some janky checks to guess whether execution is occurring from a source tree fetched from git, and if so, explicitly invokes the prepack script, which then kicks off the compiler and such. This is the workaround they use today:

      https://github.com/cspotcode/workaround-broken-npm-prepack-b...

      ... and while I'm mentioning bugs, even that has a nasty bug: https://github.com/cspotcode/workaround-broken-npm-prepack-b...

      Yes, if the workaround calls `npm run prepack` and the prepack script fails for some reason (e.g. a compiler error), the exit code is not propagated, so `npm install` will silently install the respective git dependency in a broken state.

      How no one looks at this and comes to the conclusion that NPM is in need of better stewardship, or ought to be entirely supplanted by a competing package manager, I dunno.

  • paxys 1 day ago
    Yeah I know "everyone can be pwned" etc. but at this point if you are not using a password manager and still entering passwords on random websites whose domains don't match the official one then you have no business doing anything of value on the internet.
    • const_cast 1 day ago
      This is true, but I've also run into legitimate password fields on different domains. Multiple times. The absolute worst offender is mobile app vs browser.

      Why does the mobile app use a completely different domain? Who designed this thing?

    • djkoolaide 1 day ago
      Yeah, a password manager/autofill would have set off some alarms and likely prevented this, because the browser autofill would have detected a mismatch for the domain npmjs.help.
    • 4ndrewl 19 hours ago
      And I guess you can just withdraw your funding from him any time.
    • Tarq0n 17 hours ago
      Have you used a Microsoft product lately? So many bigco's publishing their org chart as login domains.
    • darkamaul 17 hours ago
      I get the sentiment behind 'just use a password manager', but I don’t think victim-blaming should be the first reflex. Anyone can be targeted, and anyone can fail, even people who do 'everything right'.

      Password managers themselves have had vulnerabilities, browser autofill can fail, and phishing can bypass even well-trained users if the attack is convincing enough.

      Good hygiene (password managers, MFA, domain awareness) certainly reduces risk, but it doesn’t eliminate it. Framing security only as a matter of 'individual responsibility' ignores that attackers adapt, and that humans are not perfect computers. A healthier approach would be: encourage best practices, but also design systems that are resilient when users inevitably make mistakes.

    • Drblessing 1 day ago
      How does someone intelligent with 2FA get pwned? Serious question.
      • Mawr 17 hours ago
        Thinking you're above getting pwned is often step one :)

        It's not easy to be 100% vigilant 100% of the time against attacks deliberately crafted to exploit a lapse. All it takes is a single well-crafted attack that strikes when you're tired, and you're done.

      • odie5533 22 hours ago
        Numbers game. Plenty of people got the email and deleted it. Only takes one person distracted and thinking "oh yeah my 2FA is pretty old" for them to get pwned.
        • CGamesPlay 21 hours ago
          (I think everyone in this comment chain already knows this, but) PSA: your 2FA does not "get old" and does not need to be rotated (unless the device YOU stored it on was compromised). "Rotate your 2FA periodically" is NOT recommended security advice.
        • pier25 21 hours ago
          It's more than that. You need to log in, manually, on a new domain where you've never used your password before.
  • a022311 1 day ago
    After all these incidents, I still can't understand why package registries don't require cryptographic signatures on every package. It introduces a bit more friction (developers downloading CI artifacts and manually signing and uploading them), but it prevents most security incidents. Of course, this can fail if it's automated by some CI/CD system, as those are apparently easily compromised.
    • parliament32 1 day ago
      Real registries do[1], npm is just amateur-hour which is why its usage is typically forbidden in enterprise contexts.

      [1] https://www.debian.org/doc/manuals/securing-debian-manual/de...

      • 9dev 1 day ago
        In all fairness—npm belongs to GitHub, which belongs to Microsoft. Amateur-hour is both not a valid excuse anymore, and also a boring explanation. GitHub is going to great lengths to enable SLSA attestations for secure tool chains; there must be systemic issues in the JS ecosystem that make an implementation of proper attestations infeasible right now, everything else wouldn't really make sense.

        So if we're discussing anything here, why not what this reason is, instead of everyone praising their favourite package registry?

        • parliament32 1 day ago
          The NPM team has repeatedly commented that it's "too hard", effectively, and would discourage new developers from publishing packages. See:

          https://github.com/npm/npm/pull/4016#issuecomment-76316744

          https://news.ycombinator.com/item?id=38645969

          https://github.com/npm/cli/commit/5a3b345d6d5d175ea9ec967364...

          • a022311 1 day ago
            I don't think I'd trust a package from a new developer like that, so this helps filter out people that don't know how to properly maintain a package. If they really want to make onboarding easier, saying "after e.g. 1000 monthly downloads, you'll need to sign your artifacts" is also a viable solution in my opinion.
          • metafunctor 1 day ago
            The npm team is, frankly, a bunch of idiots for saying that. It has been obvious for TEN YEARS that the bar for publishing npm packages is far too low. That’s what made npm what it is, but it’s no longer needed. They should put on their big boy pants.
          • jiggawatts 1 day ago
            > discourage new developers from publishing packages

            Good.

            • yread 15 hours ago
              It's not like these packages are super-sophisticated, million-LOC masterpieces. ansi-regex is literally just this:

                  export default function ansiRegex({onlyFirst = false} = {}) {
                      // Valid string terminator sequences are BEL, ESC\, and 0x9c
                      const ST = '(?:\\u0007|\\u001B\\u005C|\\u009C)';

                      // OSC sequences only: ESC ] ... ST (non-greedy until the first ST)
                      const osc = `(?:\\u001B\\][\\s\\S]*?${ST})`;

                      // CSI and related: ESC/C1, optional intermediates, optional params (supports ; and :) then final byte
                      const csi = '[\\u001B\\u009B][[\\]()#;?]*(?:\\d{1,4}(?:[;:]\\d{0,4})*)?[\\dA-PR-TZcf-nq-uy=><~]';

                      const pattern = `${osc}|${csi}`;

                      return new RegExp(pattern, onlyFirst ? undefined : 'g');
                  }
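
              A quick sanity check of what that regex actually matches, rewritten as a plain function so it runs outside an ES module:

```javascript
// Same regex as the ansi-regex snippet above, inlined for a standalone check
function ansiRegex({onlyFirst = false} = {}) {
  const ST = '(?:\\u0007|\\u001B\\u005C|\\u009C)';
  const osc = `(?:\\u001B\\][\\s\\S]*?${ST})`;
  const csi = '[\\u001B\\u009B][[\\]()#;?]*(?:\\d{1,4}(?:[;:]\\d{0,4})*)?[\\dA-PR-TZcf-nq-uy=><~]';
  return new RegExp(`${osc}|${csi}`, onlyFirst ? undefined : 'g');
}

// ESC[31m (set red) and ESC[39m (reset) are both stripped, leaving bare text
console.log('\u001B[31mred\u001B[39m'.replace(ansiRegex(), '')); // → red
```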
              • 1718627440 4 hours ago

                   ... | wc -c
                   592
                
                592 bytes of code, including comments and whitespace, versus how much overhead in package descriptions, tarball caches, etc.?
            • kyer-sh 10 hours ago
              No kidding. New developers need to learn the important skill of doing something correctly, not just “ship fast; break things”
        • beefnugs 1 day ago
          Yeah, Microsoft would have bought or taken over npm just to train on all the data against people's wills, not to actually improve it or put any effort into making it better
      • herpdyderp 1 day ago
        It sure hasn’t been forbidden in any enterprise I’ve been in! And they, in my experience, have it even worse because they never bother to update dependencies. Every install has lots of npm warnings.
      • anonfordays 1 day ago
        [flagged]
    • Joker_vD 1 day ago
      Mmm. But how does the package registry know which signing keys to trust from you? You can't just log in and upload a signing key because that means that anyone who stole your 2FA will log in and upload their own signing key, and then sign their payload with that.

      I guess having some cool down period after some strange profile activity (e.g. you've suddenly logged from China instead of Germany) before you're allowed to add another signing key would help, but other than that?

      • 9dev 1 day ago
        Supporting Passkeys would improve things; not allowing releases for a grace period after adding new signing keys and sending notifications about this to all known means of contact would improve them some more. Ultimately, there will always be ways; this is as much a people problem as it is a technical one.
      • a022311 1 day ago
        I suppose you'd register your keys when signing up and to change them, you'd have some recovery passphrase, kind of like how 2FA recovery codes work. If somebody can phish _that_, congratulations.
      • pants2 1 day ago
        That still requires stealing your 2FA again. In this attack they compromised a one-time authenticator code; they'd have to do it a second time in a row, and the user would be looking at a legitimate "new signing key added" email alongside it.
    • solatic 15 hours ago
      > developers downloading CI artifacts and manually signing and uploading them

      Hell no. CI needs to be a clean environment, without any human hands in the loop.

      Publishing to public registries should require a chain of signatures. CI should refuse to build artifacts from unsigned commits, and CI should attach an additional signature attesting that it built the final artifact based on the original signed commit. Public registries should confirm both the signature on the commit and the signature on the artifact before publishing. Developers without mature CI can optionally use the same signature for both the source commit and the artifact (i.e. to attest to artifacts they built on their laptop). Changes to signatures should require at least 24 hours to apply and longer (72 hours) for highly popular foundation packages.

    • rtpg 1 day ago
      I'm a fan of post-facto confirmation. Allow CI/CD to do the upload automatically, and then have a web flow that confirms the release. Release doesn't go out unless the button is pressed.

      It removes _most_ of the release friction while still adding the "human has acknowledged the release" bit.

      • eviks 1 day ago
        Maybe even send a user an email notification with a link...
        • rtpg 19 hours ago
          lol granted! But notice that in that universe, since npm has to send the link, access to the link is coupled to access to the email address, serving as an auth factor.

          In the attack described above, the attacker did not have access to the victim's email address.

    • mirekrusin 20 hours ago
  • thedougd 23 hours ago
    It was a pain in the ass but I always appreciated that Maven central required packages to be signed with a public key pre-associated with the package name.
  • numpad0 1 day ago
    I thought it stupid that some old, established electro-mechanical manufacturing companies would just block github.com and Internet downloads in general, only allowing code from internal repos that took months to get approved, breaking npm-dependent workflows.

    Now? Why everyone isn't setting up their own GitHub mirrors is almost beyond me. They were 100% right.

  • xrd 1 day ago
    It wouldn't be a perfect solution, but I wonder why browsers don't indicate the registration date for a domain in the URL bar somehow? I bet junon would have seen that and gotten suspicious.
    • gaudystead 1 day ago
      I like this idea and could see it being visually represented as a faint red/green bar behind the URL text in the address bar, with a greater amount of the bar being red when the domain is less trusted.

      Getting developers to trust a plugin that reaches out to an external location to determine the reputation of every website they visit seems like a harder sell, though.

    • webdev1234568 1 day ago
      That's a good one. Not perfect, for sure: attackers would just start buying domains earlier. But still...
      • xrd 1 day ago
        Yeah, but there is a takedown process when a spam site is detected (the server provider can shut off access, etc), so it is a game that is somewhat winnable.
    • AtNightWeCode 18 hours ago
      There are curated lists over newly registered domain names that some security software uses so it should be easy to add without any privacy issues.
  • bstsb 1 day ago
    looks like it won't affect you if you just downloaded the packages locally.

    the actual code only runs in a browser context - it replaces all crypto addresses in many places with the attacker's.

    a list of the attacker's wallet addresses: https://gist.github.com/sindresorhus/2b7466b1ec36376b8742dc7...

    • smoovb 10 hours ago
      Etherscan has tagged these addresses already. As of this check, none of the other block explorers have.

      Etherscan - yes - https://etherscan.io/address/0x4Cb4c0E7057829c378Eb7A9b174B0...

      Mempool.space - no

      Blockchair - no

      Tronscan - no

      Blockcypher.com - no

      Blockread.io - no

    • pingou 1 day ago
      I wonder why they didn't add something more nefarious that could run on developers' machines while they were at it. Would it have been too easy to spot? It was caught very quickly anyway.
    • keepamovin 1 day ago
      that will still affect users of your website that uses these packages, tho.
  • Imustaskforhelp 1 day ago
    I can't imagine all the struggle the author must be going through.

    Like the need to constantly explain himself because of one single blunder.

    It shows how many open source projects rely on dependencies that are owned by one person, and that one person can be pwned (or outright hacked).

    Everyone can get pwned, I suppose. From a more technical perspective though, with all the AI hype we keep hearing: couldn't something like deno / node / bun give at least a warning when it thinks code might be malware? Or maybe the idea could be a stable, verified channel: releases vetted by external contributors, the way Debian maintains packages, so that instead of the node world defaulting to @latest we move towards something like @verified that takes builds / source from a Debian-style maintained repository...

    I hope people can understand that the author is a human too and we should treat him as such, with kindness, because as I said I can't imagine what he must be going through. Would love a more technical breakdown once things settle and we can postmortem this whole situation.

  • Vincenius 1 day ago
    Wow, I also received the same phishing email even though my packages only have a few hundred downloads a week (eg. bsky-embed).

    So I guess a lot more accounts/packages might be affected than the ones stated in the article

    • gaudystead 1 day ago
      Did you receive the email in a similar time window? I'm trying to think of ways to scan other repositories for signs of compromise.
      • Vincenius 21 hours ago
        Yeah, I received the email on the 8th a little earlier at 2:50am (utc +2)
  • tomxor 1 day ago
    Finally validated for writing my own damn ANSI escape codes.
    • jmull 1 day ago
      Yeah, I get that learning the codes is a little annoying, but not actually harder than finding, incorporating, and learning one of the APIs here. Also one is standard while the other is not. Seems a bit nuts to use a package for this.
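
        For reference, the write-it-yourself approach being described is roughly this (terminal capability detection, which the packages also handle, is deliberately omitted):

```javascript
// SGR escape codes: ESC [ <code> m; 31 = red foreground, 39 = default foreground,
// 1 = bold, 22 = normal intensity
const red = s => `\u001B[31m${s}\u001B[39m`;
const bold = s => `\u001B[1m${s}\u001B[22m`;

console.log(bold(red('error:')) + ' something broke');
```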
      • junon 1 day ago
        Hi, missing a lot of history here. When Chalk was written, colors in the terminal weren't a flashy thing people tried to do very often, at least not in the JS world. The flashy new Node.js 0.10/0.12 of the time drew a lot of designers and other aesthetically-oriented folks who came from browsers and wanted to make CLI apps. Chalk filled a hole, letting them do that without needing to understand how TTYs worked.

        Node.js proper has floated the idea of including chalk into the standard libraries, FWIW.

        • herpdyderp 23 hours ago
          > Node.js proper has floated the idea of including chalk into the standard libraries, FWIW.

          Oh my word please no! Every time I run into an issue where a dependency suddenly isn’t logging colors like it’s supposed to, it always boils down to chalk trying to do something fancy to handle an edge case that doesn’t actually exist. Just log the dang colors!

          • junon 16 hours ago
            I'd be curious what those edge cases are, as a maintainer of chalk.
        • jmull 13 hours ago
          I doubt we'll ever see eye-to-eye on this. Some people try to think about how to write less code, and some people try to think about how to write more code.
          • junon 9 hours ago
            We probably don't disagree, to be honest. Some of these packages shouldn't exist, I'll be the first to say it.
      • int_19h 9 hours ago
        I would argue that ANSI color output should be natively supported in the stdlib of any general-purpose or systems programming language today. Precisely for this reason: it has been a standard for a very long time, and for several years now (since Windows enabled it by default) it has been a truly universal de facto standard as well. This is exactly the kind of stuff that a stdlib should cover.
  • hnquestion10987 1 day ago
    I'm a little confused after reading everything. I have an Expo app and if I run `npm audit`, I get the notification about `simple-swizzle`.

    The GitHub page (https://github.com/advisories/GHSA-hfm8-9jrf-7g9w) says to treat the computer as compromised. What does this mean? Do I have to do a full reset to be sure? Should I avoid running the app until the version is updated?

    • herpdyderp 23 hours ago
      The advisories on GitHub were/are weird for several reasons:

      1. The version matching was wrong (now fixed).

      2. The warning message is (still) exaggerated, imo, though I understand why they’d pass the liability downstream by doing so.

    • pixl97 1 day ago
      I mean the statement is pretty clear

      >Any computer that has this package installed or running should be considered fully compromised. All secrets and keys stored on that computer should be rotated immediately from a different computer. The package should be removed, but as full control of the computer may have been given to an outside entity, there is no guarantee that removing the package will remove all malicious software resulting from installing it.

      It sounds like the package then somehow executes and invites other software onto the machine. If something else has executed then anything the executing user has access to is now compromised.

      • mirekrusin 19 hours ago
        Confusing as hell. From the code analysis shared, the malicious code replaces Ethereum and other crypto wallet addresses in a browser context. You can install the malicious package, run it, even run it in a browser context (i.e. in your Playwright tests), then update to a non-compromised version and you're fine; your system is clean.

        This incident would have been much more severe if the code actually stole envs etc., because a lot of packages depend on debug with a wildcard version.
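
        An illustrative sketch, heavily simplified and not the actual payload, of the address-swapping behavior the analyses describe; the address below is a placeholder, not a real attacker wallet:

```javascript
// Placeholder, NOT one of the published attacker addresses
const ATTACKER_ETH = '0x1111111111111111111111111111111111111111';

// The reported payload hooked browser APIs (fetch/XHR, wallet interfaces) and
// applied a rewrite of roughly this shape to anything resembling an address:
function swapEthAddresses(body) {
  return body.replace(/0x[0-9a-fA-F]{40}/g, ATTACKER_ETH);
}
```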

  • diggan 1 day ago
    > Yes, I've been pwned. First time for everything, I suppose. It was a 2FA reset email that looked shockingly authentic. I should have paid better attention, but it slipped past me. Sincerely sorry, this is embarrassing.

    My worst nightmare is to wake up, see an email like that and hastily try to recover it while still 90% asleep, compromising my account in the process.

    However, I think I can still sleep safe considering I'm using a password manager that only shows up when I'm on the right domain. A 2FA phishing email sending me to some unknown domain wouldn't show my password manager on the site, and would hence give me a moment to consider what's happening. I'm wondering if the author here wasn't using any sort of password manager, or something slipped through anyways?

    Regardless, fucking sucks to end up there, at least it ends up being a learned lesson for more than just one person, hopefully. I sure get more careful every time it happens in the ecosystem.

    • hunter2_ 1 day ago
      I agree, and this is arguably the best reason to use a password manager (with the next being lack of reuse which automatically occurs if you use generated passwords, and then the next being strength if you use generated passwords).

      I generally recommend Google's to any Android users, since it suggests your saved password not only based on domain in Chrome browser, but also based on registered appID for native apps, to extend your point. I'm not sure if third party password managers do this, although perhaps it's possible for anti-monopoly reasons?

      • mcjiggerlog 1 day ago
        I actually also received this phishing email, also read it while half-asleep after a 6 week break and clicked on it. Luckily I was saved by exactly this - no password suggestion made me double check the domain.
        • hunter2_ 7 hours ago
          Nice. It's basically a TOFU system (unfortunately disguised).
      • peaseagee 1 day ago
        I use Bitwarden on Android and on web and it is aware of app IDs and (usually) correctly maps them. If it's missing, you can force the mapping [yes this is moderately dangerous] and report it to Bitwarden so other users get the benefit.
      • tracker1 1 day ago
        I'm a pretty big fan of BitWarden/VaultWarden myself... though relatively recently something changed on my Android phone in that the password fills aren't working from inside my browser, I have to copy/paste from the app, which is not only irritating but potentially less safe.
        • Dayshine 1 day ago
          Consider adding the widget/action to your quick actions: then you don't need to copy/paste, at least
          • hunter2_ 1 day ago
            For those of us unfamiliar, can you describe the resulting UI pattern? Do you give focus to the password field and then tap a button at the top of the notification shade which automatically types (or gives a choice, if multiple are saved) whatever the password manager has for that site? I'm slightly surprised that something running in that context would know what site the browser has open.
            • tracker1 1 day ago
              It appears to work... I wasn't even really aware I could add such a thing until the GP comment. I also managed to get the integrated use working... apparently there's now a separate config option for "chrome integration" and "brave integration" etc.
            • sunaookami 21 hours ago
              It reads the browser URL through an accessibility service.
  • phkahler 1 day ago
    >> which silently intercepts crypto and web3 activity in the browser, manipulates wallet interactions, and rewrites payment destinations so that funds and approvals are redirected to attacker-controlled accounts without any obvious signs to the user.

    If you're doing financial transactions using a big pile of NPM dependencies, you should IMHO be financially liable for this kind of thing when your users get scammed.

    • bpavuk 1 day ago
      using NPM at all must be treated as a liability at this point. it's not the first and definitely not the last time NPM got pwned this hard.
      • pixl97 1 day ago
        Lots of very big financial organizations and other F100 companies use a whole lot more Node than you'd be comfortable with.

        Luckily some of them actually import the packages to a local distribution point and check them first.

    • palmfacehn 1 day ago
      It isn't uncommon in crypto ecosystems for the core foundation to shovel slop libraries on application developers.
  • gslepak 1 day ago
    Tips to protect yourself from supply-chain attacks in the JavaScript ecosystem:

    - Don't update dependencies unless necessary

    - Don't use `npm` to install NPM packages, use Deno with appropriate sandboxing flags

    - Sign up for https://socket.dev and/or https://www.aikido.dev

    - Work inside a VM
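
    In practice, the first tip usually means exact version pins plus a committed lockfile; for example, in package.json (versions here are placeholders, not a recommendation):

```json
{
  "dependencies": {
    "chalk": "5.6.0"
  },
  "overrides": {
    "debug": "4.4.1"
  }
}
```

    The `overrides` field (npm 8.3+) also pins transitive dependencies, which is where most of these compromised packages enter a tree.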

    • egorfine 1 day ago
      > Don't update dependencies unless necessary

      And get yourself drowning in insurmountable technical debt in about two months.

      The JS ecosystem moves at an extremely fast pace, and if you don't upgrade packages (semi-)daily you might inflict a lot of pain on yourself once a certain number of packages start to contain incompatible version dependencies. It sucks a lot, I know.

      • lpribis 1 day ago
        Updating packages daily (!) is insane to me as someone from the other end of the programming spectrum (embedded C). Is this really the recommended practice?
        • DecoySalamander 15 hours ago
          It really isn't, and I've never seen anyone do that. In every project I've worked on in the past decade, dependencies were only occasionally bumped in the context of some maintenance task or migration.
        • egorfine 1 day ago
          It is insane to me as a C programmer as well. It is something I got used to as a frontend js developer.

          It is recommended to stay on top of dependencies, and for different stacks that means different update schedules. For some, daily is indeed a good choice.

          • 1718627440 4 hours ago
            Even if there is a new version every day, not every release is born equal. Wouldn't updating while developing to "stay on top of dependencies" only be necessary on a major version? Surely there is not a major version per day. Otherwise you would be using a library that constantly imposes work on you, and it would probably make more sense to write the library yourself. Minor versions and bugfixes can be incorporated when you do your release.
        • acdha 1 day ago
          It varies but there are a lot of tools built around the idea of rapid updates so things like APIs can change quickly throughout a far more fragmented ecosystem. I suspect that we’re going to see a lot of places back off of that a bit to have something like monthly update cycles where there’s more time for scanning and review while still expecting people to upgrade more frequently than used to be common.
      • gslepak 23 hours ago
        > daily

        Somehow we've survived without updating dependencies for probably at least a year.

        • egorfine 16 hours ago
          Then you probably have over a dozen CVEs in your code. Whether they are exploitable and how much risk they carry is a different question.

          Other than that, you now probably have an insurmountable amount of technical debt, and upgrading the dependencies is a project in itself.

          All the above applies to JavaScript world, of course. It's much different for the rest.

          • gslepak 1 hour ago
            > Then you probably have over a dozen CVEs in your code.

            We continuously monitor our dependencies for CVEs and update them if necessary. Most of the time the CVEs that are reported are not relevant / worth updating for.

          • 1718627440 4 hours ago
            If a library introduces CVEs per day, it's probably not so good to begin with.
          • yread 15 hours ago

               content-security-policy: default-src 'self';
            
            (and not sending crypto transactions): No need to worry about CVEs in js
    • butshouldyou 1 day ago
      Can you expand on "use Deno" for installing dependencies? I assume you don't mean to use Deno as the runtime, just for dependency management.
      • gslepak 1 day ago
        I do mean use Deno as the runtime. Details and examples of how to switch are here: https://crib.social/notice/AwQqG9gm365uat93Nw
        • j-krieger 16 hours ago
          How would this have prevented anything?
          • gslepak 1 hour ago
            In this case it would not have prevented anything, but I never claimed that it would. Using Deno with appropriate sandboxing flags can protect developers against many classes of supply-chain attacks.

            The reason it doesn't help in this instance is because the attack targets the generated bundle and runs on client devices, whereas other attacks will target developer machines themselves (and possibly also client devices). Those types of attacks can be mitigated by using Deno.

  • anticristi 1 day ago
    This is really scary. It could have totally happened to me too. How can we design security which works even when people are tired or stressed?

    Once upon a time, I used a software called passwordmaker. Essentially, it computed a password like hash(domain+username+master password). Genius idea, but it was a nightmare to use. Why? Because amazon.se and amazon.com share the same username/password database. Similarly, the "domain" for Amazon's app was "com.amazon.something".

    Perhaps it's time for browser vendors to strongly bind credentials to the domain, the whole domain and nothing but the domain, so help me Codd.

    • samhh 1 day ago
      Passkeys already solve for this, we just have to get past the FUD.
      • odie5533 22 hours ago
        In this case, how is the Passkey safer than 2FA?
        • samhh 19 hours ago
          It’s cryptographically bound to the domain.
  • marifjeren 1 day ago
    Definitely sounds like spear phishing targeting you specifically.

    Kudos to you for owning up to it.

    As others have said, it's the kind of thing that could happen to anyone, unfortunately.

    • mcjiggerlog 1 day ago
      I also received the same phishing email and I only have packages with a few thousand downloads per week.
  • hofrogs 18 hours ago
    This attack could have been so, so much worse. We were saved by the attacker's lack of creativity and competence.
    • carwyn 14 hours ago
      And the author's prompt response.
  • heipei 1 day ago
    If you want to see what the phishing site (npmjs[.]help) looks like: https://urlscan.io/result/01992a3e-4f8c-72bb-90a9-c13826f2d8... - Was still up and running 2 hours ago.
    • mdaniel 21 hours ago
      > Size: 3124 kB

      Hey, that's a pretty good reproduction of npmjs

  • l0rdkr0n0s 1 day ago
    Did someone write a script to check whether the attacker's wallets actually received any transactions? I checked a few Bitcoin wallet balances manually and found nothing, but the first ETH wallet had a few cents. I'd be curious about the total financial impact so far.
  • wch 1 day ago
    When I run `npm audit`, it points me to a security advisory at GitHub. For example, for debug, it is https://github.com/advisories/GHSA-8mgj-vmr8-frr6 .

    That page says that the affected versions are ">=0". Does that seem right? That page also says:

    > Any computer that has this package installed or running should be considered fully compromised. All secrets and keys stored on that computer should be rotated immediately from a different computer. The package should be removed, but as full control of the computer may have been given to an outside entity, there is no guarantee that removing the package will remove all malicious software resulting from installing it.

    Is this information accurate?

  • stevage 18 hours ago
    I'm really surprised that NPM does not have better means to detect and respond to events like this. Since all the affected packages were by the same author, it would seem straightforward to have a mitigation event that rolls back all recent changes to some recent milestone. Then it's just a question of knowing when to hit the button.
  • lifeinthevoid 15 hours ago
  • martypitt 1 day ago
    A super quick script to check the deps in your package-lock.json file is here[0].

    [0]: https://gist.github.com/martypitt/0d50c350aa7f0fc73354754343...

    • patates 1 day ago
      aren't these already nuked and show up in the "npm audit" command?
    • krona 1 day ago
      how about:

      grep -r "_0x112fa8"

      • 9dev 1 day ago
        Irritatingly, this doesn't turn up anything, despite having a theoretically-compromised project as per the package-lock.json… At least on my end
        • mewpmewp2 1 day ago
          What do you mean, irritatingly? Do you think 'grep -r "_0x112fa8"' is not enough, or are you irritated that npm audit flags the project as if it were compromised?
          • 9dev 1 day ago
            I'm irritated because I expected to find at least one compromised file, but there were none. It may be, though, that we only use the affected packages as transitive development dependencies, in which case they are not installed locally. But a sliver of doubt remains that I missed something.
        • AgentME 1 day ago
          If you had the dependency installed before this attack, then you would still be pinned to an old safe version.
  • andix 1 day ago
    I posted this idea during the nx incident as well: we need some mechanism for package managers to ignore new packages for a defined time, i.e. skip any version published less than 24 hours ago.

    Most of these attacks are detected and fixed quickly, because a lot of people inspect newly published packages, and the owners and contributors notice quickly too. But many consumers of a package just install the newest release. With a grace period, these attacks would be much less critical.
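
    The registry already exposes enough metadata to do this client-side: `npm view <pkg> time --json` returns a map of version to publish timestamp. A rough sketch of the selection rule:

```javascript
// Sketch of a "cooldown" resolver: given a package's registry time map
// (the shape `npm view <pkg> time --json` returns), pick the newest
// version that has been public for at least `minAgeMs`.
function newestMatureVersion(timeMap, minAgeMs, now = Date.now()) {
  return Object.entries(timeMap)
    .filter(([version]) => version !== "created" && version !== "modified")
    .filter(([, publishedAt]) => now - Date.parse(publishedAt) >= minAgeMs)
    .sort(([, a], [, b]) => Date.parse(a) - Date.parse(b))
    .map(([version]) => version)
    .pop(); // undefined if every release is still inside the grace period
}
```

    A real resolver would additionally intersect this with the requested semver range; the point is just: never take a release younger than the cooldown window.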

  • adudethatgolfs 1 day ago
    • cddotdotslash 1 day ago
      Nathan, do you work for Socket? I think you should at least disclose that when sharing posts here.
      • whatamidoingyo 1 day ago
        I've never heard of Socket before this thread. They could be taking advantage of this news and promoting the company, as it's mentioned quite a few times in this thread. Or it's just a good service that I should probably be using.
  • progx 17 hours ago
    This is one reason why I run everything on my development machine in a Docker container: you can't trust any package.

    I use Bun, but the same approach works with npm.

    Add to .bashrc:

      alias bun='docker run --rm -it -u $(id -u):$(id -g) -p 8080:8080 -v "$PWD":/app -w /app my-bun bun'
    
    then you can use the `bun` command as usual. (Note: `"$@"` does nothing inside an alias; the shell simply appends your arguments after the alias expansion.)

    Dockerfile:

      FROM oven/bun:1 AS base
      VOLUME [ "/app" ]
      EXPOSE 8080/tcp
      WORKDIR /app
      # Add your custom libs
      # RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get -y install \
      #  ... \
    
    
    Build the image once:

      $ docker build -t "my-bun" -f "Dockerfile" .
  • duffpkg 21 hours ago
    I managed large health groups for a long time; we actually cared about security: billions of patient interactions, never a compromise. I also managed the modernization of the payment platform for the largest restaurant in the world, billions of dollars a year. One of the first things we did was freeze versions, maintain local package repos, and update carefully. It is very concerning how rarely these things are done. Tens of thousands of random people are in the core supply chain of most Node projects, and there seems to be a lot of carelessness about that fact.
  • naugtur 15 hours ago
    You too can run malware from NPM (I mean without consequences)

    https://github.com/naugtur/running-qix-malware?tab=readme-ov...

  • artooro 1 day ago
    This looks pretty bad. Even if this only affects crypto wallets, I can't help but imagine how much worse this could be.

    Another good read is at https://www.aikido.dev/blog/npm-debug-and-chalk-packages-com...

  • tadamcz 1 day ago
    Using a security key instead of TOTP would have prevented this.
  • vladkens 1 day ago
    Actually, my problem is not really with NPM itself or the fact that it can be hacked, but with the damn auto-update policy of software – as users we usually have no idea which versions are installed, and there is even no way to roll back to a safe version.

    All these Chrome, VSCode, Discord, Electron-apps, browser extensions, etc – they all update ± every week, and I can't even tell what features are being added. For comparison, Sublime updates once a YEAR and I'm totally fine with that.

  • dafelst 1 day ago
    I'm curious if anyone is tracking transactions against the wallet addresses in the malicious code - I assume that is essentially the attackers' return on investment here.
    • eiiot 1 day ago
      Just ran a script to do this – doesn't seem like there's much going in, other than one test transaction.
  • mattbilson 17 hours ago
    Completely understand people getting phished.

    How long before npm mandates phishing-resistant MFA, at least for accounts that can publish packages with this many downloads?

  • maxloh 1 day ago
    For everyone wondering, this is what the code looks like: https://web.archive.org/web/20250908162250/https://cdn.jsdel...

    Got it from the "simple-swizzle" package that hasn't been taken down by NPM.

  • baloki 18 hours ago
    A package on the list called ‘simple-swizzle’ turns out to be used in OpenNext which is an unexpected attack vector for sure.
  • alaintno 1 day ago
    How is it possible that this code (line 9 of the index.js) isn't present in the source github repo, but can be seen in the beta feature of npmjs.com?

    Also, the package 1.3.3 has been downloaded 0 times according to npmjs.com. How was the writer of this article able to detect this without incrementing the download counter?

    • DDerTyp 1 day ago
      The discrepancy comes from how npm packages are published. What you see on GitHub is whatever the maintainer pushed to the repo, but what actually gets published to the npm registry doesn’t have to match the GitHub source. A maintainer (or someone with access) can publish a tarball that includes additional or modified files, even if those changes never appear in the GitHub repo. That’s why the obfuscated code shows up when inspecting the package on npmjs.com.

      As for the “0 downloads” count: npm’s stats are not real-time. There’s usually a delay before download numbers update, and in some cases the beta UI shows incomplete data. Our pipeline picked up the malicious version because npm install resolved to it based on semver rules, even before the download stats reflected it. Running the build locally reproduced the same issue, which is how we detected it without necessarily incrementing the public counter immediately.

      • Jenk 1 day ago
        It can also be that the repo was modified after a release.
      • alaintno 1 day ago
        I see, thanks for the explanations, and thanks for warning us about this!
    • behindsight 1 day ago
      > How is it possible that this code (line 9 of the index.js) isn't present in the source github repo, but can be seen in the beta feature of npmjs.com

      You may also be interested in npm package provenance [1] which lets you sign your npm published builds to prove it is built directly from the source being displayed.

      This is something ALL projects should strive to setup, especially if they have a lot of dependent projects.

      1: https://github.blog/security/supply-chain-security/introduci...
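
      For GitHub Actions, enabling provenance is mostly a matter of publishing from CI with an OIDC token available. A sketch of the publish job (action versions and the secret name are placeholders):

```yaml
permissions:
  id-token: write   # lets npm request the OIDC token for the attestation
  contents: read
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@v4
    with:
      node-version: 20
      registry-url: https://registry.npmjs.org
  - run: npm ci
  - run: npm publish --provenance --access public
    env:
      NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
```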

  • Phelinofist 11 hours ago
    Maybe it's time to declare the JS package world broken?
  • dismalaf 1 day ago
    The irony of this post's reception a few hours ago: https://news.ycombinator.com/item?id=45167394
  • alex_suzuki 1 day ago
  • mmis1000 1 day ago
    Seems like quite a targeted attack, though; the phishing domain was registered just 4 days ago.
  • koolba 1 day ago
    Another great example of why having things like dependabot or renovate automatically bump dependencies to the latest versions is not a good idea. If it's not a critical update, better to let the world be your guinea pig and only update after there's been a while of real-world usage and analysis. If it is critical enough that you have to update right away, then you take the time to manually research what's in the package, what changed, and why it is being updated.
  • andrewmcwatters 1 day ago
    @junon, if it makes you feel any better, I once had a Chinese hacking group target my router and hijack my DNS configuration specifically to make "amazon.com" point to a 1:1 replica of the site, just to steal my Amazon credentials.

    There was no way to quickly tell that the site was fake, because it was, in fact, "actually" amazon.com.

    Phishing sucks. Sorry to read about this.

    Edit: To other readers, yes, the exploit failed to use an additional TLS attack, which was how I noticed something was wrong. Otherwise, the site was identical. This was many years ago before browsers were as vocal as they are now about unsecured connections.

    • littlecranky67 1 day ago
      How did they get a valid ssl cert though?
      • dns_snek 1 day ago
        Before HSTS you didn't need a valid certificate. When you typed "amazon.com" in the address bar your browser would first connect to the server unencrypted on port 80 which would then redirect you to the HTTPS address.

        If someone hijacked your DNS, they could direct your browser to connect to their web server instead which served a phishing site on port 80 and never redirected you, thus never ran into the certificate issue. That's part of the reason why browsers started warning users when they're connecting to a website without HTTPS.
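
      HSTS is the mechanism that closes that port-80 window: once the browser has seen the header for a domain (or the domain ships in the preload list baked into browsers), it refuses to start over plain HTTP at all. The header is a single response line:

```
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
```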

      • klysm 1 day ago
        Could've been a while ago when SSL certs failures weren't as loud in the browser
    • bix6 1 day ago
      Any write-up? I would like to learn more so I can avoid this.
      • dns_snek 16 hours ago
        The exact attack they described is less of an issue these days due to HSTS and preloading, but:

        - make sure you're connected to the expected official domain (though many companies are desensitizing us to this threat by using distinct domains instead of subdomains for official business)

        - make sure you're connected over HTTPS (this was most likely their issue)

        - use a password manager which remembers official domains for you and won't offer to auto-fill on phishing sites

        - use a 2FA method that's immune to phishing, like passkeys or security keys (if you do this, you get a lot of leniency to mistakes everywhere else)

    • thehamkercat 1 day ago
      What about SSL?
    • dboreham 1 day ago
      How did that get past TLS checks? They used Unicode characters that visually looked like amazon.com ?
    • nixosbestos 1 day ago
      That's not... how that works, unless you clicked through a very loud, obvious TLS warning.
      • jowea 1 day ago
        Yeah that sounds weird. Certificate pinning and HSTS should protect from that, right?
  • nromiun 1 day ago
    I have nothing to do with this, but I'm still getting second-hand embarrassment. Here is an example: the is-arrayish package, 73.8 MILLION downloads per week. The code? 3 lines to check if an object can be used like an array.

    I am sorry, but this is not due to the lack of a good standard library; this is just bad programming. Pure laziness. At this point, just blacklist every package starting with is-.
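
    For a sense of scale, the entire job of such a package is roughly this (a paraphrase, not the actual source): an "arrayish" value is an array, or a non-string object that exposes a numeric length and an Array-style splice method.

```javascript
// Paraphrased sketch of a 3-line "arrayish" check (not the actual source).
function isArrayish(obj) {
  if (!obj || typeof obj === "string") return false;
  return (
    Array.isArray(obj) ||
    (typeof obj.length === "number" && obj.length >= 0 &&
      typeof obj.splice === "function")
  );
}
```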

    • zahlman 1 day ago
      Meanwhile in Python: 134 million weekly downloads, seemingly slowly trending upward over time, for https://pypistats.org/packages/six which provides third-party compatibility for a version of Python that dropped support over five years ago.
      • nromiun 1 day ago
        The code is 813 lines [0]. Not saying everyone should use it but these two are not directly comparable.

        [0] https://github.com/benjaminp/six/blob/main/six.py

        • zahlman 1 day ago
          It is much more code, but it should be even more useless. (No slight intended to Benjamin Peterson.) The 2.7 window was already extended to give everyone a chance to migrate.
      • rtpg 1 day ago
        Was a bit surprised at this, but looking into the packages in a work project that require six, a _huge_ chunk of them are packages that still explicitly support Python 2.7 (usually stuff related to operations).

        I believe if you pay money to certain repo maintainers, like Red Hat, you can still get a supported version of Python 2.7.

        • zahlman 10 hours ago
          > a _huge_ chunk of them are packages that are still explicitly supporting Python 2.7 still

          Do you know if they also support 3.x?

          Do you know if they're available on PyPI?

          > (usually stuff related to operations).

          What kind of "operations" do you mean?

          • rtpg 40 minutes ago
            yes they also support python 3.x, are available on PyPI, and are related to operations in the sense of like... infrastructure management and the like.

            You have a huge pile of "sysop Python" out there interfacing with various infrastructure providers who are more interested in selling infra usage than getting off of Python 2.

            "In order to use our new storage service via our library you need to upgrade to Python 3 first" "ehhhhhhhh kinda annoying"

            That interaction has happened in the past. Time marches forward of course but.

      • sega_sai 1 day ago
        Most of these downloads, I would bet, are from CI
        • zahlman 1 day ago
          Oh, I don't doubt it at all. But that doesn't make it any less depressing to me. Nor does it matter to Fastly's bandwidth burden.
    • junon 1 day ago
      I wrote it 10 years ago, I think before Node was v1, and forgot about it for a long time. This was back before we had spreads, classes, typescript, and had to use DOM arrays and other weird structures, and where `arguments` wasn't an array but an object.

          > (function() { return Array.isArray(arguments); })()
          false
      • discomrobertul8 17 hours ago
        Do you think it might be time to deprecate and then retire this package, given that the ecosystem has evolved? Sure, it'll mean downstream packages will need to update their reliance on `is-arrayish` and use some other means suited to their task, but perhaps that's positive design pressure?
        • junon 7 hours ago
          Even if I sunset those packages, they'd still be downloaded and used in perpetuity, and certainly for many years afterward, even by popular packages, and even by such packages that have removed them in newer versions. Even if I had done this five years ago, I'd wager the scope of this attack would have been similar in size - maybe a billion less, but that's still a billion with a B, at which point I really wonder if it would have mattered as much.

          I agree that `is-arrayish` is silly, but that's not really the problem that needs fixing, in my opinion. There's a general, cross-language package management culture that has permeated over the last 10-15 years that is susceptible to this exact problem. It's TOTP today (in my case), something else tomorrow, and it can come to a Package Manager Near You at any time - npm is just a ripe target because of how much it's used, and how concentrated the download counts are for some of its larger packages, especially given how CI has started to operate (re-downloading everything etc).

          That's just my $0.02 on it though.

    • tkiolp4 1 day ago
      You don’t get it. People don’t add “is-arrayish” directly as a dependency. It goes like this:

      1) N tiny dubious modules like that are created by maintainers (like Qix)

      2) The maintainer then creates 1 super useful non-tiny module that imports those N dubious modules.

      3) Normal devs add that super useful module as a dependency… and ofc, they end up with countless dubious transitive dependencies

      Why do maintainers do that? I don't think it's ignorance or laziness or a lack of knowledge about good software engineering. It's either ego ("I'm the maintainer of N packages with millions of downloads" sounds better than "I'm the maintainer of 1 package"), or they get more donations, or they're actually planning to drop malware some time soon.

      • paulddraper 9 hours ago
        I think the real answer is far less nefarious.

        They personally buy into modularization, do-one-thing-do-it-well. Also engineering is fun, and engineering more things is more fun.

    • quotemstr 1 day ago
      And at the other extreme, it takes TC39 seven years to bikeshed half of a decent implementation of Python's context managers: https://github.com/tc39/proposal-explicit-resource-managemen...

      On one extreme, we have standards committees that move glacially, and on the other, we have a chaotic package ecosystem moving faster than is prudent. The two are related.

  • DDerTyp 1 day ago
    It looks like a lot of the author's packages have been compromised (over 1 billion downloads in total). I've updated the title and added information to the blog post.
  • DDerTyp 1 day ago
    Update: it seems like all of the author's packages were compromised.
  • codedokode 1 day ago
    I want to remind everyone once again that hardware keys are immune to phishing, because they check the website domain, unlike humans.
  • molsson 1 day ago
    I maintain a package on npm with >1M weekly downloads. I also got the same phishing e-mail, although I didn't click it. Here are the headers of the phishing e-mail I received:

      Return-Path: <ndr-6be2b1e0-8c4b-11f0-0040-f184d6629049@mt86.npmjs.help>
      X-Original-To: martin@minimum.se
      Delivered-To: martin@minimum.se
      Received: from mail-storage-03.fbg1.glesys.net (unknown [10.1.8.3]) by mail-storage-04.fbg1.glesys.net (Postfix) with ESMTPS id 596B855C0082 for <martin@minimum.se>; Mon, 8 Sep 2025 06:47:25 +0200 (CEST)
      Received: from mail-halon-02.fbg1.glesys.net (37-152-59-100.static.glesys.net [37.152.59.100]) by mail-storage-03.fbg1.glesys.net (Postfix) with ESMTPS id 493F2209A568 for <martin@minimum.se>; Mon, 8 Sep 2025 06:47:25 +0200 (CEST)
      X-SA-Rules: DATE_IN_PAST_03_06,DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,FROM_FMBLA_NEWDOM,HTML_FONT_LOW_CONTRAST,HTML_MESSAGE,MIME_HTML_ONLY,SPF_HELO_NONE,SPF_PASS
      X-RPD-Score: 0
      X-SA-Score: 1.1
      X-Halon-ID: e9093e1f-8c6e-11f0-b535-1932b48ae8a8
      Received: from smtp-83-4.mailtrap.live (smtp-83-4.mailtrap.live [45.158.83.4]) by mail-halon-02.fbg1.glesys.net (Halon) with ESMTPS id e9093e1f-8c6e-11f0-b535-1932b48ae8a8; Mon, 08 Sep 2025 06:47:23 +0200 (CEST)
      DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; x=1757637200; d=smtp.mailtrap.live; s=rwmt1; h=content-transfer-encoding:content-type:from:to:subject:date:mime-version: message-id:feedback-id:cfbl-address:from; bh=46LbKElKI+JjrZc6EccpLxY7G+BazRijag+UbPv0J3Y=; b=Dc1BbAc9maHeyNKed/X7iAPabcuvlgAUP6xm5te6kkvGIJlame8Ti+ErH8yhFuRy/xhvQTSj8ETtV f3AElmzHDWcU3HoD/oiagTH9JbacmElSvwtCylHLriVeYbgwhZVzTm4rY7hw/TVqNE5xIZqWWCMrVG wi+k9uY+FUIQAh7Ta2WiPk/A4TPh04h3PzA50zathvYcIsPC0iSf7BBE+IIjdLXzDzNZwRmjgv2ZHW GAx/FRCPFgg0PbVvhJw98vSHnKmjPO/mmcotKFG+MUWkCtTu28Mm46t7MI7z5PrdCXZDA7L1nVnIwE ffIf0zED32Z6tFSJFNmYgFZlD6g+DnQ==
      DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; x=1757637200; d=npmjs.help; s=rwmt1; h=content-transfer-encoding:content-type:from:to:subject:date:mime-version: message-id:feedback-id:cfbl-address:from; bh=46LbKElKI+JjrZc6EccpLxY7G+BazRijag+UbPv0J3Y=; b=DyWvxSOjMf7WfCVtmch+zw63kZ/OOBjcWnh1kIYs/hozgemb9mBIQCMqAdb4vSZChoW5uReVH5+k5 Jaz7UodbPJksVkYWqJOVg6nyx5EaYMYdgcw1+BCct/Sf2ceFwWurhupa6y3FBTFWBYLhcsAXERlx2l IuxWlpZoMDEBqDxjs8yvx/rkBrcd/2SNTcI+ooKJkrBIGBKuELOd3A5C6jlup6JNA4bE7vzP3FUfKw y0357UMnn45zWHm9HvudO4269FRlNjpiJaW7XF1/ANVrnDlNWfUGNQ5yxLZqmQDTtxFI7HcOrF3bTQ O/nrmVOvN9ywMvk/cJU4qGHqD9lT32A==
      CFBL-Address: fbl@smtp.mailtrap.live; report=arf
      X-Report-Abuse-To: abuse@mailtrap.io
      Received: from npmjs.help by smtp.mailtrap.live with ESMTPSA 6aee9fff-8c4b-11f0-87bb-0e939677d2a1; Mon, Sep 08 2025 00:33:20 GMT
      Feedback-ID: ss:770486:transactional:mailtrap.io
      Message-ID: <6be2b1e0-8c4b-11f0-0040-f184d6629049@npmjs.help>
      X-Mt-Data: bAX0GlwcNW6Dl_Qnkf3OnU.GLCSjw_4H01v67cuDIh2Jkf52mzsVFT_ZEVEe0W6Lf3qzW2LP_TCy93I46MCsoT0pB9HozQkvCw22ORSCt3JBma1G3v9aDEypT1DLmyqlb6hYLF3H7tJCgcxTU5pbijyNaOFtoUMdiTA6jxaONeZbBj.SKUa5CLT5TMpeNHG6oGIiY_jqlU.nQkxGPY3v9E34.Nz4ga8p9Pd_BplftaE~--2CLrluJMY65S5xFl--IISg0olYJu6DVyVDEcJ.AQ~~
      MIME-Version: 1.0
      Date: Mon, 08 Sep 2025 00:33:20 +0000
      Subject: Two-Factor Authentication Update Required
      To: "molsson" <martin@minimum.se>
      From: "npm" <support@npmjs.help>
      Content-Type: text/html; charset=UTF-8
      Content-Transfer-Encoding: quoted-printable

    • 6mile 1 day ago
      That domain (npmjs[.]help) has been taken down. Looks like it was purchased and started hosting on September 5th, 2025.
  • zabil 1 day ago
    Does anybody have tips on how to invalidate a wallet address response if it's intercepted and modified like this?
    • Mattwmaster58 1 day ago
      Off the top of my head, you could include your own checksum in the payload, since their code only modifies the address. But nothing would prevent them from reverse-engineering the checksum, too.

      There are ways to detect a replaced/proxied global window function too, but that's another arms race.

  • 15155 1 day ago
    BTW: the NPM logo is blurry in that phishing email.
  • ikekkdcjkfke 11 hours ago
    What a ss email is
  • goku12 1 day ago
    Developer account got hijacked through phishing. @junon acknowledged this readily and is trying to get it sorted. Meanwhile, this is a mistake that can happen to anyone, especially under pressure. So no point in discussing the personal oversight.

    So let me raise a different concern. This looks like an exploit for web browsers, where an average user (and most above average users) have no clue as to what's running underneath. And cryptocurrency and web3 aren't the only sensitive information that browsers handle. Meaning that similar exploits could arise targeting any of those. With millions of developers, someone is bound to repeat the same mistake sooner or later. And with some packages downloaded thousands of times per day, some CI/CD system will pull it in and publish it in production. This is a bigger problem than just a developer's oversight.

    - How does the end user protect themselves at this point? Especially the average user?

    - How do you prevent supply chain compromises like this?

    - What about other language registries?

    - What about other platforms? (binaries, JVM, etc?)

    This isn't a rhetorical question. Please discuss the solutions that you use or are aware of.

    • eviks 1 day ago
      > Meanwhile, this is a mistake that can happen to anyone, especially under pressure. So no point in discussing the personal oversight.

      Unless this is a situation that could've been easily avoided with a password manager, since the link was from a website not in your manager's database. In that case it can't happen to anyone following the security basics, and the point of discussing the oversight, instead of just giving up, is to increase the share of people who follow those basics.

      • junon 1 day ago
        I've mentioned this elsewhere. I was mobile, I don't often use it there, and I was in a rush.
    • NoahZuniga 1 day ago
      One thing I've been thinking of is restricting global access per package. Something like ansi-styles doesn't need access to the crypto global, the DOM, or the ability to make web requests. So if you can sandbox individual libraries, you can decrease the attack surface a lot.

      You could imagine a compromised pad-left package reading the contents of all password inputs on the page and sending them to an attacker's server; but if you don't let that package access the document or send web requests, you avoid that compromise.

    • edent 1 day ago
      > How does the end user protect themselves at this point? Especially the average user?

      Don't use unregulated financial products. The likelihood of a bank being hit by this isn't zero - but in most parts of the world they would be liable and the end user would be refunded.

      > How do you prevent supply chain compromises like this?

      Strictly audit your code.

      There's no magic answer here. Oh, I'm sure you can throw an LLM at the problem and hope that the number of false positives and false negatives don't drown you. But it comes down to having an engineering culture which moves slowly and doesn't break things.

      • semverbad 1 day ago
        Node has semver ranges and package-lock.json, but these are pretty cumbersome, and they're a huge part of this.

        Why a package with 10+ million weekly downloads can just be "updated" like this is beyond me. Have a waiting period. Make sure you have to be explicit. Use dates. Some of the packages hadn't been updated in 7 years and then we firehosed thousands of CI/CD jobs with them within minutes?

        npm and most of these package managers should be getting some basic security measures like waiting periods. It would be nice if I could turn semver ranges off, to be honest, and force folks to actually publish new packages. I'm always bummed when a dependency 4 layers deep just updates at 10PM EST because that's when the open-source guy had time.

        Packages used to break all the time, but things kind of quieted down and people stopped using semver ranges as much. Major packages like React generally don't use "somedepend": "^1.0.0" but go with "1.0.0".

        I think npm and the community knew this day was coming and just hopes it'll be fixed by tooling, but we need fundamental change in how packages are updated and verified. The idea that we need to "quickly" rollout a security fix with a minor patch is a good idea in theory, but in practice that doesn't really happen all that often. My audit returns all kinds of minor issues, but its rare that I need it...and if that's the case I'll probably do a direct update of my packages.

        Package-lock.json was a nice band-aid, but it shouldn't have been the final solution, IMHO. We need to reduce semver-range usage, have some concept of package age/importance, and npm needs a scanner that can detect obviously obfuscated code like this and at least put the package in quarantine. We could also use hooks in npm so that developers could write simple scripts to control installs, e.g. to not install packages that are too new.
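
        Some of this is already expressible in npm's own configuration; a sketch of a defensive `.npmrc` (both options are standard npm settings):

```ini
# record exact versions ("1.0.0") instead of ranges ("^1.0.0") when adding deps
save-exact=true
# refuse to run packages' install/postinstall lifecycle scripts
ignore-scripts=true
```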

        • 12_throw_away 1 day ago
          > Why a package with 10+ million weekly downloads can just be "updated" like this is beyond me. Have a waiting period. Make sure you have to be explicit. Use dates.

          Yep. Also interesting how many automated security scanners picked this up right away ... but NPM itself can't be bothered, their attitude is "YOLO we'll publish anything"

    • hansott 1 day ago
      > How do you prevent supply chain compromises like this?

      Take a look at https://github.com/AikidoSec/safe-chain and https://intel.aikido.dev/?tab=malware! Disclaimer: I work at Aikido Security :)
    • ashishbijlani 1 day ago
      Packj [1] detects malicious PyPI/NPM/Ruby/PHP/etc. dependencies using behavioral analysis. It uses static+dynamic code analysis to scan for indicators of compromise (e.g., spawning of shell, use of SSH keys, network communication, use of decode+eval, etc). It also checks for several metadata attributes to detect bad actors (e.g., typo squatting).

      1. https://github.com/ossillate-inc/packj

    • sigotirandolas 1 day ago
      > - How does the end user protect themselves at this point? Especially the average user?

      - Install as little software as possible, use websites if possible.

      - Keep important stuff (especially cryptocurrency) on a separate device.

      - If you are working on a project that pulls 100s of dependencies from a package registry, put that project on a VM or container.

      • goku12 19 hours ago
        > Install as little software as possible, use websites if possible.

        If I understood this correctly, this is an exploit for the browser.

  • hoppp 1 day ago
    Damn, I use chalk... did they remove the malicious versions?
  • ndhandala 17 hours ago
    I think passkeys fix this.
  • MrContent04 14 hours ago
    Incidents like this show how fragile the supply chain really is. One compromised maintainer account can affect thousands of projects. We need better defaults for package signing + automated trust checks, otherwise we'll just keep repeating the same cycle.
  • fareesh 1 day ago
    I use node/npm moderately.

    is there a runnable command to determine if the package list has a compromised version of anything?
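
    Two quick checks, assuming an npm project (package names taken from the affected list at the top of the thread; not exhaustive):

```shell
# See which versions of the named packages your tree actually resolves to
npm ls chalk debug ansi-styles strip-ansi ansi-regex color-convert color-name

# The bad releases now carry advisories, so audit flags them as well
npm audit
```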

  • stathibus 1 day ago
    As an outsider to the npm ecosystem, reading this list of packages is astonishing. Why do js people import someone else's npm module for every little trivial thing?
    • thewebguyd 1 day ago
      Lack of a good batteries-included stdlib. You're either importing a ton of little dependencies (which then depend on other small libraries) or you end up writing a ton of really basic functionality yourself.
      • rudedogg 1 day ago
        This is the answer IMO. The number of targets and noise would be a lot less if JS had a decent stdlib or if we had access to a better language in the browser.

        I have no hope of this ever happening and am abandoning the web as a platform for interactive applications in my own projects. I’d rather build native applications using SDL3 or anything else.

        • mrguyorama 1 day ago
          But this can't be the whole story. In the Java world, it's pretty common to import a couple huge libraries full of utility functions, but those are each one import, that you can track and version and pay attention to.

          Apache Commons helper libraries don't import sub libraries for every little thing, they collect a large toolbox into a single library/jar.

          Why instead do people in the javascript ecosystem insist on separating every function into its own library that STILL has to import helper libraries? Why do they insist on making imports fractally complex for zero gain?

          • crabmusket 1 day ago
            Bundle size optimisation. See my comment upthread for more detailed explanation. Bundle size is one of the historical factors that makes JS ecosystem a unique culture, and I'd argue uniquely paranoid.
          • xd1936 1 day ago
            It didn't use to be. It's just become less trendy to import a big giant Lodash, Underscore, Sugar, or even jQuery.
          • flomo 1 day ago
            Originally I think it was to avoid the applet experience of downloading a large util.jar or etc. (Not that most js devs really care.) However, I suspect the motivation is often social status on GitHub & their resume.
        • imiric 1 day ago
          To be fair, this is not a problem with the web itself, but with the Node ecosystem.

          It's perfectly possible to build web apps without relying on npm at all, or by being very selective and conservative about the packages you choose as your direct and transitive dependencies. If not by reviewing every line of code, then certainly by vendoring them.

          Yes, this is more inconvenient and labor intensive, but the alternative is far riskier and worse for users.

          The problem is with web developers themselves, who are often lazy, and prioritize their own development experience over their users'.

          • palmfacehn 1 day ago
            I'm often surprised at the number of JS experts who struggle with the basics of the browser API. Instead of reasoning through the problem, many will reach for a framework or library.
            • PeterisP 1 day ago
              At least historically it used to be the case that you don't ever want to use the browser API directly for compatibility reasons but always through some library that will be a do-nothing-wrapper in some cases but do a bunch of weird stuff for older browsers. And traditions are sticky.
            • skydhash 1 day ago
              Especially with the MDN, an amazing resource.
      • tannhaeuser 1 day ago
        npmjs is the stdlib, or what emerged from it.

        It started as CommonJs ([1]) with Server-side JavaScript (SSJS) runtimes like Helma, v8cgi, etc. before node.js even existed, but was then soon totally dominated by node.js. The history of server-side JavaScript is btw even longer than Java's on the server, starting with Netscape's LiveWire in 1996 I believe. Apart from the module-loading spec, the CommonJs initiative also specified concrete modules, such as the interfaces for node.js/express.js HTTP "middlewares" you can plug in as routes, and for things like auth handlers (JSGI itself was inspired by Ruby's Rack interface).

        The reason for is-array, left-pad, etc. is that people wanted to write idiomatic code rather than idiosyncratic JS typechecking code everywhere, and to use other people's packages as good citizens in a quid pro quo way.

        [1]: https://wiki.commonjs.org/wiki/CommonJS

        Edit: the people crying for an "authority" to just impose a stdlib fail to understand that the JS ecosystem is a heterogeneous environment around a standardized language with multiple implementations; this concept seems lost on TypeScripters who need big daddy MS or other monopolist to sort it all out for them

        • int_19h 1 day ago
          > JS ecosystem is a heterogeneous environment around a standardized language with multiple implementations

          It's not unique in this sense, yet others manage to provide a lot more in their stdlib.

          It's not that you need a "big daddy". It's that the ecosystem needs a community that actually cares about shit like this vulnerability.

        • spankalee 1 day ago
          > TypeScripters who need big daddy MS or other monopolist to sort it all out for them

          What is this crap statement?

          So you want type-checking because it helps you catch a class of errors in an automated way, and suddenly you have a daddy complex and like monopolies?

          Claiming this says a lot more about you than people who use TypeScript.

          • tannhaeuser 18 hours ago
            If you want type safety there are any number of better languages out there compiling to JavaScript. The entire point of JS is that it's a portable, good-enough dynamic scripting language, not a freaking mess that changes all the time.
          • typpilol 22 hours ago
            One of the most bizarre attacks on typescript I've seen lol.
      • tracker1 1 day ago
        Worth mentioning...

        https://jsr.io/@std

        • wonger_ 1 day ago
          How is this different than npm? It looks like a bunch of third-party packages, just prefixed with std.
          • tracker1 1 day ago
            They are separate packages, curated by the Deno team[1]. The interfaces are pretty consistent and useful.

            1. https://github.com/denoland/std

          • SahAssar 1 day ago
            It's not, and a third of them are tagged as unstable. JSR also still seems to not encourage proper versioning.
            • tracker1 1 day ago
              They are published as separate packages, but managed in the same repo by the Deno team. Versioning works fine with JSR.

              A fully-formed standard library doesn't spring into existence in a day.

              • DimmieMan 22 hours ago
                They seem pretty cautious with that unstable flag too.

                UUID v7 for example is unstable and one would be pretty confident in that not changing at this stage.

                Many unstable functions have less churn than a lot of other “stable” packages. It’s a standard library so it’s the right place to measure twice before cementing it forever.

      • pier25 1 day ago
        Yes this is the fundamental problem.

        It started with browsers giving you basically nothing. Someone had to invent jQuery 20 years ago for sensible DOM manipulation.

        Somehow this ethos permeated into Node which also basically gives you nothing. Not even fundamental things like a router or db drivers which is why everyone is using Express, Fastify, etc. Bun and Deno are fixing this.

      • DrewADesign 1 day ago
        I just never got the argument against including things like the sort of text formatting tools and such that people always import libraries for. It’s not like an embedded system for mission-critical realtime applications where most functions people write for it get formal proofs — it’s freaking javascript. Sure it’s become a serious tool used for serious tasks for some reason, but come on.
      • skydhash 1 day ago
        But why can’t we have a good library instead of those mini thingies?
        • progbits 1 day ago
          For C++ there are Boost, Folly, Absl, several more large libraries with reputable orgs behind them. I'm surprised someone doesn't make a big npm lib like that.

          Not hating on the author but I doubt similar compromise would happen to Facebook or Google owned package.

        • zahlman 1 day ago
          Because you have to figure out what should be in it, and coordinate the distribution. It's not like there's a reference implementation of JavaScript maintained by a well-known team that you consciously install everywhere that you need it.
          • skydhash 1 day ago
            Node is pretty much everywhere regarding JavaScript cli and web apps (server side). As for the web it’s hard to argue for a slim library when most sites are dumping huge analytics bundle on us.

            At this point, it’s just status quo and laziness.

        • mhitza 1 day ago
          Because "look at how many open source packages I maintain!"

          At one time small JS libraries were desirable and a good library-marketing approach, but nowadays simple sites ship megabytes of JavaScript without a care.

          In particular this developer is symptomatic of the problem of the NPM ecosystem and I've used him multiple times as an example of what not to do.

        • eviks 1 day ago
          Because a mini thing can be written in mini time by a mini number of people
          • pixl97 1 day ago
            And a mini thing can be switched to another 'mini' package easy enough if the current package decides to do something dumb.

            If your mega package decides to drop something you need you pretty much have to follow.

            • skydhash 1 day ago
              > If your mega package decides to drop something you need you pretty much have to follow.

              Or you can code it in. Mega packages can be very stable. Think SDL, ffmpeg, ImageMagick, Freetype...There's usually a good justification for dropping something alongside a wide deprecation windows. You don't just wake up and see the project gone. It's not like the escape codes for the unix terminal are going to change overnight.

        • jamesnorden 1 day ago
          The JS ecosystem values quantity over quality, for some bizarre reason.
      • pverheggen 1 day ago
        Not just a stdlib, lack of an SDK as well. Both Deno and Bun have decided to ship with tooling included, which cuts down on dev dependency bloat.
    • austin-cheney 1 day ago
      I can provide you with some missing background as I was a prior full time JavaScript/TypeScript developer for 15 years.

      Most people writing JavaScript code for employment cannot really program. It is not a result of intellectual impairment; it appears to be more a training and cultural deficit in the workforce. The result is extreme anxiety at the mere idea of writing original code, even when trivial in size and scope. The responses vary but often take the form of reused clichés, some of which don't even directly apply.

      What's weird about this is that it is mostly limited to the employed workforce. Developers who are self-taught or spend as much time writing personal code on side projects don't have this anxiety. This is weird because the resulting hobby projects tend to be substantially more durable than products funded by employment that are otherwise better tested by paid QA staff.

      As proof, ask any JavaScript team at your workplace to build their next project without a large framework and just observe how they respond, both verbally and non-verbally.

      • sangeeth96 1 day ago
        > Most people writing JavaScript code for employment cannot really program.

        > As a proof ask any JavaScript team at your employment to build their next project without a large framework and just observe how they respond both verbally and non-verbally.

        With an assumption like that, I bet the answer is mostly the same if you ask any Java/Python dev for example — build your next microservice/API without Spring or DRF/Flask.

        Even though I only clock in at about 5 YOE, I'm really tired of hearing these terrible takes; I've met a plentiful share of non-JS backend folks, for example, who have no idea about basic API design, design patterns, or even how to properly use the framework they reach for on every single project.

      • jbreckmckye 1 day ago
        > The responses vary but often take the form of reused cliches of which some don't even directly apply.

        "It has been tested by a 1000 people before me"

        "What if there is an upstream optimisation?"

        "I'm just here to focus on Business Problems™"

        "It reduces cognitive load"

        ---

        Whilst I think you are exaggerating, I do recognise this phenomenon. For me, it was during the pandemic when I had to train / support a lot of bootcamp grads and new entrants to the career. They were anxious to perform in their new career and interpreted that as shipping tickets as fast as possible.

        These developers were not dumb but they had... like, no drive at all to engage with problems. Most programmers should enjoy problems, not develop a kind of bad feeling behind the eyes, or a tightness in their chest. But for these folks, a problem was a threat, of a bad status update at their daily Scrum.

        Dependencies are a socially condoned shortcut to that. You can use a library and look like a sensible and pragmatic engineer. When everyone around you appears to accept this as the norm, it's too easy to just go with the flow.

        I think it is a change in the psychological demographic too. This will sound fanciful. But tech used to select for very independent, stubborn, disagreeable people. Now, agreeableness is king. And what is more agreeable than using dependencies?

        • austin-cheney 1 day ago
          The two I hear the most are:

          reinventing the wheel

          some comparison to assembly

        • 12_throw_away 1 day ago
          > They were anxious to perform in their new career and interpreted that as shipping tickets as fast as possible. [...].. they had like, no drive at all to engage with problems

          To be honest, I think these programmers understood their jobs perfectly here. Their bosses view programmers as commodities, are not concerned with robustness, maintainability, or technical merit - they want a crank they can turn that spits out features.

          • jbreckmckye 12 hours ago
            I think you are right. Those feature factory teams were the ones hiring as fast as they could; they didn't need to filter on programming fundamentals; and they could exploit the anxiety of junior developers who sensed the market was becoming competitive.
        • notmyjob 1 day ago
          Not sure about “agreeableness” but I can see group think and disagreeableness to anything that falls outside of the group think. Cargo cult coding isn’t a new thing but the demographic shift you note is real. But is that not just the commodification of programming labor?
      • IshKebab 1 day ago
        Not my experience at all. It's more like a) JS devs view NPM packages as a mark of pride and so they try to make as many as possible (there are people proud of maintaining hundreds of packages, which is obviously dumb), and b) people are lazy and will take a ready-made solution if it's available, and c) there are a lot of JavaScript developers.

        The main reasons you don't see this in other languages is they don't have so many developers, and their packaging ecosystems are generally waaay higher friction. Rust is just as easy, but way higher skill level. Python is... not awful but it's definitely still a pain to publish packages for. C++, yeah why even bother.

        If Python ever officially adopts uv and we get a nice `uv publish` command, then you will absolutely see the same thing there.

        • pixl97 1 day ago
          It seems in some large businesses code ownership is an issue too.

          If you NPM import that's now part of your SCA/SBOM/CI to monitor and keep secure.

          If you write code, it's now your problem to secure and manage.

      • xorcist 1 day ago
        If Javascript people were bad programmers, we wouldn't see two new frontend frameworks per year. Many of them are ambitious projects that must have had thousands of hours put in by people who know the language well.

        The observation is real however. But every culture develops its own quirks and ideas, and for some reason this has just become a fundamental part of Javascript's. It's hard to know why after the fact, but perhaps it could spark the interest of sociologists who can enlighten us.

        • skydhash 1 day ago
          There's a reason you don't see two frameworks every year in another language. Being a good programmer is recognizing when a problem is solved and actually contributing to the solution instead of recreating it. Coding a new system can be done really quickly as you're mostly focusing on the happy path. The real work is ironing out bugs and optimizing the performance.
      • crabmusket 1 day ago
        Glad to see someone else identify the anxiety at the root of the culture.

        After an npm incident in 2020 I wrote up my thoughts. I argue that this anxiety is actually somewhat unique to JS, which is why we don't see a similar culture in other language ecosystems.

        https://crabmusket.net/java-scripts-ecosystem-is-uniquely-pa...

        Basically, the sources of paranoia in the ecosystem are

        1. Weak dynamic typing

        2. Runtime (browser engine) diversity and compatibility issues

        3. Bundle size (the "physics" of code on a website)

        In combination these three things have made JS's ecosystem really psychologically reliant on other people's code.

      • duped 1 day ago
        I don't quite know how to put this thought together yet, but I've noticed that no one quite hates programming more than this class of programmers. It's like playing on a football team with people who hate football.

        A key phrase that comes up is "this is a solved problem." So what? You should want to solve it yourself, too. It's the PM's job to tell us not to.

    • nine_k 1 day ago
      Having a module for every little trivial thing allows you to only bring these modules inside the JS bundle you serve to your client. If there's a problem in one trivial-thing function, other unrelated trivial things can still be used, because they are not bundled in the same package.

      A comprehensive library might offer a more neat DX, but you'd have to ship library code you don't use. (Yes, tree-shaking exists, but still is tricky and not widespread.)

      • palmfacehn 1 day ago
        Things like this are good illustrations as to why many feel that the entire JS ecosystem is broken. Even if you have a standard lib included in a language, you wouldn't expect a bigger binary because of the standard lib. The JS solution is often more duct tape on top of a bad design. In this case tree shaking, which may or may not work as intended.
        • crabmusket 1 day ago
          I agree with you, but I'd ask- what other language needs to distribute to an unknown runtime environment over the network?

          If it's the browser's job to implement the standard library, how do you ensure that all browsers do this in a compliant and timely fashion? And if not, how do you optimise code-on-demand delivery over the internet?

          I don't deny there are/could be solutions to this. But historically JS devs have wrestled with these issues as best they can and that has shaped what we see today.

          • skydhash 1 day ago
            > what other language needs to distribute to an unknown runtime environment over the network?

            What is this unknown runtime environment? Even during the browser wars, there was just a handful of browsers, and IE was the only major outlier. Checking for the existence of features and polyfilling is not that complicated.

            And most of the time, the browser is already downloading a lot of images and other resources. Arguing about bundle size is very hypocritical from developers who won't blink at adding 17 analytics modules.
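
            For what it's worth, the detect-then-polyfill pattern under discussion really is only a few lines. A simplified sketch (real polyfills, e.g. the ones on MDN, handle edge cases like `NaN` that this one skips):

```javascript
// Feature-detect: only patch if the engine lacks Array.prototype.includes.
if (!Array.prototype.includes) {
  Object.defineProperty(Array.prototype, 'includes', {
    // Simplified: indexOf-based, so it misses NaN, unlike the real spec.
    value: function (item) {
      return this.indexOf(item) !== -1;
    },
  });
}

console.log([1, 2, 3].includes(2)); // → true
```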

            • crabmusket 1 day ago
              > Checking the existence of features and polyfilling is not that complicated.

              Judging by what we see in the world, most developers don't agree with you. And neither do I. A handful of browsers, multiplied by many versions per browser in the wild (before evergreen browsers like Chrome became widespread, but even today with e.g. Safari, or enterprise users), multiplied by a sprawling API surface (dare I say it, a standard library) is not trivial. And that's not even considering browser bugs and regressions.

              > very hypocritical of developers that won't blink

              Not a great argument, as developers don't necessarily get to choose how to add analytics, and plenty of them try to push back against doing so.

              Also, the cost of parsing and JIT'ing JS code is byte-for-byte different to the cost of decoding an image.

              • skydhash 22 hours ago
                > Judging by what we see in the world, most developers don't agree with you. And neither do I.

                From my POV, most developers just test on the most popular browser (and the latest version of that) without checking if the API is standard or its changelog. Or they do dev on the most powerful laptop while the rest of the world is still on 8gb, FHD screen with integrated gpu.

              • typpilol 22 hours ago
                Explaining browser defaults to non-JavaScript people is kind of eye-opening, I've found.
          • palmfacehn 20 hours ago
            A batteries included standard lib included with the runtime is one approach. Yes, you would know upfront the version which the browser implements. From there you could dynamically load a polyfill or prompt the user to upgrade.

            Alternatively, because there are now (often ridiculous) build systems and compilation steps, we might expect similar behavior to other compiled binaries. Instead we get the worst of both worlds.

            Yes, JS as it is is some kind of standard, but at a certain point we might ask, "Why not throw out the bad designs and start from scratch?" If it takes ten years to sunset the garbage and offer a compatibility shim, that's fine. All the more reason to start now.

            A purely compiled WASM approach with first class DOM access or a clean scripting language with a versioned standard lib, either option would be better than the status quo.

            • crabmusket 20 hours ago
              > A purely compiled WASM approach

              I would love to see if a browser could like... "disaggregate" itself into WASM modules. E.g. why couldn't new JS standards be implemented in WASM and hot loaded into the browser itself from a trusted distributor when necessary?

              Missing CSS Level 5 selectors? Browser goes and grabs the reference implementation from the W3C.

              Low-level implementations could replace these for the browsers with the most demanding performance goals, but "everyone else" could benefit from at least remaining spec compatible?

              (I guess this begs the question of "what's the API that these WASM modules all have to conform to" but I dunno, I find it an interesting thought.)

              • palmfacehn 16 hours ago
                Yes, that would be a compelling change. Like a language agnostic HotJava platform. We're overdue for a more coherent approach, from the bottom up.
          • int_19h 7 hours ago
            > how do you ensure that all browsers do this in a compliant and timely fashion?

            The ecosystem somehow manages to figure this out for things like fancy CSS gradients...

        • nine_k 1 day ago
          This is because you cannot easily remove problematic stuff from the browser. It's actively being used by someone, so the vendors keep it, so it continues to be used. The process takes decades, literally.

          On the server side, of course, you can do whatever you like, see Node / Deno / Bun. But the code bundle size plays a minor role there.

      • skydhash 1 day ago
        Doesn’t the bundler already do tree shaking? Optimizing via dependency listing is very wrong.
        • tracker1 1 day ago
          Tree shaking is less than reliable... for it to work well, all the dependencies need to be TS/ESModule imports/exports and even then may not shake out properly.

          It helps, but not as much as judicious imports. I've been using Deno more for my personal projects which does have a pretty good @std library, though I do think they should keep methods that simply pass through to the Deno runtime, and should probably support working in Node and Bun as well.

      • 0cf8612b2e1e 1 day ago
        Given how fat a modern website is, I am not sure that a kitchen sink library would change much. It could actually improve things because there would be fewer redundant libraries for basic functionality.

        Say there is neoleftpad and megaleftpad - both could see widespread adoption, so you are transitively dependent on both.

        • palmfacehn 1 day ago
          There's also the option of including that standard lib with the runtime.
          • 9dev 1 day ago
            And never ever be able to correct your past mistakes, because some sites might still be using them? The web platform is no .NET runtime you can just update.
            • int_19h 7 hours ago
              Web browsers update far more often than .NET runtime, if anything. And .NET still supports a lot of deprecated stuff going all the way back to 1.0; so does Java (old-style collections, for example).

              Also, JavaScript is a shining example of "never ever be able to correct your past mistakes" already, so it's not like this is something new for the web.

          • crabmusket 1 day ago
            That is exactly what happens today. JS has a standard library. It's just not evenly distributed.
            • int_19h 6 hours ago
              JS standard library is missing very basic things like maps with value semantics for keys that aren't primitives.
    • jowea 1 day ago
      This conversation has been a thing since at least the leftpad event. It's just how the JS ecosystem works, it seems. Perhaps the default library is too small?
      • raddan 1 day ago
        Or the language is too braindead. `is-arrayish` should not even have to be a thing.
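
        For context, the strict check has been built into the language since ES5, though to be fair `is-arrayish` tests something slightly looser ("array-like", i.e. has a numeric `length`). A sketch, not the package's exact semantics:

```javascript
// Built-in since ES5 (2009) — no package needed for the strict check.
console.log(Array.isArray([1, 2, 3]));      // → true
console.log(Array.isArray('not an array')); // → false

// Rough one-liner for the looser "array-like" check is-arrayish does
// (illustrative only, not the package's exact behavior):
const isArrayish = (v) =>
  Array.isArray(v) ||
  (v != null && typeof v !== 'string' && typeof v.length === 'number');
console.log(isArrayish({ length: 2 })); // → true
```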
        • robrtsql 1 day ago
          I agree that it doesn't need to exist, but as far as I can tell, almost no one depends on it directly. The only person using it is the author, who uses it in some other small libraries, which are then used in a larger, nontrivial library.

          I just created a Next.js app, saw that `is-arrayish` was in my node_modules, and tried to figure out how it got there and why. Here's the chain of dependencies:

          next > sharp > color > color-string > simple-swizzle > is-arrayish

          `next` uses `sharp` for image optimization. Seems reasonable.

          `sharp` uses `color` (https://www.npmjs.com/package/color) to convert and manipulate color strings. Again, that seems reasonable. This package is maintained by Qix-.

          Everything else in the chain (color-string > simple-swizzle > is-arrayish) is also maintained by Qix-. It's obnoxious to me that he feels it is necessary to have 80 different packages, but it would also be a substantial amount of effort for the other parties to stop relying on Qix-'s stuff entirely.

          • tkiolp4 1 day ago
            That’s a tactic shitty maintainers do: write N dubious modules that no sane person would install. Write one or two valuable modules that import those N dubious modules.
    • lukebechtel 1 day ago
      It's easier to find something to object to in a large code change than in a single-line import, even if the effective code being run is the same -- the PR review looks cleaner and safer when you just import something that seems "trusted".

      I'm not saying it is safer, just to the tired grug brain it can feel safer.

    • socalgal2 1 day ago
      Same reason they do in rust.

      The rust docs, a static site generator, pull in over 700 packages.

      Because it’s trivial and easy

    • jbreckmckye 1 day ago
      "JS people" don't, but certain key dependencies do, and there are social / OSS-political reasons why.

      Why do "Java people" depend on Lowagie's iText? Remember the leftpad-esque incident he initiated in 2015?

    • rglover 1 day ago
      You typically don't. But a lot of packages that you do install depend on smaller stuff like this under the hood (not necessarily good, and obviously better handled with bespoke code in the package, but it is what it is).
      • grishka 1 day ago
        Then the question becomes, why do developers of larger libraries import someone else's module for every little trivial thing?
        • SAI_Peregrinus 1 day ago
          Sometimes it's not someone else's module, it's their own. They break up the big library into reusable components, and publish them all separately. Essentially taking DRY to an extreme: don't have private functions, make all your implementation details part of the public API & reuse them across projects.
        • rglover 1 day ago
          Because they don't have the slightest clue what they're doing.
          • jbreckmckye 1 day ago
            It's not that either.

            There are a handful of important packages that are controlled by people who have consulting / commercial interests in OSS activity. These people have an incentive to inflate download numbers.

            There could be a collective push to move off these deps, but it takes effort and nobody has a strong incentive to be the first

    • dist-epoch 1 day ago
      This is spreading everywhere, Rust, Python, ...
      • Klonoar 1 day ago
        Rust is an interesting case to me.

        There are certainly a lot of libraries on crates.io, but I’ve noticed more projects in that ecosystem are willing to push back and resist importing unproven crates for smaller tasks. Most imported crates seem to me to be for bigger functionality that would be otherwise tedious to maintain, not something like “is this variable an array”.

        (Note that I’m not saying Rust and Cargo are completely immune to the issue here)

      • grishka 1 day ago
        Not Java, thankfully! Libraries containing 1-2 trivial classes do exist, but they're an exception rather than a rule. Might be that the process of publishing to Maven Central is just convoluted enough to deter the kinds of people who would publish such libraries.
        • Deukhoofd 1 day ago
          Also because Java, .NET, etc. all have very expansive standard libraries. You don't need to import most stuff, as it's already built-in.
          • tracker1 1 day ago
            Very true... I'm more experienced with .Net, but usually when you bring in something, it's much more of a compositional library or framework for doing something... like a testing harness (XUnit), web framework (FastEndpoints), etc. Not so much in terms of basic utilities, where the std library and extensions for LINQ cover a lot of ground, even if you aren't using LINQ expressions themselves.
          • kelvinjps10 1 day ago
            But then you depend on Microsoft for everything. I prefer Python, where it's batteries included but you depend on a foundation.
            • grishka 1 day ago
              Hasn't .net been open-source for like 10 years?
              • int_19h 1 day ago
                It is, but it's still firmly controlled by Microsoft, particularly when it comes to ecosystem evolution. Some people find that uncomfortable even if the source is open - legal right to fork is one thing, technical ability to do so and maintain said fork is another.
              • kelvinjps10 18 hours ago
                But most of the documentation and tooling is around visual studio and azure
          • szatkus 1 day ago
            I mean, Apache Commons is still widely used. But it's just a handful of libraries maintained by one organisation.
      • adamc 1 day ago
        The difference, at least in languages like Java or Python, is that there is a pretty strong "standard" library that ships with the language, and which one can assume will be kept up-to-date. It is very hard to assume that for NPM or Rust or any other crowd-sourced library system.
    • paulddraper 1 day ago
      Which of these would you prefer to reimplement?

      Debug, chalk, ansi-styles?

      ---

      You can pretend like this is unique to JS ecosystem, but xz was compromised for 3 years.

      • craftkiller 1 day ago
        > You can pretend like this is unique to JS ecosystem, but xz was compromised for 3 years.

        Okay, but you're not suggesting that a compression algorithm is the same scale as "is-arrayish". I don't think everyone should need to reimplement LZMA but installing a library to determine if a value is an array is bordering on satire.
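
        For what it's worth, the built-in has covered the plain-array case since ES5, and even "array-like" semantics fit in a few lines (the `isArrayish` below is a hypothetical stand-in sketched for illustration, not the package's exact logic):

```javascript
// Built in since ES5 -- no dependency needed for plain arrays.
console.log(Array.isArray([1, 2, 3])); // true
console.log(Array.isArray('nope'));    // false

// If you genuinely need "array-like" semantics, a sketch:
const isArrayish = (v) =>
  Array.isArray(v) ||
  (v != null && typeof v !== 'string' && typeof v.length === 'number');

console.log(isArrayish({ length: 2 })); // true
```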

        • paulddraper 1 day ago
          FWIW, is-arrayish is primarily an internal dependency. The author (Qix) depends on it for the packages that actually get used, like color and error-ex.

          But it's all one author.

          • tkiolp4 1 day ago
            They should ban Qix.
      • stathibus 1 day ago
        A common refrain here seems to be that there is no good std lib, which makes sense for something like "chalk" (used for pretty-printing?).

        That being said, let's take color printing in terminal as an example. In any sane environment how complicated would that package have to be, and how much work would you expect it to take to maintain? To me the answer is "not much" and "basically never." There are pretty-print libraries for OS terminals written in compiled languages from 25 years ago that still work just fine.

        So, what else is wrong with javascript dev where something as simple as coloring console text has 32 releases and 58 github contributors?
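
        To illustrate the point: the core of terminal coloring is just ANSI SGR escape sequences, e.g. (a minimal sketch that ignores color-support detection, nesting, and Windows quirks, which are most of what libraries like chalk actually add):

```javascript
// ANSI SGR escape codes: ESC[<n>m applies a style, ESC[39m/ESC[22m reset it.
const red  = (s) => `\x1b[31m${s}\x1b[39m`;
const bold = (s) => `\x1b[1m${s}\x1b[22m`;

console.log(red('error:') + ' something ' + bold('failed'));
```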

        • crabmusket 1 day ago
          Skimming chalk's releases page, I did find some quick confirmation of what I expected: recent releases, at least breaking ones, are to do with keeping up with ecosystem changes:

          https://github.com/chalk/chalk/releases

          5.0: moving to ESM

          4.0: dropping support for Node <10

          3.0: indeed some substantive API and functionality changes

          I got to 2.0, which added truecolor support. I was amused to note also that 3.0 and 2.0 come with splashy banner images in their GitHub releases.

          This is a pattern I've seen often with "connector" packages, e.g. "glue library X into framework Y". They get like 10 major versions just because they have to keep updating major versions of X and Y they are compatible with, or do some other ecosystem maintenance.

        • paulddraper 1 day ago
          > So, what else is wrong with javascript dev where something as simple as coloring console text has 32 releases and 58 github contributors?

          I see a new CLI graphics library on HN every other week.

          https://github.com/fatih/color (Go) has 23 releases and 39 contributors.

          https://github.com/BurntSushi/termcolor (Rust) has 173 contributors.

      • dsff3f3f3f 1 day ago
        I wouldn't use debug or ansi-styles. They're not even remotely close to being worth adding a dependency. Obviously none of them are trustworthy now though.
        • skydhash 1 day ago
          I wouldn’t even use chalk. Altering terminal output is easy. But it should be used sparingly.
          • dsff3f3f3f 1 day ago
            You're right. I only looked at the source for debug and ansi-styles. After looking at chalk it's insanity to add that as a dependency as well.
            • kesor 23 hours ago
              And yet it has 300M weekly downloads. I am fairly sure that most of these are not because it is a direct dependency of people's projects, but rather it is a dependency of a dependency of a dependency.
              • skydhash 22 hours ago
                I think expo and eas-cli (the Expo build service) use chalk. I never understood why those CLIs need colors for things that can easily be done with proper spacing and some symbols.
      • homebrewer 1 day ago
        It's telling that we keep remembering xz to this day, while npm has these incidents on what feels like every single week.
        • pixl97 1 day ago
          I mean, we're catching the ones on NPM. Who knows how many xz's are hidden.
    • felbane 1 day ago
      Extreme aversion to NIH syndrome, perhaps? I agree that it's weird. Sure, don't try to roll your own crypto library but the amount of `require('left-pad')` in the wild is egregious.
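      For reference, left-pad's use case has been a one-liner built into the language since ES2017:

```javascript
// String.prototype.padStart replaces left-pad (ES2017+).
const padded = '5'.padStart(3, '0');
console.log(padded); // "005"
```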
  • zubilent 1 day ago
    Is the npm package ecosystem fixable at this point? It seems to be flawed by design.

    Is there a way to not accept any package version less than X months old? It's not ideal because malicious changes may still have gone undetected in that time span.

    Time to deploy AI to automatically inspect packages for suspect changes.
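
    A partial version of this already exists: npm's `before` config resolves dependencies to the newest versions published before a given date. A sketch (the cutoff date is illustrative):

```ini
# .npmrc -- resolve only package versions published before this date
before=2025-09-01
```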

    • mattstir 1 day ago
      It's a tricky thing because what if the update fixes a critical vulnerability? Then you'd be stuck on the exploitable version for X months longer
  • WesolyKubeczek 1 day ago
    Ugh, I almost had my GitHub compromised two years ago by a phishing email from circleci dot net. Almost. The GitHub login page still being under that domain made me stop in my tracks.
  • andrewmcwatters 1 day ago
    Luckily this seems to be browser-specific, and not cryptocurrency malware that runs in Node.js environments, so it might be wise for us all to do some hardening on our software, and make sure we're doing things like version pinning.

    Edit: As of this morning, `npm audit` will catch this.
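
    A sketch of what pinning can look like in package.json: exact versions (no `^`/`~` ranges) for direct dependencies, plus `overrides` to pin transitive ones, combined with a committed lockfile and `npm ci`. (The version numbers here are illustrative.)

```json
{
  "dependencies": {
    "chalk": "5.6.0"
  },
  "overrides": {
    "debug": "4.4.1"
  }
}
```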

    • jbverschoor 1 day ago
      Run anything in some sort of container or sandbox
  • paulddraper 1 day ago
    Maintainer phished.

    Was caught quickly (hours? hard to be sure, the versions have been removed/overwritten).

    Attacker owns npmjs.help domain.

    • DDerTyp 1 day ago
      Noticed that after ten mins, contacted the author immediately, and he seems to be working on it / restoring his account / removing malware from published packages.

      Kinda "proud" of it haha :D

      • jbverschoor 1 day ago
        Doesn’t npmjs do things like signing, pinning, and yanking packages, like rubygems?
  • nodesocket 1 day ago
    This is terrifying. Reminder to store your crypto in a hardware-based wallet like a Ledger, not a browser-based one. Stay frosty when making transfers from exchanges.
    • artooro 1 day ago
      While true, this is also an eye opening event of how much worse it could be if it was more generic and not limited to crypto wallet addresses.
      • nodesocket 1 day ago
        Seems like exchanges should have a confirmation screen that shows the destination address from XHR requests before processing, though I suppose the malicious script could just change the DOM, showing the address you entered instead of the modified address it injected.
    • 1023bytes 1 day ago
      If an exchange got compromised, there's no way you would know you're sending to the attacker's address.
    • nixosbestos 1 day ago
      How is it terrifying? They clicked through a 2FA reset email, a process that I have never, and will never need to go through, and seemingly one that they didn't even initiate.
      • goku12 1 day ago
        How many developers are there like him? If not him, they'll target someone else. And while you or I will never do such a thing under normal circumstances, that's a pretty simple mistake to make if you are stressed, sleep deprived or sick. We are supposed to have automatic safeguards against such simple mistakes. (We used to design stuff with the assumption that if a human mistake is possible, someone will eventually make it for sure.)
        • crooked-v 1 day ago
          Also, companies have mass popularized the whole 'click a link in an email to login' thing, which really contributes to the mistake factor.
      • nodesocket 1 day ago
        Like you’ve never made a mistake before. Blatantly blaming the maintainer is unfair. They made a mistake, it happens.
        • nixosbestos 1 day ago
          No, I have never, ever responded to an explicit ask to reset the most important security feature of my accounts, without me initiating it, and I use a password manager (lol) so, no, I will never, ever encounter this problem. Because I care about my data, safety, and integrity, and my users'. There's literally no reason ever why I would or will do a 2FA reset.

          It does happen, yes, it's not terrifying.

          • kelvinjps10 1 day ago
            The wording was similar to how GitHub started requiring 2FA. It wasn't "here is the 2FA change that you initiated"; it was more "starting September 10 we will start requiring 2FA".
            • nixosbestos 1 day ago
              Edit: I get it, it was a password+TOTP phishing/proxy attack.

              Wouldn't have happened if they used passkeys or a password manager. Things that get dunked on here regularly. Hm.

          • wewtyflakes 1 day ago
            Nobody cares if you, specifically, are this diligent. The terror is because unless _absolutely everyone_ who maintains NPM packages is this diligent, then we are all vulnerable. That sounds terrifying to me!
  • pavlov 1 day ago
    The malware steals crypto in end-user browsers.

    Another one for “web3 is going great”…

    • goku12 1 day ago
      I dislike web3 and the overuse of crypto as much as you do. But look at the nature of the exploit. It isn't limited to crypto or web3. There are other secrets and sensitive information that browsers regularly hold in their memory. What about them?
      • jowea 1 day ago
        Yeah cryptoassets are probably just the easiest thing to monetize.
  • bpavuk 1 day ago
    I'll come back to this thread when someone asks me why I hate JavaScr*pt yet again. this will be one of a thousand links.
  • albi05 1 day ago
    "B-b-but passkeys are inconvenient"
    • herpdyderp 1 day ago
      I must admit I was wary of them at first but now I use them on everything I can and it's more convenient.
  • vplex 1 day ago
    [dead]
  • verhoff_noah 10 hours ago
    [flagged]
  • dist-epoch 1 day ago
    Given that most of these kind of attacks are detected relatively quickly, NPM should implement a feature where it doesn't install/upgrade packages newer than 3 days, and just use the previous version.
    • jowea 1 day ago
      What if the latest patch is (claiming to be) a security fix? Then that's 3 days of more insecurity.
    • mcintyre1994 1 day ago
      Would it be spotted quickly if nobody got the update though? It'd probably just go undetected for 3 days instead. In this case one team spotted it because their CI picked up the new version (https://jdstaerk.substack.com/p/we-just-found-malicious-code...).
      • skybrian 1 day ago
        The question is who picks up the vulnerable version first. With minimal version selection (like Go has), the people with a direct dependency on the vulnerable library go first, after running a command to update their direct dependencies. People with indirect dependencies don’t get the new version until a direct dependency does a release pointing at the vulnerable version, passing it on.

        Not sure if that would be a better result in the end. It seems like it depends on who has direct dependencies and how much testing they do. Do they pass it on or not?
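
        A sketch of why MVS delays exposure: Go resolves each module to the minimum version that satisfies all `require` directives across the graph, so a freshly published version of an indirect dependency is not picked up until some module explicitly requires it (the module paths and versions below are illustrative):

```go
// go.mod (illustrative): even if an indirect dependency of
// github.com/some/lib publishes a new version tomorrow, this build keeps
// using the versions the require graph already names -- no silent upgrade
// on the next build.
module example.com/app

go 1.22

require github.com/some/lib v1.4.0
```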

  • nixosbestos 1 day ago
    Cough passkeys would've prevented this.