| Then there are the programmers who read on proggit that “OO drools, functional programming rules”, or the C++ programmers who think having a 40-minute build proves how smart and tough they are, etc. |
| > this domain was caught injecting malware on mobile devices via any site that embeds cdn.polyfill.io
I've said it before, and I'll say it again: https://httptoolkit.com/blog/public-cdn-risks/

You can reduce issues like this using subresource integrity (SRI), but there are still tradeoffs (around privacy and reliability; see the article above), and there is a better solution: self-host your dependencies behind a CDN service you control (Bunny/Cloudflare/Akamai/whatever is fine and cheap).

In a tiny prototyping project, a public CDN is convenient to get started fast, sure, but if you're deploying major websites I would strongly recommend not using public CDNs, never ever ever ever (the World Economic Forum website is affected here, for example! Absolutely ridiculous). |
| The assumption of many npm packages is that you have a bundler, and I think rightly so, because that leaves all options open regarding polyfilling, minification and actual bundling. |
| I really miss the days of minimal/no use of JS in websites (not that I want Java applets and Flash, LOL). Kind of depressing that so much of current web design is walled behind JavaScript. |
| The assumption shouldn't be that you have a bundler, but that your tools and runtimes support standard semantics, so you can bundle if you want to, or not bundle if you don't want to. |
| That was my thought too, but polyfill.io does do a bit more than a traditional library CDN: their server dispatches a different file depending on the requesting user agent, so only the polyfills needed by that browser are delivered, and newer browsers don't need to download and parse a bunch of useless code. If you check the source they deliver to a sufficiently modern browser (https://polyfill.io/v3/polyfill.min.js), it doesn't contain any code at all (well, unless they decide to serve you the backdoored version...).

OTOH, doing it that way means you can't use subresource integrity, so you really have to trust whoever is running the CDN even more than usual. As mentioned in the OP, Cloudflare and Fastly both host their own mirrors of this service if you still need to care about old browsers. |
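The dispatch idea is conceptually simple. A toy sketch (my own illustration, not the service's actual detection logic):

```javascript
// Toy version of user-agent-based polyfill dispatch: old browsers get
// polyfill code, modern browsers get an empty response body.
function polyfillsFor(userAgent) {
  const needed = [];
  // Crude check: treat IE-family user agents as needing everything.
  if (/MSIE|Trident/.test(userAgent)) {
    needed.push('/* Promise polyfill */', '/* fetch polyfill */');
  }
  return needed.join('\n'); // empty string for modern browsers
}
```

This per-request variation is exactly what rules out SRI: there is no single response body to hash in advance.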
| There was a brief period when the frontend dev world believed the most performant way to have everyone load, say, jQuery would be for every site to load it from the same CDN URL. From a trustworthy provider like Google, of course.

It turned out the browser domain sandboxing wasn't as good as we thought, so this opened up side-channel attacks, which led to browsers getting rid of cross-domain cache sharing; and of course it turns out that there's really no such thing as a 'trustworthy provider', so the web dev community memory-holed that little side adventure and pivoted to npm. Which is going GREAT by the way.

The advice is still out there, of course. W3Schools says:

> One big advantage of using the hosted jQuery from Google:
> Many users already have downloaded jQuery from Google when visiting another site. As a result, it will be loaded from cache when they visit your site

https://www.w3schools.com/jquery/jquery_get_started.asp

Which hasn't been true for years, but hey. |
| >self-host your dependencies behind a CDN service you control (just bunny/cloudflare/akamai/whatever is fine and cheap).
This is not always possible, and some dependencies will even disallow it (think: third-party suppliers). Anyways, then that CDN service's BGP routes are hijacked. Then what? See "BGP Routes" on https://joshua.hu/how-I-backdoored-your-supply-chain But in general, I agree: websites pointing to random js files on the internet with questionable domain independence and security is a minefield that is already exploding in some places. |
| This assumes that advertisers know how the traffic came to their site. The malware operators could be scamming the advertisers into paying for traffic with very low conversion rates. |
| CF links to the same discussion on GitHub that the OP does. Seems less like they predicted it, and more like they just thought that other folks' concerns were valid and amplified the message. |
| Ah! I didn’t realize that. My new hot take is that sounds like a terrible idea and is effectively giving full control of the user’s browser to the polyfill site. |
| And this hot take happens to be completely correct (and is why many people didn't use it, in spite of others yelling that they were needlessly re-inventing the wheel). |
| Their point is that the result changes depending on the request. It isn't a concern about the SRI hash not getting checked; it's that you can't realistically know what to expect in advance. |
| In general, SRI (Subresource Integrity) should protect against this. It sounds like it wasn't possible in the Polyfill case as the returned JS was dynamic based on the browser requesting it. |
| Always host your dependencies yourself, it's easy to do & even in the absence of a supply chain attack it helps to protect your users' privacy. |
| MathJax [1] still recommends loading it via a polyfill.io script tag in its getting-started snippet. Therefore, if you ever used MathJax, possibly by copying that snippet and forgetting about it, make sure to patch the polyfill out.

[1] https://www.mathjax.org/#gettingstarted

EDIT: To clarify, patch out just the polyfill (the first line in the snippet). You can of course keep using MathJax, and the second line alone should be enough. (Though it's still better to host a copy yourself, just in case.) |
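For context, the getting-started snippet on that page was, at the time, along these lines (reconstructed from memory; check the linked page for the current version):

```html
<!-- First line: the polyfill.io tag — this is the one to remove -->
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<!-- Second line: MathJax itself, which can stay -->
<script id="MathJax-script" async
        src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
```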
| In times of widespread attacks, it would be better to list out actions that affected parties can take. Here is what I found:

- Remove it fully (as per the original author); it is no longer needed.
- Use the alternate CDNs (from Fastly or Cloudflare).

Also, as a good practice, use SRI (though it wouldn't have helped in this attack).

I posted a note here: https://cpn.jjude.com/@jjude/statuses/01J195H28FZWJTN7EKT9JW...

Please add any actions that devs and non-devs can take to mitigate this attack. |
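One concrete dev-side action is to scan templates and bundles for references to the compromised domain. A minimal check (the `referencesPolyfillIo` helper name is my own) could be:

```javascript
// Returns true if a source file's contents reference the polyfill.io
// domain, so affected script tags can be found and removed or repointed.
function referencesPolyfillIo(source) {
  return /(?:cdn\.)?polyfill\.io/.test(source);
}

// Typical use: read each HTML/JS file in the project and flag matches.
```

Running this over every checked-in HTML and JS file gives a quick inventory of pages that need the tag removed or pointed at a self-hosted copy.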
| Why would the Chinese government use this to load a gambling website? I'm sure there are many better, more subtle uses they could come up with for this opportunity. |
| Isn't there some hash in the script tag for this kind of thing? Maybe that should be mandatory or something? This broke half the internet anyway. |
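There is: the `integrity` attribute (Subresource Integrity). A sketch of what it looks like (the URL and hash here are placeholders, not real values):

```html
<!-- The browser hashes the fetched file and refuses to execute it
     if the digest doesn't match the integrity attribute. -->
<script src="https://cdn.example.com/lib.min.js"
        integrity="sha384-AAAAplaceholderAAAA"
        crossorigin="anonymous"></script>
```

As discussed elsewhere in the thread, this only works when the served bytes are fixed in advance, which polyfill.io's per-browser responses deliberately are not.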
| The first time a user on a phone opens a website through an ad (Google Ads or Facebook) that includes this link, they will be redirected to a malicious website.

The request sent to https://cdn.polyfill.io/v2/polyfill.min.js needs to match a particular format. The response is the original polyfill code, appended with a piece of malicious code. That code runs JavaScript from https://www.googie-anaiytics.com/ga.js if the device is not a laptop. You can reproduce this multiple times on the same machine by changing the user agent slightly (e.g. change Mozilla/5.0 to Mozilla/6.0). Sometimes the server will just time out or return code without the injection, but it should work most of the time.

The JavaScript at https://www.googie-anaiytics.com/ga.js will redirect users to a malicious website; it checks a number of conditions before running (user agent, screen width, ...) to ensure the device is a phone. The entry point is at the end:

bdtjfg||cnzfg||wolafg||mattoo||aanaly||ggmana||aplausix||statcct?setTimeout(check_tiaozhuan,-0x4*0x922+0x1ebd+0xd9b):check_tiaozhuan();

The code has some protection built in: if it runs in an unsuitable environment, it will attempt to allocate a lot of memory to freeze the current device. It also routes all attribute-name accesses through _0x42bcd7.

https://github.com/polyfillpolyfill/polyfill-service/issues/... |
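As a small aside, the obfuscated arithmetic in that setTimeout call is just a disguised constant; evaluating it shows the payload schedules itself with a plain two-second delay:

```javascript
// The obfuscated expression from the payload's entry point:
// -0x4*0x922 = -9352, 0x1ebd = 7869, 0xd9b = 3483
const delay = -0x4 * 0x922 + 0x1ebd + 0xd9b;
console.log(delay); // 2000 (milliseconds)
```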
| Sigstore is doing a lot of interesting work in the code supply chain space. I have my fingers crossed that they find a way to replace the current application code signing racket along the way. |
If security means every maintainer of every OSS package you use has to be scrupulous, tireless, and not screw up for life, not sure what to say when this kind of thing happens other than "isn't that the only possible outcome given the system and incentives on a long enough timeline?"
Kind of like the "why is my favorite company monetizing now and using dark patterns?" Well, on an infinite timeline, did you think the service would remain high quality, free, well supported, and run by tireless, unselfish, unambitious benevolent dictators for the rest of your life? Or was it a foregone conclusion, only a matter of "when", not "if"?