Why is a blacklist approach taken for URI protocols, and would a whitelist be safer? #57
Good question. I bet you would get a better answer from @adon-at-work. My personal experience is that filter libraries are designed to cover as many valid use cases as possible; that's why the library gives developers the flexibility to allow different protocols (consider mobile apps or pseudo-protocols) while keeping the output XSS-free. Solving the whitelisting problem is actually quite trivial after applying our XSS filter here - something like … As said, @adon-at-work may have a better answer on whether we should add that logic and enable it via an optional argument.
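For illustration, here is a minimal sketch of what such a whitelist wrapper could look like. Note the names are hypothetical, not part of the library's API: `xssFilter` is a stand-in stub for one of the package's context-sensitive output filters, and `whitelistUri` is an assumed wrapper name.

```javascript
// Protocols we choose to allow; everything else is rejected.
const SAFE_PROTOCOLS = ['http:', 'https:', 'mailto:'];

function xssFilter(uri) {
  // Hypothetical stand-in for one of the library's context-sensitive
  // output filters (identity here, for illustration only).
  return uri;
}

function whitelistUri(uri, fallback) {
  // Extract the scheme, if any; relative and protocol-relative URIs
  // carry no scheme and pass straight through to the XSS filter.
  const m = /^\s*([a-zA-Z][a-zA-Z0-9+.-]*):/.exec(uri);
  if (m && SAFE_PROTOCOLS.indexOf(m[1].toLowerCase() + ':') === -1) {
    return fallback; // unlisted scheme: reject with a safe fallback
  }
  return xssFilter(uri);
}

console.log(whitelistUri('https://example.com', '#')); // https://example.com
console.log(whitelistUri('javascript:alert(1)', '#')); // #
```

The point is that the whitelist decision composes cleanly on top of the existing filter: the filter guarantees the output cannot break out of its HTML context, and the wrapper separately decides which schemes the application wants to permit.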
Hi @tansongyang, I agree that whitelisting is generally better and safer. Blacklisting is used here in v1 for the following design concerns, and yet it is also safe. As this project evolves, whitelisting is surely on our minds, and I see you also noticed #53. It would be great if you could help code review it, and feel free to leave us more comments there. :)
Hi @adon-at-work, thanks for the explanation. That answer makes sense to me. I'm going to go ahead and close the issue. However, I'd be interested in knowing if there's a live list of scriptable protocols out there that the current blacklist was pulled from. |
I was taking a look at version 1.2.6 and I see that the URI filters take a blacklist approach: `URI_BLACKLIST_PROTOCOLS` (xss-filters.js, line 59). A similar approach is taken for `CSS_BLACKLIST`.

What are the advantages of doing things this way? It seems to me that a whitelist is more secure by default and has the benefit of being more future-proof. The only downside I can think of is that the filter could be too strict, but that could be worked around with a default whitelist that's good enough for most use cases, plus an interface for users to add to the whitelist.
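As a rough sketch of that proposal, a filter factory could ship a sensible default whitelist and let callers extend it. None of these names come from the library; `makeUriFilter` and `DEFAULT_WHITELIST` are hypothetical, and the scheme check stands in for the real context-sensitive filtering:

```javascript
// A default set that covers most use cases.
const DEFAULT_WHITELIST = ['http:', 'https:', 'mailto:', 'ftp:'];

function makeUriFilter(extraProtocols) {
  // Callers extend the defaults rather than replace them,
  // so the safe baseline is always present.
  const allowed = DEFAULT_WHITELIST.concat(extraProtocols || []);
  return function (uri, fallback) {
    const m = /^\s*([a-zA-Z][a-zA-Z0-9+.-]*):/.exec(uri);
    if (m && allowed.indexOf(m[1].toLowerCase() + ':') === -1) {
      return fallback; // scheme not whitelisted: reject
    }
    return uri; // here the library's real output filter would apply
  };
}

// Usage: allow a custom mobile-app scheme on top of the defaults.
const filter = makeUriFilter(['myapp:']);
console.log(filter('myapp://open', '#'));        // myapp://open
console.log(filter('javascript:alert(1)', '#')); // #
```

This keeps the strictness complaint manageable: apps with unusual schemes opt in explicitly, while everyone else gets the safe defaults.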