From: Bill Pennington (billp_at_boarder.org)
Date: Wed Aug 07 2002 - 22:28:33 CDT
Wow, the day I am away from my machine, all kinds of fun stuff breaks out. :-)
Just the other day I was thinking about using some sort of Apache module
(mod_rewrite??) to build rewrite rules for URLs, simply rewriting <, >, and &
to their HTML equivalents. It has been forever since I played with Apache modules,
but I think that could be a nice way to handle any URL-based XSS issues.
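As a rough sketch of that idea (hypothetical config, not from the original message): mod_rewrite is better suited to rejecting suspicious requests than to substituting characters with HTML entities, so a realistic minimal rule set would block requests whose query string carries raw or encoded angle brackets:

```apache
RewriteEngine On
# %3C/%3E are the URL-encoded forms of < and >; a literal or encoded
# angle bracket in a query string is a strong hint of script injection.
RewriteCond %{QUERY_STRING} (%3C|%3E|<|>) [NC]
RewriteRule .* - [F,L]
```

Actual entity-encoding of input would still have to happen in the application or a custom module, since mod_rewrite only matches and redirects.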
On 8/7/02 1:27 PM, "Ben Mord" <bmordicon-nicholson.com> wrote:
> I *really* like your idea for the client-side solution to XSS. (As I will
> argue, though, it will sadly never happen.)
> It is clean and future-proof. It would allow the client-side HTML and
> scripting languages to change and evolve without requiring constant updates
> to your tag syntax. Server-side filtering solutions on the other hand will
> always be playing catch-up with new scripting features in the client-side
> languages. As client-side scripting advances yield new XSS vulnerabilities,
> the server-side filters must then be updated. And there is always the
> potential for the filters to conflict with the application requirements.
> Furthermore, your client-side suggestion even encapsulates the burden of XSS
> defense where it properly belongs - on the client side. From an OO
> coherence/cohesion view, it is cleaner. As you point out, it even gives the
> user control over how safe they want to be. The fundamental weakness which
> causes XSS vulnerabilities in the first place is in the client-side
> scripting language. The problem is specifically the in-band mixing of
> control signals (aka scripting) with the more harmless data and typesetting
> of html. If these were in two separate channels (e.g. different parts of a
> multi-part MIME), then XSS would be a non-issue. Your client-side suggestion
> very elegantly patches this DHTML defect by allowing the server-side
> programmer to locally separate these two channels, wherever needed.
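The channel-separation point above is exactly what server-side escaping approximates today. A minimal sketch (Python, with a hypothetical `render_comment` helper) of keeping user data inert so the browser never mistakes it for control signals:

```python
import html

def render_comment(template: str, user_text: str) -> str:
    """Escape user data before splicing it into markup, so the
    browser cannot interpret it as scripting (control signals)."""
    return template.replace("{comment}", html.escape(user_text))

page = render_comment("<p>{comment}</p>", "<script>alert(1)</script>")
# The payload arrives as inert text: <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```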
> But alas, your client-side solution will never happen despite its elegant
> simplicity and inherent future-proofness. This is because while being
> forward-looking (future-proof and clean), it is not backward-looking. There
> would be a transition period where programmers would need to *both* use your
> tag *and* filter to protect the older browsers that don't yet support your
> tag. Of course, if you're already filtering to protect the older browsers,
> then there is no need for your tag. Because programmers are inherently lazy
> and short-sighted, nobody would start using the tag, at least not until *after* all
> popular browsers support it. But there is no incentive for browsers to start
> supporting it if we are conditioned to think of this as a server-side issue,
> so they won't. Catch-22. Too bad. Alas, it must remain a server-side burden,
> for legacy reasons.
> -----Original Message-----
> From: Doug Sibley [mailto:doug.sibleybmo.com]
> Sent: Wednesday, August 07, 2002 11:31 AM
> To: webappsecsecurityfocus.com
> Subject: Easy End to XSS
> I think what we need to come up with is an easy way for
> developers to create web-apps with user content in them
> that isn't vulnerable to XSS. Expecting developers to
> filter everything nicely and teaching them about XSS
> won't be successful (just think of all the egregious
> security errors you've run across when dealing with web
> applications).
> We need something like a tag a developer can use to
> mark content as untrusted to the browser:
> <untrusted level="allow_static_html"
> randomChallenge="AX7KSLKXLKJ23N"> $USER_INPUT
> </untrusted randomChallenge="AX7KSLKXLKJ23N">. Here the
> malicious user couldn't insert any active code because
> the browser's rendering engine would turn off active
> code (scripting, java, plugins, etc.) until a
> </untrusted> with the correct server-generated
> randomChallenge. The default security level would
> disable all HTML (leaving just plain-text). The
> disadvantage to this solution is that (a) browsers
> would need updating, and (b) good random numbers would
> be needed.
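A sketch of how a server might emit the proposed tag (hypothetical function and tag syntax taken from the message above; `secrets` supplies the strong randomness requirement (b)):

```python
import secrets

def wrap_untrusted(user_input: str, level: str = "allow_static_html") -> str:
    """Wrap untrusted content in the proposed <untrusted> tag. The
    server-generated random challenge keeps an attacker from closing
    the tag early, since they cannot guess the token in the close tag."""
    token = secrets.token_hex(8)  # must come from a strong random source
    return (f'<untrusted level="{level}" randomChallenge="{token}">'
            f'{user_input}'
            f'</untrusted randomChallenge="{token}">')
```

Note the user input is passed through unmodified here; the rendering restriction would be enforced entirely by the (hypothetical) browser support.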
> We could also go for a server solution/language
> solution with embedded directives in pages. The format
> would be similar: a custom tag such as <untrusted
> level="no_html"> $USER_INPUT </untrusted> but it
> wouldn't need browser changes or a random challenge.
> The untrusted tag would get parsed and anything between
> the tag pair would be sanitized before being sent to
> the browser by the types of filters that are now
> proposed to filter XSS. For various languages the tag
> may instead be a function call or other mechanism.
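The server-side variant described above could be sketched like this (hypothetical names; a real library would handle nesting and more security levels):

```python
import html
import re

# Match the proposed marker tag and capture whatever sits between the pair.
UNTRUSTED = re.compile(r'<untrusted level="no_html">(.*?)</untrusted>', re.S)

def sanitize_page(page: str) -> str:
    """Strip the <untrusted> markers and HTML-escape their contents
    before the page ever leaves the server; no browser changes needed."""
    return UNTRUSTED.sub(lambda m: html.escape(m.group(1)), page)
```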
> The main benefit of such a solution (I would favor
> server-based) is that of simplicity. The developer
> would only have to do one simple thing in order to
> guard against XSS and if other characters (i.e. some
> weird unicode sequence) or new browser 'feature'
> provides a new XSS attack method, only one library
> needs to be changed versus every application created.
> Another application of the technique would be a
> function (e.g. safeSelectSQL =
> sanitizeSelectQuery($INPUT_QUERY)) that sanitizes
> simple select queries from webpages so that a user
> input could be pasted into a select query without fear
> of database compromise (of course, a new database user
> with read-only privileges could be created but this is
> unlikely to happen, in my experience).
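A naive sketch of such a sanitizer (hypothetical function name; doubling single quotes is the minimal defense for string literals, though parameterized queries are the more robust instance of the same "fix it once, in one library" idea):

```python
def sanitize_select_value(value: str) -> str:
    """Double any single quotes so the value cannot terminate the
    string literal it is pasted into. A deliberately minimal sketch."""
    return value.replace("'", "''")

name = sanitize_select_value("O'Brien'; DROP TABLE users; --")
query = f"SELECT * FROM users WHERE name = '{name}'"
```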
> My main point is that I think the ideas are there for
> how to protect ourselves, we just need to get these
> ideas into tools and into the hands of the developers
> that need them.
> Applied Cryptographer
> Department of Information Security
> Bank of Montreal Group of Companies
> Work: 416-513-5296 | Cell: 416-529-4642
> Lock it. Password it. Protect it. Information Security
> Verrouillage. Mot de passe. Protection : Sécurité
> These are my personal opinions and not those of the
> Bank of Montreal Group of Companies.