From: Alex Russell (alex_at_netWindows.org)
Date: Wed Aug 07 2002 - 18:11:48 CDT
On Wednesday 07 August 2002 17:07, you wrote:
> >> It is clean and future-proof.
> >I disagree.
> You argue it is not clean or secure, which I will counter-argue below. But
> I'd love to hear why it isn't future-proof.
When was the last time a client-side spec stood still? Things get deprecated,
implemented poorly, and so on.
While it is probably future-proof in theory, the reality of any such tagging
scheme is likely to be less than rosy, or future-proof, for actual
implementors. Security is very different from many other web-facing
technologies (CSS and HTML are good examples): breakage of the laid-out
requirements should not be silent, invisible, or easily recoverable. The
system should fail closed, yet we find that most web browsers are not written
in a way that makes a client-side security mechanism trustworthy, even when
its only purpose is to preserve its own integrity (IE Zones, anyone?).
> (I think that is my weaker point, so I'm expecting most counter-arguments
> to focus there. I'm also expecting some interesting counter-arguments
> regarding my characterization of XSS as a design flaw in DHTML due to the
> in-band mingling of scripting with typesetting, when they could be two
> separate channels. I expect this b/c in-band signaling seems to be standard
> practice these days.)
> Suppose you are making a web interface to email. Now suppose a web
> developer emails another web developer some source code. If you filtered <
> and >, their source would be unreadable. What you'd rather do is turn off
> browser interpretation. But today there's no simple, future-proof, generic
> way to do this that is not vulnerable to attack. (That I know of. Anybody?)
> The proposed tag, however, could accommodate this application requirement
> quite easily.
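As an aside on the webmail scenario above: escaping the markup, rather than filtering it out, sidesteps the readability problem, since the browser stops interpreting the tags but still renders them as visible text. A minimal sketch in Python (the function name is mine, and this is illustrative, not a complete output-encoding scheme):

```python
import html

def render_untrusted(text: str) -> str:
    """Escape markup so the browser displays it rather than interpreting it.

    '<' becomes '&lt;', '>' becomes '&gt;', '"' becomes '&quot;', so
    emailed source code stays readable on screen while any embedded
    <script> is neutralized.
    """
    return html.escape(text, quote=True)

snippet = '<script>alert("xss")</script>'
print(render_untrusted(snippet))
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

The rendered page shows the literal source code, which is exactly what the web-developer-emailing-source-code case calls for.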
The deny-by-default rule fails securely in this case, which is all we really
care about from a security perspective. Can it cope with any number of
boundary conditions (server->browser, database->server, client->server)? Yes,
and such a scheme can be made to mangle data correctly at each of those
boundaries (which would indeed be "playing catch-up"), but that in no way
affects the security posture of said filter; only its utility for certain
cases. Arguably, you would be playing catch-up far less efficiently were such
a task delegated to the client side. Instead of having to tweak some rules on
one server, you would have to tweak rules on every client. It would be like
trying to update Snort filters on every browser known to man.
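A deny-by-default server-side filter of the sort described here can be sketched in a few lines. Everything is escaped unless it appears on an allowlist, so an unknown or future tag fails closed, and tightening the rules is a one-line change on the server rather than an update shipped to every client. (The allowlist, names, and regex here are mine; a production filter would want a real HTML parser, not a regex.)

```python
import re

# Deny by default: only these bare tags survive; everything else is escaped.
ALLOWED_TAGS = {"b", "i", "em", "strong", "p"}

TAG_RE = re.compile(r"</?([a-zA-Z][a-zA-Z0-9]*)[^>]*>")

def filter_html(text: str) -> str:
    """Escape any tag not on the allowlist, and any tag carrying
    attributes (the '=' check), so event handlers like onclick never
    reach the browser as live markup."""
    def repl(match: re.Match) -> str:
        if match.group(1).lower() in ALLOWED_TAGS and "=" not in match.group(0):
            return match.group(0)  # keep simple allowed tags verbatim
        return match.group(0).replace("<", "&lt;").replace(">", "&gt;")
    return TAG_RE.sub(repl, text)

print(filter_html('<b>hi</b> <script>alert(1)</script>'))
# <b>hi</b> &lt;script&gt;alert(1)&lt;/script&gt;
```

The point is the posture, not the particular rules: when a new attack vector appears, the fix is edited in one place, server-side.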
> You misunderstood me. I strongly *agree* with your assertion that defense
> should always be the responsibility of the true stakeholder; e.g., you
> should never trust a client to implement security logic to protect your
> server. But if you study Doug's suggestion carefully, you will notice
> that it is not the attacker's browser we are trying to trust. (Yes,
> that would be stupid.) Rather, it is the victim's browser. Huge difference.
Victims are attackers (of either themselves or of others) so long as they are
dupes of the attacker. Trusting a user not to become a dupe (which is your
fundamental assumption) is not a viable course of action. Unless you OWN the
memory space, you WILL lose. A fundamental lesson in filtering input is to
NEVER EVER trust the client. Besides, who is going to protect the victim from
him/herself? The victim?
> In fact, XSS is unusual in that it is one browser attacking another
> browser, via your server. So strategically, it is a different situation
> than most web application vulnerabilities.
But it isn't different from a lot of other vulnerabilities in other
protocols and problem domains. Credit card fraud is rarely undertaken against
the card system itself, but rather against individual cardholders. Examples of
military situations analogous to this are endless (attack the populace, or the
means of production, not the command). It is not a new problem, nor is its
solution unique. You place safeguards where they are most economically viable,
and at this point, putting safeguards on the server is a no-brainer.
-- Alex Russell alex_at_SecurePipe.com alex_at_netWindows.org