From: matthew wollenweber (mwollenwebergmail.com)
Date: Mon Oct 15 2007 - 15:24:54 CDT
Personally, I don't understand the current trend in fuzzer research toward
full code coverage. Sure, it's nice to check everything and have a fuzzer
traverse every function in the code, but maybe that comes at the cost of
doing it all poorly. If you have a fixed amount of time to do the
assessment, I'd rather spend the time where it's needed. As you said, it's
better to thoroughly test the code in the spots where the bugs are.
I do like instrumented fuzzing and measuring where the fuzzer is being
effective and/or spending its time. It's useful both for seeing where the
problems cluster and for giving the thing a kick if it gets stuck.
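To make that concrete, here's a minimal sketch of that kind of instrumented feedback loop. All the names are hypothetical: `toy_target` stands in for a real instrumented binary that reports which functions an input reached. The loop tracks cumulative coverage, keeps any input that hits something new, and "kicks" the fuzzer by mutating more aggressively once progress stalls.

```python
import random

def toy_target(data):
    # Hypothetical stand-in for the instrumented target: returns the
    # set of functions this input reached.
    reached = {"parse_start"}
    if data[:3] == b"HDR":
        reached.add("parse_header")
        if len(data) > 8:
            reached.add("parse_body")
    return reached

def fuzz(seeds, iterations=500, stall_limit=100):
    coverage = set()        # cumulative functions seen so far
    corpus = list(seeds)
    stall = 0               # iterations since coverage last grew
    for _ in range(iterations):
        data = bytearray(random.choice(corpus))
        # Once stalled, give it a kick: mutate many bytes instead of one.
        for _ in range(1 if stall < stall_limit else 16):
            if data:
                data[random.randrange(len(data))] = random.randrange(256)
        reached = toy_target(bytes(data))
        if reached - coverage:       # new coverage: keep this input
            coverage |= reached
            corpus.append(bytes(data))
            stall = 0
        else:
            stall += 1
    return coverage

random.seed(1)
cov = fuzz([b"HDRAAAAAAAA"])
```

The interesting part for an assessment is less the mutation strategy than the `coverage` set itself: dumping it per-iteration shows exactly where the fuzzer is spending its time and when it has wedged.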
While it's not for web apps, I find the work Greg Hoglund and his guys
at HBGary have done to be a step in the right direction. His tool isn't
really meant for fuzzing (at least from my limited knowledge of it), but it
takes an RE approach to find what's important and focus there. To achieve
this, it measures function traversal, but rather than focusing everywhere,
it filters out irrelevant functions (background noise). For example, if you
want to debug a complex crypto routine that's attached to a graphical
display, you don't want to waste your time in the graphics code.
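The filtering idea boils down to a set difference over function traces. Here's a sketch (the traces and function names are made up for illustration): record which functions fire during an idle "background" run, then subtract those from the trace of a session where you exercised the feature you care about.

```python
def filter_background(background_trace, session_trace):
    # Keep only functions hit during the interesting session but NOT
    # during the idle background run (GUI redraws, timers, etc.).
    return set(session_trace) - set(background_trace)

# Idle run: only graphics/housekeeping noise fires.
background = ["gui_redraw", "timer_tick", "gui_redraw"]

# Run taken while exercising the crypto routine through the UI.
session = ["gui_redraw", "crypto_init", "crypto_round", "timer_tick"]

interesting = filter_background(background, session)
# interesting == {"crypto_init", "crypto_round"}
```

What's left after the subtraction is a short list of functions worth reversing or fuzzing, with the graphics noise gone.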
Most of Hoglund's recent talks have featured at least snippets of HBGary
Inspector, for anyone interested. Unfortunately, the price tag on the
software has too many zeros for most people to buy it.
On 10/15/07, Dave Aitel <daveimmunityinc.com> wrote:
> He compared NTOSpider/Appscan/Webinspect - and NTOSpider "won".
> Without the full vulnerability reports and the VMs of the vulnerable
> apps, I'm not going to dwell on the comparison of tools, except to say
> it's interesting. But I will say that all this focus on "code
> coverage" is a bit strange. Vulnerabilities, like fish, tend to
> cluster in particular places. Having 10% code coverage is perfectly OK
> if it's the code that has the bugs. And you can't see race conditions
> with code coverage tools.
> Also, most of the value of instrumentation is that when built into
> your attack tool you get a real-time human-usable view into the guts
> of the application. This is why I don't think byte-code
> instrumentation has huge advantages over just hooking Win32 APIs. But
> I don't have a byte-code parser yet either. :>
> Speaking of race conditions, I'm happy to announce that Immunity has
> += Paul Starzetz (http://marc.info/?a=107032640300001&r=1&w=2).
> - -dave
> Dailydave mailing list
mwollenwebergmail.com | mjwcyberwart.com