From: Greg Shipley (gshipley@neohapsis.com)
Date: Wed Aug 01 2001 - 23:43:17 CDT
Ah! So there is someone from Mier on this list. Cool.
Alright, before I launch into what will probably turn out to be a pretty
large e-mail, a few notes:
First off, you're a sport for stepping forward. I'm probably going to be
in the hot seat, too, as soon as that NWC article comes out. "Shipley!
You suck! How come you didn't test <insert some crazy evasion technique>?
How come vendor Y wasn't in there? Hey, you were obviously paid off by
<insert any vendor who wins>. Hey, you didn't apply patch <some patch
that came out after we tested>. Hey, our product really does do OPSEC
with your juicer - fix that features chart!"
I can hardly wait.
Second, I don't want to sling any mud here, so while my comments are
usually quite direct, please understand that I'm being, well, me. :)
Third, my original comment was admittedly pretty close to a cheap shot.
However, I think the Mier tests left me with more questions than answers.
Much of "how and why" still confuses me. More on this below.
Ok, onto my spewage:
On Wed, 1 Aug 2001, Kevin Brown wrote:
> We tested a single product 3 times with 3 different traffic
> streams, using port 0, port 63, and port 50,000
> respectively. This is clearly stated in the report, and all
> of the results are fully disclosed for any reader to
> evaluate. Why did we do this? Because we wanted to
> demonstrate how this particular product would perform under
> various network conditions.
> I noticed that you too are doing a review of IDS products,
> and that you too have chosen to define a narrow scenario for
> which you are evaluating. There is nothing wrong with this
> obviously as the goal is to better educate readers so they
> can make an informed decision, much like what we do. You
> are not claiming to have written the end-all review, much
> like we did not do.
You bring up quite a few points here, but rather than addressing them
one-by-one I'm going to attempt to bring them together for clarity's sake.
I have a number of complaints with the Mier tests, but my main complaint
centers around the fact that Mier claims to be an industry leading testing
organization ("Miercom's reputation as the leading independent product
test center is unquestioned," according to Mier's literature) yet the
tests performed have no relevancy in the real world. I don't understand
who benefits from this, and I'll argue that this does more harm than good.
IMNSHO, reports such as these undermine movements for 3rd party validation.
IMHO, our industry NEEDS to embrace 3rd party validation, objective
testing, and generally more "pairs of eyes" looking over things. We, as a
community, continue to pay the price for product shortcomings. Crikey,
take IIS for example - how much are we losing on that, right now? The
entire IT industry has proven that it can't keep its act together, and
that outside parties need to help keep things moving forward.
However, every time an organization launches an external, supposedly
non-biased service that outputs questionable material, it gives the
"movement" a black eye. <cheap shot> I can think of one four-letter group
that has done this in the firewall space already. </cheap shot>
So back to the testing - let's talk about the TCP port 0 and port 63
results. While *I* can read Mier's report and spot that TCP port 0 traffic
is utter garbage, does Mier expect everyone to know this? Does the
average consumer know this? Should they? IMHO, the report is still
misleading. For example, the report states:
"Conclusions Performance testing conducted by Miercom demonstrated that
the Intrusion.com's SecureNet Gig, tested with one Gigabit interface, was
able to detect up to 98 percent of the attacks sent to it with a maximum
background throughput rate of 690.86 Mbps. Further, it demonstrated that
it could detect intrusions, even when operating at a maximum background
throughput of 986.94 Mbps."
IMHO, this is NOT along the lines of "better educating readers so they can
make informed decisions." I read the above to say that Intrusion.com's
product cruises in a 690Mbps environment - no problem. When in fact, I
don't believe that to be true. I'm open to the idea that maybe I'm being
a little harsh here, so if others on this list read the above differently,
please chime in. (I won't even get started on the lack of session and pps
data). But something tells me that if I find a carrier-class network,
hell, ANY real network at 690Mbps, and deploy these units, that I'm
not going to get these results...or anything close to them. (and what
attacks did you guys use, BTW?)
To provide a little background, I first saw this report at SANS when
Intrusion.com sales folks were pimping it from their booth. (See, I
often go to these conferences and visit vendor booths in stealth mode so
that the PR people don't (a) shuffle me away from the marketing folks that
are spinning things HARD, and (b) so that the hired assassins from product
vendors don't have a clear head-shot. :)
Having some history in IDS testing, I first thought "Wow, those
Intrusion.com guys are kicking butt!" Then I read the fine print, and got
angry. And from what I hear, so did many IDS vendors, as they had to put
on their spin-control hats and get to work. That costs money, and time,
and we've got enough FUD hitting the wire already.
So again, I ask: who does testing like this help, outside of Mier and
Intrusion.com? It's not the consumer. It's not the other vendors. It's
not other testing houses that attempt to do reputable work. And IMHO, it's
not the community.
> Having said all of this, we do welcome feedback about our test
> results. But you have touched on an Information-age-old debate.
> What is real world traffic?
This has been debated quite a bit on this (and other lists) in the past.
How long have you guys been lurking? Regardless, there have been numerous
traffic studies performed (see CAIDA for a good starting point), and you
can always do sampling on a dozen clients or so. Do all networks look
the same? I agree with you: no, absolutely not. However, there are some
common things you will see on MOST networks - things like more TCP traffic
than UDP traffic, packet size trends, a certain percentage of native
protocols, and so on.
I've got a decent list of criteria for this specifically, but let's hold
off on that for now. (more on this later)
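To make the sampling idea concrete, here is a minimal sketch (my own illustration, not anything from Mier's report or the NWC labs) of how you might tally a protocol mix from sampled packet metadata; the `(proto, size)` record format is an assumption for the example:

```python
# Hypothetical sketch: estimate the protocol mix of a traffic sample.
# The (proto, size) tuple format is made up for illustration only.
from collections import Counter

def protocol_mix(packets):
    """Return the fraction of sampled packets seen per protocol."""
    counts = Counter(proto for proto, _size in packets)
    total = sum(counts.values())
    return {proto: counts[proto] / total for proto in counts}

# A toy sample skewed toward TCP, as most real networks are:
sample = [("tcp", 1460)] * 8 + [("udp", 512)] * 2
mix = protocol_mix(sample)
print(mix["tcp"])  # 0.8
```

Run something like this against captures from a handful of representative segments and you have a defensible baseline to model test traffic on, instead of picking ports out of thin air.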
But port 63? port 0? Heck, why not just blast IPX down that wire and be
done with it? We could throw some encapsulated AppleTalk in there for
kicks. :) Seriously though, depending on the discard algorithms used
(which varies from vendor to vendor) this kills the tests - period. I'm
sure some of the QA and/or vendor contacts on this list could provide
quite a few more details than I, so I'll end this point here.
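For anyone who hasn't stared at packet dumps lately, here is a minimal sketch (mine, not Mier's methodology) of why "TCP port 0" traffic is artificial: port 0 is reserved and no real service listens there, yet it's trivial to pack into a header, which is presumably how test gear ends up emitting it:

```python
# Hypothetical sketch: hand-pack a minimal 20-byte TCP header.
# Destination port 0 is reserved - you will not see a real service
# on it, which is why "port 0 background traffic" is unrealistic.
import struct

def tcp_header(sport, dport, seq=0, ack=0):
    """Pack a bare TCP header: no options, zero checksum (illustration only)."""
    offset_flags = (5 << 12) | 0x02  # data offset = 5 words, SYN flag set
    return struct.pack("!HHIIHHHH", sport, dport, seq, ack,
                       offset_flags, 8192, 0, 0)

hdr = tcp_header(sport=1024, dport=0)
sport, dport = struct.unpack("!HH", hdr[:4])
print(dport)  # 0
```

Depending on how a sensor's discard logic treats such packets, a port 0 background load may be silently dropped long before detection logic ever runs - which is exactly the "this kills the tests" problem.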
> We have encountered a dilemma when trying to test Gigabit speed IDS
> products. Almost all of the products that generate real-world Layer 7
> traffic don't allow you to adjust for packet sizes (without an
> unreasonable amount of effort), and the devices that allow you to
> control packet size don't generate Layer 7 traffic. And our
> experience has been that what breaks an IDS is more often packets per
> second than Layer 7 content, although both are relevant. So until
> someone provides us with a tool that generates valid Layer 7 traffic
> at Gigabit wire speed for all packet sizes ranging from 64 to 1518
> Bytes, we have to make do with what we have.
First off, I agree with you that these are concepts that any IDS testing
lab struggles with, but as a commercial entity FOCUSSED ON TESTING,
shouldn't you guys be leading the charge on this, rather than scratching
your heads looking for a solution? When worst comes to worst, and
companies like Spirent and IXIA don't come through, you still have two options:
a) script it yourself. HTTP? SMTP? FTP? SMB/CIFS? All scriptable.
b) code it yourself
When one of the Network Computing labs couldn't find a usable tool to
pound the crap out of mail servers, one of their internal guys (Mike Lee)
coded up a TCL-based SMTP mail bomber. Not to sound like an ass, but
that's what hard-core testing is about - finding or building solutions to
make it work.
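To show how low the bar for "script it yourself" really is, here is a minimal sketch of generating the client side of an SMTP transaction. This is my own toy illustration, not Mike Lee's actual TCL tool; the hostname and addresses are made up:

```python
# Hypothetical sketch: generate the client-side lines of one SMTP
# transaction, the building block of a scripted mail load generator.
def smtp_session(sender, recipient, body):
    """Return the SMTP commands a client would send for one message."""
    return [
        "HELO loadgen.example.com",   # hypothetical generator hostname
        f"MAIL FROM:<{sender}>",
        f"RCPT TO:<{recipient}>",
        "DATA",
        body,
        ".",                          # end-of-data marker
        "QUIT",
    ]

lines = smtp_session("a@example.com", "b@example.com", "test message")
print(lines[0])  # HELO loadgen.example.com
```

A real load generator would pump thousands of these sessions down concurrent sockets; the protocol scripting itself, as shown, is the easy part.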
[Side NOTE: In fact, I'm going to go so far as to put my money where my
mouth is. This criteria crap needs to stop - I'm going to spearhead an
IDS testing criteria and tool set. If anyone is interested in
participating, e-mail me off-line. Enough already - this needs to end.]
> FYI, we have considered capturing real traffic streams off
> of real networks and replaying them, but we have not found a
> product that can reliably capture an entire Gigabit stream
> (including all 7 layers) and replay it.
Er, I'm not real sure how you capture gigabit streams without catching
Layer 7 - do the layers separate? :) Ok, ok, that *WAS* a cheap shot.
See, I'm at least trying to keep it humorous! :)
> I have asked the question before on this list how vendors
> generate Gigabit-speed traffic for testing in their labs.
> No one responded. I am open to alternatives and better
> testing methods if someone can give me some good feedback.
> So what would you recommend for generating traffic for
> testing of Gigabit speed IDS products?
Some vendors use custom tools. Some replay multiple captured sessions.
Some use tools like IXIA and Smartbits, and combine multiple tool suites.
And some kick it old school. :)
But it appears that Mier opted for scientific method (a good thing)
while completely disregarding relevance (a bad thing). To quote one of my
co-workers, Mike Scher:
"I cannot imagine where we'd be today scientifically if our particle
physics people of the 1920s decided to reliably, repeatably, and
certifiably drop lead weights from towers because of a lack of quality [...]."
Ultimately, here is my take on testing this stuff: Is it hard to do right?
Yes. Is it easy to screw up? Yes. Does it take some serious time,
effort, and money? You bet. But if an organization is going to do this
type of work, and especially make bold claims and get paid for it, then
IMHO they should take the time to attempt to do it right.
IMNSHO, you can choose to do it right, you can choose not to do it, or you
can choose to do it partially and be damn clear about it. But IMHO,
anything outside of those three is doing the community a disservice.
And, IMHO, these (this?) reports did just that.
P.S. And no, the NWC tests this year didn't have hard-core performance
benchmarking. Everything was puking at 40Mbps anyway. *grin*