From: Floydman (floydian_99_at_yahoo.com)
Date: Wed Aug 28 2002 - 12:15:26 CDT
Hi everybody. I'm glad someone came up with the idea
of a honeypot client (in the sense of client software
being exploited instead of server software), because I
have been working on something like this for a while.
But as Bill mentioned, the hardest part of it is
getting a (typical, if possible) user to interact with
the clients so they become active. Client software,
unlike server software, doesn't normally listen on a
port, and is simply idle when not in use.
So, maybe this means we have to take honeypot
technologies to the user desktop, since this is where
the user resides. Now, I know that what I'm saying
here runs counter to what a honeypot is, and that
we can't turn production machines into honeypots, the
most important reason being the sniffing
requirements. But that being said, is there a way to
implement honeypot-related technologies in a
production network to monitor for client-side
exploits, without turning that network into a typical
honeypot? I think so.
During my research period, I found a document entitled
"Protecting against the unknown", by Mixter,
a theoretical paper, with some examples
implemented in Unix, about what I was trying to do in
the Windows world. At the same time, honeypots became
very popular (relatively speaking, of course). Which
brings me back to my own work: "Securing the internal
Microsoft network", "ComLog, a Win32 command prompt
logger" and "LogAgent 2.1, log file recollection
tools" (all available at
www.geocities.com/floydian_99). ComLog and LogAgent
executables can be downloaded from
To summarize all these papers, let's just say that the
idea is to add access control and logging capability
to as many applications as possible on all network
nodes, and to collect these logs centrally on a
secured machine. On Unix, this is relatively easy to
do since the OS and application source code is often
available as Open Source, and when a tool doesn't
exist, the admin will often craft one to fit his needs
(this also happens in the Win32 world, but less
frequently).
Now, in many real-world networks, it is common
to see workstations with little or no security in
place, open shares, and outdated antivirus software
that logs alerts only locally. Windows itself produces
very little logging, and what exists is often ignored
on users' PCs. The idea behind my paper is to
implement at least some of Mixter's recommendations
on the Windows platform. First, by centralizing
application log files (LogAgent). Second, by putting
personal firewalls on each machine (and collecting
their logs, of course). Lastly, by securing and
patching the OS and applications to remove known
vulnerabilities (Pedestal Software makes a tool to do
this remotely and automatically, and it is free to try
for 30 days: www.pedestalsoftware.com). With this in
place, you get 1) a more secure environment, and 2)
more and better information about your network
activity than ever before. This is not a honeynet,
since there is no sniffing in place. But sniffing is
not everything, as the Honeynet Project learned when a
cracker used cryptcat instead of netcat. They fixed
this by trojaning bash to log the sessions. ComLog
does this for Windows NT/2K. So, from a production
point of view, even if a hack occurs via the command
prompt, you have all the data you need to determine
what is going on.
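ComLog itself is a Win32 binary that wraps cmd.exe; purely as an
illustration of the idea (this is my own minimal sketch, not ComLog's
actual implementation, and the log path is hypothetical), a shell
wrapper that records every command before executing it could look like:

```python
import datetime
import subprocess

# Hypothetical log location; ComLog's real path and format differ.
LOG_FILE = "C:\\WINNT\\system32\\comlog.txt"

def log_command(cmd, logfile):
    """Append a timestamped record of the command to the log file."""
    stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    with open(logfile, "a") as f:
        f.write("%s %s\n" % (stamp, cmd))

def shell_loop(logfile=LOG_FILE):
    """Minimal cmd.exe-style wrapper: log every line, then run it."""
    while True:
        try:
            cmd = input("C:\\> ")
        except EOFError:
            break
        if cmd.strip().lower() == "exit":
            break
        log_command(cmd, logfile)       # record first, so the entry
        subprocess.call(cmd, shell=True)  # survives even if cmd kills us

if __name__ == "__main__":
    shell_loop()
```

The point is that the log entry is written before the command runs, so
even a session that ends badly leaves a trace for the central collector.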
So, back to the honeystick or client-side honeypot
point of view: with an environment like this in place,
you are not that far from having a honeypot
environment, sniffing excluded (unless we put filters
in, but then there is a chance of missing something
good, which is why dedicated honeypots were made in
the first place). It solves the problem of user
interaction by having real users doing real work, only
on security-enhanced machines instead of wide-open
ones. Another option is to set up a more vulnerable
environment, but still with strong logging, and have
dedicated individuals play the users, as this would be
part of the HoneyStick administration. Sniffing could
be put back, but then the users' behavior may be
biased.
Also, my model doesn't take the Event Viewer logs into
account. Being able to centralize these files, if
possible converting them to ASCII so they can be
treated just like any other log file, would be a major
improvement. Network IDS log files, such as snort's,
could be added to our central log file repository as
well, and the main firewall logs too. Using LogAgent,
you can actively monitor these logs in real time in a
console. My next project is to eventually create a new
type of IDS engine, a log-based IDS (other engines
being network-based or host-based, or even some kind
of hybrid). What will be great about this IDS engine
is that it will be vendor independent and able to
accommodate any kind of log file we make it watch; it
does not replace the other IDS engines, but
complements and enhances them by combining their
output with other security activity information on the
network. But this is a whole new thing.
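The core of such a log-based engine is simple to sketch. Assuming
nothing about vendors or formats, it just applies regex signatures to
whatever log lines it is fed (the signatures and function names below
are my own invented examples, not a real product):

```python
import re

# Example signatures -- any regex over any vendor's log format.
SIGNATURES = {
    "cmd_tftp": re.compile(r"\btftp\b", re.IGNORECASE),
    "fw_portscan": re.compile(r"portscan detected", re.IGNORECASE),
}

def scan_line(source, line):
    """Return a list of (source, signature, line) alerts for one log line."""
    alerts = []
    for name, pattern in SIGNATURES.items():
        if pattern.search(line):
            alerts.append((source, name, line.rstrip()))
    return alerts

def scan_files(files):
    """files: dict mapping a source name (comlog, snort, firewall, ...)
    to an iterable of lines. Format-agnostic: every source is just text."""
    alerts = []
    for source, lines in files.items():
        for line in lines:
            alerts.extend(scan_line(source, line))
    return alerts
```

Because every input is just lines of text, the same engine can watch
ComLog output, snort logs, and firewall logs side by side, which is the
vendor independence argued for above.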
So, what do you people think? Is this close enough to
achieve the goal? Does anybody else have other ideas
to complement this?
At 11:39 AM 28/08/2002, Bill McCarty wrote:
Hi Lee and all,
Thanks, Lee, for sharing your idea. I find it
interesting, but I'm not sure that it's practical.
Mind you, I'd be delighted to be proven wrong <g>. Let
me present an argument that suggests that your idea
may be impractical. Then, we can hope for an effective
For the scheme to work, the honeystick must include
clients that actively open connections to servers. For
the moment, let's confine discussion to HTTP clients
and servers. Certainly, in a real application, we'd be
likely to relax this constraint.
The HTTP client must select a host and access a web
page on the host. This might be crudely accomplished
by generating a random 32-bit number, using the number
as an IPv4 address, and accessing the default web
page.
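As a rough sketch of that crude approach (the filtering of a few
unusable address ranges is my own added assumption, not part of Bill's
description):

```python
import random
import socket
import struct

def random_ipv4():
    """Turn a random 32-bit number into a dotted-quad IPv4 address."""
    n = random.getrandbits(32)
    return socket.inet_ntoa(struct.pack("!I", n))

def is_plausible_target(ip):
    """Skip ranges that can't hold a public web server (assumed filter:
    0/8, 10/8 private, 127/8 loopback, and multicast/reserved >= 224)."""
    first = int(ip.split(".")[0])
    return first not in (0, 10, 127) and first < 224

def pick_target():
    """Draw random addresses until one passes the plausibility filter."""
    while True:
        ip = random_ipv4()
        if is_plausible_target(ip):
            return ip

# The client would then fetch http://<ip>/ and inspect the default page
# (e.g. with urllib) -- omitted here, since the vast majority of random
# addresses won't answer at all.
```

Even with filtering, most probes hit nothing, which feeds directly into
the productivity concern raised below.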
The question I pose is whether a default web page is
likely to contain malware that triggers on reading the
page. My guess is that some interaction -- such as
submitting a form -- is likely to be required in order
to trigger the malware.
Certainly, it's not necessary in principle that the
user interact with the page containing the malware,
since many forms of malware could operate
automatically. But, my guess is that the malware will
more often reside on a page to which the default page
links and will require interaction with the user. This
is an empirical question that I believe my own
(limited) experience inadequate to answer. But, the
combined experience of list members may lead to a more
If interaction is required, a simple-minded honeystick
client is unlikely to be productive. It would likely
have to scan many systems before triggering malware.
And, I'm uncertain that it would be practical to build
a honeystick HTTP client smart enough to properly
interact with an arbitrary web page. Hence, I suspect
that your idea is impractical.
On the other hand, if interaction isn't required, but
malware generally resides on pages other than the
default page, the client could be written as a spider
that traverses entire web sites. It might interleave
the exploration of multiple sites in order to search
in a more nearly breadth-first manner, which I expect
to be more productive than a depth-first manner. But,
the question of the optimal procedure is itself an
empirical question that is best answered based on a
larger reservoir of experience than possessed by an
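That interleaved, breadth-first traversal can be sketched as follows
(my own sketch: the link-extraction step, which would really be an HTTP
fetch plus HTML parse, is passed in as a callable so the traversal
logic stands alone):

```python
from collections import deque

def interleaved_bfs(start_urls, get_links, max_pages=100):
    """Visit pages breadth-first, round-robin across sites.

    start_urls: one starting URL per site.
    get_links:  callable returning the URLs linked from a page
                (in practice an HTTP fetch + parse; stubbed for tests).
    """
    queues = [deque([url]) for url in start_urls]  # one frontier per site
    seen = set(start_urls)
    visited = []
    while len(visited) < max_pages and any(queues):
        for q in queues:  # take one page from each site per round
            if not q or len(visited) >= max_pages:
                continue
            url = q.popleft()
            visited.append(url)
            for link in get_links(url):
                if link not in seen:
                    seen.add(link)
                    q.append(link)
    return visited
```

Round-robin over per-site FIFO queues is what makes the search "more
nearly breadth-first": no single deep site can monopolize the spider.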
--On Tuesday, August 27, 2002 10:58 AM +0100 Lee
Brotherston <leenerds.org.uk> wrote:
Has anyone experimented with a HoneyPot that runs a
client, as opposed to a server?