From: Chris Keladis (Chris.Keladis@cmc.cwo.net.au)
Date: Mon May 21 2001 - 00:29:31 CDT
At 06:16 PM 5/20/01 -0800, auto125268@hushmail.com wrote:
>In a similar frame to my last mail, I have been trying to understand the
>Unicode issue. From what I can get, it seems that the problem is relatively
>simple in nature but difficult to solve. If Unicode allows people to specify
>the same character in many different ways, then how do you set about
>a filter to make sure you are not passing in dangerous Unicoded commands?
>I can see the obvious laborious way but surely that is a huge processing
You decode the Unicode (properly) first, then filter out what you don't
want: "..", "\", etc.
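A minimal sketch of that order of operations in Python (not Microsoft's
code; the helper name and the blocklist are my own illustration, assuming a
percent-encoded HTTP path as input):

```python
import urllib.parse

# Tokens we refuse to pass through, per the advice above.
DANGEROUS = ("..", "\\")

def is_safe(raw_path: str) -> bool:
    # Fully decode BEFORE filtering, so encoded forms such as
    # %2e%2e (-> "..") or %5c (-> "\") can't slip past a check
    # performed on the still-encoded string.
    decoded = urllib.parse.unquote(raw_path)
    return not any(token in decoded for token in DANGEROUS)

print(is_safe("/scripts/index.asp"))       # True
print(is_safe("/scripts/%2e%2e/cmd.exe"))  # False: decodes to "/scripts/../cmd.exe"
```

The point is the ordering: a filter applied before full decoding is
comparing against a representation the attacker controls.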
Microsoft simply stuffed up their multiple passes of Unicode checking, and
the characters slipped past their regular syntactic checks.
They did it again with the recent CGI filename decode-twice vulnerability,
except that one was with hex (percent-encoded) characters.
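The decode-twice flaw is easy to demonstrate. The example path below is
hypothetical, but it shows the mechanism: "%255c" survives one decode pass
(and any check done at that point) as "%5c", and only the second pass turns
it into a literal backslash:

```python
import urllib.parse

raw = "..%255c..%255cwinnt/system32/cmd.exe"

once = urllib.parse.unquote(raw)    # "..%5c..%5cwinnt/system32/cmd.exe"
twice = urllib.parse.unquote(once)  # "..\..\winnt/system32/cmd.exe"

print(once)   # no literal backslash yet, so a naive filter passes it
print(twice)  # the second decode exposes the traversal sequence
```

If the server checks the path after the first decode but decodes again
before using it, the filter never sees the "\" it was meant to block.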
Agreed, it adds overhead. (How much is debatable.)