Proper Firewalling

Kelly Anderson cbxbiker at
Sat May 1 17:12:37 PDT 2004

Dagmar d'Surreal wrote:
> On Thu, 2004-04-29 at 13:27, Kelly and Jennifer Anderson wrote:
>>Dagmar d'Surreal wrote:
>>| Not intending to be too experienced, but are you attempting to filter
>>| and manage your system activity logs with tcl or bash?  Perl was
>>| practically designed for system administration functions.
>>Yes, that is true.  That is almost exactly the reason why Perl doesn't
>>belong on a firewall.  Tools that work great for system admin purposes
>>work extremely well in exploiting systems.  My current firewall config
>>is down to 40 Megs.  It snorts to a database on another machine.
> Don't tell me you're actually trying to imply that perl is something
> someone can exploit.  You really have to be using some rather broken
> things for that to be even possible.

Broken code exists in every project of any significance.  Bugs exist in 
proportion to the number of lines of source code.  That is the number 
one reason for eliminating extraneous functions and code from a "true" 
firewall.  ANYTHING extra on a system is a possible exploit, or may be 
used to help hide an exploit.  One danger of Perl in particular is that 
it gives scriptable access to virtually all system calls.  If there is 
a flaw in a Linux system call, a cracker has easy access to the flaw 
without needing to compile code.  The same reasoning applies to keeping 
C compilers, assemblers, and linkers off a firewall.  You want to make 
the cracker's life difficult, if not impossible.
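To make the point concrete (a hypothetical sketch, not something from 
the original thread): any interpreter left on the box gives 
compiler-free access to the C library and the system calls behind it. 
Perl's built-in syscall() does this; Python's ctypes is used here 
purely as a stand-in illustration of the same property.

```python
# Illustrative only: an installed interpreter lets anyone reach libc
# (and the system calls behind it) without compiling anything.
# Perl's syscall() offers the same kind of access; Python's ctypes
# is just the stand-in example here.
import ctypes
import os

libc = ctypes.CDLL(None)      # bind to the C library already on the box
pid = libc.getpid()           # call the getpid(2) wrapper directly
assert pid == os.getpid()     # same answer the runtime reports
```

No root, no toolchain, no downloaded binary -- which is exactly why a 
stripped firewall carries no interpreter at all.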

>>|  It's also a
>>| lot easier to deal with parsing lists in perl than it is in bash, which
>>| is why I was considering it.  I didn't want to use it because it would
>>| merely introduce another dependency (regardless of whether or not perl
>>| is sure to be installed already) to the init scripts, as well as more
>>| subshells that would just slow things down.
>>Yes, dependencies are exactly what you want to eliminate on a firewall.
>>A firewall is a single-purpose machine.  The less that is installed on
>>the machine the better.  Less to exploit, less to monitor, less to fix.
>>Logs can easily be sent to another machine that can do any darn thing a
>>person wants in any language they want.
> For somewhat inobvious reasons log files should be protected in at least
> some minor way against modification _on the original host_ and effort
> expended to maintain them _there_ until they are reviewed, even if you
> send the information to a central log host elsewhere.  This way should
> you ever need to use the buggers in court, you can testify that the data
> you bring to court is the original log file, and demonstrate that the
> chain of evidence has been preserved on your part, and that the log
> messages could not have been surreptitiously modified on another machine
> unrelated to the investigation.  Lawyers and judges are getting smarter,
> but occasionally they have been known to get _that_ picky over evidence
> collection.

I did not suggest logging in only one location.  I did suggest that 
log-based analysis can happen elsewhere.  Duplication is certainly an 
advantage.
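As a sketch of that duplicate-logging setup, classic syslogd keeps the 
originals locally while forwarding a copy with two lines (the hostname 
"loghost" is an assumption, not a real host from this thread):

```
# /etc/syslog.conf sketch -- keep local originals *and* send a
# duplicate stream to a central collector.
*.info;authpriv.*          /var/log/messages
*.*                        @loghost
```

The local copy preserves the evidentiary original; the remote copy 
feeds whatever analysis tools one likes, in whatever language.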
>>I know how Perlies like their language! ;)  Any excuse to write
>>something in Perl.  I use it when necessary.
> Perl excels at reducing development time, and some tasks become
> nightmarishly complex in bash, if you're that desperate to reduce
> dependencies.  I have a log cycler that I unimaginatively named "James"
> that I've been toting around for about six years now, which I wrote to
> permanently solve the problem of log management on any Unix I have to
> administer.  A few times I've just rewritten it from scratch to make it
> more self-documenting.  The type of validation I do on the configuration
> file, and the various sanity checks it does on everything before moving
> the information around (and feeding it to a database or remote host)
> would just about be impossible as cleanly in bash, and doing it in C
> would be a huge pain in the neck.  Of course, it's not like the B module
> doesn't exist.

All of the virtues of Perl you expound are features crackers are just as 
likely to love.  Why give them the tools for their attacks, and for 
hiding their tracks afterward?

> Complex tasks need a high level language.  I just wanted to keep perl
> out of the init sequence because machines should be able to boot as
> fast as possible, with no extra wait time reading new binaries from the
> disk, or really anything that's not likely already in cache.  Plus, an
> init script must be reliable 100% of the time because reliable
> unattended reboots are a requisite for configuration of a machine you
> need to maximize the availability of.  It's one thing to have to
> occasionally reboot a machine, it's entirely another for it to _not come
> back up_ without there being serious and wide-sweeping filesystem
> corruption or catastrophic hardware failure.  (The more of the
> filesystem that's used during init, the more likely filesystem
> corruption is to bring the entire operation to a screeching and remotely
> undiagnosable stop, etc etc)

A 2 or even 10 second difference in boot time is insignificant for a 
firewall, or even for a robust workstation.  The obsession with boot 
times originated in Microsoft-land, and there it should stay.  We are 
talking about Linux, which is designed to stay up indefinitely.  My 
firewalls are up for months at a time and get rebooted only for kernel 
upgrades.  My firewalls do not have corruption issues; the combination 
of journaling filesystems and read-only partitions ensures that.  I 
have a scripted installation procedure that lets me replace a firewall 
in 30 minutes in the event of a hardware failure.
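For the curious, the read-only scheme amounts to a few fstab lines 
(device names and mount points here are illustrative assumptions, not 
my actual layout):

```
# /etc/fstab sketch -- journaling root, with the static parts of the
# system mounted read-only or with execution restricted.
/dev/hda1   /       ext3    defaults          1 1
/dev/hda2   /usr    ext3    ro,nodev          1 2
/dev/hda3   /var    ext3    noexec,nosuid     1 2
```

A cracker who does get in finds nowhere writable to plant an 
executable, and nothing writable to quietly modify.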

Feel free to put what you like on your firewalls.  My firewalls will 
continue to be Perl-free.


Kelly Anderson

More information about the hlfs-dev mailing list