Proper Firewalling

Dagmar d'Surreal dagmar.wants at nospam.com
Sat May 1 23:09:35 PDT 2004


On Sat, 2004-05-01 at 19:12, Kelly Anderson wrote:
> Dagmar d'Surreal wrote:
> > On Thu, 2004-04-29 at 13:27, Kelly and Jennifer Anderson wrote:
>
> >>Yes, that is true.  That is almost exactly the reason why Perl doesn't
> >>belong on a firewall.  Tools that work great for system admin purposes
> >>work extremely well in exploiting systems.  My current firewall config
> >>is down to 40 Megs.  It snorts to a database on another machine.
> > 
> > 
> > Don't tell me you're actually trying to imply that perl is something
> > someone can exploit.  You really have to be using some rather broken
> > things for that to be even possible.
> 
> Broken code exists in all projects of any significance.  Bugs exist in 
> proportion to number of lines of source code.  This is the number one 
> reason for eliminating extraneous function and code from a "true" 
> firewall.  ANYTHING extra on a system is a possible exploit or may be 
> used to facilitate hiding the exploit.  One danger of Perl in particular 
> is that it gives scriptable access to virtually all system calls.  If 
> there is a flaw in a Linux system call, a cracker will have easy access 
> to the flaw without a need to compile code.  The same reason for 
> avoiding C compilers, assemblers and linkers on a firewall.  You want to 
> make the cracker's life difficult to impossible.

Perl is a higher-level language than bash.  This usually means you can
do things in _fewer_ lines of perl than you can in a bash script.  Perl
also does a lot to warn developers at runtime that they're doing
something stupid.  Newbie coders who ignore those warnings are usually
the very same ones at fault when that safety net goes unused and unsafe
code gets deployed.

While it's true that a cracker _could_ write something in perl on the
system to compromise it, the majority of the time they will be putting
together their tools elsewhere.  Any hacker worth their salt can copy
binary files to any machine they can issue commands to, so it's not as
if they can't do their dev work elsewhere.  The odds of someone even
doing this are rather low.  Unless you're dealing with something
reasonably high-profile, crackers typically have little incentive to
keep working on one machine that's secured when there are literally
thousands of other, easier boxes available for the taking.  The
cost-benefit analysis for protecting against targeted attacks lands
firmly at $0 for sites where the need for security is relatively low.
The few (and I do mean few; I've met some of them personally, and it's
mind-boggling just how damn creative and skilled the best of them are,
but they are definitely rare, even taking the whole world population
into account) who are most likely to pick particular targets and become
utterly determined to get into them _will_ get in.  It might take them
a month, or a year, but they _will_ get in.  Your only rational hope
against these people is detection that rivals the preventative measures
in resources.

Furthermore, the basic perl package doesn't give you access to /all/ the
system calls, just the ones necessary to be portable across platforms.
Now... if you remove group and other rights from the perl package and
only allow root processes to execute it, it's only useful to attackers
_after_ they gain complete control of the system, at which point what
they do is almost moot, because you've already lost if some kind of
alarm doesn't go out.
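
As a rough sketch, that lockdown is just a couple of chmods run as root.
The paths below are only the usual defaults, so adjust them to wherever
your distribution actually installs perl, and catch any extra hardlinked
copies (e.g. /usr/bin/perl5.*) while you're at it:

    # strip group/other access from the interpreter and its library tree
    chmod 700 /usr/bin/perl
    chmod -R go-rwx /usr/lib/perl5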

Now, I agree with you that it's possible to run a firewall without perl,
and generally I *do* eliminate compilers and other development tools
from important firewalls, but that's because they are production
machines: no _development_ should be taking place on them, and only
thoroughly tested software should be getting installed on them.  There
is, however, a degree to which security measures can impede efficient
use of the system.  Telling the VP of a small company why their network
needs a cheap firewall is one thing; it's another matter entirely to get
them to buy the idea that the project is going to cost roughly twice as
much because of the purchase of a second machine just to watch the first
one.  That's not a cost that's going to fly or be reasonable in small
offices, because it raises the per-seat cost of IT in the office to a
significant degree without showing any obvious profit.  It sucks, but
it's reality.  Sometimes (often, in fact) you have to do all the log
analysis on the firewall itself, have scripts monitor it in realtime and
notify the admins if something unusual happens, and hope no attacker
realizes that to remain undetected they'd have to create a DoS condition
to keep the alarm from going out, because you can't afford to put more
resources into it.
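
A bare-bones sketch of that sort of on-box watcher, assuming the
kernel's firewall hits end up in /var/log/firewall.log and that the
patterns and the admin address here are nothing more than placeholders:

    #!/bin/sh
    # follow the firewall log and mail the admin when something
    # suspicious shows up
    tail -f /var/log/firewall.log | while read -r line; do
        case "$line" in
            *DROP*|*"port scan"*)
                echo "$line" | mail -s "firewall alert" admin@example.com
                ;;
        esac
    done

Anything smarter than that (rate-limiting the alerts, summarizing them,
or feeding them to a database) is exactly where a higher-level language
starts earning its keep.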

Then there's the home users who just need a simple firewall.  It doesn't
make much sense for someone who just wants to keep their soft, fleshy
Windows XP machines protected against the continual background noise of
automated vulnerability scans on the Internet to have to put up a second
machine to perform the log analysis... It's just too expensive.

Then there's the issue of personal time... unless there's a _lot_ of
something important being protected, it's not necessary to go to the
lengths of stripping the machine down to the simplest of tools and
doing all the really paranoid stuff, because people have lives and
family members that need attention, too.

> >>| It's also a
> >>| lot easier to deal with parsing lists in perl than it is in bash, which
> >>| is why I was considering it.  I didn't want to use it because it would
> >>| merely introduce another dependency (regardless of whether or not perl
> >>| is sure to be installed already) to the init scripts, as well as more
> >>| subshells that would just slow things down.
> >>
> >>Yes, dependencies are exactly what you want to eliminate on a firewall.
> >>A firewall is a single purpose machine.  The less that is installed on
> >>the machine the better.  Less to exploit, less to monitor, less to fix.
> >>Logs can easily be sent to another machine that can do any darn thing a
> >>person wants in any language they want.
> > 
> > 
> > For somewhat inobvious reasons log files should be protected in at least
> > some minor way against modification _on the original host_ and effort
> > expended to maintain them _there_ until they are reviewed, even if you
> > send the information to a central log host elsewhere.  This way should
> > you ever need to use the buggers in court, you can testify that the data
> > you bring to court is the original log file, and demonstrate that the
> > chain of evidence has been preserved on your part, and that the log
> > messages could not have been surreptitiously modified on another machine
> > unrelated to the investigation.  Lawyers and judges are getting smarter,
> > but occasionally they have been known to get _that_ picky over evidence
> > collection.
> > 
> I did not suggest logging in only one location.  I did suggest log based 
> analysis can happen elsewhere.  Duplicity is certainly an advantage.

Er... doh... self-correction.  s/Duplicity/Duplication and redundancy
of function/;

> >>I know how Perlies like their language! ;)  Any excuse to write
> >>something in Perl.  I use it when necessary.
> > 
> > Perl excels at reducing development time, and some tasks become
> > nightmarishly complex in bash, if you're that desperate to reduce
> > dependencies.  I have a log cycler that I unimaginatively named "James"
> > that I've been toting around for about six years now, which I wrote to
> > permanently solve the problem of log management on any Unix I have to
> > administer.  A few times I've just rewritten it from scratch to make it
> > more self-documenting.  The type of validation I do on the configuration
> > file, and the various sanity checks it does on everything before moving
> > the information around (and feeding it to a database or remote host)
> > would just about be impossible as cleanly in bash, and doing it in C
> > would be a huge pain in the neck.  Of course, it's not like the B module
> > doesn't exist.
> 
> All of the virtues of Perl you expound are features crackers are just as 
> likely to love.  Why give them the tools to use for their attacks and 
> exploits?

The good ones don't need your tools.  Most of the bad ones couldn't use
them, but they already have their binary ircbots and exploits ready to
go, so they wouldn't use them anyway.  But... you have to balance the
utility
the tool gives the administrators with the utility it would give to
ne'er-do-wells.
 
> > Complex tasks need a high level language.  I just wanted to keep perl
> > out of the init sequence because machines should be able to boot as
> > fast as possible, with no extra wait time reading new binaries from the
> > disk, or really anything that's not likely already in cache.  Plus, an
> > init script must be reliable 100% of the time because reliable
> > unattended reboots are a requisite for configuration of a machine you
> > need to maximize the availability of.  It's one thing to have to
> > occasionally reboot a machine, it's entirely another for it to _not come
> > back up_ without there being serious and wide-sweeping filesystem
> > corruption or catastrophic hardware failure.  (The more of the
> > filesystem that's used during init, the more likely filesystem
> > corruption is to bring the entire operation to a screeching and remotely
> > undiagnosable stop, etc etc)
> 
> Being concerned about a 2 or even 10 second difference in boot time is 
> insignificant in the case of a firewall or even a robust workstation. 
> The concern over boot times originated in Microsoft-Land and there it 
> should stay.  We are talking Linux which is designed to stay up 
> indefinitely.  My firewalls are up for months at a time and get reset 
> only for kernel upgrades.  My firewalls do not have corruption issues. 
> The combination of journaling filesystems and read-only partitions 
> ensure that.  I have a scripted installation procedure that will allow 
> me to replace a firewall in 30 minutes in the event of hardware failures.

You seem to have overlooked where I explained that reducing the number
of parts involved removes the chance of something going wrong during a
very critical time when _neither the users nor the administrators of
the system have any remote access to it whatsoever_.  That window
represents a complete failure to preserve the Availability of resources,
and should be reduced regardless of any kind of pissing contest some
admins make out of it.  It's only in a very few cases that you can make
24/7 availability and a no-remote-administration policy workable
without someone camping in the server room, and it's downright
impractical to do so in many cases.  Additionally, neither journaling
filesystems nor read-only hardware configurations can completely
protect you against a disk malfunction.

It's good that you know how to remotely install software.  At my last
job I wound up doing insane installs with Jumpstart that literally
spanned the globe, so you and I both know that doing so isn't cheap.
Again, it's not always reasonable to take the measures you seem to be
representing as mandatory.  The measures taken to protect resources
should match the value of the resources being protected, and go _no
further_, or you're just wasting time.
-- 
The email address above is phony because my penis is already large enough, kthx. 
              AIM: evilDagmar  Jabber: evilDagmar at jabber.org



