netfilter firewalling problems and solutions

Tarek W. mailinglists1 at
Thu Feb 19 17:16:31 PST 2004

On Thu, 2004-02-19 at 19:29, Dagmar d'Surreal wrote:
> On Wed, 2004-02-18 at 12:39, Tarek W. wrote:
> > On Tue, 2004-02-17 at 19:09, Dagmar d'Surreal wrote: [snipped]
> > > Okay, I'm going to take the problems I know about and solve them one at
> > > a time, with explanations as to why...  Sorry for the delay in posting
> > > this...  (note that there will be a distinct lack of code in this email,
> > > see previous email)
> > > 
> > > #1. Netfilter needs to implement a deny-by-default policy, but currently
> > > no hook exists in the init.d scripts.  
> > > 
> > >     Seeing as how this is atomically tied to the initialization of the
> > > network, I suspect the best place for this is simply going to be by
> > > patching the init.d/network script to execute the seven or so lines it
> > > takes to set the default policy to deny, to dump all rules and wipe all
> > > chains (in that order) before going on to initialize any interfaces. 
> > > (The same thing should also be done when bringing down the interfaces). 
> > > This *would* be a problem for service daemons, since we'll need to have
> > > rules for them, but when the interfaces are down, many of them crash
> > > and/or die the next time they look at their sockets, so it's not as if
> > > doing an init.d/network stop && init.d/network start is as trouble-free
> > > as it would first appear.  (People who do this by using telinit to
> > > change to runlevel 2, and back again will not have this problem, which
> > > is the cleanest way to do that anyway).  The only inobvious thing here
> > > is that at this time we should also (on multiple interface boxen) build
> > > a chain for each interface for INPUT, FORWARD, and OUTPUT for the sake
> > > of optimizing filter flow.  (I should probably break this last bit out
> > > into its own justification bullet-point)
> > 
> > have a single ruleset active at all times:
> > 
> > a) allow initiating outgoing connections (allow NEW out), and
> > 
> > b) turn off forwarding (ip_forward and FORWARD)
> Good luck using a firewall without any forwarding.  Such a machine is
> generally called a proxy.

No: a firewall filters packets and/or connections, a gateway forwards
them, and a proxy relies on daemons to relay connections.
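The deny-by-default initialization described in #1 (set the default
policies to DROP, dump all rules, wipe all chains, in that order, then
build a per-interface chain for each built-in chain) might look roughly
like this as a fragment for init.d/network. The interface names and the
overridable IPT variable are assumptions for illustration, not the
actual script:

```shell
# Sketch of the deny-by-default step for init.d/network.
# IPT can be overridden (e.g. IPT=echo) for a dry run without root.
IPT=${IPT:-iptables}

firewall_defaults() {
    # 1. Default policies to DROP before any interface comes up.
    $IPT -P INPUT DROP
    $IPT -P FORWARD DROP
    $IPT -P OUTPUT DROP
    # 2. Dump all rules, then wipe all user-defined chains, in that order.
    $IPT -F
    $IPT -X
    # 3. On multi-interface boxen, one chain per interface per built-in
    #    chain, to optimize filter flow (chain names are illustrative).
    for iface in "$@"; do
        $IPT -N INPUT_$iface
        $IPT -A INPUT -i $iface -j INPUT_$iface
        $IPT -N FORWARD_$iface
        $IPT -A FORWARD -i $iface -j FORWARD_$iface
        $IPT -N OUTPUT_$iface
        $IPT -A OUTPUT -o $iface -j OUTPUT_$iface
    done
}
```

The start branch of init.d/network would call this (e.g.
`firewall_defaults eth0 eth1`) before bringing up any interface; the
same wipe should run when bringing the interfaces down.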

> > > #2. We may have interfaces which need to be initialized using DHCP and a
> > > default drop policy will prevent this from happening (although there are
> > > circumstances with older kernels in which this wouldn't actually happen
> > > with some DHCP implementations).
> > > 
> > >   Well, since I scribbled together the service/dhclient script in about
> > > two minutes and we know the interface name at that time, there's no
> > > reason we can't add a hook into service/dhclient to add the two allow
> > > rules necessary to facilitate DHCP requests and responses, tied to that
> > > interface.   Since we'll have already added our chains in
> > > init.d/network, the rules can be dropped into the proper chain.  The
> > > script should also flush and destroy the custom chains after a dhclient
> > > -r for network stop.  (*1)
> > 
> > a script would definitely work unless the dhcpd can force address
> > changes, can it?! I forget...
> I wasn't talking about a dhcpd.

I mentioned dhcpd because it's the only daemon that comes to mind which
talks IP yet hooks into the stack in places where netfilter can't see it.

It's also the only one that comes to mind that you can't restrict to a
client's MAC/IP combination: early in the negotiation you can only
restrict per client MAC; later on you can use the MAC/IP combination.
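The two allow rules for #2 that the service/dhclient hook would add and
remove, tied to the interface name it already knows, could be sketched
like this (the function names and IPT override are assumptions; DHCP
uses udp/68 on the client and udp/67 on the server):

```shell
IPT=${IPT:-iptables}

# Allow DHCP on one interface: requests leave from udp/68 to udp/67,
# replies return from udp/67 to udp/68, all before the interface has a
# routable address, hence port matches rather than address matches.
dhcp_allow() {
    $IPT -A OUTPUT -o "$1" -p udp --sport 68 --dport 67 -j ACCEPT
    $IPT -A INPUT  -i "$1" -p udp --sport 67 --dport 68 -j ACCEPT
}

# Mirror-image deletion, for "dhclient -r" on network stop.
dhcp_deny() {
    $IPT -D OUTPUT -o "$1" -p udp --sport 68 --dport 67 -j ACCEPT
    $IPT -D INPUT  -i "$1" -p udp --sport 67 --dport 68 -j ACCEPT
}
```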

> > several issues spring to mind:
> > 
> > a) should we enforce a trusted dhcpd concept (per ip or per mac)?!
> Read the RFC on DHCP and you'll see why not.
> > 	a1) if we do, why not have the rules in place all the time
> Because if you don't remove them when you bring down an
> interface/service, you can't cleanly *add* them when you add an
> interface/service without risking duplicating the rules.

OK, the best way IMO to implement per-daemon initialization of iptables
rules is with a modular design:

1) a user-defined chain holding the daemon's requirements for proper
connectivity lives in memory at all times

2) when the daemon starts, you insert/add a single rule -j(umping) to
that user-defined chain

The jumps will not all be from built-in chains; we should create a
tree-like structure which logically separates security zones (dmz, lan,
...) and, if need be, individual machines. At a lower level, "services"
should be separated into daemons (either local to the machine or in the
dmz) and regular nodes which use the firewall as a gateway.

We could also supply the reader with two configurations: gateway, or
firewall (which will prohibit forwarding and only protect the machine
itself).

The points above should allow us to have a very minimal section for each
daemon/service (nat, for example) containing just the rules needed to
satisfy that daemon/service's requirements (see 1), while pointing the
reader to the rules which make these "modules" active or not (see 2).
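The two-step modular design above can be sketched like this; ntpd, the
mod_ntpd chain name, and the IPT override are illustrative assumptions,
not a proposed naming scheme:

```shell
IPT=${IPT:-iptables}

# 1) The module: a user-defined chain carrying the daemon's
#    connectivity rules, resident in memory at all times but visited by
#    no packets until something jumps to it.
module_create() {
    $IPT -N mod_ntpd
    $IPT -A mod_ntpd -p udp --dport 123 -j ACCEPT
}

# 2) Activation: one jump rule added when the daemon starts and deleted
#    when it stops; the module itself stays loaded either way.
module_activate()   { $IPT -A INPUT -j mod_ntpd; }
module_deactivate() { $IPT -D INPUT -j mod_ntpd; }
```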

> > [global] let's keep the discussion of overhead for a later date, I know
> > we can minimize the overhead no matter what setup we have
> > 
> > > 
> > > #3. Handling service daemons is slightly more complex in that each is
> > > likely to need a few rules of its own.
> > > 
> > >   While it might seem more convenient to lob all of these rules into one
> > > script so they are all in the same place, their existence is atomically
> > > tied to the active presence of a service daemon.  For this reason we're
> > > better off putting rules to allow each activity into the init.d script
> > > for that daemon.  Starting them at this time means that all our IP
> > > addresses should be available (and identifiable) allowing us to be more
> > > specific in our rules.  It doesn't appear that we will run into trouble
> > > with windows of opportunity since our default policies will still be
> > > DROP from before the interfaces are brought up.
> > 
> > I say: when the services are installed, include a firewall module for
> > that service, and activate it (-j service_module in the relevant
> > chains) when the daemon is run through the sysvinit script
> So you want a chain jump just for each service?  Seems like extra steps
> to me.

It's an extra step, but it allows us to implement overlapping rules,
each in its own sphere of influence. Say you want to disallow forwarding
ident requests to the dmz (in the daemon sublevel, you -j(ump) to
module_limit_ident) but want to allow ident requests *to* workstations
on the lan (in the lan sublevel, you -j(ump) to module_allow_ident).
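The overlapping-ident example above might be wired up like this; the
sublevel chain names (FORWARD_dmz, FORWARD_lan) and the two module
chains are hypothetical, and ident is tcp/113:

```shell
IPT=${IPT:-iptables}

ident_policy() {
    # dmz sublevel: forwarded ident requests go to a limiting module.
    $IPT -A FORWARD_dmz -p tcp --dport 113 -j module_limit_ident
    # lan sublevel: ident requests to workstations go to an allowing one.
    $IPT -A FORWARD_lan -p tcp --dport 113 -j module_allow_ident
}
```

The same port gets different treatment purely by which sublevel chain
the packet traverses, which is the point of the tree-like structure.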

> > > #4. Admins may use the kill command to stop daemons, or they may die,
> > > and be started (quite properly) by invoking init.d/whateverservice
> > > start, while firewalling rules are already present to allow their
> > > traffic, duplicating existing firewalling rules.
> > > 
> > >   Since there is no case in which the daemon should be running before
> > > the init.d/whateverservice start script is invoked, until we start
> > > adding surefire checks into these to avoid starting the same service
> > > when it's already running, the simplest (and probably best) solution to
> > > this is to run a set of rule deletions for that service prior to adding
> > > the allow rules for it.
> > 
> > indeed, with a proper design it should be a rule or two, while keeping
> > the module in memory (I will explain the concept of a module at a later
> > date; for now, just think of it as a bunch of iptables rules that are
> > not visited by any packets unless it is activated by -j(umping) to the
> > module's user-defined chain)
> So how is this any different from a _chain_?

You want all the relevant rules injected on daemon start and wiped on
daemon stop; I say inject/wipe just the one relevant -j(ump) rule.
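The delete-before-add fix for #4 then reduces to one rule pair per
service; mod_sshd is an illustrative module chain name, and the 2>/dev/null
reflects the point that deletions may legitimately fail when the rule is
already absent:

```shell
IPT=${IPT:-iptables}

# Deleting the jump first means a daemon killed by hand and restarted
# via its init.d script never ends up with duplicate firewall rules.
service_fw_start() {
    $IPT -D INPUT -j mod_sshd 2>/dev/null
    $IPT -A INPUT -j mod_sshd
}

service_fw_stop() {
    $IPT -D INPUT -j mod_sshd 2>/dev/null
}
```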

> > > #5. Something may have broken and netfilter may not currently be
> > > available for the kernel on boot-up (administrator error, filesystem
> > > corruption, malicious user, etc), leaving the possibility that services
> > > may start without firewall rules to limit their access.
> > > 
> > >   This is a pretty serious issue.  For this reason, we should probably
> > > be checking the exit status of all rule insert/appends (although not for
> > > any rule deletions or flushes, since these can exit with non-zero status
> > > without actually indicating a show-stopper) and bail with exit status 1
> > > if an error occurs.  Lack of in-kernel firewalling (even for a
> > > particular type of traffic, for example, if someone oopses and forgets
> > > to add TCP support to netfilter) when the configuration expects
> > > netfilter to be available should be considered a show-stopper and
> > > require immediate administrative intervention ...however the machine
> > > should not _stop_ booting entirely, but continue so that the
> > > administrator can login to a console to fix things.
> > 
> > very nice idea... we could label an interface as administrative if
> > existent, so that the admin can login remotely, and do not bring up any
> > other interfaces if the firewall script fails
> No.  That would be a contextual fallacy waiting to happen.  If
> firewalling is broken, there's a near-certainty that running a login
> service will wind up allowing access to it from _anywhere_ instead of
> the restricted list of source IPs that firewalling would be allowing.

How is that so if the interfaces aren't brought up? Also, I meant we
could provide it as an option; again, that's the modular firewall design.
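The exit-status checking proposed in #5 could be a small wrapper like
this (the fw_add name is hypothetical): inserts/appends are checked and
treated as show-stoppers, while deletions and flushes elsewhere may
still ignore a non-zero status.

```shell
IPT=${IPT:-iptables}

# Abort the calling init.d script (exit 1) when a rule insert/append is
# rejected, e.g. because netfilter or a match module is missing from the
# kernel; boot itself continues so the admin can log in on the console.
fw_add() {
    if ! $IPT "$@"; then
        echo "FATAL: firewall rule rejected: $IPT $*" >&2
        exit 1
    fi
}
```

An init.d script would then use `fw_add -A INPUT -p tcp --dport 22 -j ACCEPT`
instead of calling iptables directly.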

> > > #6. Even though a service daemon may be chrooted, it could still be
> > > compromised and the uid used to make connections to other machines on
> > > our network or on other networks.
> > > 
> > >   Thankfully netfilter has a facility that allows it to see which
> > > uid/gid is tied to traffic on the local machine.  We can use this to our
> > > advantage in the case of things like bind, where we can not only add
> > > allow rules to let it query external nameservers, but limit it so that
> > > it _only_ be used to talk to external nameservers.  This is going to
> > > require a little more complexity than some people are used to, but is
> > > good stuff for implementing mandatory controls.  If the default policy
> > > is to DROP packets, then using "--uid-owner named" when we add the allow
> > > rule will prevent say, squid's role account from being used to exploit
> > > remote nameservers either externally or internally on our network
> > > without a complete root compromise.  Obversely, a nameserver making
> > > queries to the internet and serving queries to the intranet might not
> > > have any reason whatsoever to be initiating nameservice queries to
> > > anything on the intranet.  This specific a rule can also be applied to
> > > the localhost interface to (for example) restrict service daemons
> > > ability to access other service daemons (again, squid would have no
> > > business talking to portmap, but could be allowed to make nameservice
> > > queries on the localhost interface).  -m owner should be used whenever
> > > we can do so cleanly.  (...although this seems really anal-retentive, it
> > > still appears to add value to the configuration.  Don't think these
> > > kinds of exploits don't happen.  I've seen all of this in the past year
> > > at least once.)
> > 
> > again, with the concept of modules, all users (as opposed to the people
> > writing the book) will have a very simple procedure to follow
> -- 
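The -m owner idea in #6 comes down to rules of roughly this shape; the
named and squid role accounts are taken from the examples above, and the
owner match only applies to locally generated traffic, i.e. the OUTPUT
chain:

```shell
IPT=${IPT:-iptables}

owner_rules() {
    # The named role account may query external nameservers; with a
    # DROP policy, no other uid (squid's, say) can reach port 53 at all.
    $IPT -A OUTPUT -p udp --dport 53 -m owner --uid-owner named -j ACCEPT
    $IPT -A OUTPUT -p tcp --dport 53 -m owner --uid-owner named -j ACCEPT
    # On loopback, squid may talk to the local resolver and nothing
    # else; squid to portmap, for instance, stays blocked by the policy.
    $IPT -A OUTPUT -o lo -p udp --dport 53 -m owner --uid-owner squid -j ACCEPT
}
```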

Anyway, I have been following this mailing list since its inception but
failed to find the book until now. R. Connolly provided the URL in a
later email; I will check it out and have more concrete examples after
seeing what's been done so far.

More information about the hlfs-dev mailing list