[LONG] Maintaining several identical installations of (B)LFS

Dagmar d'Surreal dagmar.wants at nospam.com
Wed Jul 30 10:50:18 PDT 2003


On Wed, 2003-07-30 at 08:58, Alexander E. Patrakov wrote:
> Hi all,
> the problem is as follows. In the classroom, we have 15 computers which must 
> have both Windows 2000 (maybe we'll move to XP) and Linux (of course, from 
> scratch). While maintaining the Windows installation is not among my duties, I 
> have to ensure that the Linux installations are in good shape and as identical 
> as possible (there are some hardware differences).

_Why_ from scratch?  When trying to maintain numerous machines, building
everything from scratch is no longer a sane and reasonable option.  You
need to either start using a packaging tool (like rpm or something) or
just give in and use a canned distro like Mandrake.

> In the previous year, this problem was addressed by using a NFS-root 
> installation of LFS. We used a modified Etherboot as the bootloader, so there 
> was no need even for non-NTFS partitions on the workstations. All per-machine 
> configuration was handled by a special script that mounted (--bind) 
> machine-specific directories under /etc/local, /var and /tmp. DHCP was 
> configured in such a way that n-th machine from the door always gets the 
> address 192.168.1.(n+20), so these per-machine directories are always the 
> same. I will mail the script upon request.

Umm... side note.  Most DHCP server implementations can identify
machines by MAC address or by client name.  That would probably have
been easier than cooking up a custom script.
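For instance, a fixed lease per MAC address in ISC dhcpd is one host
block per workstation (the MAC addresses and host names below are made
up; substitute your own):

```
# /etc/dhcpd.conf -- one host block per machine, hypothetical values
host ws01 {
    hardware ethernet 00:50:04:aa:bb:01;
    fixed-address 192.168.1.21;
}
host ws02 {
    hardware ethernet 00:50:04:aa:bb:02;
    fixed-address 192.168.1.22;
}
```

With that in place the "n-th machine from the door" mapping survives
re-cabling, NIC position, and boot order without any extra scripting.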

> That approach made the administration extremely simple (much simpler than in 
> Windows with Norton Ghost): do something on one computer, and others will 
> benefit from it immediately. But it had a drawback: either the network speed 
> or the speed of the hard drive on the NFS server is an important limiting 
> factor, which manifests itself starting with 7-10 clients. The NFS server is 
> a rather old Pentium and it does not run LFS. I am not sure that I will be 
> allowed to move the NFS server onto a new LFS box, which is currently acting 
> as a web and mail server.
> 
> Hdparm reports that the disk has a speed of 20 MB/s on the old server. Since we 
> Promise ATA RAID (mirroring) on the new server, hdparm cannot be used as a 
> benchmark. It is known that a single drive of the same model gives 38 MB/s. I 
> think that those figures say nothing: seeks are probably the real bottleneck.

Welcome to Why IDE RAID Sucks.  BTW, hdparm could be used as a
benchmark, just badly.  Bonnie++ is much better at doing that.
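If bonnie++ isn't handy, even a timed dd write on the server's disk
tells you more about real NFS-serving throughput than hdparm does on a
RAID device. A minimal sketch (the file size, paths, and the bonnie++
invocation in the comment are arbitrary choices, not anything from the
original setup):

```shell
#!/bin/sh
# Crude sequential-write test using only dd and the shell.
# bonnie++ gives far more useful numbers (seeks, per-char I/O), e.g.:
#   bonnie++ -d /mnt/scratch -s 1024 -u nobody
# but this at least measures actual writes through the filesystem.
FILE=/tmp/ddtest.$$
START=$(date +%s)
dd if=/dev/zero of="$FILE" bs=1M count=16 2>/dev/null
sync
END=$(date +%s)
ELAPSED=$((END - START))
[ "$ELAPSED" -eq 0 ] && ELAPSED=1          # avoid divide-by-zero on fast disks
echo "wrote 16 MB in ${ELAPSED}s (~$((16 / ELAPSED)) MB/s)"
rm -f "$FILE"
```

Run it in the directory you actually export over NFS; /tmp on a
different spindle tells you nothing about the exported disk.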

> Also, we are probably unable to use the full speed of our network. The HUB 
> (sorry, nobody is going to buy a switch) indicates the network utilization of 
> 30% (i.e. 30 Mbit/s) during dd if=/dev/zero of=/some/file/on/nfs (other 
> clients are typing something in OpenOffice). The old server and 50% of the 
> clients have a 3c905 as their network card. The other 50% of the clients and a 
> new server have Intel EtherExpress 100 cards and complain about downgrading to 
> half-duplex. Under high load, there are also occasional duplex-mismatch 
> messages from the 3c905 driver.

You won't buy a switch but in a later email you're glibly talking about
gigE??  If you are intending to run 100bT then _get a switch_.  Switches
do not have to be expensive to suck less than using a hub.  (Frankly, to
suck more than a hub, a switch would have to be on fire... and at least
a third charcoal already.)  I paid $40 for the 8-port Hawking switch I
have.  There's no rule that says you can't chain 4 of those cheap little
monsters together.  Buy a switch.  Switches are good, m'kay?

> While I have some time, I want to investigate some alternatives to our 
> NFS-root installation. It is probably possible to add another Pentium 2 400 
> as a second NFS server, and maybe a Pentium 4 2.4 (the current web server) 
> may also help if my boss allows it. Variants involving load 
> balancing between two NFS servers will be considered if there are means of 
> easy software updates without the risk of ending up with different contents 
> of two servers. Non-NFS-root variants allowing easy updates will also be 
> considered.

The P2 400 has far more than enough power to act as an NFS server for a
classroom, or even a mid-sized middle school.  Stop right there talking
about load-balancing the NFS servers.  That would be the equivalent of
putting a lowered suspension and wide tires on a Yugo.  Your major
problem is that you're using a hub and have a bunch of network-intensive
applications going.

> So I would like to know what you use for similar tasks (keeping several 
> installations of BLFS identical).

Packages.  Build binary packages.  (My last job involved me both
installing and maintaining machines in different facilities around the
_planet_.  Enough different places that not even the greatest of travel
fans would have been able to do them in person.)  Otherwise you're
pretty dewmed.  Better yet, use someone else's distro like Debian or
Mandrake as both will allow the use of relatively local software
repositories, and it's only a few shell script commands to check the
status of all currently installed packages to know to pull and install
new versions if something gets mangled or out of sync.
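The "few shell script commands" amount to diffing package lists. A
sketch of the idea, with the two lists faked locally so it runs
anywhere; on a real network you would collect them with something like
`ssh wsNN 'rpm -qa | sort' > pkgs.wsNN` (the host names and package
versions here are invented):

```shell
#!/bin/sh
# Spot packages whose versions differ between two machines' package lists.
# Fake the dumps locally for illustration; real ones come from rpm -qa.
printf 'bash-2.05b\ncoreutils-5.0\nglibc-2.3.2\n' > /tmp/pkgs.ws01
printf 'bash-2.05b\ncoreutils-5.2\nglibc-2.3.2\n' > /tmp/pkgs.ws02
# comm -3 suppresses lines common to both sorted files, leaving only
# the mismatches: column 1 is ws01-only, column 2 is ws02-only.
comm -3 /tmp/pkgs.ws01 /tmp/pkgs.ws02
```

Anything comm prints is a machine that needs a package pushed or
pulled; an empty output means the two boxes agree.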

> Also: do we need to continuously mount home directories from an NFS server in 
> the case of a non-NFS-root installation? Windows admins tell me (without 
> meaningful explanation) to find a way to copy a user's home directory onto the 
> local hard disk entirely upon login and copy it back upon logout.

Rule #3.  Anything Windows users tell you to do is stupid, by
definition.  This is definitely one of those things.  That's not even
necessary under Windows, which makes things especially sad.  Unless
you've assigned seats in that classroom, it's probably _better_ if you
have NFS-mounted home directories.  If the entire filesystem of that
many computers is no longer being accessed over NFS, you shouldn't have
any problem with remote homedirs.  (Let me guess, it was Windows users
telling you you didn't want a switch because it would make your traffic
unreliable or something...)
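For the record, keeping /home on NFS is one fstab line per workstation,
nothing more (the server name "nfsbox" below is made up):

```
# /etc/fstab on each workstation -- hypothetical server name "nfsbox"
nfsbox:/home   /home   nfs   rw,hard,intr   0 0
```

The hard,intr options make clients wait out a server hiccup instead of
corrupting writes, while still letting a user interrupt a hung mount.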
-- 
The email address above is just as phony as it looks, and for obvious reasons.
Instant messaging contact info: AIM: evilDagmar  Jabber: evilDagmar at jabber.org




More information about the blfs-support mailing list