[LONG] Maintaining several identical installations of (B)LFS

Dagmar d'Surreal dagmar.wants at nospam.com
Thu Jul 31 13:55:00 PDT 2003

On Thu, 2003-07-31 at 00:06, Alexander E. Patrakov wrote:
> On Wednesday 30 July 2003 23:47, Dagmar d'Surreal wrote:
> > On Wed, 2003-07-30 at 08:58, Alexander E. Patrakov wrote:
> > > Hi all,
> > > the problem is as follows. In the classroom, we have 15 computers which
> > > must have both Windows 2000 (maybe we'll move to XP) and Linux (of
> > > course, from scratch). While maintaining Windows installation is not
> > > among my duties, I have to ensure that the installations of Linux are
> > > good and as identical as possible (there are some hardware differences).

Oh, on the point of hardware differences, if these differences involve
PCI cards, installing hotplug makes all that just "go away".

> > _Why_ from scratch?  When trying to maintain numerous machines, building
> > everything from scratch is no longer a sane and reasonable option.  You
> > need to either start using a packaging tool (like rpm or something) or
> > just give in and use a canned distro like Mandrake.
> I already use Slackware package manager together with checkinstall. And why 
> LFS... Optimization is NOT the reason of choosing it here. Rather, full 
> information about each package, much less bugs, ability to import patches 
> from bug tracking systems, ability to customize the system as I want (not as 
> those guys in RedHat want). Consider fixing a Kate bug in RedHat :-)

Okay, then perhaps you could have been clearer that you were building
transplantable binary packages instead of building things from scratch
on each machine.  ...and SRPMS are actually pretty easy to work with
once you beat the installation of RPM into submission.  (The default
settings stink.)

> > Umm... side note.  Most DHCP server implementations can identify
> > machines by MAC address and by client name.  It would have probably been
> > easier than cooking up some script.
> Yes, and my script just uses this fact.

You do not need a _script_ for this.  Use ISC's dhcpd and man
dhcpd.conf, paying special attention to the "host", "hardware", and
"group" directives.  

> > You won't buy a switch but in a later email you're glibly talking about
> > gigE??
> > Buy a switch.  Switches are good, m'kay?
> I said "probably", and I didn't mean that gigE will be used here. This is an 
> unchecked recommendation for others. BTW, with 7-8 workstations, I would not 
> ask questions and would continue to use NFS-root. But there are 15 
> workstations.

You really, really, really need to pester them to replace that hub with
a switch.  That will clear up most of your bandwidth issues almost
immediately.  (I'm a bit astonished you managed to find a 100bT hub with
that many ports.  They're rare now because they reek.) 

> Also, I don't have right to buy anything and all recommendations from admins 
> get ignored here. They wasted money and bought that P4 monster (now 
> practically unused, only mail server with 5 users and a web server with 10 
> visits a day) instead of a switch.

If the administrator recommendations are really and truly being
ignored, get your resume in order.  Seriously, that is unbelievably bad.

> > > So I would like to know what is used by you for similar tasks (keeping
> > > several installations of BLFS identical).
> >
> > Packages.  Build binary packages.  (My last job involved me both
> > installing and maintaining machines in different facilities around the
> > _planet_.  Enough different places that not even the greatest of travel
> > fans would have been able to do them in person.)  Otherwise you're
> > pretty dewmed.  Better yet, use someone else's distro like Debian or
> > Mandrake as both will allow the use of relatively local software
> > repositories, and it's only a few shell script commands to check the
> > status of all currently installed packages to know to pull and install
> > new versions if something gets mangled or out of sync.
> That applies to packages, but the problem with nondefault settings remains. 
> I.e.: How do I schedule the update in /etc/sane.d so all users can use the 
> new scanner via saned?

That part is _easy_.  To be safe you might want to integrate GnuPG into
the equation, but just have each machine, immediately after it brings up
its network interface, grab a file whose name is based on the machine's
IP address (like "settings-" and an optional .sig
file) from a local webserver (basic Apache or Tux would work fine) and
untar it into /.
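A minimal sketch of that boot-time fetch, assuming the webserver is
reachable at $SERVER and uses a (hypothetical) settings-<IP>.tar.gz
naming scheme with a detached .sig alongside it:

```shell
#!/bin/sh
# Sketch only -- the server address and file naming are assumptions.
SERVER=${SETTINGS_SERVER:-192.168.1.2}

# Build the URL for a given host IP under the settings-<IP>.tar.gz scheme.
settings_url() {
    echo "http://$1/settings-$2.tar.gz"
}

# Fetch the tarball, verify it with GnuPG if a signature is offered,
# then unpack it over / so the local config catches up with the master.
fetch_settings() {
    url=$(settings_url "$SERVER" "$1")
    wget -q -O /tmp/settings.tar.gz "$url" || return 1
    if wget -q -O /tmp/settings.tar.gz.sig "$url.sig"; then
        gpg --verify /tmp/settings.tar.gz.sig /tmp/settings.tar.gz || return 1
    fi
    tar -xzf /tmp/settings.tar.gz -C /
}
```

Hook fetch_settings into whatever bootscript brings up the interface,
and a change to /etc/sane.d (or anything else) only has to be made once,
on the server.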

To get _really_ kinky, you could start using BOOTP & TFTP and get your
particular rootfs for Linux off the network, the location of which would
be dictated by the particular boot image... although this is
considerably more obnoxious to do with x86 machines than it is with say,
a bunch of Sparc 5's.  
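On the dhcpd side, netbooting is mostly two more directives per host
(again a sketch; the TFTP server address and boot image name are
assumptions):

```
# Hypothetical dhcpd.conf addition for network booting:
host ws01 {
    hardware ethernet 00:50:56:aa:bb:01;
    fixed-address 192.168.1.101;
    next-server 192.168.1.2;       # TFTP server holding the boot images
    filename "pxelinux.0";         # per-host image if you want it
}
```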

Like someone else here said, a small local filesystem would be a really,
really good idea.  You could probably get away with putting /, /tmp,
/var, and swap on a local disk in under 384Mb of space using four
logical partitions.  This would eliminate a number of problems for you,
and mounting /usr and /home over NFS would then be a cakewalk.
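A sketch of the resulting /etc/fstab (the device names and NFS server
address are assumptions; I'm using hda5-hda8 since those four would be
logical partitions):

```
# Local disk: four small logical partitions, well under 384Mb total
/dev/hda5   /       ext3    defaults       1 1
/dev/hda6   /tmp    ext3    defaults       1 2
/dev/hda7   /var    ext3    defaults       1 2
/dev/hda8   none    swap    sw             0 0

# Everything big comes over NFS from the server
192.168.1.2:/usr    /usr    nfs   ro,hard,intr   0 0
192.168.1.2:/home   /home   nfs   rw,hard,intr   0 0
```

Mounting /usr read-only also keeps the workstations from drifting out
of sync with each other.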
The email address above is just as phony as it looks, and for obvious reasons.
Instant messaging contact info: AIM: evilDagmar  Jabber: evilDagmar at jabber.org
