[blfs-dev] New page

Qrux qrux.qed at gmail.com
Wed Feb 8 17:59:21 PST 2012


On Feb 8, 2012, at 4:11 PM, Bruce Dubbs wrote:

> I've added mdadm and a new page, About RAID, to the book.
> 
> I'd appreciate feedback on ways to improve it.
> 
> http://www.linuxfromscratch.org/blfs/view/svn/postlfs/raid.html

On a related note...

(As always, it goes without saying that all the volunteer work for BLFS is wonderful, so please don't take any of this the wrong way.)

* * *

New pages are great, but... how close is BLFS to a "release" w.r.t. LFS-7.0?  Just two days ago, iproute and zlib failed to wget.  Not sure about iproute (I think it was a server error), but zlib updated to 1.2.6 and upstream didn't keep the old version around (seriously, who does that?).  I had to fall back to the LFS repository's copy (which is fine).
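
For what it's worth, the workaround I ended up scripting looks roughly like this.  It's just a sketch: the mirror URL is a made-up placeholder, and the tarball name is only an example of a version that upstream dropped.

	#!/bin/sh
	# Try upstream first; fall back to a mirror that keeps old tarballs.
	# MIRROR is a hypothetical placeholder, not a real LFS mirror.
	TARBALL=zlib-1.2.5.tar.bz2
	UPSTREAM=http://zlib.net/$TARBALL
	MIRROR=http://mirror.example.org/lfs-packages/$TARBALL

	wget "$UPSTREAM" || wget "$MIRROR" || {
		echo "could not fetch $TARBALL from upstream or mirror" >&2
		exit 1
	}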

A while ago, we discussed cutting the book into... I think "Volumes" was the term everyone liked.  I still strongly believe there's "core", i.e., relatively more important, stuff.  For me, I've got:

	* sudo
	* bc
	* openssl
	* wget
	* <CA certs>
	* tcp_wrappers
	* sysstat
	* openssh
	* ntp
	* cpio (I'm an rsync newbie; I'd accept rsync in its place here)
	* hdparm
	* which
	* net-tools (I can't stand 'ip'; I prefer 'ifconfig', and so does BIND)
	* bonnie++

I use sudo to help with my scripts.  I use sysstat to watch disk I/O; I like to keep close tabs on what, and how, my VMs are doing, since I use this system with Xen.  NTP because you have to run it on bare metal, and no system should be without accurate time.  hdparm because any modern system should have plenty of read-ahead.  And I use bonnie++ to monitor disk performance and run benchmarks to make sure things are okay.  (There's a sketch of the hdparm/iostat usage after the next list.)  I also have:

	* LVM2
	* Net::DNS (perl/cpan) for BIND testing
	* bind
	* tcl
	* expect
	* db
	* pcre
	* cyrus-sasl
	* postfix (but I could see any MTA being here)
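
To make the hdparm/sysstat point above concrete, here's roughly what I run.  The device name and read-ahead value are just my setup, not recommendations.

	#!/bin/sh
	# Set filesystem read-ahead on the data disk; the count is in
	# 512-byte sectors, so 16384 is 8 MiB.  /dev/sdb is my disk here.
	hdparm -a 16384 /dev/sdb

	# Watch extended per-device I/O stats every 5 seconds with
	# sysstat's iostat, e.g. while the Xen guests are busy.
	iostat -dx 5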

Beyond this, it gets pretty personal (bridge-utils, Xen, pine, courier-imap, and the rest of my LAPP stack).  But my point is: can't we call some of the things above "Volume 1a" and try to keep that in sync with LFS?

I understand the argument for doing BOTH postfix and exim, and BOTH cpio and rsync, so people have choices.  I could see cutting out sysstat and cpio, and I could see valid arguments for including stuff like PAM and a bunch of other security libraries.  But basically, I don't care specifically where the line is drawn, just that one gets drawn, and that the subset (however small) comes with a "big-O guarantee", something like: "This subset of BLFS will be updated within 3 weeks of an LFS release."

Yes, I understand it's somewhat arbitrary.  But if you want to take this project up another notch, it's got to have releases.  Folks here are bright; I'm sure a list could get made.  And if someone feels strongly about moving a package from one subset to another, folks could "convene" and make a pretty fast decision.  These kinds of decisions get made all the time; there are lots of arbitrary ones.  Making a few more, for the purpose of getting to a scheduled release cycle (even if it's just relative to LFS), would be, IMO, a great start.

This constant moving target is very hard to accept for someone who wants to deploy the system.  Of particular note are the bootscripts.  I know they may be a special case (they need to integrate with the LFS scripts), but having to chase those scripts around is... brutal.  God forbid you accidentally delete the snapshot you downloaded.  Then you'd have to either track down which version of which package goes with your build, or just bump everything up to "current", which is exactly the tail-chase I'm trying to avoid.
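
What I do now is record the exact svn revision at build time so the snapshot can be re-fetched later.  A minimal sketch; the repository URL is from memory, and /etc/blfs-book-revision is just my own choice of location:

	#!/bin/sh
	# Record which BLFS book revision this build came from, so the
	# same snapshot can be checked out again with 'svn co -r'.
	REPO=svn://svn.linuxfromscratch.org/BLFS/trunk/BOOK

	REV=$(svn info "$REPO" | awk '/^Revision:/ { print $2 }')
	echo "$REV" > /etc/blfs-book-revision

	# Later, to recover the matching snapshot:
	#   svn co -r "$(cat /etc/blfs-book-revision)" "$REPO" blfs-book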

How about organizing things as <Volume><Subset>, where release claims get made about specific parts of the book?  For example, Vol1a could have the core security stuff, including openssl and openssh (for headless access).  Vol1b could have some really "core" (it's hard to get away from this idea) system tools: ntp, rsync, wget, hdparm, net-tools.  Then start defining release targets (w.r.t. LFS releases) for each section.  Stuff like Gnome or KDE could get pushed back.  Or you could say Vol2a is the X server, and that it will be released 3 weeks after Vol1a.  A schedule for different parts, with overlap, based on the maintainer(s) for each subset.

I'd go on, but that'd be long-winded.  Thought I'd check in first.  ;)

	Q




