Putting new update-website script online ASAP

Anderson Lizardo lizardo at linuxfromscratch.org
Mon Nov 15 06:55:49 PST 2004

On Sunday 14 November 2004 11:58, Jeremy Huntwork wrote:
> I think that would be great. We need to get that in place. Can you give
> me a bit more of a detailed walkthrough of the script?  I'd like to
> fully understand what it's doing so I can help if you aren't around for
> some reason.

The update-website.mk script is a self-running makefile whose rules contain 
the instructions to build each site section. The entire website tree is now a 
working copy of the "www", "hints" and "patches" repositories. Instead of 
removing the entire tree and doing a "svn checkout", the new script does a 
simple "svn update" to update the whole tree. This is both more reliable 
(because it never removes entire trees) and faster (because only what changed 
in the repositories is merged into the working copy).

The main rules are: update-alfs, update-blfs, update-hints, update-lfs, 
update-patches and update-www. Their names are self-explanatory, I suppose :) 
Each rule recreates any "derived" content (e.g. news.html files are generated 
from templates + news-YYYY.txt), but *only when a file it depends upon has 
been modified*. This improves speed considerably.

Another significant improvement is that once the new script goes live, there 
will be no need for a fixperms.sh script periodically fixing permissions on 
the files in the website tree. All directories in the tree have the setgid 
bit set, meaning that group ownership is inherited from the parent directory 
(which will have "svnwww" group ownership). Also, the umask is set to 002 
before the script runs, so all files and directories are group-writable by 
default.
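A minimal demonstration of that permission scheme, using a scratch directory 
in place of the real website tree (the chgrp to svnwww is shown only as a 
comment, since that group exists only on the server):

```shell
#!/bin/sh
# Sketch of the setgid + umask scheme described above.
umask 002                      # new files become group-writable by default

mkdir -p tree
chmod g+s tree                 # setgid: children inherit the group of "tree"
# On the real server the tree would be group-owned by svnwww, e.g.:
#   chgrp svnwww tree

touch tree/file.html
mkdir -p tree/subdir

# With umask 002, files are created mode 664 and directories 775;
# on Linux, subdirectories also inherit the setgid bit (typically 2775):
stat -c '%a' tree/file.html    # -> 664
stat -c '%a' tree/subdir
```

With this in place, anything a committer's hook run creates is already 
writable by the rest of the group, so no periodic fixperms pass is needed.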

Now, here is how the entire scheme will be set up: each sub-project will 
optionally have a post-commit script which runs the following:

/home/lizardo/scripts/update-website-hook.sh "$REPOS" "$REV" update-<project>

For example, for the ALFS project:

/home/lizardo/scripts/update-website-hook.sh "$REPOS" "$REV" update-alfs
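For reference, a hypothetical sketch of what such a wrapper could look like 
(the real update-website-hook.sh is not shown in this message, and the 
makefile path below is assumed):

```shell
#!/bin/sh
# Hypothetical sketch of an update-website-hook.sh-style wrapper; the
# actual script's contents are not part of this message.
update_website_hook() {
    repos="$1"     # repository path, passed by Subversion's post-commit hook
    rev="$2"       # committed revision (or "HEAD" when run from cron)
    target="$3"    # makefile rule to run, e.g. update-alfs

    # The wrapper would hand the rule to the self-running makefile, e.g.:
    #   make -f /path/to/update-website.mk "$target"
    echo "updating $target for $repos at revision $rev"
}

update_website_hook /home/svn/repositories/ALFS HEAD update-alfs
# prints: updating update-alfs for /home/svn/repositories/ALFS at revision HEAD
```

Subversion invokes post-commit hooks with the repository path and revision as 
arguments, which is where $REPOS and $REV in the examples above come from.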

Usually these scripts will not take more than a few seconds to run. But if a 
project decides not to have this line in its post-commit script, a crontab 
entry needs to be created for it with the same execution frequency as the 
current update-website.sh script, so that project's section of the website 
keeps the same "up-to-date feeling" as the current website. Currently, I have 
the following in my crontab:

00 02,14 * * * \
/home/lizardo/scripts/update-website-hook.sh "/home/svn/repositories/ALFS" "HEAD" update-alfs; \
/home/lizardo/scripts/update-website-hook.sh "/home/svn/repositories/BLFS" "HEAD" update-blfs; \
/home/lizardo/scripts/update-website-hook.sh "/home/svn/repositories/hints" "HEAD" update-hints; \
/home/lizardo/scripts/update-website-hook.sh "/home/svn/repositories/LFS" "HEAD" update-lfs; \
/home/lizardo/scripts/update-website-hook.sh "/home/svn/repositories/patches" "HEAD" update-patches

"update-www" is not run here because we already have it on our post-commit 

One last thing needs tweaking before we can use this new system: permissions 
inside the website tree. Currently the website tree is group-owned by lfswww. 
This is a problem for the post-commit script, which assumes all users in the 
svnwww group have write permission to the tree. Additionally, once each 
project has the post-commit hook script set up, all users of each svn* group 
need write access to the project-specific directories.

The easiest solution is to put everyone with commit privileges (i.e. all 
users from the svn* groups) into the lfswww group and keep the website tree 
group-owned by lfswww. Another, more complex solution is to have a hierarchy 
of group ownership on the website tree, something like the following:

TARGETDIR -> svnwww
TARGETDIR/lfs -> svnlfs
TARGETDIR/blfs -> svnblfs
TARGETDIR/alfs -> svnalfs

Where TARGETDIR is /home/httpd/www.linuxfromscratch.org
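Setting up that hierarchy would amount to something like the following sketch 
(using a scratch directory here, since the svn* groups only exist on the 
server; on the real tree the chgrp lines would run as root):

```shell
#!/bin/sh
# Rough sketch of the per-project group-ownership hierarchy above.
# A scratch directory stands in for /home/httpd/www.linuxfromscratch.org.
TARGETDIR=./www-tree
mkdir -p "$TARGETDIR/lfs" "$TARGETDIR/blfs" "$TARGETDIR/alfs"

# On the real server each directory would get its project group, e.g.:
#   chgrp svnwww  "$TARGETDIR"
#   chgrp svnlfs  "$TARGETDIR/lfs"
#   chgrp svnblfs "$TARGETDIR/blfs"
#   chgrp svnalfs "$TARGETDIR/alfs"

# Group-writable + setgid, so new content inherits each directory's group:
chmod g+ws "$TARGETDIR" "$TARGETDIR/lfs" "$TARGETDIR/blfs" "$TARGETDIR/alfs"
```

Combined with the umask 002 set by the update script, each project's 
committers could then write to their own section without touching the others.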

> It looks like it will backup the entire current site to 
> /var/tmp before it generates the new one, is that right?

update-website.mk has a rule for website backup, but no other rule currently 
uses it, because backing up the entire website on each commit is not 
practical. The rule can, though, be used as a cron job that runs, for 
example, once a day (Jeremy U. seems to be working on this).
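A daily cron-driven backup could look roughly like this (the tar invocation, 
directory names and dated filename are assumptions for illustration; only the 
existence of a backup rule comes from the message above):

```shell
#!/bin/sh
# Hypothetical daily-backup sketch; scratch directories stand in for the
# live website tree and its backup destination.
SITE=./site
BACKUP=./backups
mkdir -p "$SITE" "$BACKUP"
touch "$SITE/index.html"

# Archive the whole tree into a dated tarball, e.g. from a daily crontab
# entry like:  00 03 * * * /path/to/backup-website.sh
tar czf "$BACKUP/website-$(date +%Y%m%d).tar.gz" -C "$SITE" .
```

Run from cron once a day, this keeps the cost of a full-tree backup off the 
per-commit path while still bounding how much history can be lost.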

Anderson Lizardo
lizardo at linuxfromscratch.org
