[Fwd: Output of fcron job: '/usr/bin/render-blfs-book.sh']
gerard at linuxfromscratch.org
Sun Aug 22 10:15:05 PDT 2004
On Sun, 2004-08-22 at 08:42, Bruce Dubbs wrote:
> This is not a critical error, but we are getting into some kind of race
> condition with cron scripts. My script runs at 0130 and normally takes
> about 20 minutes to build blfs. Last night, it took 39 minutes and some
> other script was in the process of changing something on
> blfs/downloads/cvs and had not changed the permissions back yet. The
> permissions are correct now, but my script failed to update the tar'ed
> blfs book.
> What can we do to coordinate these scripts better?
The race condition would be due to the website update script -- it
touches upon all of the /home/httpd/www.linuxfromscratch.org directory.
It runs at 02:00 server time.
How can we prevent this? If you look at the /usr/bin/render-lfs-book.sh
script, you will notice it creates a lockfile to make sure that if the
script is run again by something or somebody, it won't try to generate
the book a second time, which would waste CPU and cause its own set of
problems.
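The lockfile idea can be sketched like this; the actual contents of
render-lfs-book.sh may differ, and the lock path here is an assumption.
Using mkdir makes the acquire step atomic, so two overlapping cron runs
cannot both think they own the lock:

```shell
#!/bin/sh
# Hypothetical sketch of the lockfile pattern described above; the real
# render-lfs-book.sh may work differently. The lock path is an assumption.
LOCKFILE=/tmp/render-lfs-book.lock

# mkdir either creates the directory or fails -- atomically -- so only
# one concurrent run can acquire the lock.
if ! mkdir "$LOCKFILE" 2>/dev/null; then
    echo "Another render is already running; exiting." >&2
    exit 0
fi

# Drop the lock on any exit so a crash does not wedge future runs.
trap 'rmdir "$LOCKFILE"' EXIT

echo "rendering the book..."   # placeholder for the real work
```

A plain `touch`-based lockfile would also work, but mkdir avoids the
check-then-create race that `[ -e file ] || touch file` has.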
We should probably have all of our scripts that update files on the
website look for the various lock files. In particular, the
run-update-website.sh script should not run if render-lfs-book,
blfs-book, *edguide, etc. are running. The website script would go into
a loop (like the lfs-render script: sleep for 10 minutes and try again)
until nothing is running that would interfere with its operation.
Likewise, when the website script is running it creates its own lockfile
(even if it's only in the loop waiting for something else), and the
render-*lfs scripts should not attempt to run while the website area is
being updated.
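That wait-and-retry behavior might look like the following sketch. The
lock paths and the update-website lock name are assumptions, not the
real scripts' names:

```shell
#!/bin/sh
# Hypothetical sketch of run-update-website.sh's proposed wait loop.
# Lock paths are illustrative assumptions.
OTHER_LOCKS="/tmp/render-lfs-book.lock /tmp/render-blfs-book.lock"
MY_LOCK=/tmp/update-website.lock

# Announce ourselves immediately, even while waiting, so the render
# scripts know not to start a new run in the meantime.
touch "$MY_LOCK"
trap 'rm -f "$MY_LOCK"' EXIT

# Sleep-and-retry until none of the other scripts hold a lock.
wait_for_locks() {
    for lock in $OTHER_LOCKS; do
        while [ -e "$lock" ]; do
            sleep 600          # 10 minutes, like the render script's loop
        done
    done
}

wait_for_locks
echo "updating the website..."  # placeholder for the real work
```

Checking the locks one after another is not perfectly race-free (a
render could start between two checks), but since the website script
publishes its own lock first and the render scripts honor it, the two
sides cannot both proceed at once.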
This isn't hard to implement; it just has to be coordinated with the
website team. Anderson in particular usually takes care of these script
issues. I have CCed the website list on this email.
/* If Linux doesn't have the solution, you have the wrong problem */