Package Management is an often requested addition to the LFS Book. A package manager tracks the installation of files, making it easier to remove and upgrade packages. A good package manager will also handle configuration files specially, preserving the user's settings when a package is reinstalled or upgraded. Before you begin to wonder, no, this section will neither describe nor recommend any particular package manager. What it does provide is a roundup of the more popular techniques and how they work. The perfect package manager for you may be among these techniques, or it may be a combination of two or more of them. This section also briefly mentions issues that may arise when upgrading packages.
Some reasons why no package manager is mentioned in LFS or BLFS include:
Dealing with package management takes the focus away from the goals of these books—teaching how a Linux system is built.
There are multiple solutions for package management, each having its strengths and drawbacks. Finding one solution that satisfies all audiences is difficult.
There are some hints written on the topic of package management. Visit the Hints Project and see if one of them fits your needs.
A Package Manager makes it easy to upgrade to newer versions when they are released. Generally the instructions in the LFS and BLFS books can be used to upgrade to the newer versions. Here are some points that you should be aware of when upgrading packages, especially on a running system.
If the Linux kernel needs to be upgraded (for example, from 5.10.17 to 5.10.18 or 5.11.1), nothing else needs to be rebuilt. The system will keep working fine thanks to the well-defined interface between the kernel and userspace. Specifically, Linux API headers need not be (and should not be, see the next item) upgraded along with the kernel. You will merely need to reboot your system to use the upgraded kernel.
If the Linux API headers or Glibc need to be upgraded to a newer version, (e.g., from Glibc-2.31 to Glibc-2.32), it is safer to rebuild LFS. Though you may be able to rebuild all the packages in their dependency order, we do not recommend it.
If a package containing a shared library is updated, and if the name of the library changes, then any packages dynamically linked to the library must be recompiled to link against the newer library. (Note that there is no correlation between the package version and the name of the library.) For example, consider a package foo-1.2.3 that installs a shared library with the name libfoo.so.1. Suppose you upgrade the package to a newer version foo-1.2.4 that installs a shared library with the name libfoo.so.2. In this case, any packages that are dynamically linked to libfoo.so.1 need to be recompiled to link against libfoo.so.2 in order to use the new library version. You should not remove the old libraries until all the dependent packages have been recompiled.
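One rough way to find the dependent packages is to scan the installed binaries and libraries for references to the old library name. The loop below is a minimal sketch, not part of the book's instructions; libfoo.so.1 and the directories searched are illustrative and should be adjusted for your system.

# Minimal sketch: list installed executables and libraries that are still
# dynamically linked against the old library name (libfoo.so.1 is a placeholder).
for f in /usr/bin/* /usr/lib/*.so; do
    ldd "$f" 2>/dev/null | grep -q 'libfoo\.so\.1' && echo "$f"
done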
If a package containing a shared library is updated, and the name of the library doesn't change, but the version number of the library file decreases (for example, the library is still named libfoo.so.1, but the name of the library file is changed from libfoo.so.1.25 to libfoo.so.1.24), you should remove the library file from the previously installed version (libfoo.so.1.25 in this case). Otherwise, an ldconfig command (invoked by yourself from the command line, or by the installation of some package) will reset the symlink libfoo.so.1 to point to the old library file because it seems to be a “newer” version; its version number is larger. This situation may arise if you have to downgrade a package, or if the authors change the versioning scheme for library files.
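For example, assuming the stale file left behind by the previous installation is /usr/lib/libfoo.so.1.25 (an illustrative name), removing it and re-running ldconfig restores the symlink to the current library file:

# Remove the stale, higher-numbered file from the old version, then let
# ldconfig point the libfoo.so.1 symlink back at the current file.
rm -v /usr/lib/libfoo.so.1.25
ldconfig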
If a package containing a shared library is updated, and the name of the library doesn't change, but a severe issue (especially, a security vulnerability) is fixed, all running programs linked to the shared library should be restarted. The following command, run as root after the update is complete, will list which processes are using the old versions of those libraries (replace libfoo with the name of the library):
grep -l -e 'libfoo.*deleted' /proc/*/maps | tr -cd 0-9\\n | xargs -r ps u
If OpenSSH is being used to access the system and it is linked to the updated library, you must restart the sshd service, then log out, log in again, and rerun the preceding ps command to confirm that nothing is still using the deleted libraries.
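As a minimal sketch, assuming the sysvinit bootscript installed by the BLFS bootscripts package (adjust the command for your init system; on a systemd-based LFS it would be systemctl restart sshd):

# Restart sshd so that new sessions use the updated library; existing
# sessions keep running against the old, deleted copy until they end.
/etc/rc.d/init.d/sshd restart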
If an executable program or a shared library is overwritten, the processes using the code or data in that program or library may crash. The correct way to update a program or a shared library without causing the process to crash is to remove it first, then install the new version. The install command provided by Coreutils already implements this, and most packages use that command to install binary files and libraries. This means that you won't be troubled by this issue most of the time. However, the install process of some packages (notably Mozilla JS in BLFS) just overwrites the file if it exists; this causes a crash. So it's safer to save your work and close unneeded running processes before updating a package.
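The difference can be illustrated with a hedged sketch; libfoo.so.1.2.3 is a placeholder name:

# Unsafe: overwriting the file in place modifies the data that running
# processes have mapped, and may crash them.
# cp libfoo.so.1.2.3 /usr/lib/libfoo.so.1.2.3

# Safe: remove (unlink) the old file first, then create the new one, so
# running processes keep their mapping of the old inode. This is what
# install(1) from Coreutils does.
rm -f /usr/lib/libfoo.so.1.2.3
cp -v libfoo.so.1.2.3 /usr/lib/
# Equivalent one-step form:
# install -v -m755 libfoo.so.1.2.3 /usr/lib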
The following are some common package management techniques. Before making a decision on a package manager, do some research on the various techniques, particularly the drawbacks of each particular scheme.
Yes, this is a package management technique. Some folks do not need a package manager because they know the packages intimately and know which files are installed by each package. Some users also do not need any package management because they plan on rebuilding the entire system whenever a package is changed.
This is a simplistic package management technique that does not need a special program to manage the packages. Each package is installed in a separate directory. For example, package foo-1.1 is installed in /usr/pkg/foo-1.1 and a symlink is made from /usr/pkg/foo to /usr/pkg/foo-1.1. When a new version foo-1.2 comes along, it is installed in /usr/pkg/foo-1.2 and the previous symlink is replaced by a symlink to the new version.
Environment variables such as PATH, LD_LIBRARY_PATH, MANPATH, INFOPATH, and CPPFLAGS need to be expanded to include /usr/pkg/foo. If you install more than a few packages, this scheme becomes unmanageable.
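A minimal sketch for the hypothetical foo-1.1 package might look like the following; the paths and the choice of environment variables are illustrative:

# Install the package into its own directory and point the generic
# symlink at the current version.
./configure --prefix=/usr/pkg/foo-1.1
make
make install
ln -sfn /usr/pkg/foo-1.1 /usr/pkg/foo

# Each affected search-path variable then has to include the package.
export PATH=/usr/pkg/foo/bin:$PATH
export MANPATH=/usr/pkg/foo/share/man:$MANPATH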
This is a variation of the previous package management technique. Each package is installed as in the previous scheme. But instead of making the symlink via a generic package name, each file is symlinked into the /usr hierarchy. This removes the need to expand the environment variables. Though the symlinks can be created by the user, many package managers use this approach, and automate the creation of the symlinks. A few of the popular ones include Stow, Epkg, Graft, and Depot.

The installation script needs to be fooled, so the package thinks it is installed in /usr though in reality it is installed in the /usr/pkg hierarchy. Installing in this manner is not usually a trivial task. For example, suppose you are installing a package libfoo-1.1. The following instructions may not install the package properly:
./configure --prefix=/usr/pkg/libfoo/1.1
make
make install
The installation will work, but the dependent packages may not link to libfoo as you would expect. If you compile a package that links against libfoo, you may notice that it is linked to /usr/pkg/libfoo/1.1/lib/libfoo.so.1 instead of /usr/lib/libfoo.so.1 as you would expect. The correct approach is to use the DESTDIR variable to direct the installation. This approach works as follows:
./configure --prefix=/usr
make
make DESTDIR=/usr/pkg/libfoo/1.1 install
Most packages support this approach, but there are some which do not. For the non-compliant packages, you may either need to install the package manually, or you may find that it is easier to install some problematic packages into /opt.
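After the DESTDIR installation shown above, the staged files still have to be linked into the real hierarchy. As a rough sketch (Stow and the other tools listed earlier automate and manage this step), GNU cp can create the symlinks when it is given absolute source paths:

# Recursively symlink the staged files from the package tree into /usr;
# directories are created and regular files become symlinks.
cp -rsv /usr/pkg/libfoo/1.1/usr/* /usr/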
In this technique, a file is timestamped before the installation of the package. After the installation, a simple use of the find command with the appropriate options can generate a log of all the files installed after the timestamp file was created. A package manager that uses this approach is install-log.
Though this scheme has the advantage of being simple, it has two drawbacks. If, during installation, the files are installed with any timestamp other than the current time, those files will not be tracked by the package manager. Also, this scheme can only be used when packages are installed one at a time. The logs are not reliable if two packages are installed simultaneously from two different consoles.
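A bare-bones sketch of the idea, with illustrative file names:

# Create a reference timestamp, install the package, then log every file
# under /usr that is newer than the timestamp.
touch /tmp/timestamp-mark
make install
find /usr -newer /tmp/timestamp-mark -not -type d > /tmp/libfoo-1.1.list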
In this approach, the commands that the installation scripts perform are recorded. There are two techniques that one can use:
The LD_PRELOAD environment variable can be set to point to a library to be preloaded before installation. During installation, this library tracks the packages that are being installed by attaching itself to various executables such as cp, install, and mv, and tracking the system calls that modify the filesystem. For this approach to work, all the executables need to be dynamically linked without the suid or sgid bit. Preloading the library may cause some unwanted side-effects during installation. Therefore, it's a good idea to perform some tests to ensure that the package manager does not break anything, and that it logs all the appropriate files.
Another technique is to use strace, which logs all the system calls made during the execution of the installation scripts.
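A minimal strace-based sketch might look like the following; the list of traced system calls and the log file name are illustrative, and the resulting log still needs post-processing to extract the installed file names:

# Follow child processes and record filesystem-modifying system calls
# made while the package installs.
strace -f -o /tmp/libfoo-install.log \
       -e trace=open,openat,creat,rename,mkdir,unlink,symlink,link,chmod,chown \
       make install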
In this scheme, the package installation is faked into a separate tree as previously described in the symlink style package management section. After the installation, a package archive is created using the installed files. This archive is then used to install the package on the local machine or even on other machines.
This approach is used by most of the package managers found in the commercial distributions. Examples of package managers that follow this approach are RPM (which, incidentally, is required by the Linux Standard Base Specification), pkg-utils, Debian's apt, and Gentoo's Portage system. A hint describing how to adopt this style of package management for LFS systems is located at https://www.linuxfromscratch.org/hints/downloads/files/fakeroot.txt.
The creation of package files that include dependency information is complex, and beyond the scope of LFS.
Slackware uses a tar-based system for package archives. This system purposely does not handle package dependencies as more complex package managers do. For details of Slackware package management, see https://www.slackbook.org/html/package-management.html.
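In its simplest form the idea can be sketched with nothing more than DESTDIR and tar; the paths and archive name below are illustrative:

# Stage the package into a private tree, turn that tree into an archive,
# and later unpack the archive onto the root filesystem of this or
# another machine.
make DESTDIR=/tmp/libfoo-1.1-stage install
tar -cJpf /var/pkg-archives/libfoo-1.1.tar.xz -C /tmp/libfoo-1.1-stage .
tar -xJpf /var/pkg-archives/libfoo-1.1.tar.xz -C /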
This scheme, unique to LFS, was devised by Matthias Benkmann, and is available from the Hints Project. In this scheme, each package is installed as a separate user into the standard locations. Files belonging to a package are easily identified by checking the user ID. The features and shortcomings of this approach are too complex to describe in this section. For the details please see the hint at https://www.linuxfromscratch.org/hints/downloads/files/more_control_and_pkg_man.txt.
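The essence of the scheme can be sketched as follows; the details (group-writable install directories, wrappers, and so on) are covered in the hint, and the user and file names here are illustrative:

# Build and install the package as its own dedicated user, then list the
# package's files simply by asking for everything owned by that user.
useradd -m pkg-libfoo
su pkg-libfoo -c 'make install'
find /usr -user pkg-libfoo > /tmp/libfoo-1.1.list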
One of the advantages of an LFS system is that there are no files that depend on the position of files on a disk system. Cloning an LFS build to another computer with the same architecture as the base system is as simple as using tar on the LFS partition that contains the root directory (about 900MB uncompressed for a basic LFS build), copying that file via network transfer or CD-ROM / USB stick to the new system, and expanding it. After that, a few configuration files will have to be changed. Configuration files that may need to be updated include: /etc/hosts, /etc/fstab, /etc/passwd, /etc/group, /etc/shadow, /etc/ld.so.conf, /etc/sysconfig/rc.site, /etc/sysconfig/network, and /etc/sysconfig/ifconfig.eth0.
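As a sketch, assuming the original LFS root is mounted at /mnt/lfs and the partition for the new system at /mnt/newlfs (both mount points are illustrative):

# Archive the complete LFS root, preserving permissions and ownership,
# then unpack it onto the new system's partition.
tar -cJpf /tmp/lfs-clone.tar.xz -C /mnt/lfs .
tar -xJpf /tmp/lfs-clone.tar.xz -C /mnt/newlfs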
A custom kernel may be needed for the new system, depending on differences in system hardware and the original kernel configuration.
There have been some reports of issues when copying between similar but not identical architectures. For instance, the instruction set for an Intel system is not identical with the AMD processor's instructions, and later versions of some processors may provide instructions that are unavailable with earlier versions.
Finally, the new system has to be made bootable via Section 10.4, “Using GRUB to Set Up the Boot Process”.