Copyright © 1999-2024 Gerard Beekmans
All rights reserved.
This book is licensed under a Creative Commons License.
Computer instructions may be extracted from the book under the MIT License.
Linux® is a registered trademark of Linus Torvalds.
My journey to learn and better understand Linux began back in 1998. I had just installed my first Linux distribution and had quickly become intrigued with the whole concept and philosophy behind Linux.
There are always many ways to accomplish a single task. The same can be said about Linux distributions. A great many have existed over the years. Some still exist, some have morphed into something else, yet others have been relegated to our memories. They all do things differently to suit the needs of their target audience. Because so many different ways to accomplish the same end goal exist, I began to realize I no longer had to be limited by any one implementation. Prior to discovering Linux, we simply put up with issues in other Operating Systems, as there was no choice. It was what it was, whether you liked it or not. With Linux, the concept of choice began to emerge. If you didn't like something, you were free, even encouraged, to change it.
I tried a number of distributions and could not decide on any one. They were great systems in their own right. It wasn't a matter of right and wrong anymore. It had become a matter of personal taste. With all that choice available, it became apparent that there would not be a single system that would be perfect for me. So I set out to create my own Linux system that would fully conform to my personal preferences.
To truly make it my own system, I resolved to compile everything from source code instead of using pre-compiled binary packages. This “perfect” Linux system would have the strengths of various systems without their perceived weaknesses. At first, the idea was rather daunting, but I remained committed to the idea that such a system could be built.
After sorting through issues such as circular dependencies and compile-time errors, I finally built a custom-built Linux system. It was fully operational and perfectly usable like any of the other Linux systems out there at the time. But it was my own creation. It was very satisfying to have put together such a system myself. The only thing better would have been to create each piece of software myself. This was the next best thing.
As I shared my goals and experiences with other members of the Linux community, it became apparent that there was a sustained interest in these ideas. It quickly became plain that such custom-built Linux systems serve not only to meet user specific requirements, but also serve as an ideal learning opportunity for programmers and system administrators to enhance their (existing) Linux skills. Out of this broadened interest, the Linux From Scratch Project was born.
This Linux From Scratch book is the central core of that project. It provides the background and instructions necessary for you to design and build your own system. While this book provides a template that will result in a correctly working system, you are free to alter the instructions to suit yourself; that freedom is, in part, what this project is about. You remain in control; we just lend a helping hand to get you started on your own journey.
I sincerely hope you will have a great time working on your own Linux From Scratch system and enjoy the numerous benefits of having a system that is truly your own.
--
Gerard Beekmans
gerard AT linuxfromscratch D0T org
There are many reasons why you would want to read this book. One of the questions many people raise is, “why go through all the hassle of manually building a Linux system from scratch when you can just download and install an existing one?”
One important reason for this project's existence is to help you learn how a Linux system works from the inside out. Building an LFS system helps demonstrate what makes Linux tick, and how things work together and depend on each other. One of the best things this learning experience can provide is the ability to customize a Linux system to suit your own unique needs.
Another key benefit of LFS is that it gives you control of the system without relying on someone else's Linux implementation. With LFS, you are in the driver's seat. You dictate every aspect of your system.
LFS allows you to create very compact Linux systems. With other distributions you are often forced to install a great many programs you neither use nor understand. These programs waste resources. You may argue that with today's hard drives and CPUs, wasted resources are no longer a consideration. Sometimes, however, you are still constrained by the system's size, if nothing else. Think about bootable CDs, USB sticks, and embedded systems. Those are areas where LFS can be beneficial.
Another advantage of a custom built Linux system is security. By compiling the entire system from source code, you are empowered to audit everything and apply all the security patches you want. You don't have to wait for somebody else to compile binary packages that fix a security hole. Unless you examine the patch and implement it yourself, you have no guarantee that the new binary package was built correctly and adequately fixes the problem.
The goal of Linux From Scratch is to build a complete and usable foundation-level system. If you do not wish to build your own Linux system from scratch, you may nevertheless benefit from the information in this book.
There are too many good reasons to build your own LFS system to list them all here. In the end, education is by far the most important reason. As you continue your LFS experience, you will discover the power that information and knowledge can bring.
The primary target architectures of LFS are the AMD/Intel x86 (32-bit) and x86_64 (64-bit) CPUs. That said, the instructions in this book are also known to work, with some modifications, with the PowerPC and ARM CPUs. To build a system that utilizes one of these alternative CPUs, the main prerequisite, in addition to those on the next page, is an existing Linux system such as an earlier LFS installation, Ubuntu, Red Hat/Fedora, SuSE, or some other distribution that targets that architecture. (Note that a 32-bit distribution can be installed and used as a host system on a 64-bit AMD/Intel computer.)
The gain from building on a 64-bit system, as compared to a 32-bit system, is minimal. For example, in a test build of LFS-9.1 on a Core i7-4790 CPU based system, using 4 cores, the following statistics were measured:
Architecture   Build Time      Build Size
32-bit         239.9 minutes   3.6 GB
64-bit         233.2 minutes   4.4 GB
As you can see, on the same hardware, the 64-bit build is only 3% faster (and 22% larger) than the 32-bit build. If you plan to use LFS as a LAMP server, or a firewall, a 32-bit CPU may be good enough. On the other hand, several packages in BLFS now need more than 4 GB of RAM to be built and/or to run; if you plan to use LFS as a desktop, the LFS authors recommend building a 64-bit system.
The default 64-bit build that results from LFS is a “pure” 64-bit system. That is, it supports 64-bit executables only. Building a “multi-lib” system requires compiling many applications twice, once for a 32-bit system and once for a 64-bit system. This is not directly supported in LFS because it would interfere with the educational objective of providing the minimal instructions needed for a basic Linux system. Some of the LFS/BLFS editors maintain a multilib fork of LFS, accessible at https://www.linuxfromscratch.org/~thomas/multilib/index.html. But that's an advanced topic.
Building an LFS system is not a simple task. It requires a certain level of existing knowledge of Unix system administration in order to resolve problems and correctly execute the commands listed. In particular, as an absolute minimum, you should already know how to use the command line (shell) to copy or move files and directories, list directory and file contents, and change the current directory. It is also expected that you know how to use and install Linux software.
Because the LFS book assumes at least this basic level of skill, the various LFS support forums are unlikely to provide you with much assistance in these areas. You will find that your questions regarding such basic knowledge will likely go unanswered (or you will simply be referred to the LFS essential pre-reading list).
Before building an LFS system, we urge you to read these articles:
Software-Building-HOWTO https://tldp.org/HOWTO/Software-Building-HOWTO.html
This is a comprehensive guide to building and installing “generic” Unix software packages under Linux. Although it was written some time ago, it still provides a good summary of the basic techniques used to build and install software.
Beginner's Guide to Installing from Source https://moi.vonos.net/linux/beginners-installing-from-source/
This guide provides a good summary of the basic skills and techniques needed to build software from source code.
The structure of LFS follows Linux standards as closely as possible. The primary standards are:
Linux Standard Base (LSB) Version 5.0 (2015)
The LSB has four separate specifications: Core, Desktop, Languages, and Imaging. Some parts of Core and Desktop specifications are architecture specific. There are also two trial specifications: Gtk3 and Graphics. LFS attempts to conform to the LSB specifications for the IA32 (32-bit x86) or AMD64 (x86_64) architectures discussed in the previous section.
Many people do not agree with these requirements. The main purpose of the LSB is to ensure that proprietary software can be installed and run on a compliant system. Since LFS is source based, the user has complete control over what packages are desired; you may choose not to install some packages that are specified by the LSB.
While it is possible to create a complete system that will pass the LSB certification tests “from scratch,” this can't be done without many additional packages that are beyond the scope of the LFS book. Installation instructions for some of these additional packages can be found in BLFS.
Packages provided by LFS:
LSB Core: Bash, Bc, Binutils, Coreutils, Diffutils, File, Findutils, Gawk, GCC, Gettext, Glibc, Grep, Gzip, M4, Man-DB, Procps, Psmisc, Sed, Shadow, SysVinit, Tar, Util-linux, Zlib
LSB Desktop: None
LSB Languages: Perl
LSB Imaging: None
LSB Gtk3 and LSB Graphics (Trial Use): None

Packages provided by BLFS:
LSB Core: At, Batch (a part of At), BLFS Bash Startup Files, Cpio, Ed, Fcrontab, LSB-Tools, NSPR, NSS, Linux-PAM, Pax, Sendmail (or Postfix or Exim), Time
LSB Desktop: Alsa, ATK, Cairo, Desktop-file-utils, Freetype, Fontconfig, Gdk-pixbuf, Glib2, GLU, Icon-naming-utils, Libjpeg-turbo, Libxml2, Mesa, Pango, Xdg-utils, Xorg
LSB Languages: Libxml2, Libxslt
LSB Imaging: CUPS, Cups-filters, Ghostscript, SANE
LSB Gtk3 and LSB Graphics (Trial Use): GTK+3

Packages not provided by LFS or BLFS:
LSB Core: install_initd,
LSB Desktop:
LSB Languages: /usr/bin/python (LSB requires Python2 but LFS and BLFS only provide Python3)
LSB Imaging: None
LSB Gtk3 and LSB Graphics (Trial Use):
The goal of LFS is to build a complete and usable foundation-level system—including all the packages needed to replicate itself—and to provide a relatively minimal base from which to customize a more complete system based on the user's choices. This does not mean that LFS is the smallest system possible. Several important packages are included that are not, strictly speaking, required. The list below documents the reasons each package in the book has been included.
Acl
This package contains utilities to administer Access Control Lists, which are used to define fine-grained discretionary access rights for files and directories.
Attr
This package contains programs for managing extended attributes on file system objects.
Autoconf
This package supplies programs for producing shell scripts that can automatically configure source code from a developer's template. It is often needed to rebuild a package after the build procedure has been updated.
Automake
This package contains programs for generating Makefiles from a template. It is often needed to rebuild a package after the build procedure has been updated.
Bash
This package satisfies an LSB core requirement to provide a Bourne Shell interface to the system. It was chosen over other shell packages because of its common usage and extensive capabilities.
Bc
This package provides an arbitrary precision numeric processing language. It satisfies a requirement for building the Linux kernel.
Binutils
This package supplies a linker, an assembler, and other tools for handling object files. The programs in this package are needed to compile most of the packages in an LFS system.
Bison
This package contains the GNU version of yacc (Yet Another Compiler Compiler) needed to build several of the LFS programs.
Bzip2
This package contains programs for compressing and decompressing files. It is required to decompress many LFS packages.
Check
This package provides a test harness for other programs.
Coreutils
This package contains a number of essential programs for viewing and manipulating files and directories. These programs are needed for command line file management, and are necessary for the installation procedures of every package in LFS.
DejaGNU
This package supplies a framework for testing other programs.
Diffutils
This package contains programs that show the differences between files or directories. These programs can be used to create patches, and are also used in many packages' build procedures.
E2fsprogs
This package supplies utilities for handling the ext2, ext3 and ext4 file systems. These are the most common and thoroughly tested file systems that Linux supports.
Expat
This package yields a relatively small XML parsing library. It is required by the XML::Parser Perl module.
Expect
This package contains a program for carrying out scripted dialogues with other interactive programs. It is commonly used for testing other packages.
File
This package contains a utility for determining the type of a given file or files. A few packages need it in their build scripts.
Findutils
This package provides programs to find files in a file system. It is used in many packages' build scripts.
Flex
This package contains a utility for generating programs that recognize patterns in text. It is the GNU version of the lex (lexical analyzer) program. It is required to build several LFS packages.
Gawk
This package supplies programs for manipulating text files. It is the GNU version of awk (Aho-Weinberg-Kernighan). It is used in many other packages' build scripts.
GCC
This is the GNU Compiler Collection. It contains the C and C++ compilers as well as several others not built by LFS.
GDBM
This package contains the GNU Database Manager library. It is used by one other LFS package, Man-DB.
Gettext
This package provides utilities and libraries for the internationalization and localization of many packages.
Glibc
This package contains the main C library. Linux programs will not run without it.
GMP
This package supplies math libraries that provide useful functions for arbitrary precision arithmetic. It is needed to build GCC.
Gperf
This package produces a program that generates a perfect hash function from a set of keys. It is required by Udev.
Grep
This package contains programs for searching through files. These programs are used by most packages' build scripts.
Groff
This package contributes programs for processing and formatting text. One important function of these programs is to format man pages.
GRUB
This is the Grand Unified Boot Loader. It is the most flexible of several boot loaders available.
Gzip
This package contains programs for compressing and decompressing files. It is needed to decompress many packages in LFS.
Iana-etc
This package provides data for network services and protocols. It is needed to enable proper networking capabilities.
Inetutils
This package supplies programs for basic network administration.
Intltool
This package contributes tools for extracting translatable strings from source files.
IProute2
This package contains programs for basic and advanced IPv4 and IPv6 networking. It was chosen over the other common network tools package (net-tools) for its IPv6 capabilities.
Kbd
This package produces key-table files, keyboard utilities for non-US keyboards, and a number of console fonts.
Kmod
This package supplies programs needed to administer Linux kernel modules.
Less
This package contains a very nice text file viewer that allows scrolling up or down when viewing a file. Many packages use it for paging the output.
Libcap
This package implements the userspace interfaces to the POSIX 1003.1e capabilities available in Linux kernels.
Libelf
The elfutils project provides libraries and tools for ELF files and DWARF data. Most utilities in this package are available in other packages, but the library is needed to build the Linux kernel using the default (and most efficient) configuration.
Libffi
This package implements a portable, high level programming interface to various calling conventions. Some programs may not know at the time of compilation what arguments are to be passed to a function. For instance, an interpreter may be told at run-time about the number and types of arguments used to call a given function. Libffi can be used in such programs to provide a bridge from the interpreter program to compiled code.
Libpipeline
The Libpipeline package supplies a library for manipulating pipelines of subprocesses in a flexible and convenient way. It is required by the Man-DB package.
Libtool
This package contains the GNU generic library support script. It wraps the complexity of using shared libraries into a consistent, portable interface. It is needed by the test suites in other LFS packages.
Libxcrypt
This package provides the libcrypt library needed by various packages (notably, Shadow) for hashing passwords. It replaces the obsolete libcrypt implementation in Glibc.
Linux Kernel
This package is the Operating System. It is the Linux in the GNU/Linux environment.
M4
This package provides a general text macro processor useful as a build tool for other programs.
Make
This package contains a program for directing the building of packages. It is required by almost every package in LFS.
Man-DB
This package contains programs for finding and viewing man pages. It was chosen instead of the man package because of its superior internationalization capabilities. It supplies the man program.
Man-pages
This package provides the actual contents of the basic Linux man pages.
Meson
This package provides a software tool for automating the building of software. The main goal of Meson is to minimize the amount of time that software developers need to spend configuring a build system. It's required to build Systemd, as well as many BLFS packages.
MPC
This package supplies arithmetic functions for complex numbers. It is required by GCC.
MPFR
This package contains functions for multiple precision arithmetic. It is required by GCC.
Ninja
This package furnishes a small build system with a focus on speed. It is designed to have its input files generated by a higher-level build system, and to run builds as fast as possible. This package is required by Meson.
Ncurses
This package contains libraries for terminal-independent handling of character screens. It is often used to provide cursor control for a menuing system. It is needed by a number of the packages in LFS.
Openssl
This package provides management tools and libraries relating to cryptography. These supply cryptographic functions to other packages, including the Linux kernel.
Patch
This package contains a program for modifying or creating files by applying a patch file typically created by the diff program. It is needed by the build procedure for several LFS packages.
Perl
This package is an interpreter for the runtime language Perl. It is needed for the installation and test suites of several LFS packages.
Pkgconf
This package contains a program which helps to configure compiler and linker flags for development libraries. The program can be used as a drop-in replacement for pkg-config, which is needed by the build systems of many packages. It is more actively maintained, and slightly faster, than the original Pkg-config package.
Procps-NG
This package contains programs for monitoring processes. These programs are useful for system administration, and are also used by the LFS Bootscripts.
Psmisc
This package produces programs for displaying information about running processes. These programs are useful for system administration.
Python 3
This package provides an interpreted language that has a design philosophy emphasizing code readability.
Readline
This package is a set of libraries that offer command-line editing and history capabilities. It is used by Bash.
Sed
This package allows editing of text without opening it in a text editor. It is also needed by many LFS packages' configure scripts.
Shadow
This package contains programs for handling passwords securely.
Sysklogd
This package supplies programs for logging system messages, such as those emitted by the kernel or daemon processes when unusual events occur.
SysVinit
This package provides the init program, the parent of all the other processes on a running Linux system.
Udev
This package is a device manager. It dynamically controls the ownership, permissions, names, and symbolic links of device nodes in the /dev directory when devices are added to or removed from the system.
Tar
This package provides archiving and extraction capabilities for virtually all the packages used in LFS.
Tcl
This package contains the Tool Command Language used in many test suites.
Texinfo
This package supplies programs for reading, writing, and converting info pages. It is used in the installation procedures of many LFS packages.
Util-linux
This package contains miscellaneous utility programs. Among them are utilities for handling file systems, consoles, partitions, and messages.
Vim
This package provides an editor. It was chosen because of its compatibility with the classic vi editor and its huge number of powerful capabilities. An editor is a very personal choice for many users. Any other editor can be substituted, if you wish.
Wheel
This package supplies a Python module that is the reference implementation of the Python wheel packaging standard.
XML::Parser
This package is a Perl module that interfaces with Expat.
XZ Utils
This package contains programs for compressing and decompressing files. It provides the highest compression generally available and is useful for decompressing packages in XZ or LZMA format.
Zlib
This package contains compression and decompression routines used by some programs.
Zstd
This package supplies compression and decompression routines used by some programs. It provides high compression ratios and a very wide range of compression / speed trade-offs.
To make things easier to follow, there are a few typographical conventions used throughout this book. This section contains some examples of the typographical format found throughout Linux From Scratch.
./configure --prefix=/usr
This form of text is designed to be typed exactly as seen unless otherwise noted in the surrounding text. It is also used in the explanation sections to identify which of the commands is being referenced.
In some cases, a logical line is extended to two or more physical lines with a backslash at the end of the line.
CC="gcc -B/usr/bin/" ../binutils-2.18/configure \ --prefix=/tools --disable-nls --disable-werror
Note that the backslash must be followed by an immediate return. Other whitespace characters like spaces or tab characters will create incorrect results.
install-info: unknown option '--dir-file=/mnt/lfs/usr/info/dir'
This form of text (fixed-width text) shows screen output, usually as the result of commands issued. This format is also used to show filenames, such as /etc/ld.so.conf. Please configure your browser to display fixed-width text with a good monospaced font, with which you can distinguish the glyphs of Il1 or O0 clearly.
Emphasis
This form of text is used for several purposes in the book. Its main purpose is to emphasize important points or items.
https://www.linuxfromscratch.org/
This format is used for hyperlinks both within the LFS community and to external pages. It includes HOWTOs, download locations, and websites.
cat > $LFS/etc/group << "EOF"
root:x:0:
bin:x:1:
......
EOF
This format is used when creating configuration files. The first command tells the system to create the file $LFS/etc/group from whatever is typed on the following lines until the sequence End Of File (EOF) is encountered. Therefore, this entire section is generally typed as seen.
<REPLACED TEXT>
This format is used to encapsulate text that is not to be typed as seen or for copy-and-paste operations.
[OPTIONAL TEXT]
This format is used to encapsulate text that is optional.
This format is used to refer to a specific manual (man) page. The number inside parentheses indicates a specific section inside the manuals. For example, passwd has two man pages. Per LFS installation instructions, those two man pages will be located at /usr/share/man/man1/passwd.1 and /usr/share/man/man5/passwd.5. When the book uses passwd(5) it is specifically referring to /usr/share/man/man5/passwd.5. man passwd will print the first man page it finds that matches “passwd,” which will be /usr/share/man/man1/passwd.1. For this example, you will need to run man 5 passwd in order to read the page being specified.
Note that most man pages do not have duplicate page names in different sections. Therefore, man <program name> is generally sufficient. In the LFS book these references to man pages are also hyperlinks, so clicking on such a reference will open the man page rendered in HTML from the Arch Linux manual pages.
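For instance, using the two passwd pages mentioned above:

man passwd
man 5 passwd

The first command displays passwd(1), the first match found; the second explicitly selects passwd(5).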
This book is divided into the following parts.
Part I explains a few important notes on how to proceed with the LFS installation. This section also provides meta-information about the book.
Part II describes how to prepare for the building process—making a partition, downloading the packages, and compiling temporary tools.
Part III provides instructions for building the tools needed for constructing the final LFS system.
Part IV guides the reader through the building of the LFS system—compiling and installing all the packages one by one, setting up the boot scripts, and installing the kernel. The resulting Linux system is the foundation on which other software can be built to expand the system as desired. At the end of this book, there is an easy to use reference listing all of the programs, libraries, and important files that have been installed.
Part V provides information about the book itself including acronyms and terms, acknowledgments, package dependencies, a listing of LFS boot scripts, licenses for the distribution of the book, and a comprehensive index of packages, programs, libraries, and scripts.
The software used to create an LFS system is constantly being updated and enhanced. Security warnings and bug fixes may become available after the LFS book has been released. To check whether the package versions or instructions in this release of LFS need any modifications—to repair security vulnerabilities or to fix other bugs—please visit https://www.linuxfromscratch.org/lfs/errata/development/ before proceeding with your build. You should note any changes shown and apply them to the relevant sections of the book as you build the LFS system.
In addition, the Linux From Scratch editors maintain a list of security vulnerabilities discovered after a book has been released. To read the list, please visit https://www.linuxfromscratch.org/lfs/advisories/ before proceeding with your build. You should apply the changes suggested by the advisories to the relevant sections of the book as you build the LFS system. And, if you will use the LFS system as a real desktop or server system, you should continue to consult the advisories and fix any security vulnerabilities, even when the LFS system has been completely constructed.
The LFS system will be built by using an already installed Linux distribution (such as Debian, OpenMandriva, Fedora, or openSUSE). This existing Linux system (the host) will be used as a starting point to provide necessary programs, including a compiler, linker, and shell, to build the new system. Select the “development” option during the distribution installation to include these tools.
There are many ways to install a Linux distribution and the defaults are usually not optimal for building an LFS system. For suggestions on setting up a commercial distribution see: https://www.linuxfromscratch.org/hints/downloads/files/partitioning-for-lfs.txt.
As an alternative to installing a separate distribution on your machine, you may wish to use a LiveCD from a commercial distribution.
Chapter 2 of this book describes how to create a new Linux native partition and file system, where the new LFS system will be compiled and installed. Chapter 3 explains which packages and patches must be downloaded to build an LFS system, and how to store them on the new file system. Chapter 4 discusses the setup of an appropriate working environment. Please read Chapter 4 carefully as it explains several important issues you should be aware of before you begin to work your way through Chapter 5 and beyond.
Chapter 5 explains the installation of the initial tool chain (Binutils, GCC, and Glibc) using cross-compilation techniques to isolate the new tools from the host system.
Chapter 6 shows you how to cross-compile basic utilities using the just built cross-toolchain.
Chapter 7 then enters a "chroot" environment, where we use the new tools to build all the rest of the tools needed to create the LFS system.
This effort to isolate the new system from the host distribution may seem excessive. A full technical explanation as to why this is done is provided in Toolchain Technical Notes.
In Chapter 8 the full-blown LFS system is built. Another advantage provided by the chroot environment is that it allows you to continue using the host system while LFS is being built. While waiting for package compilations to complete, you can continue using your computer as usual.
To finish the installation, the basic system configuration is set up in Chapter 9, and the kernel and boot loader are created in Chapter 10. Chapter 11 contains information on continuing the LFS experience beyond this book. After the steps in this chapter have been implemented, the computer is ready to boot into the new LFS system.
This is the process in a nutshell. Detailed information on each step is presented in the following chapters. Items that seem complicated now will be clarified, and everything will fall into place as you commence your LFS adventure.
Here is a list of the packages updated since the previous release of LFS.
Upgraded to:
Bash-5.2.37
Bc-7.0.3
Expat-2.6.4
File-5.46
Flit-core-3.10.1
Gawk-5.3.1
Gettext-0.23
Iana-Etc-20241206
IPRoute2-6.12.0
Kbd-2.7
Less-668
Libcap-2.73
Libelf from Elfutils-0.192
Libpipeline-1.5.8
Libtool-2.5.4
Linux-6.12.5
Man-DB-2.13.0
MarkupSafe-3.0.2
Meson-1.6.0
OpenSSL-3.4.0
Python-3.13.1
Setuptools-75.6.0
Sysklogd-2.6.2
Systemd-257
SysVinit-3.11
Tcl-8.6.15
Texinfo-7.1.1
Tzdata-2024b
Udev from Systemd-257
Vim-9.1.0927
Wheel-0.45.1
Xz-5.6.3
Added:
binutils-2.43.1-upstream_fix-1.patch
Removed:
This is version r12.2-59 of the Linux From Scratch book, dated December 20th, 2024. If this book is more than six months old, a newer and better version is probably already available. To find out, please check one of the mirrors via https://www.linuxfromscratch.org/mirrors.html.
Below is a list of changes made since the previous release of the book.
Changelog Entries:
2024-12-15
[bdubbs] - Update to vim-9.1.0927. Addresses #4500.
[bdubbs] - Update to iana-etc-20241206. Addresses #5006.
[bdubbs] - Update to systemd-257. Fixes #5559.
[bdubbs] - Update to Python-3.13.1. Fixes #5605.
[bdubbs] - Update to libcap-2.73. Fixes #5604.
[bdubbs] - Update to linux-6.12.5. Fixes #5607.
[bdubbs] - Update to kbd-2.7. Fixes #5608.
[bdubbs] - Update to gettext-0.23. Fixes #5603.
2024-12-01
[bdubbs] - Update to iana-etc-20241122. Addresses #5006.
[bdubbs] - Update to file-5.46. Fixes #5601.
[bdubbs] - Update to iproute2-6.12.0. Fixes #5597.
[bdubbs] - Update to libtool-2.5.4. Fixes #5598.
[bdubbs] - Update to linux-6.12.1. Fixes #5586.
[bdubbs] - Update to setuptools-75.6.0 (Python Module). Fixes #5599.
[bdubbs] - Update to wheel-0.45.1 (Python Module). Fixes #5600.
2024-11-15
[bdubbs] - Update to vim-9.1.0866. Addresses #4500.
[bdubbs] - Update to iana-etc-20241024. Addresses #5006.
[bdubbs] - Update to wheel-0.45.0 (Python Module). Fixes #5593.
[bdubbs] - Update to setuptools-75.5.0 (Python Module). Fixes #5595.
[bdubbs] - Update to linux-6.11.8. Fixes #5582.
[bdubbs] - Update to libcap-2.72. Fixes #5594.
2024-11-08
2024-10-25
[bdubbs] - Update to iana-etc-20241015. Addresses #5006.
[bdubbs] - Update to vim-9.1.0813. Addresses #4500.
[bdubbs] - Update to xz-5.6.3. Fixes #5572.
[bdubbs] - Update to sysvinit-3.11. Fixes #5581.
[bdubbs] - Update to setuptools-75.2.0. Fixes #5577.
[bdubbs] - Update to Python3-3.13.0. Fixes #5575.
[bdubbs] - Update to openssl-3.4.0. Fixes #5582.
[bdubbs] - Update to meson-1.6.0. Fixes #5580.
[bdubbs] - Update to markupsafe-3.0.2. Fixes #5576.
[bdubbs] - Update to linux-6.11.5. Fixes #5574.
[bdubbs] - Update to less-668. Fixes #5578.
[bdubbs] - Update to elfutils-0.192. Fixes #5579.
2024-10-03
[bdubbs] - Revert back to tcl8.6.15.
2024-10-01
[bdubbs] - Update to Python3-3.12.7. Fixes #5571.
[bdubbs] - Update to tcl9.0.0. Fixes #5570.
[bdubbs] - Update to linux-6.11.1. Fixes #5556.
[bdubbs] - Update to libtool-2.5.3. Fixes #5569.
[bdubbs] - Update to iproute2-6.11.0. Fixes #5561.
[bdubbs] - Update to bash-5.2.37. Fixes #5567.
[bdubbs] - Update to bc-7.0.3. Fixes #5568.
2024-09-20
[bdubbs] - Update to vim-9.1.0738. Addresses #4500.
[bdubbs] - Update to texinfo-7.1.1. Fixes #5558.
[bdubbs] - Update to tcl8.6.15. Fixes #5562.
[bdubbs] - Update to sysklogd-2.6.2. Fixes #5557.
[bdubbs] - Update to setuptools-75.1.0. Fixes #5560.
[bdubbs] - Update to meson-1.5.2. Fixes #5566.
[bdubbs] - Update to iana-etc-20240912. Addresses #5006.
[bdubbs] - Update to gawk-5.3.1. Fixes #5564.
[bdubbs] - Update to bc-7.0.2. Fixes #5563.
2024-09-07
[bdubbs] - Update to tzdata-2024b. Fixes #5554.
[bdubbs] - Update to systemd-256.5. Fixes #5551.
[bdubbs] - Update to setuptools-74.1.2. Fixes #5546.
[bdubbs] - Update to python3-3.12.6. Fixes #5555.
[bdubbs] - Update to openssl-3.3.2. Fixes #5552.
[bdubbs] - Update to man-db-2.13.0. Fixes #5550.
[bdubbs] - Update to linux-6.10.8. Fixes #5545.
[bdubbs] - Update to libpipeline-1.5.8. Fixes #5548.
[bdubbs] - Update to expat-2.6.3. Fixes #5553.
[bdubbs] - Update to bc-7.0.1. Fixes #5547.
2024-09-01
[bdubbs] - LFS-12.2 released.
If during the building of the LFS system you encounter any errors, have any questions, or think there is a typo in the book, please start by consulting the list of Frequently Asked Questions (FAQ), located at https://www.linuxfromscratch.org/faq/.
The linuxfromscratch.org server hosts a number of mailing lists used for the development of the LFS project. These lists include the main development and support lists, among others. If you cannot find an answer to your problem on the FAQ page, the next step would be to search the mailing lists at https://www.linuxfromscratch.org/search.html.
For information on the different lists, how to subscribe, archive locations, and additional information, visit https://www.linuxfromscratch.org/mail.html.
Several members of the LFS community offer assistance via Internet Relay Chat (IRC). Before using this support, please make sure your question is not already answered in the LFS FAQ or the mailing list archives. You can find the IRC network at irc.libera.chat. The support channel is named #lfs-support.
The LFS project has a number of world-wide mirrors to make accessing the website and downloading the required packages more convenient. Please visit the LFS website at https://www.linuxfromscratch.org/mirrors.html for a list of current mirrors.
If you hit an issue building a package with the LFS instructions, we strongly discourage posting the issue directly to the upstream support channel before discussing it on an LFS support channel listed in Section 1.4, “Resources.” Doing so is often quite inefficient because the upstream maintainers are rarely familiar with the LFS building procedure. Even if you really have hit an upstream issue, the LFS community can still help isolate the information wanted by the upstream maintainers and make a proper report.
If you must ask a question directly via an upstream support channel, be aware that many upstream projects keep their support channels separate from their bug tracker; for those projects, “bug” reports that merely ask questions are considered invalid and may annoy the upstream developers.
If an issue or a question is encountered while working through this book, please check the FAQ page at https://www.linuxfromscratch.org/faq/#generalfaq. Questions are often already answered there. If your question is not answered on that page, try to find the source of the problem. The following hint will give you some guidance for troubleshooting: https://www.linuxfromscratch.org/hints/downloads/files/errors.txt.
If you cannot find your problem listed in the FAQ, search the mailing lists at https://www.linuxfromscratch.org/search.html.
We also have a wonderful LFS community that is willing to offer assistance through the mailing lists and IRC (see the Section 1.4, “Resources” section of this book). However, we get several support questions every day, and many of them could have been easily answered by going to the FAQ or by searching the mailing lists first. So, for us to offer the best assistance possible, you should first do some research on your own. That allows us to focus on the more unusual support needs. If your searches do not produce a solution, please include all the relevant information (mentioned below) in your request for help.
Apart from a brief explanation of the problem being experienced, any request for help should include these essential things:
The version of the book being used (in this case r12.2-59)
The host distribution and version being used to create LFS
The output from the Host System Requirements script
The package or section the problem was encountered in
The exact error message, or a clear description of the problem
Note whether you have deviated from the book at all
Deviating from this book does not mean that we will not help you. After all, LFS is about personal preference. Being up-front about any changes to the established procedure helps us evaluate and determine possible causes of your problem.
If something goes wrong while running the configure script, review the config.log file. This file may contain errors encountered during configure which were not printed to the screen. Include the relevant lines if you need to ask for help.
Both the screen output and the contents of various files are useful in determining the cause of compilation problems. The screen output from the configure script and the make run can be helpful. It is not necessary to include the entire output, but do include all of the relevant information. Here is an example of the type of information to include from the make screen output.
gcc -D ALIASPATH=\"/mnt/lfs/usr/share/locale:.\"
-D LOCALEDIR=\"/mnt/lfs/usr/share/locale\"
-D LIBDIR=\"/mnt/lfs/usr/lib\"
-D INCLUDEDIR=\"/mnt/lfs/usr/include\" -D HAVE_CONFIG_H -I. -I.
-g -O2 -c getopt1.c
gcc -g -O2 -static -o make ar.o arscan.o commands.o dir.o
expand.o file.o function.o getopt.o implicit.o job.o main.o
misc.o read.o remake.o rule.o signame.o variable.o vpath.o
default.o remote-stub.o version.o opt1.o -lutil
job.o: In function `load_too_high':
/lfs/tmp/make-3.79.1/job.c:1565: undefined reference
to `getloadavg'
collect2: ld returned 1 exit status
make[2]: *** [make] Error 1
make[2]: Leaving directory `/lfs/tmp/make-3.79.1'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/lfs/tmp/make-3.79.1'
make: *** [all-recursive-am] Error 2
In this case, many people would just include the bottom section:
make[2]: *** [make] Error 1
This is not enough information to diagnose the problem, because it only notes that something went wrong, not what went wrong. The entire section, as in the example above, is what should be saved because it includes the command that was executed and all the associated error messages.
An excellent article about asking for help on the Internet is available online at http://catb.org/~esr/faqs/smart-questions.html. Read this document, and follow the hints. Doing so will increase the likelihood of getting the help you need.
In this chapter, the host tools needed for building LFS are checked and, if necessary, installed. Then a partition which will host the LFS system is prepared. We will create the partition itself, create a file system on it, and mount it.
The LFS editors recommend that the system CPU have at least four cores and that the system have at least 8 GB of memory. Older systems that do not meet these requirements will still work, but the time to build packages will be significantly longer than documented.
Your host system should have the following software with the minimum versions indicated. This should not be an issue for most modern Linux distributions. Also note that many distributions will place software headers into separate packages, often in the form of <package-name>-dev or <package-name>-devel. Be sure to install those if your distribution provides them.
Earlier versions of the listed software packages may work, but have not been tested.
Bash-3.2 (/bin/sh should be a symbolic or hard link to bash)
Binutils-2.13.1 (Versions greater than 2.43.1 are not recommended as they have not been tested)
Bison-2.7 (/usr/bin/yacc should be a link to bison or a small script that executes bison)
Coreutils-8.1
Diffutils-2.8.1
Findutils-4.2.31
Gawk-4.0.1 (/usr/bin/awk should be a link to gawk)
GCC-5.2 including the C++ compiler, g++ (Versions greater than 14.2.0 are not recommended as they have not been tested). C and C++ standard libraries (with headers) must also be present so the C++ compiler can build hosted programs
Grep-2.5.1a
Gzip-1.3.12
Linux Kernel-5.4
The reason for the kernel version requirement is that we specify that version when building glibc in Chapter 5 and Chapter 8, so the workarounds for older kernels are not enabled and the compiled glibc is slightly faster and smaller. As of December 2024, 5.4 is the oldest kernel release still supported by the kernel developers. Some kernel releases older than 5.4 may still be supported by third-party teams, but they are not considered official upstream kernel releases; read https://kernel.org/category/releases.html for the details.
If the host kernel is earlier than 5.4 you will need to replace the kernel with a more up-to-date version. There are two ways you can go about this. First, see if your Linux vendor provides a 5.4 or later kernel package. If so, you may wish to install it. If your vendor doesn't offer an acceptable kernel package, or you would prefer not to install it, you can compile a kernel yourself. Instructions for compiling the kernel and configuring the boot loader (assuming the host uses GRUB) are located in Chapter 10.
We require the host kernel to support UNIX 98 pseudo terminal (PTY). It should be enabled on all desktop or server distros shipping Linux 5.4 or a newer kernel. If you are building a custom host kernel, ensure CONFIG_UNIX98_PTYS is set to y in the kernel configuration.
M4-1.4.10
Make-4.0
Patch-2.5.4
Perl-5.8.8
Python-3.4
Sed-4.1.5
Tar-1.22
Texinfo-5.0
Xz-5.0.0
Note that the symlinks mentioned above are required to build an LFS system using the instructions contained within this book. Symlinks that point to other software (such as dash, mawk, etc.) may work, but are not tested or supported by the LFS development team, and may require either deviation from the instructions or additional patches to some packages.
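To get a quick look at what these symlinks point to on your host before running the script below, you can list them directly. The ln command shown is only an illustration of how /bin/sh might be re-pointed at bash; whether and how to do this is distribution-specific, and it must be run as the root user.

ls -l /bin/sh /usr/bin/awk /usr/bin/yacc
# illustration only, distribution-specific, run as root:
# ln -sfv bash /bin/sh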
To see whether your host system has all the appropriate versions, and the ability to compile programs, run the following commands:
cat > version-check.sh << "EOF"
#!/bin/bash
# A script to list version numbers of critical development tools
# If you have tools installed in other directories, adjust PATH here AND
# in ~lfs/.bashrc (section 4.4) as well.
LC_ALL=C
PATH=/usr/bin:/bin
bail() { echo "FATAL: $1"; exit 1; }
grep --version > /dev/null 2> /dev/null || bail "grep does not work"
sed '' /dev/null || bail "sed does not work"
sort /dev/null || bail "sort does not work"
ver_check()
{
   if ! type -p $2 &>/dev/null
   then
     echo "ERROR: Cannot find $2 ($1)"; return 1;
   fi
   v=$($2 --version 2>&1 | grep -E -o '[0-9]+\.[0-9\.]+[a-z]*' | head -n1)
   if printf '%s\n' $3 $v | sort --version-sort --check &>/dev/null
   then
     printf "OK: %-9s %-6s >= $3\n" "$1" "$v"; return 0;
   else
     printf "ERROR: %-9s is TOO OLD ($3 or later required)\n" "$1";
     return 1;
   fi
}
ver_kernel()
{
   kver=$(uname -r | grep -E -o '^[0-9\.]+')
   if printf '%s\n' $1 $kver | sort --version-sort --check &>/dev/null
   then
     printf "OK: Linux Kernel $kver >= $1\n"; return 0;
   else
     printf "ERROR: Linux Kernel ($kver) is TOO OLD ($1 or later required)\n" "$kver";
     return 1;
   fi
}
# Coreutils first because --version-sort needs Coreutils >= 7.0
ver_check Coreutils sort 8.1 || bail "Coreutils too old, stop"
ver_check Bash bash 3.2
ver_check Binutils ld 2.13.1
ver_check Bison bison 2.7
ver_check Diffutils diff 2.8.1
ver_check Findutils find 4.2.31
ver_check Gawk gawk 4.0.1
ver_check GCC gcc 5.2
ver_check "GCC (C++)" g++ 5.2
ver_check Grep grep 2.5.1a
ver_check Gzip gzip 1.3.12
ver_check M4 m4 1.4.10
ver_check Make make 4.0
ver_check Patch patch 2.5.4
ver_check Perl perl 5.8.8
ver_check Python python3 3.4
ver_check Sed sed 4.1.5
ver_check Tar tar 1.22
ver_check Texinfo texi2any 5.0
ver_check Xz xz 5.0.0
ver_kernel 5.4
if mount | grep -q 'devpts on /dev/pts' && [ -e /dev/ptmx ]
then echo "OK: Linux Kernel supports UNIX 98 PTY";
else echo "ERROR: Linux Kernel does NOT support UNIX 98 PTY"; fi
alias_check() {
   if $1 --version 2>&1 | grep -qi $2
   then printf "OK: %-4s is $2\n" "$1";
   else printf "ERROR: %-4s is NOT $2\n" "$1"; fi
}
echo "Aliases:"
alias_check awk GNU
alias_check yacc Bison
alias_check sh Bash
echo "Compiler check:"
if printf "int main(){}" | g++ -x c++ -
then echo "OK: g++ works";
else echo "ERROR: g++ does NOT work"; fi
rm -f a.out
if [ "$(nproc)" = "" ]; then
echo "ERROR: nproc is not available or it produces empty output"
else
echo "OK: nproc reports $(nproc) logical cores are available"
fi
EOF
bash version-check.sh
LFS is designed to be built in one session. That is, the instructions assume that the system will not be shut down during the process. This does not mean that the system has to be built in one sitting. The issue is that certain procedures must be repeated after a reboot when resuming LFS at different points.
These chapters run commands on the host system. When restarting, be certain of one thing:
Procedures performed as the root user after Section 2.4 must have the LFS environment variable set FOR THE ROOT USER.
The /mnt/lfs partition must be mounted.
(A sketch of these two steps is shown below.)
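For example, after a reboot during this stage, the root user can restore both conditions with commands like these (a sketch only; replace <xxx> with your LFS partition and adjust the file system type if you did not use ext4):

export LFS=/mnt/lfs
mount -v -t ext4 /dev/<xxx> $LFS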
These two chapters must be done as user lfs. A su - lfs command must be issued before performing any task in these chapters. If you don't do that, you are at risk of installing packages to the host, and potentially rendering it unusable.
The procedures in General Compilation Instructions are critical. If there is any doubt a package has been installed correctly, ensure the previously expanded tarball has been removed, then re-extract the package, and complete all the instructions in that section.
The /mnt/lfs partition must be mounted.
A few operations, from “Changing Ownership” to “Entering the Chroot Environment,” must be done as the root user, with the LFS environment variable set for the root user.
When entering chroot, the LFS environment variable must be set for root. The LFS variable is not used after the chroot environment has been entered.
The virtual file systems must be mounted. This can be done before or after entering chroot by changing to a host virtual terminal and, as root, running the commands in Section 7.3.1, “Mounting and Populating /dev” and Section 7.3.2, “Mounting Virtual Kernel File Systems.” (A sketch of these commands appears below.)
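As a rough sketch (the authoritative commands, including the preparatory steps, are the ones given in Section 7.3), re-mounting the virtual kernel file systems as the root user looks like this:

mount -v --bind /dev $LFS/dev
mount -vt devpts devpts -o gid=5,mode=0620 $LFS/dev/pts
mount -vt proc proc $LFS/proc
mount -vt sysfs sysfs $LFS/sys
mount -vt tmpfs tmpfs $LFS/run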
Like most other operating systems, LFS is usually installed on a dedicated partition. The recommended approach to building an LFS system is to use an available empty partition or, if you have enough unpartitioned space, to create one.
A minimal system requires a partition of around 10 gigabytes (GB). This is enough to store all the source tarballs and compile the packages. However, if the LFS system is intended to be the primary Linux system, additional software will probably be installed which will require additional space. A 30 GB partition is a reasonable size to provide for growth. The LFS system itself will not take up this much room. A large portion of this requirement is to provide sufficient free temporary storage as well as for adding additional capabilities after LFS is complete. Additionally, compiling packages can require a lot of disk space which will be reclaimed after the package is installed.
Because there is not always enough Random Access Memory (RAM) available for compilation processes, it is a good idea to use a small disk partition as swap space. This is used by the kernel to store seldom-used data and leave more memory available for active processes. The swap partition for an LFS system can be the same as the one used by the host system, in which case it is not necessary to create another one.
Start a disk partitioning program such as cfdisk or fdisk with a command line option naming the hard disk on which the new partition will be created—for example /dev/sda for the primary disk drive. Create a Linux native partition and a swap partition, if needed. Please refer to cfdisk(8) or fdisk(8) if you do not yet know how to use the programs.
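Before creating anything, it can help to review the current layout of the target disk. Assuming /dev/sda as in the example above (both commands are run as the root user):

fdisk -l /dev/sda
cfdisk /dev/sda

The first command only lists the existing partition table; the second starts the interactive partitioning program.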
For experienced users, other partitioning schemes are possible. The new LFS system can be on a software RAID array or an LVM logical volume. However, some of these options require an initramfs, which is an advanced topic. These partitioning methodologies are not recommended for first time LFS users.
Remember the designation of the new partition (e.g., sda5). This book will refer to this as the LFS partition. Also remember the designation of the swap partition. These names will be needed later for the /etc/fstab file.
Requests for advice on system partitioning are often posted on the LFS mailing lists. This is a highly subjective topic. The default for most distributions is to use the entire drive with the exception of one small swap partition. This is not optimal for LFS for several reasons. It reduces flexibility, makes sharing of data across multiple distributions or LFS builds more difficult, makes backups more time consuming, and can waste disk space through inefficient allocation of file system structures.
A root LFS partition (not to be confused with the /root directory) of twenty gigabytes is a good compromise for most systems. It provides enough space to build LFS and most of BLFS, but is small enough so that multiple partitions can be easily created for experimentation.
Most distributions automatically create a swap partition. Generally the recommended size of the swap partition is about twice the amount of physical RAM, however this is rarely needed. If disk space is limited, hold the swap partition to two gigabytes and monitor the amount of disk swapping.
If you want to use the hibernation feature (suspend-to-disk) of Linux, it writes out the contents of RAM to the swap partition before turning off the machine. In this case the size of the swap partition should be at least as large as the system's installed RAM.
Swapping is never good. For mechanical hard drives you can generally tell if a system is swapping by just listening to disk activity and observing how the system reacts to commands. With an SSD you will not be able to hear swapping, but you can tell how much swap space is being used by running the top or free programs. Use of an SSD for a swap partition should be avoided if possible. The first reaction to swapping should be to check for an unreasonable command such as trying to edit a five gigabyte file. If swapping becomes a normal occurrence, the best solution is to purchase more RAM for your system.
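For example, either of these commands (from Procps-NG and Util-linux, respectively) reports how much swap space is currently in use:

free -h
swapon --show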
If the boot disk has been partitioned with a GUID Partition Table (GPT), then a small, typically 1 MB, partition must be created if it does not already exist. This partition is not formatted, but must be available for GRUB to use during installation of the boot loader. This partition will normally be labeled 'BIOS Boot' if using fdisk or have a code of EF02 if using the gdisk command.
The Grub Bios partition must be on the drive that the BIOS uses to boot the system. This is not necessarily the drive that holds the LFS root partition. The disks on a system may use different partition table types. The necessity of the Grub Bios partition depends only on the partition table type of the boot disk.
There are several other partitions that are not required, but should be considered when designing a disk layout. The following list is not comprehensive, but is meant as a guide.
/boot – Highly recommended. Use this partition to store kernels and other booting information. To minimize potential boot problems with larger disks, make this the first physical partition on your first disk drive. A partition size of 200 megabytes is adequate.
/boot/efi – The EFI System Partition, which is needed for booting the system with UEFI. Read the BLFS page for details.
/home – Highly recommended. Share your home directory and user customization across multiple distributions or LFS builds. The size is generally fairly large and depends on available disk space.
/usr – In LFS, /bin, /lib, and /sbin are symlinks to their counterparts in /usr. So /usr contains all the binaries needed for the system to run. For LFS a separate partition for /usr is normally not needed. If you create it anyway, you should make a partition large enough to fit all the programs and libraries in the system. The root partition can be very small (maybe just one gigabyte) in this configuration, so it's suitable for a thin client or diskless workstation (where /usr is mounted from a remote server). However, you should be aware that an initramfs (not covered by LFS) will be needed to boot a system with a separate /usr partition.
/opt – This directory is most useful for BLFS, where multiple large packages like KDE or Texlive can be installed without embedding the files in the /usr hierarchy. If used, 5 to 10 gigabytes is generally adequate.
/tmp – A separate /tmp partition is rare, but useful if configuring a thin client. This partition, if used, will usually not need to exceed a couple of gigabytes. If you have enough RAM, you can mount a tmpfs on /tmp to make access to temporary files faster.
/usr/src – This partition is very useful for providing a location to store BLFS source files and share them across LFS builds. It can also be used as a location for building BLFS packages. A reasonably large partition of 30-50 gigabytes provides plenty of room.
Any separate partition that you want automatically mounted when the system starts must be specified in the /etc/fstab file. Details about how to specify partitions will be discussed in Section 10.2, “Creating the /etc/fstab File”.
A partition is just a range of sectors on a disk drive, delimited by boundaries set in a partition table. Before the operating system can use a partition to store any files, the partition must be formatted to contain a file system, typically consisting of a label, directory blocks, data blocks, and an indexing scheme to locate a particular file on demand. The file system also helps the OS keep track of free space on the partition, reserve the needed sectors when a new file is created or an existing file is extended, and recycle the free data segments created when files are deleted. It may also provide support for data redundancy, and for error recovery.
LFS can use any file system recognized by the Linux kernel, but the most common types are ext3 and ext4. The choice of the right file system can be complex; it depends on the characteristics of the files and the size of the partition. For example:
ext2 is suitable for small partitions that are updated infrequently, such as /boot.
ext3 is an upgrade to ext2 that includes a journal to help recover the partition's status in the case of an unclean shutdown. It is commonly used as a general purpose file system.
ext4 is the latest version of the ext family of file systems. It provides several new capabilities including nano-second timestamps, creation and use of very large files (up to 16 TB), and speed improvements.
Other file systems, including FAT32, NTFS, JFS, and XFS are useful for specialized purposes. More information about these file systems, and many others, can be found at https://en.wikipedia.org/wiki/Comparison_of_file_systems.
LFS assumes that the root file system (/) is of type ext4. To create an ext4 file system on the LFS partition, issue the following command:
mkfs -v -t ext4 /dev/<xxx>
Replace <xxx> with the name of the LFS partition.
If you are using an existing swap partition, there is no need to format it. If a new swap partition was created, it will need to be initialized with this command:
mkswap /dev/<yyy>
Replace <yyy> with the name of the swap partition.
Throughout this book, the environment variable LFS will be used several times. You should ensure that this variable is always defined throughout the LFS build process. It should be set to the name of the directory where you will be building your LFS system - we will use /mnt/lfs as an example, but you may choose any directory name you want. If you are building LFS on a separate partition, this directory will be the mount point for the partition. Choose a directory location and set the variable with the following command:
export LFS=/mnt/lfs
Having this variable set is beneficial in that commands such as mkdir -v $LFS/tools can be typed literally. The shell will automatically replace “$LFS” with “/mnt/lfs” (or whatever value the variable was set to) when it processes the command line.
Do not forget to check that LFS is set whenever you leave and reenter the current working environment (such as when doing a su to root or another user). Check that the LFS variable is set up properly with:
echo $LFS
Make sure the output shows the path to your LFS system's build location, which is /mnt/lfs if the provided example was followed. If the output is incorrect, use the command given earlier on this page to set $LFS to the correct directory name.
One way to ensure that the LFS variable is always set is to edit the .bash_profile file in both your personal home directory and in /root/.bash_profile and enter the export command above. In addition, the shell specified in the /etc/passwd file for all users that need the LFS variable must be bash to ensure that the /root/.bash_profile file is incorporated as a part of the login process.
Another consideration is the method that is used to log into the host system. If logging in through a graphical display manager, the user's .bash_profile is not normally used when a virtual terminal is started. In this case, add the export command to the .bashrc file for the user and root. In addition, some distributions use an "if" test, and do not run the remaining .bashrc instructions for a non-interactive bash invocation. Be sure to place the export command ahead of the test for non-interactive use.
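To illustrate that last point, here is a minimal sketch of the relevant part of a .bashrc; the exact guard differs between distributions, so the case statement below is only a typical example:

export LFS=/mnt/lfs        # place the export before any interactive-only guard

case $- in                 # a guard some distributions ship; the lines after it
    *i*) ;;                # are skipped when bash is started non-interactively
      *) return ;;
esac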
Now that a file system has been created, the partition must be mounted so the host system can access it. This book assumes that the file system is mounted at the directory specified by the LFS environment variable described in the previous section.
Strictly speaking, one cannot “mount a partition.” One mounts the file system embedded in that partition. But since a single partition can't contain more than one file system, people often speak of the partition and the associated file system as if they were one and the same.
Create the mount point and mount the LFS file system with these commands:
mkdir -pv $LFS
mount -v -t ext4 /dev/<xxx> $LFS
Replace <xxx> with the name of the LFS partition.
If you are using multiple partitions for LFS (e.g., one for / and another for /home), mount them like this:
mkdir -pv $LFS
mount -v -t ext4 /dev/<xxx> $LFS
mkdir -v $LFS/home
mount -v -t ext4 /dev/<yyy> $LFS/home
Replace <xxx> and <yyy> with the appropriate partition names.
Ensure that this new partition is not mounted with permissions that are too restrictive (such as the nosuid or nodev options). Run the mount command without any parameters to see what options are set for the mounted LFS partition. If nosuid and/or nodev are set, the partition must be remounted.
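For example, assuming the file system is mounted at the location held in $LFS, a remount that clears those options might look like this (a sketch only; adjust it to your own mount point and option list):

mount -v -o remount,suid,dev $LFS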
The above instructions assume that you will not restart your computer throughout the LFS process. If you shut down your system, you will either need to remount the LFS partition each time you restart the build process, or modify the host system's /etc/fstab file to automatically remount it when you reboot. For example, you might add this line to your /etc/fstab file:
/dev/<xxx>  /mnt/lfs  ext4  defaults  1  1
If you use additional optional partitions, be sure to add them also.
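For instance, if the separate /home partition from the earlier example is also used, the relevant /etc/fstab entries might look like this (the device names are placeholders, as before):

/dev/<xxx>  /mnt/lfs       ext4  defaults  1  1
/dev/<yyy>  /mnt/lfs/home  ext4  defaults  1  2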
If you are using a swap partition, ensure that it is enabled using the swapon command:
/sbin/swapon -v /dev/<zzz>
Replace <zzz> with the name of the swap partition.
Now that the new LFS partition is open for business, it's time to download the packages.
This chapter includes a list of packages that need to be downloaded in order to build a basic Linux system. The listed version numbers correspond to versions of the software that are known to work, and this book is based on their use. We highly recommend against using different versions, because the build commands for one version may not work with a different version, unless the different version is specified by an LFS erratum or security advisory. The newest package versions may also have problems that require work-arounds. These work-arounds will be developed and stabilized in the development version of the book.
For some packages, the release tarball and the (Git or SVN) repository snapshot tarball for that release may be published with similar or even identical file names. But the release tarball may contain some files which are essential despite not being stored in the repository (for example, a configure script generated by autoconf), in addition to the contents of the corresponding repository snapshot. The book uses release tarballs whenever possible. Using a repository snapshot instead of a release tarball specified by the book will cause problems.
Download locations may not always be accessible. If a download location has changed since this book was published, Google (https://www.google.com/) provides a useful search engine for most packages. If this search is unsuccessful, try one of the alternative means of downloading at https://www.linuxfromscratch.org/lfs/mirrors.html#files.
Downloaded packages and patches will need to be stored somewhere that is conveniently available throughout the entire build. A working directory is also required to unpack the sources and build them. $LFS/sources can be used both as the place to store the tarballs and patches and as a working directory. By using this directory, the required elements will be located on the LFS partition and will be available during all stages of the building process.
To create this directory, execute the following command, as user root, before starting the download session:
mkdir -v $LFS/sources
Make this directory writable and sticky. “Sticky” means that even if multiple users have write permission on a directory, only the owner of a file can delete the file within a sticky directory. The following command will enable the write and sticky modes:
chmod -v a+wt $LFS/sources
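If you want to confirm the modes just set, a quick optional check (not required by the build) is:

ls -ld $LFS/sources

The mode field in the output should end in “t” (for example drwxrwxrwt), showing that the sticky bit is set along with write permission for everyone.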
There are several ways to obtain all the necessary packages and patches to build LFS:
The files can be downloaded individually as described in the next two sections.
For stable versions of the book, a tarball of all the needed files can be downloaded from one of the mirror sites listed at https://www.linuxfromscratch.org/mirrors.html#files.
The files can be downloaded using wget and a wget-list as described below.
To download all of the packages and patches by using wget-list-sysv as an input to the wget command, use:
wget --input-file=wget-list-sysv --continue --directory-prefix=$LFS/sources
Additionally, starting with LFS-7.0, there is a separate file, md5sums, which can be used to verify that all the correct packages are available before proceeding. Place that file in $LFS/sources and run:
pushd $LFS/sources
md5sum -c md5sums
popd
This check can be used after retrieving the needed files with any of the methods listed above.
If the packages and patches are downloaded as a non-root user, these files will be owned by the user. The file system records the owner by its UID, and the UID of a normal user in the host distro is not assigned in LFS. So the files will be left owned by an unnamed UID in the final LFS system. If you won't assign the same UID for your user in the LFS system, change the owners of these files to root now to avoid this issue:
chown root:root $LFS/sources/*
Read the security advisories before downloading packages to figure out if a newer version of any package should be used to avoid security vulnerabilities.
The upstream sources may remove old releases, especially when those releases contain a security vulnerability. If one URL below is not reachable, you should read the security advisories first to figure out if a newer version (with the vulnerability fixed) should be used. If not, try to download the removed package from a mirror. Although it's possible to download an old release from a mirror even if this release has been removed because of a vulnerability, it's not a good idea to use a release known to be vulnerable when building your system.
Download or otherwise obtain the following packages:
Home page: https://savannah.nongnu.org/projects/acl
Download: https://download.savannah.gnu.org/releases/acl/acl-2.3.2.tar.xz
MD5 sum: 590765dee95907dbc3c856f7255bd669
Home page: https://savannah.nongnu.org/projects/attr
Download: https://download.savannah.gnu.org/releases/attr/attr-2.5.2.tar.gz
MD5 sum: 227043ec2f6ca03c0948df5517f9c927
Home page: https://www.gnu.org/software/autoconf/
Download: https://ftp.gnu.org/gnu/autoconf/autoconf-2.72.tar.xz
MD5 sum: 1be79f7106ab6767f18391c5e22be701
Home page: https://www.gnu.org/software/automake/
Download: https://ftp.gnu.org/gnu/automake/automake-1.17.tar.xz
MD5 sum: 7ab3a02318fee6f5bd42adfc369abf10
Home page: https://www.gnu.org/software/bash/
Download: https://ftp.gnu.org/gnu/bash/bash-5.2.37.tar.gz
MD5 sum: 9c28f21ff65de72ca329c1779684a972
Home page: https://git.gavinhoward.com/gavin/bc
Download: https://github.com/gavinhoward/bc/releases/download/7.0.3/bc-7.0.3.tar.xz
MD5 sum: ad4db5a0eb4fdbb3f6813be4b6b3da74
Home page: https://www.gnu.org/software/binutils/
Download: https://sourceware.org/pub/binutils/releases/binutils-2.43.1.tar.xz
MD5 sum: 9202d02925c30969d1917e4bad5a2320
Home page: https://www.gnu.org/software/bison/
Download: https://ftp.gnu.org/gnu/bison/bison-3.8.2.tar.xz
MD5 sum: c28f119f405a2304ff0a7ccdcc629713
Download: https://www.sourceware.org/pub/bzip2/bzip2-1.0.8.tar.gz
MD5 sum: 67e051268d0c475ea773822f7500d0e5
Home page: https://libcheck.github.io/check
Download: https://github.com/libcheck/check/releases/download/0.15.2/check-0.15.2.tar.gz
MD5 sum: 50fcafcecde5a380415b12e9c574e0b2
Home page: https://www.gnu.org/software/coreutils/
Download: https://ftp.gnu.org/gnu/coreutils/coreutils-9.5.tar.xz
MD5 sum: e99adfa059a63db3503cc71f3d151e31
Home page: https://www.gnu.org/software/dejagnu/
Download: https://ftp.gnu.org/gnu/dejagnu/dejagnu-1.6.3.tar.gz
MD5 sum: 68c5208c58236eba447d7d6d1326b821
Home page: https://www.gnu.org/software/diffutils/
Download: https://ftp.gnu.org/gnu/diffutils/diffutils-3.10.tar.xz
MD5 sum: 2745c50f6f4e395e7b7d52f902d075bf
Home page: https://e2fsprogs.sourceforge.net/
Download: https://downloads.sourceforge.net/project/e2fsprogs/e2fsprogs/v1.47.1/e2fsprogs-1.47.1.tar.gz
MD5 sum: 75e6d1353cbe6d5728a98fb0267206cb
Home page: https://sourceware.org/elfutils/
Download: https://sourceware.org/ftp/elfutils/0.192/elfutils-0.192.tar.bz2
MD5 sum: a6bb1efc147302cfc15b5c2b827f186a
Home page: https://libexpat.github.io/
Download: https://prdownloads.sourceforge.net/expat/expat-2.6.4.tar.xz
MD5 sum: 101fe3e320a2800f36af8cf4045b45c7
Home page: https://core.tcl.tk/expect/
Download: https://prdownloads.sourceforge.net/expect/expect5.45.4.tar.gz
MD5 sum: 00fce8de158422f5ccd2666512329bd2
Home page: https://www.darwinsys.com/file/
Download: https://astron.com/pub/file/file-5.46.tar.gz
MD5 sum: 459da2d4b534801e2e2861611d823864
Home page: https://www.gnu.org/software/findutils/
Download: https://ftp.gnu.org/gnu/findutils/findutils-4.10.0.tar.xz
MD5 sum: 870cfd71c07d37ebe56f9f4aaf4ad872
Home page: https://github.com/westes/flex
Download: https://github.com/westes/flex/releases/download/v2.6.4/flex-2.6.4.tar.gz
MD5 sum: 2882e3179748cc9f9c23ec593d6adc8d
Home page: https://pypi.org/project/flit-core/
Download: https://pypi.org/packages/source/f/flit-core/flit_core-3.10.1.tar.gz
MD5 sum: a3381dd58e23e9826c5199b1f70318b0
Home page: https://www.gnu.org/software/gawk/
Download: https://ftp.gnu.org/gnu/gawk/gawk-5.3.1.tar.xz
MD5 sum: 4e9292a06b43694500e0620851762eec
Home page: https://gcc.gnu.org/
Download: https://ftp.gnu.org/gnu/gcc/gcc-14.2.0/gcc-14.2.0.tar.xz
MD5 sum: 2268420ba02dc01821960e274711bde0
Home page: https://www.gnu.org/software/gdbm/
Download: https://ftp.gnu.org/gnu/gdbm/gdbm-1.24.tar.gz
MD5 sum: c780815649e52317be48331c1773e987
Home page: https://www.gnu.org/software/gettext/
Download: https://ftp.gnu.org/gnu/gettext/gettext-0.23.tar.xz
MD5 sum: 9f4f6040ac1022278ea26d28f37b1688
Home page: https://www.gnu.org/software/libc/
Download: https://ftp.gnu.org/gnu/glibc/glibc-2.40.tar.xz
MD5 sum: b390feef233022114950317f10c4fa97
The Glibc developers maintain a Git branch containing patches considered worthy for Glibc-2.40 but unfortunately developed after the Glibc-2.40 release. The LFS editors will issue a security advisory if any security fix is added to the branch, but no action will be taken for other newly added patches. You may review the patches yourself and incorporate some of them if you consider them important.
Home page: https://www.gnu.org/software/gmp/
Download: https://ftp.gnu.org/gnu/gmp/gmp-6.3.0.tar.xz
MD5 sum: 956dc04e864001a9c22429f761f2c283
Home page: https://www.gnu.org/software/gperf/
Download: https://ftp.gnu.org/gnu/gperf/gperf-3.1.tar.gz
MD5 sum: 9e251c0a618ad0824b51117d5d9db87e
Home page: https://www.gnu.org/software/grep/
Download: https://ftp.gnu.org/gnu/grep/grep-3.11.tar.xz
MD5 sum: 7c9bbd74492131245f7cdb291fa142c0
Home page: https://www.gnu.org/software/groff/
Download: https://ftp.gnu.org/gnu/groff/groff-1.23.0.tar.gz
MD5 sum: 5e4f40315a22bb8a158748e7d5094c7d
Home page: https://www.gnu.org/software/grub/
Download: https://ftp.gnu.org/gnu/grub/grub-2.12.tar.xz
MD5 sum: 60c564b1bdc39d8e43b3aab4bc0fb140
Home page: https://www.gnu.org/software/gzip/
Download: https://ftp.gnu.org/gnu/gzip/gzip-1.13.tar.xz
MD5 sum: d5c9fc9441288817a4a0be2da0249e29
Home page: https://www.iana.org/protocols
Download: https://github.com/Mic92/iana-etc/releases/download/20241206/iana-etc-20241206.tar.gz
MD5 sum: 8ed4c07cada287f55207577976d6a37f
Home page: https://www.gnu.org/software/inetutils/
Download: https://ftp.gnu.org/gnu/inetutils/inetutils-2.5.tar.xz
MD5 sum: 9e5a6dfd2d794dc056a770e8ad4a9263
Home page: https://freedesktop.org/wiki/Software/intltool
Download: https://launchpad.net/intltool/trunk/0.51.0/+download/intltool-0.51.0.tar.gz
MD5 sum: 12e517cac2b57a0121cda351570f1e63
Home page: https://www.kernel.org/pub/linux/utils/net/iproute2/
Download: https://www.kernel.org/pub/linux/utils/net/iproute2/iproute2-6.12.0.tar.xz
MD5 sum: bc789bd210bc5d1ca6c64ea1c87d6979
Home page: https://jinja.palletsprojects.com/en/3.1.x/
Download: https://pypi.org/packages/source/J/Jinja2/jinja2-3.1.4.tar.gz
MD5 sum: 02ca9a6364c92e83d14b037bef4732bc
Home page: https://kbd-project.org/
Download: https://www.kernel.org/pub/linux/utils/kbd/kbd-2.7.tar.xz
MD5 sum: bf40be5bea1b62e691410f5c6e0bbd6b
Home page: https://github.com/kmod-project/kmod
Download: https://www.kernel.org/pub/linux/utils/kernel/kmod/kmod-33.tar.xz
MD5 sum: c451c4aa61521adbe8af147f498046f8
Home page: https://www.greenwoodsoftware.com/less/
Download: https://www.greenwoodsoftware.com/less/less-668.tar.gz
MD5 sum: d72760386c5f80702890340d2f66c302
Download: https://www.linuxfromscratch.org/lfs/downloads/development/lfs-bootscripts-20240825.tar.xz
MD5 sum: d7662d5fffef71fc196b8ab320279e26
Home page: https://sites.google.com/site/fullycapable/
Download: https://www.kernel.org/pub/linux/libs/security/linux-privs/libcap2/libcap-2.73.tar.xz
MD5 sum: 0e186df9de9b1e925593a96684fe2e32
Home page: https://sourceware.org/libffi/
Download: https://github.com/libffi/libffi/releases/download/v3.4.6/libffi-3.4.6.tar.gz
MD5 sum: b9cac6c5997dca2b3787a59ede34e0eb
Home page: https://libpipeline.nongnu.org/
Download: https://download.savannah.gnu.org/releases/libpipeline/libpipeline-1.5.8.tar.gz
MD5 sum: 17ac6969b2015386bcb5d278a08a40b5
Home page: https://www.gnu.org/software/libtool/
Download: https://ftp.gnu.org/gnu/libtool/libtool-2.5.4.tar.xz
MD5 sum: 22e0a29df8af5fdde276ea3a7d351d30
Home page: https://github.com/besser82/libxcrypt/
Download: https://github.com/besser82/libxcrypt/releases/download/v4.4.36/libxcrypt-4.4.36.tar.xz
MD5 sum: b84cd4104e08c975063ec6c4d0372446
Home page: https://www.kernel.org/
Download: https://www.kernel.org/pub/linux/kernel/v6.x/linux-6.12.5.tar.xz
MD5 sum: 33a827ff7dea6908e7615d0766f1018e
The Linux kernel is updated quite frequently, many times due to discoveries of security vulnerabilities. The latest available stable kernel version may be used, unless the errata page says otherwise.
For users with limited speed or expensive bandwidth who wish to update the Linux kernel, a baseline version of the package and patches can be downloaded separately. This may save some time or cost for a subsequent patch level upgrade within a minor release.
Home page: https://lz4.org/
Download: https://github.com/lz4/lz4/releases/download/v1.10.0/lz4-1.10.0.tar.gz
MD5 sum: dead9f5f1966d9ae56e1e32761e4e675
Home page: https://www.gnu.org/software/m4/
Download: https://ftp.gnu.org/gnu/m4/m4-1.4.19.tar.xz
MD5 sum: 0d90823e1426f1da2fd872df0311298d
Home page: https://www.gnu.org/software/make/
Download: https://ftp.gnu.org/gnu/make/make-4.4.1.tar.gz
MD5 sum: c8469a3713cbbe04d955d4ae4be23eeb
Home page: https://www.nongnu.org/man-db/
Download: https://download.savannah.gnu.org/releases/man-db/man-db-2.13.0.tar.xz
MD5 sum: 97ab5f9f32914eef2062d867381d8cee
Home page: https://www.kernel.org/doc/man-pages/
Download: https://www.kernel.org/pub/linux/docs/man-pages/man-pages-6.9.1.tar.xz
MD5 sum: 4d56775b6cce4edf1e496249e7c01c1a
Home page: https://palletsprojects.com/p/markupsafe/
Download: https://pypi.org/packages/source/M/MarkupSafe/markupsafe-3.0.2.tar.gz
MD5 sum: cb0071711b573b155cc8f86e1de72167
Home page: https://mesonbuild.com
Download: https://github.com/mesonbuild/meson/releases/download/1.6.0/meson-1.6.0.tar.gz
MD5 sum: 0031ea392f8ef97eeadfe1906c5cc5b4
Home page: https://www.multiprecision.org/
Download: https://ftp.gnu.org/gnu/mpc/mpc-1.3.1.tar.gz
MD5 sum: 5c9bc658c9fd0f940e8e3e0f09530c62
Home page: https://www.mpfr.org/
Download: https://ftp.gnu.org/gnu/mpfr/mpfr-4.2.1.tar.xz
MD5 sum: 523c50c6318dde6f9dc523bc0244690a
Home page: https://www.gnu.org/software/ncurses/
Download: https://invisible-mirror.net/archives/ncurses/ncurses-6.5.tar.gz
MD5 sum: ac2d2629296f04c8537ca706b6977687
Home page: https://ninja-build.org/
Download: https://github.com/ninja-build/ninja/archive/v1.12.1/ninja-1.12.1.tar.gz
MD5 sum: 6288992b05e593a391599692e2f7e490
Home page: https://www.openssl-library.org/
Download: https://github.com/openssl/openssl/releases/download/openssl-3.4.0/openssl-3.4.0.tar.gz
MD5 sum: 34733f7be2d60ecd8bd9ddb796e182af
Home page: https://savannah.gnu.org/projects/patch/
Download: https://ftp.gnu.org/gnu/patch/patch-2.7.6.tar.xz
MD5 sum: 78ad9937e4caadcba1526ef1853730d5
Home page: https://www.perl.org/
Download: https://www.cpan.org/src/5.0/perl-5.40.0.tar.xz
MD5 sum: cfe14ef0709b9687f9c514042e8e1e82
Home page: https://github.com/pkgconf/pkgconf
Download: https://distfiles.ariadne.space/pkgconf/pkgconf-2.3.0.tar.xz
MD5 sum: 833363e77b5bed0131c7bc4cc6f7747b
Home page: https://gitlab.com/procps-ng/procps/
Download: https://sourceforge.net/projects/procps-ng/files/Production/procps-ng-4.0.4.tar.xz
MD5 sum: 2f747fc7df8ccf402d03e375c565cf96
Home page: https://gitlab.com/psmisc/psmisc
Download: https://sourceforge.net/projects/psmisc/files/psmisc/psmisc-23.7.tar.xz
MD5 sum: 53eae841735189a896d614cba440eb10
Home page: https://www.python.org/
Download: https://www.python.org/ftp/python/3.13.1/Python-3.13.1.tar.xz
MD5 sum: 80c16badb94ffe235280d4d9a099b8bc
Download: https://www.python.org/ftp/python/doc/3.13.1/python-3.13.1-docs-html.tar.bz2
MD5 sum: 2fbda851be0e4d4c4dad7bb8d1ff7e50
Home page: https://tiswww.case.edu/php/chet/readline/rltop.html
Download: https://ftp.gnu.org/gnu/readline/readline-8.2.13.tar.gz
MD5 sum: 05080bf3801e6874bb115cd6700b708f
Home page: https://www.gnu.org/software/sed/
Download: https://ftp.gnu.org/gnu/sed/sed-4.9.tar.xz
MD5 sum: 6aac9b2dbafcd5b7a67a8a9bcb8036c3
Home page: https://pypi.org/project/setuptools/
Download: https://pypi.org/packages/source/s/setuptools/setuptools-75.6.0.tar.gz
MD5 sum: 94458e508bd8e9dc6e6d097fc8747cf0
Home page: https://github.com/shadow-maint/shadow/
Download: https://github.com/shadow-maint/shadow/releases/download/4.16.0/shadow-4.16.0.tar.xz
MD5 sum: eb70bad3316d08f0d3bb3d4bbeccb3b4
Home page: https://www.infodrom.org/projects/sysklogd/
Download: https://github.com/troglobit/sysklogd/releases/download/v2.6.2/sysklogd-2.6.2.tar.gz
MD5 sum: 9f64535a9a791f20504841b94d194391
Home page: https://www.freedesktop.org/wiki/Software/systemd/
Download: https://github.com/systemd/systemd/archive/v257/systemd-257.tar.gz
MD5 sum: a51c7f9ab0d8b0a08dcf14bea2b6a5cb
Home page: https://www.freedesktop.org/wiki/Software/systemd/
Download: https://anduin.linuxfromscratch.org/LFS/systemd-man-pages-257.tar.xz
MD5 sum: ac0b54961b1f20474fdff0927bc8be14
The Linux From Scratch team generates its own tarball of the man pages using the systemd source. This is done in order to avoid unnecessary dependencies.
Home page: https://savannah.nongnu.org/projects/sysvinit
Download: https://github.com/slicer69/sysvinit/releases/download/3.11/sysvinit-3.11.tar.xz
MD5 sum: cb4e4bdabd902b774c4d66a85e1f6209
Home page: https://www.gnu.org/software/tar/
Download: https://ftp.gnu.org/gnu/tar/tar-1.35.tar.xz
MD5 sum: a2d8042658cfd8ea939e6d911eaf4152
Home page: https://tcl.sourceforge.net/
Download: https://downloads.sourceforge.net/tcl/tcl8.6.15-src.tar.gz
MD5 sum: c13a4d5425b5ae335258342b38ba34c2
Download: https://downloads.sourceforge.net/tcl/tcl8.6.15-html.tar.gz
MD5 sum: 146d6317a5318ad79d4c1421ba612fe9
Home page: https://www.gnu.org/software/texinfo/
Download: https://ftp.gnu.org/gnu/texinfo/texinfo-7.1.1.tar.xz
MD5 sum: e5fc595794a7980f98ce446a5f8aa273
Home page: https://www.iana.org/time-zones
Download: https://www.iana.org/time-zones/repository/releases/tzdata2024b.tar.gz
MD5 sum: e1d010b46844502f12dcff298c1b7154
Download: https://anduin.linuxfromscratch.org/LFS/udev-lfs-20230818.tar.xz
MD5 sum: acd4360d8a5c3ef320b9db88d275dae6
Home page: https://git.kernel.org/pub/scm/utils/util-linux/util-linux.git/
Download: https://www.kernel.org/pub/linux/utils/util-linux/v2.40/util-linux-2.40.2.tar.xz
MD5 sum: 88faefc8fefced097e58142077a3d14e
Home page: https://www.vim.org
Download: https://github.com/vim/vim/archive/v9.1.0927/vim-9.1.0927.tar.gz
MD5 sum: 912f5a4303b2b779ba608b0d06f28aa8
The version of vim changes daily. To get the latest version, go to https://github.com/vim/vim/tags.
Home page: https://pypi.org/project/wheel/
Download: https://pypi.org/packages/source/w/wheel/wheel-0.45.1.tar.gz
MD5 sum: dddc505d0573d03576c7c6c5a4fe0641
Home page: https://github.com/chorny/XML-Parser
Download: https://cpan.metacpan.org/authors/id/T/TO/TODDR/XML-Parser-2.47.tar.gz
MD5 sum: 89a8e82cfd2ad948b349c0a69c494463
Home page: https://tukaani.org/xz
Download: https://github.com/tukaani-project/xz/releases/download/v5.6.3/xz-5.6.3.tar.xz
MD5 sum: 57581b216a82482503bb63c8170d549c
Home page: https://zlib.net/
Download: https://zlib.net/fossils/zlib-1.3.1.tar.gz
MD5 sum: 9855b6d802d7fe5b7bd5b196a2271655
Home page: https://facebook.github.io/zstd/
Download: https://github.com/facebook/zstd/releases/download/v1.5.6/zstd-1.5.6.tar.gz
MD5 sum: 5a473726b3445d0e5d6296afd1ab6854
Total size of these packages: about 528 MB
In addition to the packages, several patches are also required. These patches correct any mistakes in the packages that should be fixed by the maintainer. The patches also make small modifications to make the packages easier to work with. The following patches will be needed to build an LFS system:
Download: https://www.linuxfromscratch.org/patches/lfs/development/binutils-2.43.1-upstream_fix-1.patch
MD5 sum: eddd9860af589ec328541a9ec5e5928e
Download: https://www.linuxfromscratch.org/patches/lfs/development/bzip2-1.0.8-install_docs-1.patch
MD5 sum: 6a5ac7e89b791aae556de0f745916f7f
Download: https://www.linuxfromscratch.org/patches/lfs/development/coreutils-9.5-i18n-2.patch
MD5 sum: 58961caf5bbdb02462591fa506c73b6d
Download: https://www.linuxfromscratch.org/patches/lfs/development/expect-5.45.4-gcc14-1.patch
MD5 sum: 0b8b5ac411d011263ad40b0664c669f0
Download: https://www.linuxfromscratch.org/patches/lfs/development/glibc-2.40-fhs-1.patch
MD5 sum: 9a5997c3452909b1769918c759eff8a2
Download: https://www.linuxfromscratch.org/patches/lfs/development/kbd-2.7-backspace-1.patch
MD5 sum: f75cca16a38da6caa7d52151f7136895
Download: https://www.linuxfromscratch.org/patches/lfs/development/sysvinit-3.11-consolidated-1.patch
MD5 sum: 17ffccbb8e18c39e8cedc32046f3a475
Total size of these patches: about 208.7 KB
In addition to the above required patches, there exist a number of optional patches created by the LFS community. These optional patches solve minor problems or enable functionality that is not enabled by default. Feel free to peruse the patches database located at https://www.linuxfromscratch.org/patches/downloads/ and acquire any additional patches to suit your system needs.
In this chapter, we will perform a few additional tasks to prepare for building the temporary system. We will create a set of directories in $LFS (in which we will install the temporary tools), add an unprivileged user, and create an appropriate build environment for that user. We will also explain the units of time (“SBUs”) we use to measure how long it takes to build LFS packages, and provide some information about package test suites.
In this section, we begin populating the LFS filesystem with the pieces that will constitute the final Linux system. The first step is to create a limited directory hierarchy, so that the programs compiled in Chapter 6 (as well as glibc and libstdc++ in Chapter 5) can be installed in their final location. We do this so those temporary programs will be overwritten when the final versions are built in Chapter 8.
Create the required directory layout by issuing the following commands as root:
mkdir -pv $LFS/{etc,var} $LFS/usr/{bin,lib,sbin}

for i in bin lib sbin; do
  ln -sv usr/$i $LFS/$i
done

case $(uname -m) in
  x86_64) mkdir -pv $LFS/lib64 ;;
esac
Programs in Chapter 6 will be compiled with a cross-compiler (more details can be found in section Toolchain Technical Notes). This cross-compiler will be installed in a special directory, to separate it from the other programs. Still acting as root, create that directory with this command:
mkdir -pv $LFS/tools
The LFS editors have deliberately decided not to use a /usr/lib64 directory. Several steps are taken to be sure the toolchain will not use it. If for any reason this directory appears (either because you made an error in following the instructions, or because you installed a binary package that created it after finishing LFS), it may break your system. You should always be sure this directory does not exist.
When logged in as user root, making a single mistake can damage or destroy a system. Therefore, the packages in the next two chapters are built as an unprivileged user. You could use your own user name, but to make it easier to set up a clean working environment, we will create a new user called lfs as a member of a new group (also named lfs) and run commands as lfs during the installation process. As root, issue the following commands to add the new user:
groupadd lfs
useradd -s /bin/bash -g lfs -m -k /dev/null lfs
This is what the command line options mean:
-s /bin/bash
This makes bash the default shell for user lfs.

-g lfs
This option adds user lfs to group lfs.

-m
This creates a home directory for lfs.

-k /dev/null
This parameter prevents possible copying of files from a skeleton directory (the default is /etc/skel) by changing the input location to the special null device.

lfs
This is the name of the new user.
If you want to log in as lfs or switch to lfs from a non-root user (as opposed to switching to user lfs when logged in as root, which does not require the lfs user to have a password), you need to set a password for lfs. Issue the following command as the root user to set the password:
passwd lfs
Grant lfs full access to all the directories under $LFS by making lfs the owner:
chown -v lfs $LFS/{usr{,/*},lib,var,etc,bin,sbin,tools}
case $(uname -m) in
  x86_64) chown -v lfs $LFS/lib64 ;;
esac
In some host systems, the following su command does not complete properly and suspends the login for the lfs user to the background. If the prompt "lfs:~$" does not appear immediately, entering the fg command will fix the issue.
Next, start a shell running as user lfs. This can be done by logging in as lfs on a virtual console, or with the following substitute/switch user command:
su - lfs
The “-” instructs su to start a login shell as opposed to a non-login shell. The difference between these two types of shells is described in detail in bash(1) and info bash.
Set up a good working environment by creating two new startup files for the bash shell. While logged in as user lfs, issue the following command to create a new .bash_profile:
cat > ~/.bash_profile << "EOF"
exec env -i HOME=$HOME TERM=$TERM PS1='\u:\w\$ ' /bin/bash
EOF
When logged on as user lfs, or when switched to the lfs user using an su command with the “-” option, the initial shell is a login shell which reads the /etc/profile of the host (probably containing some settings and environment variables) and then .bash_profile. The exec env -i.../bin/bash command in the .bash_profile file replaces the running shell with a new one with a completely empty environment, except for the HOME, TERM, and PS1 variables. This ensures that no unwanted and potentially hazardous environment variables from the host system leak into the build environment.
The new instance of the shell is a non-login shell, which does not read, and execute, the contents of the /etc/profile or .bash_profile files, but rather reads, and executes, the .bashrc file instead. Create the .bashrc file now:
cat > ~/.bashrc << "EOF"
set +h
umask 022
LFS=/mnt/lfs
LC_ALL=POSIX
LFS_TGT=$(uname -m)-lfs-linux-gnu
PATH=/usr/bin
if [ ! -L /bin ]; then PATH=/bin:$PATH; fi
PATH=$LFS/tools/bin:$PATH
CONFIG_SITE=$LFS/usr/share/config.site
export LFS LC_ALL LFS_TGT PATH CONFIG_SITE
EOF
The meaning of the settings in .bashrc
set +h
The set +h command turns off bash's hash function. Hashing is ordinarily a useful feature: bash uses a hash table to remember the full path to executable files to avoid searching the PATH time and again to find the same executable. However, the new tools should be used as soon as they are installed. Switching off the hash function forces the shell to search the PATH whenever a program is to be run. As such, the shell will find the newly compiled tools in $LFS/tools/bin as soon as they are available without remembering a previous version of the same program provided by the host distro, in /usr/bin or /bin.
umask 022
Setting the user file-creation mask (umask) to 022 ensures that newly created files and directories are only writable by their owner, but are readable and executable by anyone (assuming default modes are used by the open(2) system call, new files will end up with permission mode 644 and directories with mode 755). A short demonstration follows this list.
LFS=/mnt/lfs
The LFS variable should be set to the chosen mount point.
LC_ALL=POSIX
The LC_ALL variable controls the localization of certain programs, making their messages follow the conventions of a specified country. Setting LC_ALL to “POSIX” or “C” (the two are equivalent) ensures that everything will work as expected in the cross-compilation environment.
LFS_TGT=$(uname -m)-lfs-linux-gnu
The LFS_TGT variable sets a non-default, but compatible machine description for use when building our cross-compiler and linker and when cross-compiling our temporary toolchain. More information is provided by Toolchain Technical Notes.
PATH=/usr/bin
Many modern Linux distributions have merged /bin and /usr/bin. When this is the case, the standard PATH variable should be set to /usr/bin/ for the Chapter 6 environment. When this is not the case, the following line adds /bin to the path.
if [ ! -L /bin ]; then PATH=/bin:$PATH; fi
If /bin is not a symbolic link, it must be added to the PATH variable.
PATH=$LFS/tools/bin:$PATH
By putting $LFS/tools/bin ahead of the standard PATH, the cross-compiler installed at the beginning of Chapter 5 is picked up by the shell immediately after its installation. This, combined with turning off hashing, limits the risk that the compiler from the host is used instead of the cross-compiler.
CONFIG_SITE=$LFS/usr/share/config.site
In Chapter 5 and Chapter 6, if this variable is not set, configure scripts may attempt to load configuration items specific to some distributions from /usr/share/config.site on the host system. Override it to prevent potential contamination from the host.
export ...
While the preceding commands have set some variables, in order to make them visible within any sub-shells, we export them.
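As a short demonstration of the umask 022 setting described above, the following sketch creates and inspects two throw-away names (newfile and newdir are arbitrary examples) and then removes them:

umask 022
touch newfile
mkdir newdir
ls -ld newfile newdir      # expect -rw-r--r-- (644) for the file and drwxr-xr-x (755) for the directory
rm -r newfile newdir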
Several commercial distributions add an undocumented instantiation of /etc/bash.bashrc to the initialization of bash. This file has the potential to modify the lfs user's environment in ways that can affect the building of critical LFS packages. To make sure the lfs user's environment is clean, check for the presence of /etc/bash.bashrc and, if present, move it out of the way. As the root user, run:
[ ! -e /etc/bash.bashrc ] || mv -v /etc/bash.bashrc /etc/bash.bashrc.NOUSE
When the lfs user is no longer needed (at the beginning of Chapter 7), you may safely restore /etc/bash.bashrc (if desired). Note that the LFS Bash package we will build in Section 8.36, “Bash-5.2.37” is not configured to load or execute /etc/bash.bashrc, so this file is useless on a completed LFS system.
For many modern systems with multiple processors (or cores) the compilation time for a package can be reduced by performing a "parallel make", telling the make program how many processors are available via a command line option or an environment variable. For instance, an Intel Core i9-13900K processor has 8 P (performance) cores and 16 E (efficiency) cores, and a P core can simultaneously run two threads, so each P core is modeled as two logical cores by the Linux kernel. As a result there are 32 logical cores in total. One obvious way to use all these logical cores is to allow make to spawn up to 32 build jobs. This can be done by passing the -j32 option to make:
make -j32
Or set the MAKEFLAGS environment variable and its content will be automatically used by make as command line options:
export MAKEFLAGS=-j32
Never pass a -j option without a number to make, and never set such an option in MAKEFLAGS. Doing so allows make to spawn an unlimited number of build jobs, which can cause system stability problems.
To use all logical cores available for building packages in Chapter 5 and Chapter 6, set MAKEFLAGS now in .bashrc:
cat >> ~/.bashrc << "EOF"
export MAKEFLAGS=-j$(nproc)
EOF
Replace $(nproc) with the number of logical cores you want to use if you don't want to use all the logical cores.
Finally, to ensure the environment is fully prepared for building the temporary tools, force the bash shell to read the new user profile:
source ~/.bash_profile
Many people would like to know beforehand approximately how long it takes to compile and install each package. Because Linux From Scratch can be built on many different systems, it is impossible to provide absolute time estimates. The biggest package (gcc) will take approximately 5 minutes on the fastest systems, but could take days on slower systems! Instead of providing actual times, the Standard Build Unit (SBU) measure will be used.
The SBU measure works as follows. The first package to be compiled is binutils in Chapter 5. The time it takes to compile using one core is what we will refer to as the Standard Build Unit or SBU. All other compile times will be expressed in terms of this unit of time.
For example, consider a package whose compilation time is 4.5 SBUs. This means that if your system took 4 minutes to compile and install the first pass of binutils, it will take approximately 18 minutes to build the example package. Fortunately, most build times are shorter than one SBU.
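If you prefer to do this arithmetic on the command line, a small sketch using bc (the numbers are just the example values above) is:

echo "4 * 4.5" | bc        # 18.0 minutes for a 4.5 SBU package when 1 SBU = 4 minutes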
SBUs are not entirely accurate because they depend on many factors, including the host system's version of GCC. They are provided here to give an estimate of how long it might take to install a package, but the numbers can vary by as much as dozens of minutes in some cases.
On some newer systems, the motherboard is capable of controlling the system clock speed. This can be controlled with a command such as powerprofilesctl. This is not available in LFS, but may be available on the host distro. After LFS is complete, it can be added to a system with the procedures at the BLFS power-profiles-daemon page. Before measuring the build time of any package it is advisable to use a system power profile set for maximum performance (and maximum power consumption). Otherwise the measured SBU value may be inaccurate because the system may react differently when building binutils-pass1 or other packages. Be aware that a significant inaccuracy can still show up even if the same profile is used for both packages because the system may respond slower if the system is idle when starting the build procedure. Setting the power profile to “performance” will minimize this problem. And obviously doing so will also make the system build LFS faster.
If powerprofilesctl is available, issue the powerprofilesctl set performance command to select the performance profile. Some distros provide the tuned-adm command for managing the profiles instead of powerprofilesctl; on these distros, issue the tuned-adm profile throughput-performance command to select the throughput-performance profile.
When multiple processors are used in this way, the SBU units in the book will vary even more than they normally would. In some cases, the make step will simply fail. Analyzing the output of the build process will also be more difficult because the lines from different processes will be interleaved. If you run into a problem with a build step, revert to a single processor build to properly analyze the error messages.
The times presented here for all packages (except binutils-pass1 which is based on one core) are based upon using four cores (-j4). The times in Chapter 8 also include the time to run the regression tests for the package unless specified otherwise.
Most packages provide a test suite. Running the test suite for a newly built package is a good idea because it can provide a “sanity check” indicating that everything compiled correctly. A test suite that passes its set of checks usually proves that the package is functioning as the developer intended. It does not, however, guarantee that the package is totally bug free.
Some test suites are more important than others. For example, the test suites for the core toolchain packages—GCC, binutils, and glibc—are of the utmost importance due to their central role in a properly functioning system. The test suites for GCC and glibc can take a very long time to complete, especially on slower hardware, but are strongly recommended.
Running the test suites in Chapter 5 and Chapter 6 is pointless; since the test programs are compiled with a cross-compiler, they probably can't run on the build host.
A common issue with running the test suites for binutils and GCC is running out of pseudo terminals (PTYs). This can result in a large number of failing tests. This may happen for several reasons, but the most likely cause is that the host system does not have the devpts file system set up correctly. This issue is discussed in greater detail at https://www.linuxfromscratch.org/lfs/faq.html#no-ptys.
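A quick way to check whether devpts is mounted on the host (a hedged check, not something the book requires) is:

mount | grep devpts

Typical output looks like “devpts on /dev/pts type devpts (rw,nosuid,noexec,...)”; no output at all suggests the devpts file system is not set up.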
Sometimes package test suites will fail for reasons which the developers are aware of and have deemed non-critical. Consult the logs located at https://www.linuxfromscratch.org/lfs/build-logs/development/ to verify whether or not these failures are expected. This site is valid for all test suites throughout this book.
This part is divided into three stages: first, building a cross compiler and its associated libraries; second, using this cross toolchain to build several utilities in a way that isolates them from the host distribution; and third, entering the chroot environment (which further improves host isolation) and constructing the remaining tools needed to build the final system.
This is where the real work of building a new system begins. Be very careful to follow the instructions exactly as the book shows them. You should try to understand what each command does, and no matter how eager you are to finish your build, you should refrain from blindly typing the commands as shown. Read the documentation when there is something you do not understand. Also, keep track of your typing and of the output of commands, by using the tee utility to send the terminal output to a file. This makes debugging easier if something goes wrong.
The next section is a technical introduction to the build process, while the following one presents very important general instructions.
This section explains some of the rationale and technical details behind the overall build method. Don't try to immediately understand everything in this section. Most of this information will be clearer after performing an actual build. Come back and re-read this chapter at any time during the build process.
The overall goal of Chapter 5 and Chapter 6 is to produce a temporary area containing a set of tools that are known to be good, and that are isolated from the host system. By using the chroot command, the compilations in the remaining chapters will be isolated within that environment, ensuring a clean, trouble-free build of the target LFS system. The build process has been designed to minimize the risks for new readers, and to provide the most educational value at the same time.
This build process is based on cross-compilation. Cross-compilation is normally used to build a compiler and its associated toolchain for a machine different from the one that is used for the build. This is not strictly necessary for LFS, since the machine where the new system will run is the same as the one used for the build. But cross-compilation has one great advantage: anything that is cross-compiled cannot depend on the host environment.
The LFS book is not (and does not contain) a general tutorial to build a cross- (or native) toolchain. Don't use the commands in the book for a cross-toolchain for some purpose other than building LFS, unless you really understand what you are doing.
Cross-compilation involves some concepts that deserve a section of their own. Although this section may be omitted on a first reading, coming back to it later will help you gain a fuller understanding of the process.
Let us first define some terms used in this context.
The build is the machine where we build programs. Note that this machine is also referred to as the “host.”
The host is the machine/system where the built programs will run. Note that this use of “host” is not the same as in other sections.
The target is only used for compilers. It is the machine the compiler produces code for. It may be different from both the build and the host.
As an example, let us imagine the following scenario (sometimes referred to as “Canadian Cross”). We have a compiler on a slow machine only, let's call it machine A, and the compiler ccA. We also have a fast machine (B), but no compiler for (B), and we want to produce code for a third, slow machine (C). We will build a compiler for machine C in three stages.
Stage | Build | Host | Target | Action |
---|---|---|---|---|
1 | A | A | B | Build cross-compiler cc1 using ccA on machine A. |
2 | A | B | C | Build cross-compiler cc2 using cc1 on machine A. |
3 | B | C | C | Build compiler ccC using cc2 on machine B. |
Then, all the programs needed by machine C can be compiled using cc2 on the fast machine B. Note that unless B can run programs produced for C, there is no way to test the newly built programs until machine C itself is running. For example, to run a test suite on ccC, we may want to add a fourth stage:
Stage | Build | Host | Target | Action |
---|---|---|---|---|
4 | C | C | C | Rebuild and test ccC using ccC on machine C. |
In the example above, only cc1 and cc2 are cross-compilers, that is, they produce code for a machine different from the one they are run on. The other compilers ccA and ccC produce code for the machine they are run on. Such compilers are called native compilers.
All the cross-compiled packages in this book use an autoconf-based building system. The autoconf-based building system accepts system types in the form cpu-vendor-kernel-os, referred to as the system triplet. Since the vendor field is often irrelevant, autoconf lets you omit it.
An astute reader may wonder why a “triplet” refers to a four component name. The kernel field and the os field began as a single “system” field. Such a three-field form is still valid today for some systems, for example, x86_64-unknown-freebsd. But two systems can share the same kernel and still be too different to use the same triplet to describe them. For example, Android running on a mobile phone is completely different from Ubuntu running on an ARM64 server, even though they are both running on the same type of CPU (ARM64) and using the same kernel (Linux). Without an emulation layer, you cannot run an executable for a server on a mobile phone or vice versa. So the “system” field has been divided into kernel and os fields, to designate these systems unambiguously. In our example, the Android system is designated aarch64-unknown-linux-android, and the Ubuntu system is designated aarch64-unknown-linux-gnu.
The word “triplet” remains embedded in the lexicon. A simple way to determine your system triplet is to run the config.guess script that comes with the source for many packages. Unpack the binutils sources, run the script ./config.guess, and note the output. For example, for a 32-bit Intel processor the output will be i686-pc-linux-gnu. On a 64-bit system it will be x86_64-pc-linux-gnu. On most Linux systems the even simpler gcc -dumpmachine command will give you similar information.
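Both methods can be run directly; the outputs shown in the comments below are only examples and will differ on other machines (some distributions report a vendor field other than “pc”):

./config.guess             # run inside the unpacked binutils source tree, e.g. x86_64-pc-linux-gnu
gcc -dumpmachine           # e.g. x86_64-pc-linux-gnu, or x86_64-linux-gnu on some distros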
You should also be aware of the name of the platform's dynamic linker, often referred to as the dynamic loader (not to be confused with the standard linker ld that is part of binutils). The dynamic linker provided by the glibc package finds and loads the shared libraries needed by a program, prepares the program to run, and then runs it. The name of the dynamic linker for a 32-bit Intel machine is ld-linux.so.2; it's ld-linux-x86-64.so.2 on 64-bit systems. A sure-fire way to determine the name of the dynamic linker is to inspect a random binary from the host system by running readelf -l <name of binary> | grep interpreter and noting the output. The authoritative reference covering all platforms is in a Glibc wiki page.
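For example, using /bin/ls as the “random binary” (any dynamically linked program on the host will do), a 64-bit x86 host typically reports:

readelf -l /bin/ls | grep interpreter
      [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]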
In order to fake a cross-compilation in LFS, the name of the host triplet is slightly adjusted by changing the "vendor" field in the LFS_TGT variable so it says "lfs". We also use the --with-sysroot option when building the cross-linker and cross-compiler, to tell them where to find the needed host files. This ensures that none of the other programs built in Chapter 6 can link to libraries on the build machine. Only two stages are mandatory, plus one more for tests.
Stage | Build | Host | Target | Action |
---|---|---|---|---|
1 | pc | pc | lfs | Build cross-compiler cc1 using cc-pc on pc. |
2 | pc | lfs | lfs | Build compiler cc-lfs using cc1 on pc. |
3 | lfs | lfs | lfs | Rebuild and test cc-lfs using cc-lfs on lfs. |
In the preceding table, “on pc” means the commands are run on a machine using the already installed distribution. “On lfs” means the commands are run in a chrooted environment.
This is not yet the end of the story. The C language is not merely a compiler; it also defines a standard library. In this book, the GNU C library, named glibc, is used (there is an alternative, "musl"). This library must be compiled for the LFS machine; that is, using the cross-compiler cc1. But the compiler itself uses an internal library providing complex subroutines for functions not available in the assembler instruction set. This internal library is named libgcc, and it must be linked to the glibc library to be fully functional. Furthermore, the standard library for C++ (libstdc++) must also be linked with glibc. The solution to this chicken and egg problem is first to build a degraded cc1-based libgcc, lacking some functionalities such as threads and exception handling, and then to build glibc using this degraded compiler (glibc itself is not degraded), and also to build libstdc++. This last library will lack some of the functionality of libgcc.
The upshot of the preceding paragraph is that cc1 is unable to build a fully functional libstdc++ with the degraded libgcc, but cc1 is the only compiler available for building the C/C++ libraries during stage 2. There are two reasons we don't immediately use the compiler built in stage 2, cc-lfs, to build those libraries.
Generally speaking, cc-lfs cannot run on pc (the host system). Even though the triplets for pc and lfs are compatible with each other, an executable for lfs must depend on glibc-2.40; the host distro may utilize either a different implementation of libc (for example, musl), or a previous release of glibc (for example, glibc-2.13).
Even if cc-lfs can run on pc, using it on pc would create a risk of linking to the pc libraries, since cc-lfs is a native compiler.
So when we build gcc stage 2, we instruct the building system to rebuild libgcc and libstdc++ with cc1, but we link libstdc++ to the newly rebuilt libgcc instead of the old, degraded build. This makes the rebuilt libstdc++ fully functional.
In Chapter 8 (or “stage 3”), all the packages needed for the LFS system are built. Even if a package has already been installed into the LFS system in a previous chapter, we still rebuild the package. The main reason for rebuilding these packages is to make them stable: if we reinstall an LFS package on a completed LFS system, the reinstalled content of the package should be the same as the content of the same package when first installed in Chapter 8. The temporary packages installed in Chapter 6 or Chapter 7 cannot satisfy this requirement, because some of them are built without optional dependencies, and autoconf cannot perform some feature checks in Chapter 6 because of cross-compilation, causing the temporary packages to lack optional features, or use suboptimal code routines. Additionally, a minor reason for rebuilding the packages is to run the test suites.
The cross-compiler will be installed in a separate $LFS/tools directory, since it will not be part of the final system.
Binutils is installed first because the configure runs of both gcc and glibc perform various feature tests on the assembler and linker to determine which software features to enable or disable. This is more important than one might realize at first. An incorrectly configured gcc or glibc can result in a subtly broken toolchain, where the impact of such breakage might not show up until near the end of the build of an entire distribution. A test suite failure will usually highlight this error before too much additional work is performed.
Binutils installs its assembler and linker in two locations, $LFS/tools/bin and $LFS/tools/$LFS_TGT/bin. The tools in one location are hard linked to the other. An important facet of the linker is its library search order. Detailed information can be obtained from ld by passing it the --verbose flag. For example, $LFS_TGT-ld --verbose | grep SEARCH will illustrate the current search paths and their order. (Note that this example can be run as shown only while logged in as user lfs. If you come back to this page later, replace $LFS_TGT-ld with ld.)
The next package installed is gcc. An example of what can be seen during its run of configure is:
checking what assembler to use... /mnt/lfs/tools/i686-lfs-linux-gnu/bin/as
checking what linker to use... /mnt/lfs/tools/i686-lfs-linux-gnu/bin/ld
This is important for the reasons mentioned above. It also demonstrates that gcc's configure script does not search the PATH directories to find which tools to use. However, during the actual operation of gcc itself, the same search paths are not necessarily used. To find out which standard linker gcc will use, run: $LFS_TGT-gcc -print-prog-name=ld. (Again, remove the $LFS_TGT- prefix if coming back to this later.)
Detailed information can be obtained from gcc by passing it the -v command line option while compiling a program. For example, $LFS_TGT-gcc -v example.c (or without $LFS_TGT- if coming back later) will show detailed information about the preprocessor, compilation, and assembly stages, including gcc's search paths for included headers and their order.
Next up: sanitized Linux API headers. These allow the standard C library (glibc) to interface with features that the Linux kernel will provide.
Next comes glibc. The most important considerations for building glibc are the compiler, binary tools, and kernel headers. The compiler and binary tools are generally not an issue since glibc will always use those relating to the --host parameter passed to its configure script; e.g., in our case, the compiler will be $LFS_TGT-gcc and the readelf tool will be $LFS_TGT-readelf. The kernel headers can be a bit more complicated. Therefore, we take no risks and use the available configure switch to enforce the correct selection. After the run of configure, check the contents of the config.make file in the build directory for all important details. These items highlight an important aspect of the glibc package: it is very self-sufficient in terms of its build machinery, and generally does not rely on toolchain defaults.
As mentioned above, the standard C++ library is compiled next, followed in Chapter 6 by other programs that must be cross-compiled to break circular dependencies at build time. The install step of all those packages uses the DESTDIR variable to force installation in the LFS filesystem.
At the end of Chapter 6 the native LFS compiler is installed. First binutils-pass2 is built, in the same DESTDIR directory as the other programs, then the second pass of gcc is constructed, omitting some non-critical libraries. Due to some weird logic in gcc's configure script, CC_FOR_TARGET ends up as cc when the host is the same as the target, but different from the build system. This is why CC_FOR_TARGET=$LFS_TGT-gcc is declared explicitly as one of the configuration options.
Upon entering the chroot environment in Chapter 7, the temporary installations of programs needed for the proper operation of the toolchain are performed. From this point onwards, the core toolchain is self-contained and self-hosted. In Chapter 8, final versions of all the packages needed for a fully functional system are built, tested, and installed.
During a development cycle of LFS, the instructions in the book are often modified to adapt to a package update or to take advantage of new features in updated packages. Mixing the instructions of different versions of the LFS book can cause subtle breakage. This kind of issue generally results from reusing a script created for a prior LFS release. Such reuse is strongly discouraged. If you are reusing scripts from a prior LFS release for any reason, you will need to be very careful to update them to match the current version of the LFS book.
Here are some things you should know about building each package:
Several packages are patched before compilation, but only when the patch is needed to circumvent a problem. A patch is often needed in both the current and the following chapters, but sometimes, when the same package is built more than once, the patch is not needed right away. Therefore, do not be concerned if instructions for a downloaded patch seem to be missing. Warning messages about offset or fuzz may also be encountered when applying a patch. Do not worry about these warnings; the patch was still successfully applied.
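As an illustration only, with a hypothetical patch name that is not one of the book's patches, a successful application that still prints such warnings typically looks like this:
patch -Np1 -i ../somepkg-1.0-fix-1.patch
# Hunk #1 succeeded at 123 (offset 4 lines).
# Hunk #2 succeeded at 210 with fuzz 2.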
During the compilation of most packages, some warnings will scroll by on the screen. These are normal and can safely be ignored. These warnings are usually about deprecated, but not invalid, use of the C or C++ syntax. C standards change fairly often, and some packages have not yet been updated. This is not a serious problem, but it does cause the warnings to appear.
Check one last time that the LFS environment variable is set up properly:
echo $LFS
Make sure the output shows the path to the LFS partition's mount point, which is /mnt/lfs, using our example.
Finally, two important items must be emphasized:
The build instructions assume that the Host System Requirements, including symbolic links, have been set properly (a quick way to verify them is sketched after this list):
bash is the shell in use.
sh is a symbolic link to bash.
/usr/bin/awk is a symbolic link to gawk.
/usr/bin/yacc is a symbolic link to bison, or to a small script that executes bison.
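One quick, hedged way to verify the symbolic links mentioned above on the host (the expected targets are bash, gawk, and bison or a small bison wrapper script) is:
readlink -f /bin/sh
readlink -f /usr/bin/awk
readlink -f /usr/bin/yacc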
Here is a synopsis of the build process; a generic sketch of these steps follows the list.
Place all the sources and patches in a directory that will be accessible from the chroot environment, such as /mnt/lfs/sources/.
Change to the /mnt/lfs/sources/ directory.
Using the tar program, extract the package to be built. In Chapter 5 and Chapter 6, ensure you are the lfs user when extracting the package.
Do not use any method except the tar command to extract the source code. Notably, using the cp -R command to copy the source code tree somewhere else can destroy timestamps in the source tree, and cause the build to fail.
Change to the directory created when the package was extracted.
Follow the instructions for building the package.
Change back to the sources directory when the build is complete.
Delete the extracted source directory unless instructed otherwise.
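As a rough sketch only, using a hypothetical package name, the synopsis above corresponds to a sequence like this; always follow the per-package instructions in the book rather than this generic outline:
cd /mnt/lfs/sources
tar -xf somepkg-1.0.tar.xz
cd somepkg-1.0
# ...build and install the package as instructed in its section...
cd ..
rm -rf somepkg-1.0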
This chapter shows how to build a cross-compiler and its associated tools. Although here cross-compilation is faked, the principles are the same as for a real cross-toolchain.
The programs compiled in this chapter will be installed under
the $LFS/tools
directory to keep
them separate from the files installed in the following
chapters. The libraries, on the other hand, are installed into
their final place, since they pertain to the system we want to
build.
The Binutils package contains a linker, an assembler, and other tools for handling object files.
Go back and re-read the notes in the section titled General Compilation Instructions. Understanding the notes labeled important can save you a lot of problems later.
It is important that Binutils be the first package compiled because both Glibc and GCC perform various tests on the available linker and assembler to determine which of their own features to enable.
The Binutils documentation recommends building Binutils in a dedicated build directory:
mkdir -v build
cd build
In order for the SBU values listed in the rest of the book to be of any use, measure the time it takes to build this package from the configuration, up to and including the first install. To achieve this easily, wrap the commands in a time command like this: time { ../configure ... && make && make install; }.
Now prepare Binutils for compilation:
../configure --prefix=$LFS/tools \
             --with-sysroot=$LFS \
             --target=$LFS_TGT   \
             --disable-nls       \
             --enable-gprofng=no \
             --disable-werror    \
             --enable-new-dtags  \
             --enable-default-hash-style=gnu
The meaning of the configure options:
--prefix=$LFS/tools
This tells the configure script to prepare to install
the Binutils programs in the $LFS/tools
directory.
--with-sysroot=$LFS
For cross compilation, this tells the build system to look in $LFS for the target system libraries as needed.
--target=$LFS_TGT
Because the machine description in the LFS_TGT variable is slightly different than the value returned by the config.guess script, this switch will tell the configure script to adjust Binutils' build system for building a cross linker.
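For illustration only (the triplets below are typical values on an x86_64 host and will differ on other machines), the difference can be seen by comparing:
echo $LFS_TGT       # e.g. x86_64-lfs-linux-gnu
../config.guess     # e.g. x86_64-pc-linux-gnu, when run from the build directory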
--disable-nls
This disables internationalization as i18n is not needed for the temporary tools.
--enable-gprofng=no
This disables building gprofng which is not needed for the temporary tools.
--disable-werror
This prevents the build from stopping in the event that there are warnings from the host's compiler.
--enable-new-dtags
This makes the linker use the “runpath” tag for embedding library search paths into executables and shared libraries, instead of the traditional “rpath” tag. It makes debugging dynamically linked executables easier and works around potential issues in the test suite of some packages.
--enable-default-hash-style=gnu
By default, the linker would generate both the GNU-style hash table and the classic ELF hash table for shared libraries and dynamically linked executables. The hash tables are only intended for a dynamic linker to perform symbol lookup. On LFS the dynamic linker (provided by the Glibc package) will always use the GNU-style hash table which is faster to query. So the classic ELF hash table is completely useless. This makes the linker only generate the GNU-style hash table by default, so we can avoid wasting time to generate the classic ELF hash table when we build the packages, or wasting disk space to store it.
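If you are curious, a hedged way to see which hash sections a shared object carries (libfoo.so is a placeholder for any library you want to inspect) is:
readelf -S libfoo.so | grep -E '\.gnu\.hash|\.hash'
# objects linked with the GNU-style default carry .gnu.hash but no classic .hash section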
Continue with compiling the package:
make
Install the package:
make install
Details on this package are located in Section 8.20.2, “Contents of Binutils.”
The GCC package contains the GNU compiler collection, which includes the C and C++ compilers.
GCC requires the GMP, MPFR and MPC packages. As these packages may not be included in your host distribution, they will be built with GCC. Unpack each package into the GCC source directory and rename the resulting directories so the GCC build procedures will automatically use them:
There are frequent misunderstandings about this chapter. The procedures are the same as every other chapter, as explained earlier (Package build instructions). First, extract the gcc-14.2.0 tarball from the sources directory, and then change to the directory created. Only then should you proceed with the instructions below.
tar -xf ../mpfr-4.2.1.tar.xz
mv -v mpfr-4.2.1 mpfr
tar -xf ../gmp-6.3.0.tar.xz
mv -v gmp-6.3.0 gmp
tar -xf ../mpc-1.3.1.tar.gz
mv -v mpc-1.3.1 mpc
On x86_64 hosts, set the default directory name for 64-bit libraries to “lib”:
case $(uname -m) in
  x86_64)
    sed -e '/m64=/s/lib64/lib/' \
        -i.orig gcc/config/i386/t-linux64
  ;;
esac
The GCC documentation recommends building GCC in a dedicated build directory:
mkdir -v build
cd build
Prepare GCC for compilation:
../configure                  \
    --target=$LFS_TGT         \
    --prefix=$LFS/tools       \
    --with-glibc-version=2.40 \
    --with-sysroot=$LFS       \
    --with-newlib             \
    --without-headers         \
    --enable-default-pie      \
    --enable-default-ssp      \
    --disable-nls             \
    --disable-shared          \
    --disable-multilib        \
    --disable-threads         \
    --disable-libatomic       \
    --disable-libgomp         \
    --disable-libquadmath     \
    --disable-libssp          \
    --disable-libvtv          \
    --disable-libstdcxx       \
    --enable-languages=c,c++
The meaning of the configure options:
--with-glibc-version=2.40
This option specifies the version of Glibc which will be used on the target. It is not relevant to the libc of the host distro because everything compiled by pass1 GCC will run in the chroot environment, which is isolated from libc of the host distro.
--with-newlib
Since a working C library is not yet available, this ensures that the inhibit_libc constant is defined when building libgcc. This prevents the compiling of any code that requires libc support.
--without-headers
When creating a complete cross-compiler, GCC requires standard headers compatible with the target system. For our purposes these headers will not be needed. This switch prevents GCC from looking for them.
--enable-default-pie and
--enable-default-ssp
Those switches allow GCC to compile programs with some hardening security features (more information on those in the note on PIE and SSP in chapter 8) by default. They are not strictly needed at this stage, since the compiler will only produce temporary executables. But it is cleaner to have the temporary packages be as close as possible to the final ones.
--disable-shared
This switch forces GCC to link its internal libraries statically. We need this because the shared libraries require Glibc, which is not yet installed on the target system.
--disable-multilib
On x86_64, LFS does not support a multilib configuration. This switch is harmless for x86.
--disable-threads, --disable-libatomic,
--disable-libgomp, --disable-libquadmath,
--disable-libssp, --disable-libvtv,
--disable-libstdcxx
These switches disable support for threading, libatomic, libgomp, libquadmath, libssp, libvtv, and the C++ standard library respectively. These features may fail to compile when building a cross-compiler and are not necessary for the task of cross-compiling the temporary libc.
--enable-languages=c,c++
This option ensures that only the C and C++ compilers are built. These are the only languages needed now.
Compile GCC by running:
make
Install the package:
make install
This build of GCC has installed a couple of internal system headers. Normally one of them, limits.h, would in turn include the corresponding system limits.h header, in this case, $LFS/usr/include/limits.h. However, at the time of this build of GCC $LFS/usr/include/limits.h does not exist, so the internal header that has just been installed is a partial, self-contained file and does not include the extended features of the system header. This is adequate for building Glibc, but the full internal header will be needed later. Create a full version of the internal header using a command that is identical to what the GCC build system does in normal circumstances:
The command below shows an example of nested command substitution using two methods: backquotes and a $() construct. It could be rewritten using the same method for both substitutions, but is shown this way to demonstrate how they can be mixed. Generally the $() method is preferred.
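As a minimal illustration of mixing the two forms (the command below is hypothetical and not part of the build), the inner $() expands first and its result is passed to the outer backquoted command:
echo `dirname $(command -v bash)`
# prints the directory containing the bash binary, e.g. /usr/bin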
cd ..
cat gcc/limitx.h gcc/glimits.h gcc/limity.h > \
  `dirname $($LFS_TGT-gcc -print-libgcc-file-name)`/include/limits.h
Details on this package are located in Section 8.29.2, “Contents of GCC.”
The Linux API Headers (in linux-6.12.5.tar.xz) expose the kernel's API for use by Glibc.
The Linux kernel needs to expose an Application Programming Interface (API) for the system's C library (Glibc in LFS) to use. This is done by way of sanitizing various C header files that are shipped in the Linux kernel source tarball.
Make sure there are no stale files embedded in the package:
make mrproper
Now extract the user-visible kernel headers from the source. The recommended make target “headers_install” cannot be used, because it requires rsync, which may not be available. The headers are first placed in ./usr, then copied to the needed location.
make headers
find usr/include -type f ! -name '*.h' -delete
cp -rv usr/include $LFS/usr
Installed headers: the Linux API ASM Headers, ASM Generic Headers, DRM Headers, Linux Headers, Miscellaneous Headers, MTD Headers, RDMA Headers, SCSI Headers, Sound Headers, Video Headers, and Xen Headers (installed in the corresponding subdirectories of $LFS/usr/include).
The Glibc package contains the main C library. This library provides the basic routines for allocating memory, searching directories, opening and closing files, reading and writing files, string handling, pattern matching, arithmetic, and so on.
First, create a symbolic link for LSB compliance. Additionally, for x86_64, create a compatibility symbolic link required for proper operation of the dynamic library loader:
case $(uname -m) in
    i?86)   ln -sfv ld-linux.so.2 $LFS/lib/ld-lsb.so.3
    ;;
    x86_64) ln -sfv ../lib/ld-linux-x86-64.so.2 $LFS/lib64
            ln -sfv ../lib/ld-linux-x86-64.so.2 $LFS/lib64/ld-lsb-x86-64.so.3
    ;;
esac
The above command is correct. The ln command has several syntactic versions, so be sure to check info coreutils ln and ln(1) before reporting what may appear to be an error.
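As a hedged reminder of two of the forms described in ln(1) (TARGET and LINK_NAME are placeholders):
ln -s TARGET LINK_NAME    # creates LINK_NAME pointing to TARGET
ln -s TARGET DIRECTORY    # creates DIRECTORY/<basename of TARGET> pointing to TARGET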
Some of the Glibc programs use the non-FHS-compliant
/var/db
directory to store
their runtime data. Apply the following patch to make such
programs store their runtime data in the FHS-compliant
locations:
patch -Np1 -i ../glibc-2.40-fhs-1.patch
The Glibc documentation recommends building Glibc in a dedicated build directory:
mkdir -v build
cd build
Ensure that the ldconfig and sln utilities are installed into /usr/sbin:
echo "rootsbindir=/usr/sbin" > configparms
Next, prepare Glibc for compilation:
../configure                             \
      --prefix=/usr                      \
      --host=$LFS_TGT                    \
      --build=$(../scripts/config.guess) \
      --enable-kernel=5.4                \
      --with-headers=$LFS/usr/include    \
      --disable-nscd                     \
      libc_cv_slibdir=/usr/lib
The meaning of the configure options:
--host=$LFS_TGT, --build=$(../scripts/config.guess)
The combined effect of these switches is that Glibc's build system configures itself to be cross-compiled, using the cross-linker and cross-compiler in $LFS/tools.
--enable-kernel=5.4
This tells Glibc to compile the library with support for 5.4 and later Linux kernels. Workarounds for older kernels are not enabled.
--with-headers=$LFS/usr/include
This tells Glibc to compile itself against the headers recently installed to the $LFS/usr/include directory, so that it knows exactly what features the kernel has and can optimize itself accordingly.
libc_cv_slibdir=/usr/lib
This ensures that the library is installed in /usr/lib instead of the default /lib64 on 64-bit machines.
--disable-nscd
Do not build the name service cache daemon which is no longer used.
During this stage the following warning might appear:
configure: WARNING:
*** These auxiliary programs are missing or
*** incompatible versions: msgfmt
*** some features will be disabled.
*** Check the INSTALL file for required versions.
The missing or incompatible msgfmt program is generally harmless. This msgfmt program is part of the Gettext package, which the host distribution should provide.
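If you want to check whether your host provides it, a purely optional test is:
msgfmt --version
# if the command is missing or old, the warning above remains harmless for this build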
There have been reports that this package may fail when
building as a “parallel make.” If that occurs, rerun
the make command with the -j1
option.
Compile the package:
make
Install the package:
If LFS is not properly set, and despite the recommendations, you are building as root, the next command will install the newly built Glibc to your host system, which will almost certainly render it unusable. So double-check that the environment is correctly set, and that you are not root, before running the following command.
make DESTDIR=$LFS install
The meaning of the make install option:
DESTDIR=$LFS
The DESTDIR make variable is used by almost all packages to define the location where the package should be installed. If it is not set, it defaults to the root (/) directory. Here we specify that the package is installed in $LFS, which will become the root directory in Section 7.4, “Entering the Chroot Environment”.
Fix a hard coded path to the executable loader in the ldd script:
sed '/RTLDLIST=/s@/usr@@g' -i $LFS/usr/bin/ldd
At this point, it is imperative to stop and ensure that the basic functions (compiling and linking) of the new toolchain are working as expected. To perform a sanity check, run the following commands:
echo 'int main(){}' | $LFS_TGT-gcc -xc -
readelf -l a.out | grep ld-linux
If everything is working correctly, there should be no errors, and the output of the last command will be of the form:
[Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
Note that for 32-bit machines, the interpreter name will be /lib/ld-linux.so.2.
If the output is not as shown above, or there is no output at all, then something is wrong. Investigate and retrace the steps to find out where the problem is and correct it. This issue must be resolved before continuing.
Once all is well, clean up the test file:
rm -v a.out
Building the packages in the next chapter will serve as an additional check that the toolchain has been built properly. If some package, especially Binutils-pass2 or GCC-pass2, fails to build, it is an indication that something has gone wrong with the preceding Binutils, GCC, or Glibc installations.
Details on this package are located in Section 8.5.3, “Contents of Glibc.”
Libstdc++ is the standard C++ library. It is needed to compile C++ code (part of GCC is written in C++), but we had to defer its installation when we built gcc-pass1 because Libstdc++ depends on Glibc, which was not yet available in the target directory.
Libstdc++ is part of the
GCC sources. You should first unpack the GCC tarball and
change to the gcc-14.2.0
directory.
Create a separate build directory for Libstdc++ and enter it:
mkdir -v build
cd build
Prepare Libstdc++ for compilation:
../libstdc++-v3/configure      \
    --host=$LFS_TGT            \
    --build=$(../config.guess) \
    --prefix=/usr              \
    --disable-multilib         \
    --disable-nls              \
    --disable-libstdcxx-pch    \
    --with-gxx-include-dir=/tools/$LFS_TGT/include/c++/14.2.0
The meaning of the configure options:
--host=...
Specifies that the cross-compiler we have just built should be used instead of the one in /usr/bin.
--disable-libstdcxx-pch
This switch prevents the installation of precompiled include files, which are not needed at this stage.
--with-gxx-include-dir=/tools/$LFS_TGT/include/c++/14.2.0
This specifies the installation directory for include files. Because Libstdc++ is the standard C++ library for LFS, this directory should match the location where the C++ compiler ($LFS_TGT-g++) would search for the standard C++ include files. In a normal build, this information is automatically passed to the Libstdc++ configure options from the top level directory. In our case, this information must be explicitly given. The C++ compiler will prepend the sysroot path $LFS (specified when building GCC-pass1) to the include file search path, so it will actually search in $LFS/tools/$LFS_TGT/include/c++/14.2.0. The combination of the DESTDIR variable (in the make install command below) and this switch causes the headers to be installed there.
Compile Libstdc++ by running:
make
Install the library:
make DESTDIR=$LFS install
Remove the libtool archive files because they are harmful for cross-compilation:
rm -v $LFS/usr/lib/lib{stdc++{,exp,fs},supc++}.la
Details on this package are located in Section 8.29.2, “Contents of GCC.”
This chapter shows how to cross-compile basic utilities using the just built cross-toolchain. Those utilities are installed into their final location, but cannot be used yet. Basic tasks still rely on the host's tools. Nevertheless, the installed libraries are used when linking.
Using the utilities will be possible in the next chapter after entering the “chroot” environment. But all the packages built in the present chapter need to be built before we do that. Therefore we cannot be independent of the host system yet.
Once again, let us recall that improper setting of LFS together with building as root may render your computer unusable. This whole chapter must be done as user lfs, with the environment as described in Section 4.4, “Setting Up the Environment.”
The M4 package contains a macro processor.
Prepare M4 for compilation:
./configure --prefix=/usr   \
            --host=$LFS_TGT \
            --build=$(build-aux/config.guess)
Compile the package:
make
Install the package:
make DESTDIR=$LFS install
Details on this package are located in Section 8.13.2, “Contents of M4.”
The Ncurses package contains libraries for terminal-independent handling of character screens.
First, run the following commands to build the “tic” program on the build host:
mkdir build
pushd build
  ../configure AWK=gawk
  make -C include
  make -C progs tic
popd
Prepare Ncurses for compilation:
./configure --prefix=/usr                \
            --host=$LFS_TGT              \
            --build=$(./config.guess)    \
            --mandir=/usr/share/man      \
            --with-manpage-format=normal \
            --with-shared                \
            --without-normal             \
            --with-cxx-shared            \
            --without-debug              \
            --without-ada                \
            --disable-stripping          \
            AWK=gawk
The meaning of the new configure options:
--with-manpage-format=normal
This prevents Ncurses from installing compressed manual pages, which may happen if the host distribution itself has compressed manual pages.
--with-shared
This makes Ncurses build and install shared C libraries.
--without-normal
This prevents Ncurses from building and installing static C libraries.
--without-debug
This prevents Ncurses from building and installing debug libraries.
--with-cxx-shared
This makes Ncurses build and install shared C++ bindings. It also prevents it from building and installing static C++ bindings.
--without-ada
This ensures that Ncurses does not build support for the Ada compiler, which may be present on the host but will not be available once we enter the chroot environment.
--disable-stripping
This switch prevents the building system from using the strip program from the host. Using host tools on cross-compiled programs can cause failure.
AWK=gawk
This switch prevents the building system from using the mawk program from the host. Some versions of mawk can cause this package to fail to build.
Compile the package:
make
Install the package:
make DESTDIR=$LFS TIC_PATH=$(pwd)/build/progs/tic install
ln -sv libncursesw.so $LFS/usr/lib/libncurses.so
sed -e 's/^#if.*XOPEN.*$/#if 1/' \
    -i $LFS/usr/include/curses.h
The meaning of the install options:
TIC_PATH=$(pwd)/build/progs/tic
We need to pass the path of the newly built tic program that runs on the building machine, so the terminal database can be created without errors.
The libncurses.so library is needed by a few packages we will build soon. We create this symlink to use libncursesw.so as a replacement.
The header file curses.h contains the definition of various Ncurses data structures. With different preprocessor macro definitions, two different sets of the data structure definitions may be used: the 8-bit definition is compatible with libncurses.so and the wide-character definition is compatible with libncursesw.so. Since we are using libncursesw.so as a replacement for libncurses.so, edit the header file so it will always use the wide-character data structure definition compatible with libncursesw.so.
Details on this package are located in Section 8.30.2, “Contents of Ncurses.”
The Bash package contains the Bourne-Again Shell.
Prepare Bash for compilation:
./configure --prefix=/usr                      \
            --build=$(sh support/config.guess) \
            --host=$LFS_TGT                    \
            --without-bash-malloc
The meaning of the configure options:
--without-bash-malloc
This option turns off the use of Bash's memory allocation (malloc) function which is known to cause segmentation faults. By turning this option off, Bash will use the malloc functions from Glibc which are more stable.
Compile the package:
make
Install the package:
make DESTDIR=$LFS install
Make a link for the programs that use sh for a shell:
ln -sv bash $LFS/bin/sh
Details on this package are located in Section 8.36.2, “Contents of Bash.”
The Coreutils package contains the basic utility programs needed by every operating system.
Prepare Coreutils for compilation:
./configure --prefix=/usr                     \
            --host=$LFS_TGT                   \
            --build=$(build-aux/config.guess) \
            --enable-install-program=hostname \
            --enable-no-install-program=kill,uptime
The meaning of the configure options:
--enable-install-program=hostname
This enables the hostname binary to be built and installed – it is disabled by default but is required by the Perl test suite.
Compile the package:
make
Install the package:
make DESTDIR=$LFS install
Move programs to their final expected locations. Although this is not necessary in this temporary environment, we must do so because some programs hardcode executable locations:
mv -v $LFS/usr/bin/chroot $LFS/usr/sbin
mkdir -pv $LFS/usr/share/man/man8
mv -v $LFS/usr/share/man/man1/chroot.1 $LFS/usr/share/man/man8/chroot.8
sed -i 's/"1"/"8"/' $LFS/usr/share/man/man8/chroot.8
Details on this package are located in Section 8.58.2, “Contents of Coreutils.”
The Diffutils package contains programs that show the differences between files or directories.
Prepare Diffutils for compilation:
./configure --prefix=/usr   \
            --host=$LFS_TGT \
            --build=$(./build-aux/config.guess)
Compile the package:
make
Install the package:
make DESTDIR=$LFS install
Details on this package are located in Section 8.60.2, “Contents of Diffutils.”
The File package contains a utility for determining the type of a given file or files.
The file command on the build host needs to be the same version as the one we are building in order to create the signature file. Run the following commands to make a temporary copy of the file command:
mkdir build
pushd build
  ../configure --disable-bzlib      \
               --disable-libseccomp \
               --disable-xzlib      \
               --disable-zlib
  make
popd
The meaning of the new configure option:
--disable-*
The configuration script attempts to use some packages from the host distribution if the corresponding library files exist. It may cause compilation failure if a library file exists, but the corresponding header files do not. These options prevent using these unneeded capabilities from the host.
Prepare File for compilation:
./configure --prefix=/usr --host=$LFS_TGT --build=$(./config.guess)
Compile the package:
make FILE_COMPILE=$(pwd)/build/src/file
Install the package:
make DESTDIR=$LFS install
Remove the libtool archive file because it is harmful for cross compilation:
rm -v $LFS/usr/lib/libmagic.la
Details on this package are located in Section 8.11.2, “Contents of File.”
The Findutils package contains programs to find files. Programs are provided to search through all the files in a directory tree and to create, maintain, and search a database (often faster than the recursive find, but unreliable unless the database has been updated recently). Findutils also supplies the xargs program, which can be used to run a specified command on each file selected by a search.
Prepare Findutils for compilation:
./configure --prefix=/usr                   \
            --localstatedir=/var/lib/locate \
            --host=$LFS_TGT                 \
            --build=$(build-aux/config.guess)
Compile the package:
make
Install the package:
make DESTDIR=$LFS install
Details on this package are located in Section 8.62.2, “Contents of Findutils.”
The Gawk package contains programs for manipulating text files.
First, ensure some unneeded files are not installed:
sed -i 's/extras//' Makefile.in
Prepare Gawk for compilation:
./configure --prefix=/usr   \
            --host=$LFS_TGT \
            --build=$(build-aux/config.guess)
Compile the package:
make
Install the package:
make DESTDIR=$LFS install
Details on this package are located in Section 8.61.2, “Contents of Gawk.”
The Grep package contains programs for searching through the contents of files.
Prepare Grep for compilation:
./configure --prefix=/usr   \
            --host=$LFS_TGT \
            --build=$(./build-aux/config.guess)
Compile the package:
make
Install the package:
make DESTDIR=$LFS install
Details on this package are located in Section 8.35.2, “Contents of Grep.”
The Gzip package contains programs for compressing and decompressing files.
Prepare Gzip for compilation:
./configure --prefix=/usr --host=$LFS_TGT
Compile the package:
make
Install the package:
make DESTDIR=$LFS install
Details on this package are located in Section 8.65.2, “Contents of Gzip.”
The Make package contains a program for controlling the generation of executables and other non-source files of a package from source files.
Prepare Make for compilation:
./configure --prefix=/usr   \
            --without-guile \
            --host=$LFS_TGT \
            --build=$(build-aux/config.guess)
The meaning of the new configure option:
--without-guile
Although we are cross-compiling, configure tries to use guile from the build host if it finds it. This makes compilation fail, so this switch prevents using it.
Compile the package:
make
Install the package:
make DESTDIR=$LFS install
Details on this package are located in Section 8.69.2, “Contents of Make.”
The Patch package contains a program for modifying or creating files by applying a “patch” file typically created by the diff program.
Prepare Patch for compilation:
./configure --prefix=/usr   \
            --host=$LFS_TGT \
            --build=$(build-aux/config.guess)
Compile the package:
make
Install the package:
make DESTDIR=$LFS install
Details on this package are located in Section 8.70.2, “Contents of Patch.”
The Sed package contains a stream editor.
Prepare Sed for compilation:
./configure --prefix=/usr   \
            --host=$LFS_TGT \
            --build=$(./build-aux/config.guess)
Compile the package:
make
Install the package:
make DESTDIR=$LFS install
Details on this package are located in Section 8.31.2, “Contents of Sed.”
The Tar package provides the ability to create tar archives as well as perform various other kinds of archive manipulation. Tar can be used on previously created archives to extract files, to store additional files, or to update or list files which were already stored.
Prepare Tar for compilation:
./configure --prefix=/usr   \
            --host=$LFS_TGT \
            --build=$(build-aux/config.guess)
Compile the package:
make
Install the package:
make DESTDIR=$LFS install
Details on this package are located in Section 8.71.2, “Contents of Tar.”
The Xz package contains programs for compressing and decompressing files. It provides capabilities for the lzma and the newer xz compression formats. Compressing text files with xz yields a better compression percentage than with the traditional gzip or bzip2 commands.
Prepare Xz for compilation:
./configure --prefix=/usr                     \
            --host=$LFS_TGT                   \
            --build=$(build-aux/config.guess) \
            --disable-static                  \
            --docdir=/usr/share/doc/xz-5.6.3
Compile the package:
make
Install the package:
make DESTDIR=$LFS install
Remove the libtool archive file because it is harmful for cross compilation:
rm -v $LFS/usr/lib/liblzma.la
Details on this package are located in Section 8.8.2, “Contents of Xz.”
The Binutils package contains a linker, an assembler, and other tools for handling object files.
The Binutils build system relies on a shipped libtool copy to link against internal static libraries, but the libiberty and zlib copies shipped in the package do not use libtool. This inconsistency may cause the produced binaries to be mistakenly linked against libraries from the host distro. Work around this issue:
sed '6009s/$add_dir//' -i ltmain.sh
Create a separate build directory again:
mkdir -v build
cd build
Prepare Binutils for compilation:
../configure                   \
    --prefix=/usr              \
    --build=$(../config.guess) \
    --host=$LFS_TGT            \
    --disable-nls              \
    --enable-shared            \
    --enable-gprofng=no        \
    --disable-werror           \
    --enable-64-bit-bfd        \
    --enable-new-dtags         \
    --enable-default-hash-style=gnu
The meaning of the new configure options:
--enable-shared
Builds libbfd
as a shared
library.
--enable-64-bit-bfd
Enables 64-bit support (on hosts with smaller word sizes). This may not be needed on 64-bit systems, but it does no harm.
Compile the package:
make
Install the package:
make DESTDIR=$LFS install
Remove the libtool archive files because they are harmful for cross compilation, and remove unnecessary static libraries:
rm -v $LFS/usr/lib/lib{bfd,ctf,ctf-nobfd,opcodes,sframe}.{a,la}
Details on this package are located in Section 8.20.2, “Contents of Binutils.”
The GCC package contains the GNU compiler collection, which includes the C and C++ compilers.
As in the first build of GCC, the GMP, MPFR, and MPC packages are required. Unpack the tarballs and move them into the required directories:
tar -xf ../mpfr-4.2.1.tar.xz
mv -v mpfr-4.2.1 mpfr
tar -xf ../gmp-6.3.0.tar.xz
mv -v gmp-6.3.0 gmp
tar -xf ../mpc-1.3.1.tar.gz
mv -v mpc-1.3.1 mpc
If building on x86_64, change the default directory name for 64-bit libraries to “lib”:
case $(uname -m) in
  x86_64)
    sed -e '/m64=/s/lib64/lib/' \
        -i.orig gcc/config/i386/t-linux64
  ;;
esac
Override the building rule of libgcc and libstdc++ headers, to allow building these libraries with POSIX threads support:
sed '/thread_header =/s/@.*@/gthr-posix.h/' \
    -i libgcc/Makefile.in libstdc++-v3/include/Makefile.in
Create a separate build directory again:
mkdir -v build
cd build
Before starting to build GCC, remember to unset any environment variables that override the default optimization flags.
Now prepare GCC for compilation:
../configure                                  \
    --build=$(../config.guess)                \
    --host=$LFS_TGT                           \
    --target=$LFS_TGT                         \
    LDFLAGS_FOR_TARGET=-L$PWD/$LFS_TGT/libgcc \
    --prefix=/usr                             \
    --with-build-sysroot=$LFS                 \
    --enable-default-pie                      \
    --enable-default-ssp                      \
    --disable-nls                             \
    --disable-multilib                        \
    --disable-libatomic                       \
    --disable-libgomp                         \
    --disable-libquadmath                     \
    --disable-libsanitizer                    \
    --disable-libssp                          \
    --disable-libvtv                          \
    --enable-languages=c,c++
The meaning of the new configure options:
--with-build-sysroot=$LFS
Normally, using --host
ensures that a
cross-compiler is used for building GCC, and that
compiler knows that it has to look for headers and
libraries in $LFS
. But
the build system for GCC uses other tools, which are
not aware of this location. This switch is needed so
those tools will find the needed files in $LFS
, and not on the host.
--target=$LFS_TGT
We are cross-compiling GCC, so it's impossible to build target libraries (libgcc and libstdc++) with the GCC binaries compiled in this pass—those binaries won't run on the host. The GCC build system will attempt to use the host's C and C++ compilers as a workaround by default. Building the GCC target libraries with a different version of GCC is not supported, so using the host's compilers may cause the build to fail. This parameter ensures the libraries are built by GCC pass 1.
LDFLAGS_FOR_TARGET=...
Allow libstdc++
to use
the libgcc
being built in
this pass, instead of the previous version built in
gcc-pass1. The
previous version cannot properly support C++ exception
handling because it was built without libc support.
--disable-libsanitizer
Disable GCC sanitizer runtime libraries. They are not needed for the temporary installation. In gcc-pass1 it was implied by --disable-libstdcxx, and now we can explicitly pass it.
Compile the package:
make
Install the package:
make DESTDIR=$LFS install
As a finishing touch, create a utility symlink. Many programs and scripts run cc instead of gcc, which is used to keep programs generic and therefore usable on all kinds of UNIX systems where the GNU C compiler is not always installed. Running cc leaves the system administrator free to decide which C compiler to install:
ln -sv gcc $LFS/usr/bin/cc
Details on this package are located in Section 8.29.2, “Contents of GCC.”
This chapter shows how to build the last missing bits of the temporary system: the tools needed to build the various packages. Now that all circular dependencies have been resolved, a “chroot” environment, completely isolated from the host operating system (except for the running kernel), can be used for the build.
For proper operation of the isolated environment, some communication with the running kernel must be established. This is done via the so-called Virtual Kernel File Systems, which will be mounted before entering the chroot environment. You may want to verify that they are mounted by issuing the findmnt command.
Until Section 7.4, “Entering the Chroot Environment”, the commands must be run as root, with the LFS variable set. After entering chroot, all commands are run as root, fortunately without access to the OS of the computer you built LFS on. Be careful anyway, as it is easy to destroy the whole LFS system with bad commands.
The commands in the remainder of this book must be performed while logged in as user root and no longer as user lfs. Also, double check that $LFS is set in root's environment.
Currently, the whole directory hierarchy in $LFS is owned by the user lfs, a user that exists only on the host system. If the directories and files under $LFS are kept as they are, they will be owned by a user ID without a corresponding account. This is dangerous because a user account created later could get this same user ID and would own all the files under $LFS, thus exposing these files to possible malicious manipulation.
To address this issue, change the ownership of the $LFS/* directories to user root by running the following command:
chown --from lfs -R root:root $LFS/{usr,lib,var,etc,bin,sbin,tools}
case $(uname -m) in
  x86_64) chown --from lfs -R root:root $LFS/lib64 ;;
esac
Applications running in userspace utilize various file systems created by the kernel to communicate with the kernel itself. These file systems are virtual: no disk space is used for them. The content of these file systems resides in memory. These file systems must be mounted in the $LFS directory tree so the applications can find them in the chroot environment.
Begin by creating the directories on which these virtual file systems will be mounted:
mkdir -pv $LFS/{dev,proc,sys,run}
During a normal boot of an LFS system, the kernel automatically mounts the devtmpfs file system on the /dev directory; the kernel creates device nodes on that virtual file system during the boot process, or when a device is first detected or accessed. The udev daemon may change the ownership or permissions of the device nodes created by the kernel, and create new device nodes or symlinks, to ease the work of distro maintainers and system administrators. (See Section 9.3.2.2, “Device Node Creation” for details.) If the host kernel supports devtmpfs, we can simply mount a devtmpfs at $LFS/dev and rely on the kernel to populate it.
But some host kernels lack devtmpfs support; these host distros use different methods to create the content of /dev. So the only host-agnostic way to populate the $LFS/dev directory is by bind mounting the host system's /dev directory. A bind mount is a special type of mount that makes a directory subtree or a file visible at some other location. Use the following command to do this.
mount -v --bind /dev $LFS/dev
Now mount the remaining virtual kernel file systems:
mount -vt devpts devpts -o gid=5,mode=0620 $LFS/dev/pts
mount -vt proc proc $LFS/proc
mount -vt sysfs sysfs $LFS/sys
mount -vt tmpfs tmpfs $LFS/run
The meaning of the mount options for devpts:
gid=5
This ensures that all devpts-created device nodes are
owned by group ID 5. This is the ID we will use later
on for the tty
group.
We use the group ID instead of a name, since the host
system might use a different ID for its tty
group.
mode=0620
This ensures that all devpts-created device nodes have mode 0620 (user readable and writable, group writable). Together with the option above, this ensures that devpts will create device nodes that meet the requirements of grantpt(), meaning the Glibc pt_chown helper binary (which is not installed by default) is not necessary.
In some host systems, /dev/shm is a symbolic link to a directory, typically /run/shm. The /run tmpfs was mounted above, so in this case only a directory needs to be created with the correct permissions.
In other host systems /dev/shm is a mount point for a tmpfs. In that case the mount of /dev above will only create /dev/shm as a directory in the chroot environment. In this situation we must explicitly mount a tmpfs:
if [ -h $LFS/dev/shm ]; then
  install -v -d -m 1777 $LFS$(realpath /dev/shm)
else
  mount -vt tmpfs -o nosuid,nodev tmpfs $LFS/dev/shm
fi
Now that all the packages which are required to build the rest
of the needed tools are on the system, it is time to enter the
chroot environment and finish installing the temporary tools.
This environment will also be used to install the final system.
As user root, run the following command to enter the environment that is, at the moment, populated with nothing but temporary tools:
chroot "$LFS" /usr/bin/env -i   \
    HOME=/root                  \
    TERM="$TERM"                \
    PS1='(lfs chroot) \u:\w\$ ' \
    PATH=/usr/bin:/usr/sbin     \
    MAKEFLAGS="-j$(nproc)"      \
    TESTSUITEFLAGS="-j$(nproc)" \
    /bin/bash --login
If you don't want to use all available logical cores, replace $(nproc) with the number of logical cores you want to use for building packages in this chapter and the following chapters. The test suites of some packages (notably Autoconf, Libtool, and Tar) in Chapter 8 are not affected by MAKEFLAGS; they use a TESTSUITEFLAGS environment variable instead. We set that here as well for running these test suites with multiple cores.
The -i option given to the env command will clear all the variables in the chroot environment. After that, only the HOME, TERM, PS1, and PATH variables are set again. The TERM=$TERM construct sets the TERM variable inside chroot to the same value as outside chroot. This variable is needed so programs like vim and less can operate properly. If other variables are desired, such as CFLAGS or CXXFLAGS, this is a good place to set them.
From this point on, there is no need to use the LFS variable any more because all work will be restricted to the LFS file system; the chroot command runs the Bash shell with the root (/) directory set to $LFS.
Notice that /tools/bin is not in the PATH. This means that the cross toolchain will no longer be used.
Also note that the bash prompt will say I have no name! This is normal because the /etc/passwd file has not been created yet.
It is important that all the commands throughout the remainder of this chapter and the following chapters are run from within the chroot environment. If you leave this environment for any reason (rebooting for example), ensure that the virtual kernel filesystems are mounted as explained in Section 7.3.1, “Mounting and Populating /dev” and Section 7.3.2, “Mounting Virtual Kernel File Systems” and enter chroot again before continuing with the installation.
It is time to create the full directory structure in the LFS file system.
Some of the directories mentioned in this section may have already been created earlier with explicit instructions, or when installing some packages. They are repeated below for completeness.
Create some root-level directories that are not in the limited set required in the previous chapters by issuing the following command:
mkdir -pv /{boot,home,mnt,opt,srv}
Create the required set of subdirectories below the root-level by issuing the following commands:
mkdir -pv /etc/{opt,sysconfig}
mkdir -pv /lib/firmware
mkdir -pv /media/{floppy,cdrom}
mkdir -pv /usr/{,local/}{include,src}
mkdir -pv /usr/lib/locale
mkdir -pv /usr/local/{bin,lib,sbin}
mkdir -pv /usr/{,local/}share/{color,dict,doc,info,locale,man}
mkdir -pv /usr/{,local/}share/{misc,terminfo,zoneinfo}
mkdir -pv /usr/{,local/}share/man/man{1..8}
mkdir -pv /var/{cache,local,log,mail,opt,spool}
mkdir -pv /var/lib/{color,misc,locate}

ln -sfv /run /var/run
ln -sfv /run/lock /var/lock

install -dv -m 0750 /root
install -dv -m 1777 /tmp /var/tmp
Directories are, by default, created with permission mode 755, but this is not desirable everywhere. In the commands above, two changes are made—one to the home directory of user root, and another to the directories for temporary files.
The first mode change ensures that not just anybody can enter the /root directory—just like a normal user would do with his or her own home directory. The second mode change makes sure that any user can write to the /tmp and /var/tmp directories, but cannot remove another user's files from them. The latter is prohibited by the so-called “sticky bit,” the highest bit (1) in the 1777 bit mask.
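As an optional, hedged check run inside the chroot, the resulting modes can be inspected with:
ls -ld /root /tmp /var/tmp
# expect drwxr-x--- (750) for /root, and drwxrwxrwt (1777, the trailing t is the sticky bit) for /tmp and /var/tmp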
This directory tree is based on the Filesystem Hierarchy Standard (FHS) (available at https://refspecs.linuxfoundation.org/fhs.shtml). The FHS also specifies the optional existence of additional directories such as /usr/local/games and /usr/share/games. In LFS, we create only the directories that are really necessary. However, feel free to create more directories, if you wish.
The FHS does not mandate the existence of the directory /usr/lib64, and the LFS editors have decided not to use it. For the instructions in LFS and BLFS to work correctly, it is imperative that this directory be non-existent. From time to time you should verify that it does not exist, because it is easy to create it inadvertently, and this will probably break your system.
Historically, Linux maintained a list of the mounted file systems in the file /etc/mtab. Modern kernels maintain this list internally and expose it to the user via the /proc filesystem. To satisfy utilities that expect to find /etc/mtab, create the following symbolic link:
ln -sv /proc/self/mounts /etc/mtab
Create a basic /etc/hosts
file to
be referenced in some test suites, and in one of Perl's
configuration files as well:
cat > /etc/hosts << EOF
127.0.0.1 localhost $(hostname)
::1 localhost
EOF
In order for user root
to be
able to login and for the name “root” to be
recognized, there must be relevant entries in the /etc/passwd
and /etc/group
files.
Create the /etc/passwd
file by
running the following command:
cat > /etc/passwd << "EOF"
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/dev/null:/usr/bin/false
daemon:x:6:6:Daemon User:/dev/null:/usr/bin/false
messagebus:x:18:18:D-Bus Message Daemon User:/run/dbus:/usr/bin/false
uuidd:x:80:80:UUID Generation Daemon User:/dev/null:/usr/bin/false
nobody:x:65534:65534:Unprivileged User:/dev/null:/usr/bin/false
EOF
The actual password for root
will be set later.
Create the /etc/group
file by
running the following command:
cat > /etc/group << "EOF"
root:x:0:
bin:x:1:daemon
sys:x:2:
kmem:x:3:
tape:x:4:
tty:x:5:
daemon:x:6:
floppy:x:7:
disk:x:8:
lp:x:9:
dialout:x:10:
audio:x:11:
video:x:12:
utmp:x:13:
cdrom:x:15:
adm:x:16:
messagebus:x:18:
input:x:24:
mail:x:34:
kvm:x:61:
uuidd:x:80:
wheel:x:97:
users:x:999:
nogroup:x:65534:
EOF
The created groups are not part of any standard—they are groups
decided on in part by the requirements of the Udev
configuration in Chapter 9, and in part by common conventions
employed by a number of existing Linux distributions. In
addition, some test suites rely on specific users or groups.
The Linux Standard Base (LSB, available at https://refspecs.linuxfoundation.org/lsb.shtml)
only recommends that, besides the group root
with a Group ID (GID) of 0, a group
bin
with a GID of 1 be present.
The GID of 5 is widely used for the tty
group, and the number 5 is also used in
/etc/fstab
for the devpts
filesystem. All other group names
and GIDs can be chosen freely by the system administrator since
well-written programs do not depend on GID numbers, but rather
use the group's name.
The ID 65534 is used by the kernel for NFS and separate user
namespaces for unmapped users and groups (those exist on the
NFS server or the parent user namespace, but “do not exist” on the
local machine or in the separate namespace). We assign
nobody
and nogroup
to avoid an unnamed ID. But other
distros may treat this ID differently, so any portable program
should not depend on this assignment.
Some packages need a locale.
localedef -i C -f UTF-8 C.UTF-8
Some tests in Chapter 8 need a regular user. We add this user here and delete this account at the end of that chapter.
echo "tester:x:101:101::/home/tester:/bin/bash" >> /etc/passwd
echo "tester:x:101:" >> /etc/group
install -o tester -d /home/tester
To remove the “I have
no name!” prompt, start a new shell. Since the
/etc/passwd
and /etc/group
files have been created, user name
and group name resolution will now work:
exec /usr/bin/bash --login
The login, agetty, and init programs (and others) use a number of log files to record information such as who was logged into the system and when. However, these programs will not write to the log files if they do not already exist. Initialize the log files and give them proper permissions:
touch /var/log/{btmp,lastlog,faillog,wtmp}
chgrp -v utmp /var/log/lastlog
chmod -v 664 /var/log/lastlog
chmod -v 600 /var/log/btmp
The /var/log/wtmp file records all logins and logouts. The /var/log/lastlog file records when each user last logged in. The /var/log/faillog file records failed login attempts. The /var/log/btmp file records the bad login attempts.
The /run/utmp file records the users that are currently logged in. This file is created dynamically in the boot scripts.
The utmp, wtmp, btmp, and lastlog files use 32-bit integers for timestamps and they'll be fundamentally broken after year 2038. Many packages have stopped using them and other packages are going to stop using them. It is probably best to consider them deprecated.
The Gettext package contains utilities for internationalization and localization. These allow programs to be compiled with NLS (Native Language Support), enabling them to output messages in the user's native language.
For our temporary set of tools, we only need to install three programs from Gettext.
Prepare Gettext for compilation:
./configure --disable-shared
The meaning of the configure option:
--disable-shared
We do not need to install any of the shared Gettext libraries at this time, therefore there is no need to build them.
Compile the package:
make
Install the msgfmt, msgmerge, and xgettext programs:
cp -v gettext-tools/src/{msgfmt,msgmerge,xgettext} /usr/bin
Details on this package are located in Section 8.33.2, “Contents of Gettext.”
The Bison package contains a parser generator.
Prepare Bison for compilation:
./configure --prefix=/usr \ --docdir=/usr/share/doc/bison-3.8.2
The meaning of the new configure option:
--docdir=/usr/share/doc/bison-3.8.2
This tells the build system to install bison documentation into a versioned directory.
Compile the package:
make
Install the package:
make install
Details on this package are located in Section 8.34.2, “Contents of Bison.”
The Perl package contains the Practical Extraction and Report Language.
Prepare Perl for compilation:
sh Configure -des                                         \
             -D prefix=/usr                               \
             -D vendorprefix=/usr                         \
             -D useshrplib                                \
             -D privlib=/usr/lib/perl5/5.40/core_perl     \
             -D archlib=/usr/lib/perl5/5.40/core_perl     \
             -D sitelib=/usr/lib/perl5/5.40/site_perl     \
             -D sitearch=/usr/lib/perl5/5.40/site_perl    \
             -D vendorlib=/usr/lib/perl5/5.40/vendor_perl \
             -D vendorarch=/usr/lib/perl5/5.40/vendor_perl
The meaning of the Configure options:
-des
This is a combination of three options: -d uses defaults for all items; -e ensures completion of all tasks; -s silences non-essential output.
-D vendorprefix=/usr
This ensures perl knows how to tell packages where they should install their Perl modules.
-D useshrplib
Build libperl needed by some Perl modules as a shared library, instead of a static library.
-D privlib, -D archlib, -D sitelib, ...
These settings define where Perl looks for installed modules. The LFS editors chose to put them in a directory structure based on the MAJOR.MINOR version of Perl (5.40) which allows upgrading Perl to newer patch levels (the patch level is the last dot separated part in the full version string like 5.40.0) without reinstalling all of the modules.
Compile the package:
make
Install the package:
make install
Details on this package are located in Section 8.43.2, “Contents of Perl.”
The Python 3 package contains the Python development environment. It is useful for object-oriented programming, writing scripts, prototyping large programs, and developing entire applications. Python is an interpreted computer language.
There are two package files whose name starts with the
“python” prefix. The one to extract
from is Python-3.13.1.tar.xz
(notice the uppercase first letter).
Prepare Python for compilation:
./configure --prefix=/usr   \
            --enable-shared \
            --without-ensurepip
The meaning of the configure option:
--enable-shared
This switch prevents installation of static libraries.
--without-ensurepip
This switch disables the Python package installer, which is not needed at this stage.
Compile the package:
make
Some Python 3 modules can't be built now because the dependencies are not installed yet. For the ssl module, a message Python requires a OpenSSL 1.1.1 or newer is output. The message should be ignored. Just make sure the toplevel make command has not failed. The optional modules are not needed now and they will be built in Chapter 8.
Install the package:
make install
Details on this package are located in Section 8.52.2, “Contents of Python 3.”
The Texinfo package contains programs for reading, writing, and converting info pages.
Prepare Texinfo for compilation:
./configure --prefix=/usr
Compile the package:
make
Install the package:
make install
Details on this package are located in Section 8.72.2, “Contents of Texinfo.”
The Util-linux package contains miscellaneous utility programs.
The FHS recommends using the /var/lib/hwclock
directory instead of the
usual /etc
directory as the
location for the adjtime
file.
Create this directory with:
mkdir -pv /var/lib/hwclock
Prepare Util-linux for compilation:
./configure --libdir=/usr/lib                         \
            --runstatedir=/run                        \
            --disable-chfn-chsh                       \
            --disable-login                           \
            --disable-nologin                         \
            --disable-su                              \
            --disable-setpriv                         \
            --disable-runuser                         \
            --disable-pylibmount                      \
            --disable-static                          \
            --disable-liblastlog2                     \
            --without-python                          \
            ADJTIME_PATH=/var/lib/hwclock/adjtime     \
            --docdir=/usr/share/doc/util-linux-2.40.2
The meaning of the configure options:
ADJTIME_PATH=/var/lib/hwclock/adjtime
This sets the location of the file recording information about the hardware clock in accordance to the FHS. This is not strictly needed for this temporary tool, but it prevents creating a file at another location, which would not be overwritten or removed when building the final util-linux package.
--libdir=/usr/lib
This switch ensures that the .so symlinks point directly to the shared library files in the same directory (/usr/lib).
--disable-*
These switches prevent warnings about building components that require packages not in LFS or not installed yet.
--without-python
This switch disables using Python. It avoids trying to build unneeded bindings.
--runstatedir=/run
This switch sets the location of the socket used by uuidd and libuuid correctly.
Compile the package:
make
Install the package:
make install
Details on this package are located in Section 8.79.2, “Contents of Util-linux.”
First, remove the currently installed documentation files to prevent them from ending up in the final system, and to save about 35 MB:
rm -rf /usr/share/{info,man,doc}/*
Second, on a modern Linux system, the libtool .la files are only useful for libltdl. No libraries in LFS are loaded by libltdl, and it's known that some .la files can cause BLFS package failures. Remove those files now:
find /usr/{lib,libexec} -name \*.la -delete
The current system size is now about 3 GB, however the /tools directory is no longer needed. It uses about 1 GB of disk space. Delete it now:
rm -rf /tools
At this point the essential programs and libraries have been created and your current LFS system is in a good state. Your system can now be backed up for later reuse. In case of fatal failures in the subsequent chapters, it often turns out that removing everything and starting over (more carefully) is the best way to recover. Unfortunately, all the temporary files will be removed, too. To avoid spending extra time to redo something which has been done successfully, creating a backup of the current LFS system may prove useful.
All the remaining steps in this section are optional. Nevertheless, as soon as you begin installing packages in Chapter 8, the temporary files will be overwritten. So it may be a good idea to do a backup of the current system as described below.
The following steps are performed from outside the chroot
environment. That means you have to leave the chroot
environment first before continuing. The reason for that is
to get access to file system locations outside of the chroot
environment to store/read the backup archive, which ought not
be placed within the $LFS
hierarchy.
If you have decided to make a backup, leave the chroot environment:
exit
All of the following instructions are executed by root on your host system. Take extra care about the commands you're going to run as mistakes made here can modify your host system. Be aware that the environment variable LFS is set for user lfs by default but may not be set for root. Whenever commands are to be executed by root, make sure you have set LFS.
This has been discussed in Section 2.6, “Setting The $LFS Variable.”
Before making a backup, unmount the virtual file systems:
mountpoint -q $LFS/dev/shm && umount $LFS/dev/shm
umount $LFS/dev/pts
umount $LFS/{sys,proc,run,dev}
Make sure you have at least 1 GB free disk space (the source tarballs will be included in the backup archive) on the file system containing the directory where you create the backup archive.
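One hedged way to check the available space on the file system holding root's home directory (the default location used below) is:
df -h $HOME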
Note that the instructions below specify the home directory of the host system's root user, which is typically found on the root file system. Replace $HOME by a directory of your choice if you do not want to have the backup stored in root's home directory.
Create the backup archive by running the following command:
Because the backup archive is compressed, it takes a relatively long time (over 10 minutes) even on a reasonably fast system.
cd $LFS
tar -cJpf $HOME/lfs-temp-tools-r12.2-59.tar.xz .
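If you want to check the archive before relying on it, listing a few of its entries is a quick, optional test (the archive name matches the command above):
tar -tJf $HOME/lfs-temp-tools-r12.2-59.tar.xz | head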
If continuing to chapter 8, don't forget to reenter the chroot environment as explained in the “Important” box below.
In case some mistakes have been made and you need to start over, you can use this backup to restore the system and save some recovery time. Since the sources are located under $LFS, they are included in the backup archive as well, so they do not need to be downloaded again. After checking that $LFS is set properly, you can restore the backup by executing the following commands:
The following commands are extremely dangerous. If you run rm -rf ./* as the root user and you do not change to the $LFS directory, or the LFS environment variable is not set for the root user, it will destroy your entire host system. YOU ARE WARNED.
cd $LFS
rm -rf ./*
tar -xpf $HOME/lfs-temp-tools-r12.2-59.tar.xz
Again, double check that the environment has been set up properly and continue building the rest of the system.
If you left the chroot environment to create a backup or restart building using a restore, remember to check that the virtual file systems are still mounted (findmnt | grep $LFS). If they are not mounted, remount them now as described in Section 7.3, “Preparing Virtual Kernel File Systems” and re-enter the chroot environment (see Section 7.4, “Entering the Chroot Environment”) before continuing.
In this chapter, we start constructing the LFS system in earnest.
The installation of this software is straightforward. Although in many cases the installation instructions could be made shorter and more generic, we have opted to provide the full instructions for every package to minimize the possibilities for mistakes. The key to learning what makes a Linux system work is to know what each package is used for and why you (or the system) may need it.
We do not recommend using customized optimizations. They can make a program run slightly faster, but they may also cause compilation difficulties, and problems when running the program. If a package refuses to compile with a customized optimization, try to compile it without optimization and see if that fixes the problem. Even if the package does compile when using a customized optimization, there is the risk it may have been compiled incorrectly because of the complex interactions between the code and the build tools. Also note that the -march and -mtune options using values not specified in the book have not been tested. This may cause problems with the toolchain packages (Binutils, GCC, and Glibc). The small potential gains achieved by customizing compiler optimizations are often outweighed by the risks. First-time builders of LFS are encouraged to build without custom optimizations.
On the other hand, we keep the optimizations enabled by the default configuration of the packages. In addition, we sometimes explicitly enable an optimized configuration provided by a package but not enabled by default. The package maintainers have already tested these configurations and consider them safe, so it's not likely they would break the build. Generally the default configuration already enables -O2 or -O3, so the resulting system will still run very fast without any customized optimization, and be stable at the same time.
Before the installation instructions, each installation page provides information about the package, including a concise description of what it contains, approximately how long it will take to build, and how much disk space is required during this building process. Following the installation instructions, there is a list of programs and libraries (along with brief descriptions) that the package installs.
The SBU values and required disk space include test suite data for all applicable packages in Chapter 8. SBU values have been calculated using four CPU cores (-j4) for all operations unless specified otherwise.
In general, the LFS editors discourage building and installing static libraries. Most static libraries have been made obsolete in a modern Linux system. In addition, linking a static library into a program can be detrimental. If an update to the library is needed to remove a security problem, every program that uses the static library will need to be relinked with the new library. Since the use of static libraries is not always obvious, the relevant programs (and the procedures needed to do the linking) may not even be known.
The procedures in this chapter remove or disable installation of most static libraries. Usually this is done by passing a --disable-static option to configure. In other cases, alternate means are needed. In a few cases, especially Glibc and GCC, the use of static libraries remains an essential feature of the package building process.
For a more complete discussion of libraries, see Libraries: Static or shared? in the BLFS book.
Package Management is an often requested addition to the LFS Book. A Package Manager tracks the installation of files, making it easier to remove and upgrade packages. A good package manager will also handle the configuration files specially to keep the user configuration when the package is reinstalled or upgraded. Before you begin to wonder, NO—this section will not talk about nor recommend any particular package manager. What it does provide is a roundup of the more popular techniques and how they work. The perfect package manager for you may be among these techniques, or it may be a combination of two or more of these techniques. This section briefly mentions issues that may arise when upgrading packages.
Some reasons why no package manager is mentioned in LFS or BLFS include:
Dealing with package management takes the focus away from the goals of these books—teaching how a Linux system is built.
There are multiple solutions for package management, each having its strengths and drawbacks. Finding one solution that satisfies all audiences is difficult.
There are some hints written on the topic of package management. Visit the Hints Project and see if one of them fits your needs.
A Package Manager makes it easy to upgrade to newer versions when they are released. Generally the instructions in the LFS and BLFS books can be used to upgrade to the newer versions. Here are some points that you should be aware of when upgrading packages, especially on a running system.
If the Linux kernel needs to be upgraded (for example, from 5.10.17 to 5.10.18 or 5.11.1), nothing else needs to be rebuilt. The system will keep working fine thanks to the well-defined interface between the kernel and userspace. Specifically, Linux API headers need not be upgraded along with the kernel. You will merely need to reboot your system to use the upgraded kernel.
If Glibc needs to be upgraded to a newer version, (e.g., from Glibc-2.36 to Glibc-2.40), some extra steps are needed to avoid breaking the system. Read Section 8.5, “Glibc-2.40” for details.
If a package containing a shared library is updated, and if the name of the library changes, then any packages dynamically linked to the library must be recompiled, to link against the newer library. (Note that there is no correlation between the package version and the name of the library.) For example, consider a package foo-1.2.3 that installs a shared library with the name libfoo.so.1. Suppose you upgrade the package to a newer version foo-1.2.4 that installs a shared library with the name libfoo.so.2. In this case, any packages that are dynamically linked to libfoo.so.1 need to be recompiled to link against libfoo.so.2 in order to use the new library version. You should not remove the old libraries until all the dependent packages have been recompiled.
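One rough way to find programs and libraries that still reference the old name is to scan the installed files with ldd. The sketch below uses the hypothetical libfoo.so.1 from the example above and only covers a few common directories; adjust it to your layout:
for f in /usr/bin/* /usr/sbin/* /usr/lib/*.so*; do
  ldd "$f" 2>/dev/null | grep -q 'libfoo\.so\.1' && echo "$f"
done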
If a package is (directly or indirectly) linked to both the old and new names of a shared library (for example, the package links to both libfoo.so.2 and libbar.so.1, while the latter links to libfoo.so.3), the package may malfunction because the different revisions of the shared library present incompatible definitions for some symbol names. This can be caused by recompiling some, but not all, of the packages linked to the old shared library after the package providing the shared library is upgraded. To avoid the issue, users will need to rebuild every package linked to a shared library with an updated revision (e.g. libfoo.so.2 to libfoo.so.3) as soon as possible.
If a package containing a shared library is updated, and the name of the library doesn't change, but the version number of the library file decreases (for example, the library is still named libfoo.so.1, but the name of the library file is changed from libfoo.so.1.25 to libfoo.so.1.24), you should remove the library file from the previously installed version (libfoo.so.1.25 in this case). Otherwise, a ldconfig command (invoked by yourself from the command line, or by the installation of some package) will reset the symlink libfoo.so.1 to point to the old library file because it seems to be a “newer” version; its version number is larger. This situation may arise if you have to downgrade a package, or if the authors change the versioning scheme for library files.
If a package containing a shared library is updated, and the name of the library doesn't change, but a severe issue (especially, a security vulnerability) is fixed, all running programs linked to the shared library should be restarted. The following command, run as root after the update is complete, will list which processes are using the old versions of those libraries (replace libfoo with the name of the library):
grep -l 'libfoo.*deleted' /proc/*/maps | tr -cd 0-9\\n | xargs -r ps u
If OpenSSH is being used to access the system and it is linked to the updated library, you must restart the sshd service, then logout, login again, and run the preceding command again to confirm that nothing is still using the deleted libraries.
If an executable program or a shared library is overwritten, the processes using the code or data in that program or library may crash. The correct way to update a program or a shared library without causing the process to crash is to remove it first, then install the new version. The install command provided by coreutils has already implemented this, and most packages use that command to install binary files and libraries. This means that you won't be troubled by this issue most of the time. However, the install process of some packages (notably SpiderMonkey in BLFS) just overwrites the file if it exists; this causes a crash. So it's safer to save your work and close unneeded running processes before updating a package.
The following are some common package management techniques. Before making a decision on a package manager, do some research on the various techniques, particularly the drawbacks of each particular scheme.
Yes, this is a package management technique. Some folks do not need a package manager because they know the packages intimately and know which files are installed by each package. Some users also do not need any package management because they plan on rebuilding the entire system whenever a package is changed.
This is a simplistic package management technique that does not need a special program to manage the packages. Each package is installed in a separate directory. For example, package foo-1.1 is installed in /opt/foo-1.1 and a symlink is made from /opt/foo to /opt/foo-1.1. When a new version foo-1.2 comes along, it is installed in /opt/foo-1.2 and the previous symlink is replaced by a symlink to the new version.
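A minimal sketch of the technique, using the hypothetical foo package from the paragraph above:
./configure --prefix=/opt/foo-1.2 && make && make install
ln -sfnv /opt/foo-1.2 /opt/foo   # point the generic name at the new version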
Environment variables such as PATH, MANPATH, INFOPATH, PKG_CONFIG_PATH, CPPFLAGS, LDFLAGS, and the configuration file /etc/ld.so.conf may need to be expanded to include the corresponding subdirectories in /opt/foo-x.y.
This scheme is used by the BLFS book to install some very large packages to make it easier to upgrade them. If you install more than a few packages, this scheme becomes unmanageable. And some packages (for example Linux API headers and Glibc) may not work well with this scheme. Never use this scheme system-wide.
This is a variation of the previous package management technique. Each package is installed as in the previous scheme. But instead of making the symlink via a generic package name, each file is symlinked into the /usr hierarchy. This removes the need to expand the environment variables. Though the symlinks can be created by the user, many package managers use this approach, and automate the creation of the symlinks. A few of the popular ones include Stow, Epkg, Graft, and Depot. The installation script needs to be fooled, so the package thinks it is installed in /usr though in reality it is installed in the /usr/pkg hierarchy.
Installing in this manner is not usually a trivial task.
For example, suppose you are installing a package
libfoo-1.1. The following instructions may not install the
package properly:
./configure --prefix=/usr/pkg/libfoo/1.1
make
make install
The installation will work, but the dependent packages may not link to libfoo as you would expect. If you compile a package that links against libfoo, you may notice that it is linked to /usr/pkg/libfoo/1.1/lib/libfoo.so.1 instead of /usr/lib/libfoo.so.1 as you would expect.
The correct approach is to use the DESTDIR variable to direct the installation. This approach works as follows:
./configure --prefix=/usr
make
make DESTDIR=/usr/pkg/libfoo/1.1 install
Most packages support this approach, but there are some which do not. For the non-compliant packages, you may either need to install the package manually, or you may find that it is easier to install some problematic packages into /opt.
In this technique, a file is timestamped before the installation of the package. After the installation, a simple use of the find command with the appropriate options can generate a log of all the files installed after the timestamp file was created. A package manager that uses this approach is install-log.
Though this scheme has the advantage of being simple, it has two drawbacks. If, during installation, the files are installed with any timestamp other than the current time, those files will not be tracked by the package manager. Also, this scheme can only be used when packages are installed one at a time. The logs are not reliable if two packages are installed simultaneously from two different consoles.
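A bare-bones illustration of the idea (install-log automates this; the file names below are only examples):
touch /tmp/pkg-timestamp
make install
find /usr -newer /tmp/pkg-timestamp -type f > /tmp/foo-1.1.list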
In this approach, the commands that the installation scripts perform are recorded. There are two techniques that one can use:
The LD_PRELOAD environment variable can be set to point to a library to be preloaded before installation. During installation, this library tracks the packages that are being installed by attaching itself to various executables such as cp, install, and mv, and tracking the system calls that modify the filesystem. For this approach to work, all the executables need to be dynamically linked without the suid or sgid bit. Preloading the library may cause some unwanted side-effects during installation. Therefore, it's a good idea to perform some tests to ensure that the package manager does not break anything, and that it logs all the appropriate files.
Another technique is to use strace, which logs all the system calls made during the execution of the installation scripts.
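For instance, a crude installation log could be captured along these lines (a sketch only; the resulting trace still needs filtering before it is useful as a file manifest):
strace -f -e trace=open,openat,creat,rename,link,symlink,unlink,mkdir \
       -o /tmp/install-trace.log make install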
In this scheme, the package installation is faked into a separate tree as previously described in the symlink style package management section. After the installation, a package archive is created using the installed files. This archive is then used to install the package on the local machine or even on other machines.
This approach is used by most of the package managers found in the commercial distributions. Examples of package managers that follow this approach are RPM (which, incidentally, is required by the Linux Standard Base Specification), pkg-utils, Debian's apt, and Gentoo's Portage system. A hint describing how to adopt this style of package management for LFS systems is located at https://www.linuxfromscratch.org/hints/downloads/files/fakeroot.txt.
The creation of package files that include dependency information is complex, and beyond the scope of LFS.
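As a bare-bones illustration of the idea, and not a substitute for a real package manager, a DESTDIR installation can be turned into a simple tar archive and unpacked later on this or another machine (the libfoo-1.1 names are only examples):
make DESTDIR=/tmp/libfoo-1.1-dest install
tar -C /tmp/libfoo-1.1-dest -cJf /tmp/libfoo-1.1.tar.xz .
tar -C / -xpf /tmp/libfoo-1.1.tar.xz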
Slackware uses a tar-based system for package archives. This system purposely does not handle package dependencies as more complex package managers do. For details of Slackware package management, see https://www.slackbook.org/html/package-management.html.
This scheme, unique to LFS, was devised by Matthias Benkmann, and is available from the Hints Project. In this scheme, each package is installed as a separate user into the standard locations. Files belonging to a package are easily identified by checking the user ID. The features and shortcomings of this approach are too complex to describe in this section. For the details please see the hint at https://www.linuxfromscratch.org/hints/downloads/files/more_control_and_pkg_man.txt.
One of the advantages of an LFS system is that there are no files that depend on the position of files on a disk system. Cloning an LFS build to another computer with the same architecture as the base system is as simple as using tar on the LFS partition that contains the root directory (about 900 MB uncompressed for a basic LFS build), copying that file via network transfer or CD-ROM / USB stick to the new system, and expanding it. After that, a few configuration files will have to be changed. Configuration files that may need to be updated include: /etc/hosts, /etc/fstab, /etc/passwd, /etc/group, /etc/shadow, /etc/ld.so.conf, /etc/sysconfig/rc.site, /etc/sysconfig/network, and /etc/sysconfig/ifconfig.eth0.
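For example, assuming the LFS root file system is mounted at /mnt/lfs on the source machine and the target partition at /mnt/newlfs, the clone could be made roughly like this (adapt the mount points and the transfer step to your situation):
cd /mnt/lfs
tar -cJpf /tmp/lfs-clone.tar.xz .
# transfer /tmp/lfs-clone.tar.xz to the new system, then:
cd /mnt/newlfs
tar -xpf /tmp/lfs-clone.tar.xz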
A custom kernel may be needed for the new system, depending on differences in system hardware and the original kernel configuration.
There have been some reports of issues when copying between similar but not identical architectures. For instance, the instruction set for an Intel system is not identical with the AMD processor's instructions, and later versions of some processors may provide instructions that are unavailable with earlier versions.
Finally, the new system has to be made bootable via Section 10.4, “Using GRUB to Set Up the Boot Process”.
The Man-pages package contains over 2,400 man pages.
Remove two man pages for password hashing functions. Libxcrypt will provide a better version of these man pages:
rm -v man3/crypt*
Install Man-pages by running:
make prefix=/usr install
The Iana-Etc package provides data for network services and protocols.
For this package, we only need to copy the files into place:
cp services protocols /etc
The Glibc package contains the main C library. This library provides the basic routines for allocating memory, searching directories, opening and closing files, reading and writing files, string handling, pattern matching, arithmetic, and so on.
Some of the Glibc programs use the non-FHS-compliant /var/db directory to store their runtime data. Apply the following patch to make such programs store their runtime data in the FHS-compliant locations:
patch -Np1 -i ../glibc-2.40-fhs-1.patch
The Glibc documentation recommends building Glibc in a dedicated build directory:
mkdir -v build
cd build
Ensure that the ldconfig and sln utilities will be installed into /usr/sbin:
echo "rootsbindir=/usr/sbin" > configparms
Prepare Glibc for compilation:
../configure --prefix=/usr                   \
             --disable-werror                \
             --enable-kernel=5.4             \
             --enable-stack-protector=strong \
             --disable-nscd                  \
             libc_cv_slibdir=/usr/lib
The meaning of the configure options:
--disable-werror
This option disables the -Werror option passed to GCC. This is necessary for running the test suite.
--enable-kernel=5.4
This option tells the build system that this Glibc may be used with kernels as old as 5.4. This means workarounds are generated in case a system call introduced in a later kernel version cannot be used.
--enable-stack-protector=strong
This option increases system security by adding extra code to check for buffer overflows, such as stack smashing attacks. Note that Glibc always explicitly overrides the default of GCC, so this option is still needed even though we've already specified --enable-default-ssp for GCC.
--disable-nscd
Do not build the name service cache daemon which is no longer used.
libc_cv_slibdir=/usr/lib
This variable sets the correct library for all systems. We do not want lib64 to be used.
Compile the package:
make
In this section, the test suite for Glibc is considered critical. Do not skip it under any circumstance.
Generally a few tests do not pass. The test failures listed below are usually safe to ignore.
make check
You may see some test failures. The Glibc test suite is somewhat dependent on the host system. A few failures out of over 5000 tests can generally be ignored. This is a list of the most common issues seen for recent versions of LFS:
io/tst-lchmod is known to fail in the LFS chroot environment.
Some tests, for example nss/tst-nss-files-hosts-multi and nptl/tst-thread-affinity* are known to fail due to a timeout (especially when the system is relatively slow and/or running the test suite with multiple parallel make jobs). These tests can be identified with:
grep "Timed out" $(find -name \*.out)
It's possible to re-run a single test with an enlarged timeout with TIMEOUTFACTOR=<factor> make test t=<test name>. For example, TIMEOUTFACTOR=10 make test t=nss/tst-nss-files-hosts-multi will re-run nss/tst-nss-files-hosts-multi with ten times the original timeout.
Additionally, some tests may fail with a relatively old CPU model (for example elf/tst-cpu-features-cpuinfo) or host kernel version (for example stdlib/tst-arc4random-thread).
Though it is a harmless message, the install stage of Glibc will complain about the absence of /etc/ld.so.conf. Prevent this warning with:
touch /etc/ld.so.conf
Fix the Makefile to skip an outdated sanity check that fails with a modern Glibc configuration:
sed '/test-installation/s@$(PERL)@echo not running@' -i ../Makefile
If upgrading Glibc to a new minor version (for example, from Glibc-2.36 to Glibc-2.40) on a running LFS system, you need to take some extra precautions to avoid breaking the system:
Upgrading Glibc on a LFS system prior to 11.0 (exclusive) is not supported. Rebuild LFS if you are running such an old LFS system but you need a newer Glibc.
If upgrading on a LFS system prior to 12.0 (exclusive), install Libxcrypt following Section 8.27, “Libxcrypt-4.4.36.” In addition to a normal Libxcrypt installation, you MUST follow the note in the Libxcrypt section to install libcrypt.so.1* (replacing libcrypt.so.1 from the prior Glibc installation).
If upgrading on a LFS system prior to 12.1 (exclusive), remove the nscd program:
rm -f /usr/sbin/nscd
Upgrade the kernel and reboot if it's older than 5.4 (check the current version with uname -r) or if you want to upgrade it anyway, following Section 10.3, “Linux-6.12.5.”
Upgrade the kernel API headers if they are older than 5.4 (check the current version with cat /usr/include/linux/version.h) or if you want to upgrade them anyway, following Section 5.4, “Linux-6.12.5 API Headers” (but removing $LFS from the cp command).
Perform a DESTDIR installation and upgrade the Glibc shared libraries on the system using one single install command:
make DESTDIR=$PWD/dest install
install -vm755 dest/usr/lib/*.so.* /usr/lib
It's imperative to strictly follow these steps above unless you completely understand what you are doing. Any unexpected deviation may render the system completely unusable. YOU ARE WARNED.
Then continue to run the make install command, the sed command against /usr/bin/ldd, and the commands to install the locales. Once they are finished, reboot the system immediately.
Install the package:
make install
Fix a hardcoded path to the executable loader in the ldd script:
sed '/RTLDLIST=/s@/usr@@g' -i /usr/bin/ldd
Next, install the locales that can make the system respond in a different language. None of these locales are required, but if some of them are missing, the test suites of some packages will skip important test cases.
Individual locales can be installed using the localedef program. E.g., the second localedef command below combines the /usr/share/i18n/locales/cs_CZ charset-independent locale definition with the /usr/share/i18n/charmaps/UTF-8.gz charmap definition and appends the result to the /usr/lib/locale/locale-archive file. The following instructions will install the minimum set of locales necessary for the optimal coverage of tests:
localedef -i C -f UTF-8 C.UTF-8
localedef -i cs_CZ -f UTF-8 cs_CZ.UTF-8
localedef -i de_DE -f ISO-8859-1 de_DE
localedef -i de_DE@euro -f ISO-8859-15 de_DE@euro
localedef -i de_DE -f UTF-8 de_DE.UTF-8
localedef -i el_GR -f ISO-8859-7 el_GR
localedef -i en_GB -f ISO-8859-1 en_GB
localedef -i en_GB -f UTF-8 en_GB.UTF-8
localedef -i en_HK -f ISO-8859-1 en_HK
localedef -i en_PH -f ISO-8859-1 en_PH
localedef -i en_US -f ISO-8859-1 en_US
localedef -i en_US -f UTF-8 en_US.UTF-8
localedef -i es_ES -f ISO-8859-15 es_ES@euro
localedef -i es_MX -f ISO-8859-1 es_MX
localedef -i fa_IR -f UTF-8 fa_IR
localedef -i fr_FR -f ISO-8859-1 fr_FR
localedef -i fr_FR@euro -f ISO-8859-15 fr_FR@euro
localedef -i fr_FR -f UTF-8 fr_FR.UTF-8
localedef -i is_IS -f ISO-8859-1 is_IS
localedef -i is_IS -f UTF-8 is_IS.UTF-8
localedef -i it_IT -f ISO-8859-1 it_IT
localedef -i it_IT -f ISO-8859-15 it_IT@euro
localedef -i it_IT -f UTF-8 it_IT.UTF-8
localedef -i ja_JP -f EUC-JP ja_JP
localedef -i ja_JP -f SHIFT_JIS ja_JP.SJIS 2> /dev/null || true
localedef -i ja_JP -f UTF-8 ja_JP.UTF-8
localedef -i nl_NL@euro -f ISO-8859-15 nl_NL@euro
localedef -i ru_RU -f KOI8-R ru_RU.KOI8-R
localedef -i ru_RU -f UTF-8 ru_RU.UTF-8
localedef -i se_NO -f UTF-8 se_NO.UTF-8
localedef -i ta_IN -f UTF-8 ta_IN.UTF-8
localedef -i tr_TR -f UTF-8 tr_TR.UTF-8
localedef -i zh_CN -f GB18030 zh_CN.GB18030
localedef -i zh_HK -f BIG5-HKSCS zh_HK.BIG5-HKSCS
localedef -i zh_TW -f UTF-8 zh_TW.UTF-8
In addition, install the locale for your own country, language and character set.
Alternatively, install all the locales listed in the glibc-2.40/localedata/SUPPORTED file (it includes every locale listed above and many more) at once with the following time-consuming command:
make localedata/install-locales
Then use the localedef command to create and install locales not listed in the glibc-2.40/localedata/SUPPORTED file when you need them. For instance, the following two locales are needed for some tests later in this chapter:
localedef -i C -f UTF-8 C.UTF-8
localedef -i ja_JP -f SHIFT_JIS ja_JP.SJIS 2> /dev/null || true
Glibc now uses libidn2 when resolving internationalized domain names. This is a run time dependency. If this capability is needed, the instructions for installing libidn2 are in the BLFS libidn2 page.
The /etc/nsswitch.conf file needs to be created because the Glibc defaults do not work well in a networked environment.
Create a new file /etc/nsswitch.conf by running the following:
cat > /etc/nsswitch.conf << "EOF"
# Begin /etc/nsswitch.conf
passwd: files
group: files
shadow: files
hosts: files dns
networks: files
protocols: files
services: files
ethers: files
rpc: files
# End /etc/nsswitch.conf
EOF
Install and set up the time zone data with the following:
tar -xf ../../tzdata2024b.tar.gz

ZONEINFO=/usr/share/zoneinfo
mkdir -pv $ZONEINFO/{posix,right}

for tz in etcetera southamerica northamerica europe africa antarctica \
          asia australasia backward; do
    zic -L /dev/null   -d $ZONEINFO       ${tz}
    zic -L /dev/null   -d $ZONEINFO/posix ${tz}
    zic -L leapseconds -d $ZONEINFO/right ${tz}
done

cp -v zone.tab zone1970.tab iso3166.tab $ZONEINFO
zic -d $ZONEINFO -p America/New_York
unset ZONEINFO
The meaning of the zic commands:
zic -L /dev/null ...
This creates posix time zones without any leap seconds. It is conventional to put these in both zoneinfo and zoneinfo/posix. It is necessary to put the POSIX time zones in zoneinfo, otherwise various test suites will report errors. On an embedded system, where space is tight and you do not intend to ever update the time zones, you could save 1.9 MB by not using the posix directory, but some applications or test suites might produce some failures.
zic -L leapseconds ...
This creates right time zones, including leap seconds. On an embedded system, where space is tight and you do not intend to ever update the time zones, or care about the correct time, you could save 1.9 MB by omitting the right directory.
zic ... -p ...
This creates the posixrules file. We use New York because POSIX requires the daylight saving time rules to be in accordance with US rules.
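If you want to spot-check the compiled time zone data, zdump can print the transitions recorded for a zone (any installed zone name will do; America/New_York is just an example):
zdump -v America/New_York | head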
One way to determine the local time zone is to run the following script:
tzselect
After answering a few questions about the location, the script will output the name of the time zone (e.g., America/Edmonton). There are also some other possible time zones listed in /usr/share/zoneinfo, such as Canada/Eastern or EST5EDT, that are not identified by the script but can be used.
Then create the /etc/localtime file by running:
ln -sfv /usr/share/zoneinfo/<xxx> /etc/localtime
Replace <xxx> with the name of the time zone selected (e.g., Canada/Eastern).
By default, the dynamic loader (/lib/ld-linux.so.2) searches through /usr/lib for dynamic libraries that are needed by programs as they are run. However, if there are libraries in directories other than /usr/lib, these need to be added to the /etc/ld.so.conf file in order for the dynamic loader to find them. Two directories that are commonly known to contain additional libraries are /usr/local/lib and /opt/lib, so add those directories to the dynamic loader's search path.
Create a new file /etc/ld.so.conf by running the following:
cat > /etc/ld.so.conf << "EOF"
# Begin /etc/ld.so.conf
/usr/local/lib
/opt/lib
EOF
If desired, the dynamic loader can also search a directory and include the contents of files found there. Generally the files in this include directory are one line specifying the desired library path. To add this capability run the following commands:
cat >> /etc/ld.so.conf << "EOF"
# Add an include directory
include /etc/ld.so.conf.d/*.conf
EOF
mkdir -pv /etc/ld.so.conf.d
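For example, a package installed under a private prefix could later drop a one-line file into this directory instead of editing /etc/ld.so.conf itself (the path below is purely illustrative):
echo /opt/foo/lib > /etc/ld.so.conf.d/foo.conf
ldconfig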
gencat: Generates message catalogues
getconf: Displays the system configuration values for file system specific variables
getent: Gets entries from an administrative database
iconv: Performs character set conversion
iconvconfig: Creates fastloading iconv module configuration files
ldconfig: Configures the dynamic linker runtime bindings
ldd: Reports which shared libraries are required by each given program or shared library
lddlibc4: Assists ldd with object files. It does not exist on newer architectures like x86_64
locale: Prints various information about the current locale
localedef: Compiles locale specifications
makedb: Creates a simple database from textual input
mtrace: Reads and interprets a memory trace file and displays a summary in human-readable format
pcprofiledump: Dumps information generated by PC profiling
pldd: Lists dynamic shared objects used by running processes
sln: A statically linked ln program
sotruss: Traces shared library procedure calls of a specified command
sprof: Reads and displays shared object profiling data
tzselect: Asks the user about the location of the system and reports the corresponding time zone description
xtrace: Traces the execution of a program by printing the currently executed function
zdump: The time zone dumper
zic: The time zone compiler
ld.so: The helper program for shared library executables
libBrokenLocale: Used internally by Glibc as a gross hack to get broken programs (e.g., some Motif applications) running. See the comments in the Glibc source for more information
libanl: Dummy library containing no functions. Previously was the asynchronous name lookup library, whose functions are now in libc
libc: The main C library
libc_malloc_debug: Turns on memory allocation checking when preloaded
libdl: Dummy library containing no functions. Previously was the dynamic linking interface library, whose functions are now in libc
libg: Dummy library containing no functions. Previously was a runtime library for g++
libm: The mathematical library
libmvec: The vector math library, linked in as needed when the compiler vectorizes calls to math functions
libmcheck: Turns on memory allocation checking when linked to
libmemusage: Used by memusage to help collect information about the memory usage of a program
libnsl: The network services library, now deprecated
libnss_*: The Name Service Switch modules, containing functions for resolving host names, user names, group names, aliases, services, protocols, etc. Loaded by libc according to the configuration in /etc/nsswitch.conf
libpcprofile: Can be preloaded to PC profile an executable
libpthread: Dummy library containing no functions. Previously contained functions providing most of the interfaces specified by the POSIX.1c Threads Extensions and the semaphore interfaces specified by the POSIX.1b Real-time Extensions; these functions are now in libc
libresolv: Contains functions for creating, sending, and interpreting packets to the Internet domain name servers
librt: Contains functions providing most of the interfaces specified by the POSIX.1b Real-time Extensions
libthread_db: Contains functions useful for building debuggers for multi-threaded programs
libutil: Dummy library containing no functions. Previously contained code for “standard” functions used in many different Unix utilities; these functions are now in libc
The Zlib package contains compression and decompression routines used by some programs.
Prepare Zlib for compilation:
./configure --prefix=/usr
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
Remove a useless static library:
rm -fv /usr/lib/libz.a
The Bzip2 package contains programs for compressing and decompressing files. Compressing text files with bzip2 yields a much better compression percentage than with the traditional gzip.
Apply a patch that will install the documentation for this package:
patch -Np1 -i ../bzip2-1.0.8-install_docs-1.patch
The following command ensures that the installed symbolic links are relative:
sed -i 's@\(ln -s -f \)$(PREFIX)/bin/@\1@' Makefile
Ensure the man pages are installed into the correct location:
sed -i "s@(PREFIX)/man@(PREFIX)/share/man@g" Makefile
Prepare Bzip2 for compilation with:
make -f Makefile-libbz2_so
make clean
The meaning of the make parameter:
-f Makefile-libbz2_so
This will cause Bzip2 to be built using a different Makefile file, in this case the Makefile-libbz2_so file, which creates a dynamic libbz2.so library and links the Bzip2 utilities against it.
Compile and test the package:
make
Install the programs:
make PREFIX=/usr install
Install the shared library:
cp -av libbz2.so.* /usr/lib
ln -sv libbz2.so.1.0.8 /usr/lib/libbz2.so
Install the shared bzip2 binary into the /usr/bin directory, and replace two copies of bzip2 with symlinks:
cp -v bzip2-shared /usr/bin/bzip2
for i in /usr/bin/{bzcat,bunzip2}; do
  ln -sfv bzip2 $i
done
Remove a useless static library:
rm -fv /usr/lib/libbz2.a
bunzip2: Decompresses bzipped files
bzcat: Decompresses to standard output
bzcmp: Runs cmp on bzipped files
bzdiff: Runs diff on bzipped files
bzegrep: Runs egrep on bzipped files
bzfgrep: Runs fgrep on bzipped files
bzgrep: Runs grep on bzipped files
bzip2: Compresses files using the Burrows-Wheeler block sorting text compression algorithm with Huffman coding; the compression rate is better than that achieved by more conventional compressors using “Lempel-Ziv” algorithms, like gzip
bzip2recover: Tries to recover data from damaged bzipped files
bzless: Runs less on bzipped files
bzmore: Runs more on bzipped files
libbz2: The library implementing lossless, block-sorting data compression, using the Burrows-Wheeler algorithm
The Xz package contains programs for compressing and decompressing files. It provides capabilities for the lzma and the newer xz compression formats. Compressing text files with xz yields a better compression percentage than with the traditional gzip or bzip2 commands.
Prepare Xz for compilation with:
./configure --prefix=/usr    \
            --disable-static \
            --docdir=/usr/share/doc/xz-5.6.3
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
lzcat: Decompresses to standard output
lzcmp: Runs cmp on LZMA compressed files
lzdiff: Runs diff on LZMA compressed files
lzegrep: Runs egrep on LZMA compressed files
lzfgrep: Runs fgrep on LZMA compressed files
lzgrep: Runs grep on LZMA compressed files
lzless: Runs less on LZMA compressed files
lzma: Compresses or decompresses files using the LZMA format
lzmadec: A small and fast decoder for LZMA compressed files
lzmainfo: Shows information stored in the LZMA compressed file header
lzmore: Runs more on LZMA compressed files
unlzma: Decompresses files using the LZMA format
unxz: Decompresses files using the XZ format
xz: Compresses or decompresses files using the XZ format
xzcat: Decompresses to standard output
xzcmp: Runs cmp on XZ compressed files
xzdec: A small and fast decoder for XZ compressed files
xzdiff: Runs diff on XZ compressed files
xzegrep: Runs egrep on XZ compressed files
xzfgrep: Runs fgrep on XZ compressed files
xzgrep: Runs grep on XZ compressed files
xzless: Runs less on XZ compressed files
xzmore: Runs more on XZ compressed files
liblzma: The library implementing lossless data compression using the Lempel-Ziv-Markov chain algorithm
Lz4 is a lossless compression algorithm, providing compression speed greater than 500 MB/s per core. It features an extremely fast decoder, with speed in multiple GB/s per core. Lz4 can work with Zstandard to allow both algorithms to compress data faster.
Compile the package:
make BUILD_STATIC=no PREFIX=/usr
To test the results, issue:
make -j1 check
Install the package:
make BUILD_STATIC=no PREFIX=/usr install
Zstandard is a real-time compression algorithm, providing high compression ratios. It offers a very wide range of compression / speed trade-offs, while being backed by a very fast decoder.
Compile the package:
make prefix=/usr
In the test output there are several places that indicate 'failed'. These are expected and only 'FAIL' is an actual test failure. There should be no test failures.
To test the results, issue:
make check
Install the package:
make prefix=/usr install
Remove the static library:
rm -v /usr/lib/libzstd.a
The File package contains a utility for determining the type of a given file or files.
Prepare File for compilation:
./configure --prefix=/usr
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
The Readline package is a set of libraries that offer command-line editing and history capabilities.
Reinstalling Readline will cause the old libraries to be moved to <libraryname>.old. While this is normally not a problem, in some cases it can trigger a linking bug in ldconfig. This can be avoided by issuing the following two seds:
sed -i '/MV.*old/d' Makefile.in
sed -i '/{OLDSUFF}/c:' support/shlib-install
Prevent hard coding library search paths (rpath) into the shared libraries. This package does not need rpath for an installation into the standard location, and rpath may sometimes cause unwanted effects or even security issues:
sed -i 's/-Wl,-rpath,[^ ]*//' support/shobj-conf
Prepare Readline for compilation:
./configure --prefix=/usr    \
            --disable-static \
            --with-curses    \
            --docdir=/usr/share/doc/readline-8.2.13
The meaning of the new configure option:
--with-curses
This option tells Readline that it can find the termcap library functions in the curses library, rather than in a separate termcap library. This will generate the correct readline.pc file.
Compile the package:
make SHLIB_LIBS="-lncursesw"
The meaning of the make option:
SHLIB_LIBS="-lncursesw"
This option forces Readline to link against the libncursesw library. For details see the “Shared Libraries” section in the package's README file.
This package does not come with a test suite.
Install the package:
make install
If desired, install the documentation:
install -v -m644 doc/*.{ps,pdf,html,dvi} /usr/share/doc/readline-8.2.13
The M4 package contains a macro processor.
Prepare M4 for compilation:
./configure --prefix=/usr
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
m4: Copies the given files while expanding the macros that they contain. These macros are either built-in or user-defined and can take any number of arguments. Besides performing macro expansion, m4 has built-in functions for including named files, running Unix commands, performing integer arithmetic, manipulating text, recursion, etc. The m4 program can be used either as a front end to a compiler or as a macro processor in its own right
The Bc package contains an arbitrary precision numeric processing language.
Prepare Bc for compilation:
CC=gcc ./configure --prefix=/usr -G -O3 -r
The meaning of the configure options:
CC=gcc
This parameter specifies the compiler to use.
-G
Omit parts of the test suite that won't work until the bc program has been installed.
-O3
Specify the optimization to use.
-r
Enable the use of Readline to improve the line editing feature of bc.
Compile the package:
make
To test bc, run:
make test
Install the package:
make install
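If you want a quick sanity check of the freshly installed bc, a one-line calculation is enough; it should print 3.14285:
echo "scale=5; 22/7" | bc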
The Flex package contains a utility for generating programs that recognize patterns in text.
Prepare Flex for compilation:
./configure --prefix=/usr                      \
            --docdir=/usr/share/doc/flex-2.6.4 \
            --disable-static
Compile the package:
make
To test the results (about 0.5 SBU), issue:
make check
Install the package:
make install
A few programs do not know about flex yet and try to run its predecessor, lex. To support those programs, create a symbolic link named lex that runs flex in lex emulation mode, and also create the man page of lex as a symlink:
ln -sv flex /usr/bin/lex
ln -sv flex.1 /usr/share/man/man1/lex.1
flex: A tool for generating programs that recognize patterns in text; it allows for the versatility to specify the rules for pattern-finding, eradicating the need to develop a specialized program
flex++: An extension of flex used for generating C++ code and classes. It is a symbolic link to flex
lex: A symbolic link that runs flex in lex emulation mode
libfl: The flex library
The Tcl package contains the Tool Command Language, a robust general-purpose scripting language. The Expect package is written in Tcl (pronounced "tickle").
This package and the next two (Expect and DejaGNU) are installed to support running the test suites for Binutils, GCC and other packages. Installing three packages for testing purposes may seem excessive, but it is very reassuring, if not essential, to know that the most important tools are working properly.
Prepare Tcl for compilation:
SRCDIR=$(pwd)
cd unix
./configure --prefix=/usr           \
            --mandir=/usr/share/man \
            --disable-rpath
The meaning of the new configure parameters:
--disable-rpath
This parameter prevents hard coding library search paths (rpath) into the binary executable files and shared libraries. This package does not need rpath for an installation into the standard location, and rpath may sometimes cause unwanted effects or even security issues.
Build the package:
make

sed -e "s|$SRCDIR/unix|/usr/lib|" \
    -e "s|$SRCDIR|/usr/include|"  \
    -i tclConfig.sh

sed -e "s|$SRCDIR/unix/pkgs/tdbc1.1.9|/usr/lib/tdbc1.1.9|" \
    -e "s|$SRCDIR/pkgs/tdbc1.1.9/generic|/usr/include|"    \
    -e "s|$SRCDIR/pkgs/tdbc1.1.9/library|/usr/lib/tcl8.6|" \
    -e "s|$SRCDIR/pkgs/tdbc1.1.9|/usr/include|"            \
    -i pkgs/tdbc1.1.9/tdbcConfig.sh

sed -e "s|$SRCDIR/unix/pkgs/itcl4.3.0|/usr/lib/itcl4.3.0|" \
    -e "s|$SRCDIR/pkgs/itcl4.3.0/generic|/usr/include|"    \
    -e "s|$SRCDIR/pkgs/itcl4.3.0|/usr/include|"            \
    -i pkgs/itcl4.3.0/itclConfig.sh

unset SRCDIR
The various “sed” instructions after the “make” command remove references to the build directory from the configuration files and replace them with the install directory. This is not mandatory for the remainder of LFS, but may be needed if a package built later uses Tcl.
To test the results, issue:
make test
Install the package:
make install
Make the installed library writable so debugging symbols can be removed later:
chmod -v u+w /usr/lib/libtcl8.6.so
Install Tcl's headers. The next package, Expect, requires them.
make install-private-headers
Now make a necessary symbolic link:
ln -sfv tclsh8.6 /usr/bin/tclsh
Rename a man page that conflicts with a Perl man page:
mv /usr/share/man/man3/{Thread,Tcl_Thread}.3
Optionally, install the documentation by issuing the following commands:
cd ..
tar -xf ../tcl8.6.15-html.tar.gz --strip-components=1
mkdir -v -p /usr/share/doc/tcl-8.6.15
cp -v -r ./html/* /usr/share/doc/tcl-8.6.15
The Expect package contains tools for automating, via scripted dialogues, interactive applications such as telnet, ftp, passwd, fsck, rlogin, and tip. Expect is also useful for testing these same applications as well as easing all sorts of tasks that are prohibitively difficult with anything else. The DejaGnu framework is written in Expect.
Expect needs PTYs to work. Verify that the PTYs are working properly inside the chroot environment by performing a simple test:
python3 -c 'from pty import spawn; spawn(["echo", "ok"])'
This command should output ok. If, instead, the output includes OSError: out of pty devices, then the environment is not set up for proper PTY operation. You need to exit from the chroot environment, read Section 7.3, “Preparing Virtual Kernel File Systems” again, and ensure the devpts file system (and the other virtual kernel file systems) are mounted correctly. Then reenter the chroot environment following Section 7.4, “Entering the Chroot Environment”. This issue needs to be resolved before continuing, or the test suites requiring Expect (for example the test suites of Bash, Binutils, GCC, GDBM, and of course Expect itself) will fail catastrophically, and other subtle breakages may also happen.
Now, make some changes to allow the package to build with gcc-14.1 or later:
patch -Np1 -i ../expect-5.45.4-gcc14-1.patch
Prepare Expect for compilation:
./configure --prefix=/usr           \
            --with-tcl=/usr/lib     \
            --enable-shared         \
            --disable-rpath         \
            --mandir=/usr/share/man \
            --with-tclinclude=/usr/include
The meaning of the configure options:
--with-tcl=/usr/lib
This parameter is needed to tell configure where the tclConfig.sh script is located.
--with-tclinclude=/usr/include
This explicitly tells Expect where to find Tcl's internal headers.
Build the package:
make
To test the results, issue:
make test
Install the package:
make install
ln -svf expect5.45.4/libexpect5.45.4.so /usr/lib
The DejaGnu package contains a framework for running test suites on GNU tools. It is written in expect, which itself uses Tcl (Tool Command Language).
The upstream recommends building DejaGNU in a dedicated build directory:
mkdir -v build
cd build
Prepare DejaGNU for compilation:
../configure --prefix=/usr
makeinfo --html --no-split -o doc/dejagnu.html ../doc/dejagnu.texi
makeinfo --plaintext       -o doc/dejagnu.txt  ../doc/dejagnu.texi
To test the results, issue:
make check
Install the package:
make install
install -v -dm755 /usr/share/doc/dejagnu-1.6.3
install -v -m644 doc/dejagnu.{html,txt} /usr/share/doc/dejagnu-1.6.3
The pkgconf package is a successor to pkg-config and contains a tool for passing the include path and/or library paths to build tools during the configure and make phases of package installations.
Prepare Pkgconf for compilation:
./configure --prefix=/usr    \
            --disable-static \
            --docdir=/usr/share/doc/pkgconf-2.3.0
Compile the package:
make
Install the package:
make install
To maintain compatibility with the original Pkg-config create two symlinks:
ln -sv pkgconf /usr/bin/pkg-config
ln -sv pkgconf.1 /usr/share/man/man1/pkg-config.1
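To confirm the compatibility symlink works, you can query a package that already installed a .pc file earlier in this chapter (Zlib provides zlib.pc); the command should print the Zlib version:
pkg-config --modversion zlib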
The Binutils package contains a linker, an assembler, and other tools for handling object files.
First, apply a patch to fix a bug that causes unnecessary relinking with packages that depend on cmake-3.31.0 or newer:
patch -Np1 -i ../binutils-2.43.1-upstream_fix-1.patch
The Binutils documentation recommends building Binutils in a dedicated build directory:
mkdir -v build
cd build
Prepare Binutils for compilation:
../configure --prefix=/usr       \
             --sysconfdir=/etc   \
             --enable-gold       \
             --enable-ld=default \
             --enable-plugins    \
             --enable-shared     \
             --disable-werror    \
             --enable-64-bit-bfd \
             --enable-new-dtags  \
             --with-system-zlib  \
             --enable-default-hash-style=gnu
The meaning of the new configure parameters:
--enable-gold
Build the gold linker and install it as ld.gold (alongside the default linker).
--enable-ld=default
Build the original bfd linker and install it as both ld (the default linker) and ld.bfd.
--enable-plugins
Enables plugin support for the linker.
--with-system-zlib
Use the installed zlib library instead of building the included version.
Compile the package:
make tooldir=/usr
The meaning of the make parameter:
tooldir=/usr
Normally, the tooldir (the directory where the executables will ultimately be located) is set to $(exec_prefix)/$(target_alias). For example, x86_64 machines would expand that to /usr/x86_64-pc-linux-gnu. Because this is a custom system, this target-specific directory in /usr is not required. $(exec_prefix)/$(target_alias) would be used if the system were used to cross-compile (for example, compiling a package on an Intel machine that generates code that can be executed on PowerPC machines).
The test suite for Binutils in this section is considered critical. Do not skip it under any circumstances.
Test the results:
make -k check
For a list of failed tests, run:
grep '^FAIL:' $(find -name '*.log')
Twelve tests fail in the gold test suite when the --enable-default-pie and --enable-default-ssp options are passed to GCC.
Install the package:
make tooldir=/usr install
Remove useless static libraries:
rm -fv /usr/lib/lib{bfd,ctf,ctf-nobfd,gprofng,opcodes,sframe}.a
addr2line: Translates program addresses to file names and line numbers; given an address and the name of an executable, it uses the debugging information in the executable to determine which source file and line number are associated with the address
ar: Creates, modifies, and extracts from archives
as: An assembler that assembles the output of gcc into object files
c++filt: Used by the linker to de-mangle C++ and Java symbols and to keep overloaded functions from clashing
dwp: The DWARF packaging utility
elfedit: Updates the ELF headers of ELF files
gprof: Displays call graph profile data
gprofng: Gathers and analyzes performance data
ld: A linker that combines a number of object and archive files into a single file, relocating their data and tying up symbol references
ld.gold: A cut down version of ld that only supports the elf object file format
ld.bfd: A hard link to ld
nm: Lists the symbols occurring in a given object file
objcopy: Translates one type of object file into another
objdump: Displays information about the given object file, with options controlling the particular information to display; the information shown is useful to programmers who are working on the compilation tools
ranlib: Generates an index of the contents of an archive and stores it in the archive; the index lists all of the symbols defined by archive members that are relocatable object files
readelf: Displays information about ELF type binaries
size: Lists the section sizes and the total size for the given object files
strings: Outputs, for each given file, the sequences of printable characters that are of at least the specified length (defaulting to four); for object files, it prints, by default, only the strings from the initializing and loading sections, while for other types of files, it scans the entire file
strip: Discards symbols from object files
libbfd: The Binary File Descriptor library
libctf: The Compat ANSI-C Type Format debugging support library
libctf-nobfd: A libctf variant which does not use libbfd functionality
libgprofng: A library containing most routines used by gprofng
libopcodes: A library for dealing with opcodes, the “readable text” versions of instructions for the processor; it is used for building utilities like objdump
libsframe: A library to support online backtracing using a simple unwinder
The GMP package contains math libraries. These have useful functions for arbitrary precision arithmetic.
If you are building for 32-bit x86, but you have a CPU which is capable of running 64-bit code and you have specified CFLAGS in the environment, the configure script will attempt to configure for 64-bits and fail. Avoid this by invoking the configure command below with:
ABI=32 ./configure ...
The default settings of GMP produce libraries optimized for the host processor. If libraries suitable for processors less capable than the host's CPU are desired, generic libraries can be created by appending the --host=none-linux-gnu option to the configure command.
Prepare GMP for compilation:
./configure --prefix=/usr    \
            --enable-cxx     \
            --disable-static \
            --docdir=/usr/share/doc/gmp-6.3.0
The meaning of the new configure options:
--enable-cxx
This parameter enables C++ support
--docdir=/usr/share/doc/gmp-6.3.0
This variable specifies the correct place for the documentation.
Compile the package and generate the HTML documentation:
make
make html
The test suite for GMP in this section is considered critical. Do not skip it under any circumstances.
Test the results:
make check 2>&1 | tee gmp-check-log
The code in gmp is highly optimized for the processor where it is built. Occasionally, the code that detects the processor misidentifies the system capabilities, and there will be errors in the tests or in other applications using the gmp libraries, with the message Illegal instruction. In this case, gmp should be reconfigured with the option --host=none-linux-gnu and rebuilt.
Ensure that at least 199 tests in the test suite passed. Check the results by issuing the following command:
awk '/# PASS:/{total+=$3} ; END{print total}' gmp-check-log
Install the package and its documentation:
make install
make install-html
The MPFR package contains functions for multiple precision math.
Prepare MPFR for compilation:
./configure --prefix=/usr        \
            --disable-static     \
            --enable-thread-safe \
            --docdir=/usr/share/doc/mpfr-4.2.1
Compile the package and generate the HTML documentation:
make
make html
The test suite for MPFR in this section is considered critical. Do not skip it under any circumstances.
Test the results and ensure that all 198 tests passed:
make check
Install the package and its documentation:
make install
make install-html
The MPC package contains a library for the arithmetic of complex numbers with arbitrarily high precision and correct rounding of the result.
Prepare MPC for compilation:
./configure --prefix=/usr    \
            --disable-static \
            --docdir=/usr/share/doc/mpc-1.3.1
Compile the package and generate the HTML documentation:
make
make html
To test the results, issue:
make check
Install the package and its documentation:
make install
make install-html
The Attr package contains utilities to administer the extended attributes of filesystem objects.
Prepare Attr for compilation:
./configure --prefix=/usr     \
            --disable-static  \
            --sysconfdir=/etc \
            --docdir=/usr/share/doc/attr-2.5.2
Compile the package:
make
The tests must be run on a filesystem that supports extended attributes such as the ext2, ext3, or ext4 filesystems. To test the results, issue:
make check
Install the package:
make install
The Acl package contains utilities to administer Access Control Lists, which are used to define fine-grained discretionary access rights for files and directories.
Prepare Acl for compilation:
./configure --prefix=/usr    \
            --disable-static \
            --docdir=/usr/share/doc/acl-2.3.2
Compile the package:
make
The Acl tests must be run on a filesystem that supports access controls, but not until the Coreutils package has been built, using the Acl libraries. If desired, return to this package and run make check after the Coreutils package has been built.
Install the package:
make install
The Libcap package implements the userspace interface to the POSIX 1003.1e capabilities available in Linux kernels. These capabilities partition the all-powerful root privilege into a set of distinct privileges.
Prevent static libraries from being installed:
sed -i '/install -m.*STA/d' libcap/Makefile
Compile the package:
make prefix=/usr lib=lib
The meaning of the make option:
lib=lib
This parameter sets the library directory to /usr/lib rather than /usr/lib64 on x86_64. It has no effect on x86.
To test the results, issue:
make test
Install the package:
make prefix=/usr lib=lib install
capsh: A shell wrapper to explore and constrain capability support
getcap: Examines file capabilities
getpcaps: Displays the capabilities of the queried process(es)
setcap: Sets file capabilities
libcap: Contains the library functions for manipulating POSIX 1003.1e capabilities
libpsx: Contains functions to support POSIX semantics for syscalls associated with the pthread library
The Libxcrypt package contains a modern library for one-way hashing of passwords.
Prepare Libxcrypt for compilation:
./configure --prefix=/usr                \
            --enable-hashes=strong,glibc \
            --enable-obsolete-api=no     \
            --disable-static             \
            --disable-failure-tokens
The meaning of the new configure options:
--enable-hashes=strong,glibc
Build strong hash algorithms recommended for security use cases, and the hash algorithms provided by traditional Glibc libcrypt for compatibility.
--enable-obsolete-api=no
Disable obsolete API functions. They are not needed for a modern Linux system built from source.
--disable-failure-tokens
Disable failure token feature. It's needed for compatibility with the traditional hash libraries of some platforms, but a Linux system based on Glibc does not need it.
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
The instructions above disabled obsolete API functions since no package installed by compiling from sources would link against them at runtime. However, the only known binary-only applications that link against these functions require ABI version 1. If you must have such functions because of some binary-only application or to be compliant with LSB, build the package again with the following commands:
make distclean
./configure --prefix=/usr                \
            --enable-hashes=strong,glibc \
            --enable-obsolete-api=glibc  \
            --disable-static             \
            --disable-failure-tokens
make
cp -av --remove-destination .libs/libcrypt.so.1* /usr/lib
The Shadow package contains programs for handling passwords in a secure way.
If you've installed Linux-PAM, you should follow the BLFS instructions instead of this page to build (or rebuild, or upgrade) shadow.
If you would like to enforce the use of strong passwords, install and configure Linux-PAM first. Then install and configure shadow with the PAM support. Finally install libpwquality and configure PAM to use it.
Disable the installation of the groups program and its man pages, as Coreutils provides a better version. Also, prevent the installation of manual pages that were already installed in Section 8.3, “Man-pages-6.9.1”:
sed -i 's/groups$(EXEEXT) //' src/Makefile.in
find man -name Makefile.in -exec sed -i 's/groups\.1 / /'   {} \;
find man -name Makefile.in -exec sed -i 's/getspnam\.3 / /' {} \;
find man -name Makefile.in -exec sed -i 's/passwd\.5 / /'   {} \;
Instead of using the default crypt method, use the much more secure YESCRYPT method of password encryption, which also allows passwords longer than 8 characters. It is also necessary to change the obsolete /var/spool/mail location for user mailboxes that Shadow uses by default to the /var/mail location used currently. And, remove /bin and /sbin from the PATH, since they are simply symlinks to their counterparts in /usr.
Including /bin and/or /sbin in the PATH variable may cause some BLFS packages to fail to build, so don't do that in the .bashrc file or anywhere else.
sed -e 's:#ENCRYPT_METHOD DES:ENCRYPT_METHOD YESCRYPT:' \
    -e 's:/var/spool/mail:/var/mail:'                   \
    -e '/PATH=/{s@/sbin:@@;s@/bin:@@}'                  \
    -i etc/login.defs
Prepare Shadow for compilation:
touch /usr/bin/passwd
./configure --sysconfdir=/etc   \
            --disable-static    \
            --with-{b,yes}crypt \
            --without-libbsd    \
            --with-group-name-max-length=32
The meaning of the new configuration options:
The file /usr/bin/passwd needs to exist because its location is hardcoded in some programs; if it does not already exist, the installation script will create it in the wrong place.
--with-{b,yes}crypt
The shell expands this to two switches, --with-bcrypt and --with-yescrypt. They allow shadow to use the Bcrypt and Yescrypt algorithms implemented by Libxcrypt for hashing passwords. These algorithms are more secure (in particular, much more resistant to GPU-based attacks) than the traditional SHA algorithms.
--with-group-name-max-length=32
The longest permissible user name is 32 characters. Make the maximum length of a group name the same.
--without-libbsd
Do not use the readpassphrase function from libbsd which is not in LFS. Use the internal copy instead.
Compile the package:
make
This package does not come with a test suite.
Install the package:
make exec_prefix=/usr install
make -C man install-man
This package contains utilities to add, modify, and delete users and groups; set and change their passwords; and perform other administrative tasks. For a full explanation of what password shadowing means, see the doc/HOWTO file within the unpacked source tree. If you use Shadow support, keep in mind that programs which need to verify passwords (display managers, FTP programs, pop3 daemons, etc.) must be Shadow-compliant. That is, they must be able to work with shadowed passwords.
To enable shadowed passwords, run the following command:
pwconv
To enable shadowed group passwords, run:
grpconv
Shadow's default configuration for the useradd utility needs some explanation. First, the default action for the useradd utility is to create the user and a group with the same name as the user. By default the user ID (UID) and group ID (GID) numbers will begin at 1000. This means if you don't pass extra parameters to useradd, each user will be a member of a unique group on the system. If this behavior is undesirable, you'll need to pass either the -g or -N parameter to useradd, or else change the setting of USERGROUPS_ENAB in /etc/login.defs. See useradd(8) for more information.
Second, to change the default parameters, the file /etc/default/useradd must be created and tailored to suit your particular needs. Create it with:
mkdir -p /etc/default
useradd -D --gid 999
/etc/default/useradd parameter explanations
GROUP=999
This parameter sets the beginning of the group numbers used in the /etc/group file. The particular value 999 comes from the --gid parameter above. You may set it to any desired value. Note that useradd will never reuse a UID or GID. If the number identified in this parameter is used, it will use the next available number. Note also that if you don't have a group with an ID equal to this number on your system, then the first time you use useradd without the -g parameter, an error message will be generated (useradd: unknown GID 999), even though the account has been created correctly. That is why we created the group users with this group ID in Section 7.6, “Creating Essential Files and Symlinks.”
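For illustration only, a new account can be given the existing users group as its primary group by passing the -g parameter, which avoids the warning entirely; the account name testuser below is just a placeholder:
useradd -g users -m testuser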
CREATE_MAIL_SPOOL=yes
This parameter causes useradd to create a mailbox file for each new user. useradd will assign the group ownership of this file to the mail group with 0660 permissions. If you would rather not create these files, issue the following command:
sed -i '/MAIL/s/yes/no/' /etc/default/useradd
Choose a password for user root and set it by running:
passwd root
Used to change the maximum number of days between obligatory password changes |
|
Used to change a user's full name and other information |
|
Used to update group passwords in batch mode |
|
Used to update user passwords in batch mode |
|
Used to change a user's default login shell |
|
Checks and enforces the current password expiration policy |
|
Is used to examine the log of login failures, to set a maximum number of failures before an account is blocked, and to reset the failure count |
|
Is used to list the subordinate id ranges for a user |
|
Is used to add and delete members and administrators to groups |
|
Creates a group with the given name |
|
Deletes the group with the given name |
|
Allows a user to administer his/her own group membership list without the requirement of super user privileges. |
|
Is used to modify the given group's name or GID |
|
Verifies the integrity of the group files
|
|
Creates or updates the shadow group file from the normal group file |
|
Updates |
|
Is used by the system to let users sign on |
|
Is a daemon used to enforce restrictions on log-on time and ports |
|
Is used to set the gid mapping of a user namespace |
|
Is used to change the current GID during a login session |
|
Is used to set the uid mapping of a user namespace |
|
Is used to create or update an entire series of user accounts |
|
Displays a message saying an account is not available; it is designed to be used as the default shell for disabled accounts |
|
Is used to change the password for a user or group account |
|
Verifies the integrity of the password files
|
|
Creates or updates the shadow password file from the normal password file |
|
Updates |
|
Executes a given command while the user's GID is set to that of the given group |
|
Runs a shell with substitute user and group IDs |
|
Creates a new user with the given name, or updates the default new-user information |
|
Deletes the specified user account |
|
Is used to modify the given user's login name, user identification (UID), shell, initial group, home directory, etc. |
|
Edits the |
|
Edits the |
|
library to handle subordinate id ranges for users and groups |
The GCC package contains the GNU compiler collection, which includes the C and C++ compilers.
If building on x86_64, change the default directory name for 64-bit libraries to “lib”:
case $(uname -m) in
  x86_64)
    sed -e '/m64=/s/lib64/lib/' \
        -i.orig gcc/config/i386/t-linux64
  ;;
esac
The GCC documentation recommends building GCC in a dedicated build directory:
mkdir -v build
cd       build
Prepare GCC for compilation:
../configure --prefix=/usr            \
             LD=ld                    \
             --enable-languages=c,c++ \
             --enable-default-pie     \
             --enable-default-ssp     \
             --enable-host-pie        \
             --disable-multilib       \
             --disable-bootstrap      \
             --disable-fixincludes    \
             --with-system-zlib
GCC supports seven different computer languages, but the prerequisites for most of them have not yet been installed. See the BLFS Book GCC page for instructions on how to build all of GCC's supported languages.
The meaning of the new configure parameters:
LD=ld
This parameter makes the configure script use the ld program installed by the Binutils package built earlier in this chapter, rather than the cross-built version which would otherwise be used.
--disable-fixincludes
By default, during the installation of GCC some system headers would be “fixed” to be used with GCC. This is not necessary for a modern Linux system, and potentially harmful if a package is reinstalled after installing GCC. This switch prevents GCC from “fixing” the headers.
--with-system-zlib
This switch tells GCC to link to the system installed copy of the Zlib library, rather than its own internal copy.
PIE (position-independent executables) are binary programs that can be loaded anywhere in memory. Without PIE, the security feature named ASLR (Address Space Layout Randomization) can be applied for the shared libraries, but not for the executables themselves. Enabling PIE allows ASLR for the executables in addition to the shared libraries, and mitigates some attacks based on fixed addresses of sensitive code or data in the executables.
SSP (Stack Smashing Protection) is a technique to ensure that the parameter stack is not corrupted. Stack corruption can, for example, alter the return address of a subroutine, thus transferring control to some dangerous code (existing in the program or shared libraries, or injected by the attacker somehow).
Compile the package:
make
In this section, the test suite for GCC is considered important, but it takes a long time. First-time builders are encouraged to run the test suite. The time to run the tests can be reduced significantly by adding -jx to the make -k check command below, where x is the number of CPU cores on your system.
GCC may need more stack space compiling some extremely complex code patterns. As a precaution for the host distros with a tight stack limit, explicitly set the stack size hard limit to infinite. On most host distros (and the final LFS system) the hard limit is infinite by default, but there is no harm done by setting it explicitly. It's not necessary to change the stack size soft limit because GCC will automatically set it to an appropriate value, as long as the value does not exceed the hard limit:
ulimit -s -H unlimited
Now remove/fix several known test failures:
sed -e '/cpython/d' -i ../gcc/testsuite/gcc.dg/plugin/plugin.exp
sed -e 's/no-pic /&-no-pie /' -i ../gcc/testsuite/gcc.target/i386/pr113689-1.c
sed -e 's/300000/(1|300000)/' -i ../libgomp/testsuite/libgomp.c-c++-common/pr109062.c
sed -e 's/{ target nonpic } //' \
    -e '/GOTPCREL/d' -i ../gcc/testsuite/gcc.target/i386/fentryname3.c
Test the results as a non-privileged user, but do not stop at errors:
chown -R tester .
su tester -c "PATH=$PATH make -k check"
To extract a summary of the test suite results, run:
../contrib/test_summary
To filter out only the summaries, pipe the output through grep -A7 Summ.
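For example, the two steps can be combined into a single pipeline:
../contrib/test_summary | grep -A7 Summ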
Results can be compared with those located at https://www.linuxfromscratch.org/lfs/build-logs/development/ and https://gcc.gnu.org/ml/gcc-testresults/.
A few unexpected failures cannot always be avoided. In some cases test failures depend on the specific hardware of the system. Unless the test results are vastly different from those at the above URL, it is safe to continue.
Install the package:
make install
The GCC build directory is owned by tester now, and the ownership of the installed header directory (and its content) is incorrect. Change the ownership to the root user and group:
chown -v -R root:root \
    /usr/lib/gcc/$(gcc -dumpmachine)/14.2.0/include{,-fixed}
Create a symlink required by the FHS for "historical" reasons.
ln -svr /usr/bin/cpp /usr/lib
Many packages use the name cc to call the C compiler. We've already created cc as a symlink in gcc-pass2; create its man page as a symlink as well:
ln -sv gcc.1 /usr/share/man/man1/cc.1
Add a compatibility symlink to enable building programs with Link Time Optimization (LTO):
ln -sfv ../../libexec/gcc/$(gcc -dumpmachine)/14.2.0/liblto_plugin.so \
        /usr/lib/bfd-plugins/
Now that our final toolchain is in place, it is important to again ensure that compiling and linking will work as expected. We do this by performing some sanity checks:
echo 'int main(){}' > dummy.c
cc dummy.c -v -Wl,--verbose &> dummy.log
readelf -l a.out | grep ': /lib'
There should be no errors, and the output of the last command will be (allowing for platform-specific differences in the dynamic linker name):
[Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
Now make sure that we're set up to use the correct start files:
grep -E -o '/usr/lib.*/S?crt[1in].*succeeded' dummy.log
The output of the last command should be:
/usr/lib/gcc/x86_64-pc-linux-gnu/14.2.0/../../../../lib/Scrt1.o succeeded
/usr/lib/gcc/x86_64-pc-linux-gnu/14.2.0/../../../../lib/crti.o succeeded
/usr/lib/gcc/x86_64-pc-linux-gnu/14.2.0/../../../../lib/crtn.o succeeded
Depending on your machine architecture, the above may differ slightly. The difference will be the name of the directory after /usr/lib/gcc. The important thing to look for here is that gcc has found all three crt*.o files under the /usr/lib directory.
Verify that the compiler is searching for the correct header files:
grep -B4 '^ /usr/include' dummy.log
This command should return the following output:
#include <...> search starts here:
/usr/lib/gcc/x86_64-pc-linux-gnu/14.2.0/include
/usr/local/include
/usr/lib/gcc/x86_64-pc-linux-gnu/14.2.0/include-fixed
/usr/include
Again, the directory named after your target triplet may be different than the above, depending on your system architecture.
Next, verify that the new linker is being used with the correct search paths:
grep 'SEARCH.*/usr/lib' dummy.log |sed 's|; |\n|g'
References to paths that have components with '-linux-gnu' should be ignored, but otherwise the output of the last command should be:
SEARCH_DIR("/usr/x86_64-pc-linux-gnu/lib64")
SEARCH_DIR("/usr/local/lib64")
SEARCH_DIR("/lib64")
SEARCH_DIR("/usr/lib64")
SEARCH_DIR("/usr/x86_64-pc-linux-gnu/lib")
SEARCH_DIR("/usr/local/lib")
SEARCH_DIR("/lib")
SEARCH_DIR("/usr/lib");
A 32-bit system may use a few other directories. For example, here is the output from an i686 machine:
SEARCH_DIR("/usr/i686-pc-linux-gnu/lib32")
SEARCH_DIR("/usr/local/lib32")
SEARCH_DIR("/lib32")
SEARCH_DIR("/usr/lib32")
SEARCH_DIR("/usr/i686-pc-linux-gnu/lib")
SEARCH_DIR("/usr/local/lib")
SEARCH_DIR("/lib")
SEARCH_DIR("/usr/lib");
Next make sure that we're using the correct libc:
grep "/lib.*/libc.so.6 " dummy.log
The output of the last command should be:
attempt to open /usr/lib/libc.so.6 succeeded
Make sure GCC is using the correct dynamic linker:
grep found dummy.log
The output of the last command should be (allowing for platform-specific differences in dynamic linker name):
found ld-linux-x86-64.so.2 at /usr/lib/ld-linux-x86-64.so.2
If the output does not appear as shown above or is not received at all, then something is seriously wrong. Investigate and retrace the steps to find out where the problem is and correct it. Any issues should be resolved before continuing with the process.
Once everything is working correctly, clean up the test files:
rm -v dummy.c a.out dummy.log
Finally, move a misplaced file:
mkdir -pv /usr/share/gdb/auto-load/usr/lib
mv -v /usr/lib/*gdb.py /usr/share/gdb/auto-load/usr/lib
c++: The C++ compiler
cc: The C compiler
cpp: The C preprocessor; it is used by the compiler to expand the #include, #define, and similar directives in the source files
g++: The C++ compiler
gcc: The C compiler
gcc-ar: A wrapper around ar that adds a plugin to the command line. This program is only used to add "link time optimization" and is not useful with the default build options.
gcc-nm: A wrapper around nm that adds a plugin to the command line. This program is only used to add "link time optimization" and is not useful with the default build options.
gcc-ranlib: A wrapper around ranlib that adds a plugin to the command line. This program is only used to add "link time optimization" and is not useful with the default build options.
gcov: A coverage testing tool; it is used to analyze programs to determine where optimizations will have the greatest effect
gcov-dump: Offline gcda and gcno profile dump tool
gcov-tool: Offline gcda profile processing tool
lto-dump: Tool for dumping object files produced by GCC with LTO enabled
libasan: The Address Sanitizer runtime library
libatomic: GCC atomic built-in runtime library
libcc1: A library that allows GDB to make use of GCC
libgcc: Contains run-time support for gcc
libgcov: This library is linked into a program when GCC is instructed to enable profiling
libgomp: GNU implementation of the OpenMP API for multi-platform shared-memory parallel programming in C/C++ and Fortran
libhwasan: The Hardware-assisted Address Sanitizer runtime library
libitm: The GNU transactional memory library
liblsan: The Leak Sanitizer runtime library
liblto_plugin: GCC's LTO plugin allows Binutils to process object files produced by GCC with LTO enabled
libquadmath: GCC Quad Precision Math Library API
libssp: Contains routines supporting GCC's stack-smashing protection functionality. Normally it is not used, because Glibc also provides those routines.
libstdc++: The standard C++ library
libstdc++exp: Experimental C++ Contracts library
libstdc++fs: ISO/IEC TS 18822:2015 Filesystem library
libsupc++: Provides supporting routines for the C++ programming language
libtsan: The Thread Sanitizer runtime library
libubsan: The Undefined Behavior Sanitizer runtime library
The Ncurses package contains libraries for terminal-independent handling of character screens.
Prepare Ncurses for compilation:
./configure --prefix=/usr           \
            --mandir=/usr/share/man \
            --with-shared           \
            --without-debug         \
            --without-normal        \
            --with-cxx-shared       \
            --enable-pc-files       \
            --with-pkg-config-libdir=/usr/lib/pkgconfig
The meaning of the new configure options:
--with-shared
This makes Ncurses build and install shared C libraries.
--without-normal
This prevents Ncurses building and installing static C libraries.
--without-debug
This prevents Ncurses building and installing debug libraries.
--with-cxx-shared
This makes Ncurses build and install shared C++ bindings. It also prevents it building and installing static C++ bindings.
--enable-pc-files
This switch generates and installs .pc files for pkg-config.
Compile the package:
make
This package has a test suite, but it can only be run after the package has been installed. The tests reside in the test/ directory. See the README file in that directory for further details.
The installation of this package will overwrite libncursesw.so.6.5 in-place. It may crash the shell process which is using code and data from the library file. Install the package with DESTDIR, and replace the library file correctly using the install command (the header curses.h is also edited to ensure that the wide-character ABI is used, as we did in Section 6.3, “Ncurses-6.5”):
make DESTDIR=$PWD/dest install
install -vm755 dest/usr/lib/libncursesw.so.6.5 /usr/lib
rm -v  dest/usr/lib/libncursesw.so.6.5
sed -e 's/^#if.*XOPEN.*$/#if 1/' \
    -i dest/usr/include/curses.h
cp -av dest/* /
Many applications still expect the linker to be able to find non-wide-character Ncurses libraries. Trick such applications into linking with wide-character libraries by means of symlinks (note that the .so links are only safe with curses.h edited to always use the wide-character ABI):
for lib in ncurses form panel menu ; do
    ln -sfv lib${lib}w.so /usr/lib/lib${lib}.so
    ln -sfv ${lib}w.pc    /usr/lib/pkgconfig/${lib}.pc
done
Finally, make sure that old applications that look for -lcurses at build time are still buildable:
ln -sfv libncursesw.so /usr/lib/libcurses.so
If desired, install the Ncurses documentation:
cp -v -R doc -T /usr/share/doc/ncurses-6.5
The instructions above don't create non-wide-character Ncurses libraries since no package installed by compiling from sources would link against them at runtime. However, the only known binary-only applications that link against non-wide-character Ncurses libraries require version 5. If you must have such libraries because of some binary-only application or to be compliant with LSB, build the package again with the following commands:
make distclean
./configure --prefix=/usr         \
            --with-shared         \
            --without-normal      \
            --without-debug       \
            --without-cxx-binding \
            --with-abi-version=5
make sources libs
cp -av lib/lib*.so.5* /usr/lib
captoinfo: Converts a termcap description into a terminfo description
clear: Clears the screen, if possible
infocmp: Compares or prints out terminfo descriptions
infotocap: Converts a terminfo description into a termcap description
ncursesw6-config: Provides configuration information for ncurses
reset: Reinitializes a terminal to its default values
tabs: Clears and sets tab stops on a terminal
tic: The terminfo entry-description compiler that translates a terminfo file from source format into the binary format needed for the ncurses library routines [A terminfo file contains information on the capabilities of a certain terminal.]
toe: Lists all available terminal types, giving the primary name and description for each
tput: Makes the values of terminal-dependent capabilities available to the shell; it can also be used to reset or initialize a terminal or report its long name
tset: Can be used to initialize terminals
libncursesw: Contains functions to display text in many complex ways on a terminal screen; a good example of the use of these functions is the menu displayed during the kernel's make menuconfig
libncurses++w: Contains C++ binding for other libraries in this package
libformw: Contains functions to implement forms
libmenuw: Contains functions to implement menus
libpanelw: Contains functions to implement panels
The Sed package contains a stream editor.
Prepare Sed for compilation:
./configure --prefix=/usr
Compile the package and generate the HTML documentation:
make
make html
To test the results, issue:
chown -R tester .
su tester -c "PATH=$PATH make check"
Install the package and its documentation:
make install
install -d -m755           /usr/share/doc/sed-4.9
install -m644 doc/sed.html /usr/share/doc/sed-4.9
The Psmisc package contains programs for displaying information about running processes.
Prepare Psmisc for compilation:
./configure --prefix=/usr
Compile the package:
make
To run the test suite, run:
make check
Install the package:
make install
fuser: Reports the Process IDs (PIDs) of processes that use the given files or file systems
killall: Kills processes by name; it sends a signal to all processes running any of the given commands
peekfd: Peek at file descriptors of a running process, given its PID
prtstat: Prints information about a process
pslog: Reports current logs path of a process
pstree: Displays running processes as a tree
pstree.x11: Same as pstree, except that it waits for confirmation before exiting
The Gettext package contains utilities for internationalization and localization. These allow programs to be compiled with NLS (Native Language Support), enabling them to output messages in the user's native language.
First, fix an issue that causes the package to fail to build with libxml-2.12 or later. The fix is optional for building LFS, but required if rebuilding this package in BLFS with libxml installed:
sed -e '/^structured/s/xmlError \*/typeof(xmlCtxtGetLastError(NULL)) /' \
    -i gettext-tools/src/its.c
Prepare Gettext for compilation:
./configure --prefix=/usr    \
            --disable-static \
            --docdir=/usr/share/doc/gettext-0.23
Compile the package:
make
To test the results (this takes a long time, around 3 SBUs), issue:
make check
Install the package:
make install
chmod -v 0755 /usr/lib/preloadable_libintl.so
Copies standard Gettext infrastructure files into a source package |
|
Substitutes environment variables in shell format strings |
|
Translates a natural language message into the user's language by looking up the translation in a message catalog |
|
Primarily serves as a shell function library for gettext |
|
Copies all standard Gettext files into the given top-level directory of a package to begin internationalizing it |
|
Filters the messages of a translation catalog according to their attributes and manipulates the attributes |
|
Concatenates and merges the given |
|
Compares two |
|
Finds the messages that are common to the given
|
|
Converts a translation catalog to a different character encoding |
|
Creates an English translation catalog |
|
Applies a command to all translations of a translation catalog |
|
Applies a filter to all translations of a translation catalog |
|
Generates a binary message catalog from a translation catalog |
|
Extracts all messages of a translation catalog that match a given pattern or belong to some given source files |
|
Creates a new |
|
Combines two raw translations into a single file |
|
Decompiles a binary message catalog into raw translation text |
|
Unifies duplicate translations in a translation catalog |
|
Displays native language translations of a textual message whose grammatical form depends on a number |
|
Recodes Serbian text from Cyrillic to Latin script |
|
Extracts the translatable message lines from the given source files to make the first translation template |
|
Defines the autosprintf class, which makes C formatted output routines usable in C++ programs, for use with the <string> strings and the <iostream> streams |
|
Contains common routines used by the various Gettext programs; these are not intended for general use |
|
Used to write specialized programs that process
|
|
Provides common routines used by the various Gettext programs; these are not intended for general use |
|
Text styling library |
|
A library, intended to be used by LD_PRELOAD, that
helps |
The Bison package contains a parser generator.
Prepare Bison for compilation:
./configure --prefix=/usr --docdir=/usr/share/doc/bison-3.8.2
Compile the package:
make
To test the results (about 5.5 SBU), issue:
make check
Install the package:
make install
Generates, from a series of rules, a program for analyzing the structure of text files; Bison is a replacement for Yacc (Yet Another Compiler Compiler) |
|
A wrapper for bison, meant for
programs that still call yacc instead of
bison; it calls
bison
with the |
|
The Yacc library containing implementations of
Yacc-compatible |
The Grep package contains programs for searching through the contents of files.
First, remove a warning about using egrep and fgrep that makes tests on some packages fail:
sed -i "s/echo/#echo/" src/egrep.sh
Prepare Grep for compilation:
./configure --prefix=/usr
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
The Bash package contains the Bourne-Again Shell.
Prepare Bash for compilation:
./configure --prefix=/usr             \
            --without-bash-malloc     \
            --with-installed-readline \
            --docdir=/usr/share/doc/bash-5.2.37
The meaning of the new configure option:
--with-installed-readline
This option tells Bash to use the readline library that is already installed on the system rather than using its own readline version.
Compile the package:
make
Skip down to “Install the package” if not running the test suite.
To prepare the tests, ensure that the tester user can write to the sources tree:
chown -R tester .
The test suite of this package is designed to be run as a non-root user who owns the terminal connected to standard input. To satisfy the requirement, spawn a new pseudo terminal using Expect and run the tests as the tester user:
su -s /usr/bin/expect tester << "EOF"
set timeout -1
spawn make tests
expect eof
lassign [wait] _ _ _ value
exit $value
EOF
The test suite uses diff to detect the difference between test script output and the expected output. Any output from diff (prefixed with < and >) indicates a test failure, unless there is a message saying the difference can be ignored. One test named run-builtins is known to fail on some host distros with a difference on the first line of the output.
Install the package:
make install
Run the newly compiled bash program (replacing the one that is currently being executed):
exec /usr/bin/bash --login
bash: A widely-used command interpreter; it performs many types of expansions and substitutions on a given command line before executing it, thus making this interpreter a powerful tool
bashbug: A shell script to help the user compose and mail standard formatted bug reports concerning bash
sh: A symlink to the bash program; when invoked as sh, bash tries to mimic the startup behavior of historical versions of sh as closely as possible, while conforming to the POSIX standard as well
The Libtool package contains the GNU generic library support script. It makes the use of shared libraries simpler with a consistent, portable interface.
Prepare Libtool for compilation:
./configure --prefix=/usr
Compile the package:
make
To test the results, issue:
make -k check
Five tests are known to fail in the LFS build environment due to a circular dependency, but these tests pass if rechecked after automake has been installed. Additionally, with grep-3.8 or newer, two tests will trigger a warning for non-POSIX regular expressions and fail.
Install the package:
make install
Remove a useless static library:
rm -fv /usr/lib/libltdl.a
The GDBM package contains the GNU Database Manager. It is a library of database functions that uses extensible hashing and works like the standard UNIX dbm. The library provides primitives for storing key/data pairs, searching and retrieving the data by its key and deleting a key along with its data.
Prepare GDBM for compilation:
./configure --prefix=/usr    \
            --disable-static \
            --enable-libgdbm-compat
The meaning of the configure option:
--enable-libgdbm-compat
This switch enables building the libgdbm compatibility library. Some packages outside of LFS may require the older DBM routines it provides.
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
Gperf generates a perfect hash function from a key set.
Prepare Gperf for compilation:
./configure --prefix=/usr --docdir=/usr/share/doc/gperf-3.1
Compile the package:
make
The tests are known to fail if running multiple simultaneous tests (-j option greater than 1). To test the results, issue:
make -j1 check
Install the package:
make install
The Expat package contains a stream oriented C library for parsing XML.
Prepare Expat for compilation:
./configure --prefix=/usr    \
            --disable-static \
            --docdir=/usr/share/doc/expat-2.6.4
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
If desired, install the documentation:
install -v -m644 doc/*.{html,css} /usr/share/doc/expat-2.6.4
The Inetutils package contains programs for basic networking.
First, make the package build with gcc-14.1 or later:
sed -i 's/def HAVE_TERMCAP_TGETENT/ 1/' telnet/telnet.c
Prepare Inetutils for compilation:
./configure --prefix=/usr        \
            --bindir=/usr/bin    \
            --localstatedir=/var \
            --disable-logger     \
            --disable-whois      \
            --disable-rcp        \
            --disable-rexec      \
            --disable-rlogin     \
            --disable-rsh        \
            --disable-servers
The meaning of the configure options:
--disable-logger
This option prevents Inetutils from installing the logger program, which is used by scripts to pass messages to the System Log Daemon. Do not install it because Util-linux installs a more recent version.
--disable-whois
This option disables the building of the Inetutils whois client, which is out of date. Instructions for a better whois client are in the BLFS book.
--disable-r*
These parameters disable building obsolete programs that should not be used due to security issues. The functions provided by these programs can be provided by the openssh package in the BLFS book.
--disable-servers
This disables the installation of the various network servers included as part of the Inetutils package. These servers are deemed not appropriate in a basic LFS system. Some are insecure by nature and are only considered safe on trusted networks. Note that better replacements are available for many of these servers.
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
Move a program to the proper location:
mv -v /usr/{,s}bin/ifconfig
dnsdomainname: Show the system's DNS domain name
ftp: Is the file transfer protocol program
hostname: Reports or sets the name of the host
ifconfig: Manages network interfaces
ping: Sends echo-request packets and reports how long the replies take
ping6: A version of ping for IPv6 networks
talk: Is used to chat with another user
telnet: An interface to the TELNET protocol
tftp: A trivial file transfer program
traceroute: Traces the route your packets take from the host you are working on to another host on a network, showing all the intermediate hops (gateways) along the way
The Less package contains a text file viewer.
Prepare Less for compilation:
./configure --prefix=/usr --sysconfdir=/etc
The meaning of the configure options:
--sysconfdir=/etc
This option tells the programs created by the package to look in /etc for the configuration files.
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
The Perl package contains the Practical Extraction and Report Language.
This version of Perl builds the Compress::Raw::Zlib and Compress::Raw::BZip2 modules. By default Perl will use an internal copy of the sources for the build. Issue the following command so that Perl will use the libraries installed on the system:
export BUILD_ZLIB=False
export BUILD_BZIP2=0
To have full control over the way Perl is set up, you can remove the “-des” options from the following command and hand-pick the way this package is built. Alternatively, use the command exactly as shown below to use the defaults that Perl auto-detects:
sh Configure -des                                          \
             -D prefix=/usr                                \
             -D vendorprefix=/usr                          \
             -D privlib=/usr/lib/perl5/5.40/core_perl      \
             -D archlib=/usr/lib/perl5/5.40/core_perl      \
             -D sitelib=/usr/lib/perl5/5.40/site_perl      \
             -D sitearch=/usr/lib/perl5/5.40/site_perl     \
             -D vendorlib=/usr/lib/perl5/5.40/vendor_perl  \
             -D vendorarch=/usr/lib/perl5/5.40/vendor_perl \
             -D man1dir=/usr/share/man/man1                \
             -D man3dir=/usr/share/man/man3                \
             -D pager="/usr/bin/less -isR"                 \
             -D useshrplib                                 \
             -D usethreads
The meaning of the new Configure options:
-D pager="/usr/bin/less -isR"
This ensures that less is used instead of more.
-D man1dir=/usr/share/man/man1 -D man3dir=/usr/share/man/man3
Since Groff is not installed yet, Configure will not create man pages for Perl. These parameters override this behavior.
-D usethreads
Build Perl with support for threads.
Compile the package:
make
To test the results (approximately 11 SBU), issue:
TEST_JOBS=$(nproc) make test_harness
Install the package and clean up:
make install
unset BUILD_ZLIB BUILD_BZIP2
A command line front end to Module::CoreList |
|
Interact with the Comprehensive Perl Archive Network (CPAN) from the command line |
|
Builds a Perl extension for the Encode module from either Unicode Character Mappings or Tcl Encoding Files |
|
Guess the encoding type of one or several files |
|
Converts |
|
Converts |
|
Shell script for examining installed Perl modules; it can create a tarball from an installed module |
|
Converts data between certain input and output formats |
|
Can be used to configure the |
|
Combines some of the best features of C, sed, awk and sh into a single Swiss Army language |
|
A hard link to perl |
|
Used to generate bug reports about Perl, or the modules that come with it, and mail them |
|
Displays a piece of documentation in pod format that is embedded in the Perl installation tree or in a Perl script |
|
The Perl Installation Verification Procedure; it can be used to verify that Perl and its libraries have been installed correctly |
|
Used to generate thank you messages to mail to the Perl developers |
|
A Perl version of the character encoding converter iconv |
|
A rough tool for converting Perl4 |
|
Converts files from pod format to HTML format |
|
Converts pod data to formatted *roff input |
|
Converts pod data to formatted ASCII text |
|
Prints usage messages from embedded pod docs in files |
|
Checks the syntax of pod format documentation files |
|
Displays selected sections of pod documentation |
|
Command line tool for running tests against the Test::Harness module |
|
A tar-like program written in Perl |
|
A Perl program that compares an extracted archive with an unextracted one |
|
A Perl program that applies pattern matching to the contents of files in a tar archive |
|
Prints or checks SHA checksums |
|
Is used to force verbose warning diagnostics in Perl |
|
Converts Perl XS code into C code |
|
Displays details about the internal structure of a Zip file |
The XML::Parser module is a Perl interface to James Clark's XML parser, Expat.
Prepare XML::Parser for compilation:
perl Makefile.PL
Compile the package:
make
To test the results, issue:
make test
Install the package:
make install
The Intltool is an internationalization tool used for extracting translatable strings from source files.
First fix a warning that is caused by perl-5.22 and later:
sed -i 's:\\\${:\\\$\\{:' intltool-update.in
The above regular expression looks unusual because of all the backslashes. What it does is add a backslash before the right brace character in the sequence '\${' resulting in '\$\{'.
Prepare Intltool for compilation:
./configure --prefix=/usr
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
install -v -Dm644 doc/I18N-HOWTO /usr/share/doc/intltool-0.51.0/I18N-HOWTO
The Autoconf package contains programs for producing shell scripts that can automatically configure source code.
Prepare Autoconf for compilation:
./configure --prefix=/usr
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
Produces shell scripts that automatically configure software source code packages to adapt to many kinds of Unix-like systems; the configuration scripts it produces are independent—running them does not require the autoconf program |
|
A tool for creating template files of C #define statements for configure to use |
|
A wrapper for the M4 macro processor |
|
Automatically runs autoconf, autoheader, aclocal, automake, gettextize, and libtoolize in the correct order to save time when changes are made to autoconf and automake template files |
|
Helps to create a |
|
Modifies a |
|
Helps when writing |
The Automake package contains programs for generating Makefiles for use with Autoconf.
Prepare Automake for compilation:
./configure --prefix=/usr --docdir=/usr/share/doc/automake-1.17
Compile the package:
make
Using four parallel jobs speeds up the tests, even on systems with fewer logical cores, due to internal delays in individual tests. To test the results, issue:
make -j$(($(nproc)>4?$(nproc):4)) check
Replace $((...)) with the number of logical cores you want to use if you don't want to use them all.
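For example, to run the tests with exactly four parallel jobs:
make -j4 check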
Install the package:
make install
Generates |
|
A hard link to aclocal |
|
A tool for automatically generating |
|
A hard link to automake |
The OpenSSL package contains management tools and libraries relating to cryptography. These are useful for providing cryptographic functions to other packages, such as OpenSSH, email applications, and web browsers (for accessing HTTPS sites).
Prepare OpenSSL for compilation:
./config --prefix=/usr         \
         --openssldir=/etc/ssl \
         --libdir=lib          \
         shared                \
         zlib-dynamic
Compile the package:
make
To test the results, issue:
HARNESS_JOBS=$(nproc) make test
One test, 30-test_afalg.t, is known to fail if the host kernel does not have CONFIG_CRYPTO_USER_API_SKCIPHER enabled, or does not have any options providing an AES with CBC implementation (for example, the combination of CONFIG_CRYPTO_AES and CONFIG_CRYPTO_CBC, or CONFIG_CRYPTO_AES_NI_INTEL if the CPU supports AES-NI) enabled. If it fails, it can safely be ignored.
Install the package:
sed -i '/INSTALL_LIBS/s/libcrypto.a libssl.a//' Makefile
make MANSUFFIX=ssl install
Add the version to the documentation directory name, to be consistent with other packages:
mv -v /usr/share/doc/openssl /usr/share/doc/openssl-3.4.0
If desired, install some additional documentation:
cp -vfr doc/* /usr/share/doc/openssl-3.4.0
You should update OpenSSL when a new version which fixes vulnerabilities is announced. Since OpenSSL 3.0.0, the OpenSSL versioning scheme follows the MAJOR.MINOR.PATCH format. API/ABI compatibility is guaranteed for the same MAJOR version number. Because LFS installs only the shared libraries, there is no need to recompile packages which link to libcrypto.so or libssl.so when upgrading to a version with the same MAJOR version number.
However, any running programs linked to those libraries need to be stopped and restarted. Read the related entries in Section 8.2.1, “Upgrade Issues” for details.
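As an illustration only (not part of the build instructions), one way to list the processes that still have the old libraries mapped is to search the memory maps under /proc; the exact library file names depend on the installed version:
grep -l 'libssl\|libcrypto' /proc/[0-9]*/maps 2>/dev/null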
c_rehash: is a Perl script that scans all files in a directory and adds symbolic links to their hash values. Use of c_rehash is considered obsolete and should be replaced by the openssl rehash command
openssl: is a command-line tool for using the various cryptography functions of OpenSSL's crypto library from the shell. It can be used for various functions which are documented in openssl(1)
libcrypto.so: implements a wide range of cryptographic algorithms used in various Internet standards. The services provided by this library are used by the OpenSSL implementations of SSL, TLS and S/MIME, and they have also been used to implement OpenSSH, OpenPGP, and other cryptographic standards
libssl.so: implements the Transport Layer Security (TLS v1) protocol. It provides a rich API, documentation on which can be found in ssl(7)
The Kmod package contains libraries and utilities for loading kernel modules.
Prepare Kmod for compilation:
./configure --prefix=/usr     \
            --sysconfdir=/etc \
            --with-openssl    \
            --with-xz         \
            --with-zstd       \
            --with-zlib       \
            --disable-manpages
The meaning of the configure options:
--with-openssl
This option enables Kmod to handle PKCS7 signatures for kernel modules.
--with-xz, --with-zlib, and --with-zstd
These options enable Kmod to handle compressed kernel modules.
--disable-manpages
This option disables generating the man pages which requires an external program.
Compile the package:
make
The test suite of this package requires raw kernel headers (not the “sanitized” kernel headers installed earlier), which are beyond the scope of LFS.
Install the package and recreate some symlinks for compatibility with Module-Init-Tools (the package that previously handled Linux kernel modules). The building system will create all these symlinks in /usr/bin, but we only want lsmod there and all other symlinks in /usr/sbin instead:
make install
for target in depmod insmod modinfo modprobe rmmod; do
  ln -sfv ../bin/kmod /usr/sbin/$target
  rm -fv /usr/bin/$target
done
depmod: Creates a dependency file based on the symbols it finds in the existing set of modules; this dependency file is used by modprobe to automatically load the required modules
insmod: Installs a loadable module in the running kernel
kmod: Loads and unloads kernel modules
lsmod: Lists currently loaded modules
modinfo: Examines an object file associated with a kernel module and displays any information that it can glean
modprobe: Uses a dependency file, created by depmod, to automatically load relevant modules
rmmod: Unloads modules from the running kernel
libkmod: This library is used by other programs to load and unload kernel modules
Libelf is a library for handling ELF (Executable and Linkable Format) files.
Libelf is part of the elfutils-0.192 package. Use the elfutils-0.192.tar.bz2 file as the source tarball.
Prepare Libelf for compilation:
./configure --prefix=/usr        \
            --disable-debuginfod \
            --enable-libdebuginfod=dummy
Compile the package:
make
To test the results, issue:
make check
Install only Libelf:
make -C libelf install
install -vm644 config/libelf.pc /usr/lib/pkgconfig
rm /usr/lib/libelf.a
The Libffi library provides a portable, high level programming interface to various calling conventions. This allows a programmer to call any function specified by a call interface description at run time.
FFI stands for Foreign Function Interface. An FFI allows a program written in one language to call a program written in another language. Specifically, Libffi can provide a bridge between an interpreter like Perl, or Python, and shared library subroutines written in C, or C++.
Like GMP, Libffi builds with optimizations specific to the processor in use. If building for another system, change the value of the --with-gcc-arch= parameter in the following command to an architecture name fully implemented by the CPU on that system. If this is not done, all applications that link to libffi will trigger Illegal Operation errors.
Prepare Libffi for compilation:
./configure --prefix=/usr    \
            --disable-static \
            --with-gcc-arch=native
The meaning of the configure option:
--with-gcc-arch=native
Ensure GCC optimizes for the current system. If this is not specified, the system is guessed and the code generated may not be correct. If the generated code will be copied from the native system to a less capable system, use the less capable system as a parameter. For details about alternative system types, see the x86 options in the GCC manual.
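If you are unsure what “native” resolves to on the build machine, GCC can report it; this command is only a diagnostic aid and is not part of the installation:
gcc -march=native -Q --help=target | grep -- '-march='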
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
The Python 3 package contains the Python development environment. It is useful for object-oriented programming, writing scripts, prototyping large programs, and developing entire applications. Python is an interpreted computer language.
Prepare Python for compilation:
./configure --prefix=/usr       \
            --enable-shared     \
            --with-system-expat \
            --enable-optimizations
The meaning of the configure options:
--with-system-expat
This switch enables linking against the system version of Expat.
--enable-optimizations
This switch enables extensive, but time-consuming, optimization steps. The interpreter is built twice; tests performed on the first build are used to improve the optimized final version.
Compile the package:
make
Some tests are known to occasionally hang indefinitely. So to test the results, run the test suite but set a 2-minute time limit for each test case:
make test TESTOPTS="--timeout 120"
For a relatively slow system you may need to increase the time limit and 1 SBU (measured when building Binutils pass 1 with one CPU core) should be enough. Some tests are flaky, so the test suite will automatically re-run failed tests. If a test failed but then passed when re-run, it should be considered as passed. One test, test_ssl, is known to fail in the chroot environment.
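For example, on a slow machine the limit can be raised to five minutes; the exact value is only a suggestion:
make test TESTOPTS="--timeout 300"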
Install the package:
make install
We use the pip3 command to install Python 3 programs and modules for all users as root in several places in this book. This conflicts with the Python developers' recommendation: to install packages into a virtual environment, or into the home directory of a regular user (by running pip3 as this user). A multi-line warning is triggered whenever pip3 is issued by the root user.
The main reason for the recommendation is to avoid conflicts with the system's package manager (dpkg, for example). LFS does not have a system-wide package manager, so this is not a problem. Also, pip3 will check for a new version of itself whenever it's run. Since domain name resolution is not yet configured in the LFS chroot environment, pip3 cannot check for a new version of itself, and will produce a warning.
After we boot the LFS system and set up a network connection, a different warning will be issued, telling the user to update pip3 from a pre-built wheel on PyPI (whenever a new version is available). But LFS considers pip3 to be a part of Python 3, so it should not be updated separately. Also, an update from a pre-built wheel would deviate from our objective: to build a Linux system from source code. So the warning about a new version of pip3 should be ignored as well. If you wish, you can suppress all these warnings by running the following command, which creates a configuration file:
cat > /etc/pip.conf << EOF
[global]
root-user-action = ignore
disable-pip-version-check = true
EOF
In LFS and BLFS we normally build and install Python modules with the pip3 command. Please be sure that the pip3 install commands in both books are run as the root user (unless it's for a Python virtual environment). Running pip3 install as a non-root user may seem to work, but it will cause the installed module to be inaccessible by other users.
pip3 install will not reinstall an already installed module automatically. When using the pip3 install command to upgrade a module (for example, from meson-0.61.3 to meson-0.62.0), insert the option --upgrade into the command line. If it's really necessary to downgrade a module, or reinstall the same version for some reason, insert --force-reinstall --no-deps into the command line.
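For example, a hypothetical upgrade of an already-installed module (assuming the newer version is available to pip3) would look like:
pip3 install --upgrade meson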
If desired, install the preformatted documentation:
install -v -dm755 /usr/share/doc/python-3.13.1/html
tar --no-same-owner \
    -xvf ../python-3.13.1-docs-html.tar.bz2
cp -R --no-preserve=mode python-3.13.1-docs-html/* \
    /usr/share/doc/python-3.13.1/html
The meaning of the documentation install commands:
--no-same-owner (tar) and --no-preserve=mode (cp)
Ensure the installed files have the correct ownership and permissions. Without these options, tar will install the package files with the upstream creator's values and files would have restrictive permissions.
2to3: is a Python program that reads Python 2.x source code and applies a series of fixes to transform it into valid Python 3.x code
idle3: is a wrapper script that opens a Python aware GUI editor. For this script to run, you must have installed Tk before Python, so that the Tkinter Python module is built.
pip3: The package installer for Python. You can use pip to install packages from Python Package Index and other indexes.
pydoc3: is the Python documentation tool
python3: is the interpreter for Python, an interpreted, interactive, object-oriented programming language
Flit-core is the distribution-building parts of Flit (a packaging tool for simple Python modules).
Build the package:
pip3 wheel -w dist --no-cache-dir --no-build-isolation --no-deps $PWD
Install the package:
pip3 install --no-index --no-user --find-links dist flit_core
The meaning of the pip3 configuration options and commands:
This command builds the wheel archive for this package.
-w dist
Instructs pip to put the created wheel into the dist directory.
--no-cache-dir
Prevents pip from copying the created wheel into the /root/.cache/pip directory.
This command installs the package.
--no-build-isolation, --no-deps, and --no-index
These options prevent fetching files from the online package repository (PyPI). If packages are installed in the correct order, pip won't need to fetch any files in the first place; these options add some safety in case of user error.
--find-links dist
Instructs pip to search for wheel archives in the dist directory.
Wheel is a Python library that is the reference implementation of the Python wheel packaging standard.
Compile Wheel with the following command:
pip3 wheel -w dist --no-cache-dir --no-build-isolation --no-deps $PWD
Install Wheel with the following command:
pip3 install --no-index --find-links=dist wheel
Setuptools is a tool used to download, build, install, upgrade, and uninstall Python packages.
Build the package:
pip3 wheel -w dist --no-cache-dir --no-build-isolation --no-deps $PWD
Install the package:
pip3 install --no-index --find-links dist setuptools
Ninja is a small build system with a focus on speed.
When run, ninja normally utilizes the greatest possible number of processes in parallel. By default this is the number of cores on the system, plus two. This may overheat the CPU, or make the system run out of memory. When ninja is invoked from the command line, passing the -jN parameter will limit the number of parallel processes. Some packages embed the execution of ninja, and do not pass the -j parameter on to it.
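For example, a direct invocation can be capped at four parallel jobs regardless of the core count:
ninja -j4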
Using the optional procedure below allows a user to limit the number of parallel processes via an environment variable, NINJAJOBS. For example, setting:
export NINJAJOBS=4
will limit ninja to four parallel processes.
If desired, make ninja recognize the environment variable NINJAJOBS by running the stream editor:
sed -i '/int Guess/a \
  int   j = 0;\
  char* jobs = getenv( "NINJAJOBS" );\
  if ( jobs != NULL ) j = atoi( jobs );\
  if ( j > 0 ) return j;\
' src/ninja.cc
Build Ninja with:
python3 configure.py --bootstrap
The meaning of the build option:
--bootstrap
This parameter forces Ninja to rebuild itself for the current system.
The package tests cannot run in the chroot environment. They require cmake.
Install the package:
install -vm755 ninja /usr/bin/
install -vDm644 misc/bash-completion /usr/share/bash-completion/completions/ninja
install -vDm644 misc/zsh-completion  /usr/share/zsh/site-functions/_ninja
Meson is an open source build system designed to be both extremely fast and as user friendly as possible.
Compile Meson with the following command:
pip3 wheel -w dist --no-cache-dir --no-build-isolation --no-deps $PWD
The test suite requires some packages outside the scope of LFS.
Install the package:
pip3 install --no-index --find-links dist meson
install -vDm644 data/shell-completions/bash/meson /usr/share/bash-completion/completions/meson
install -vDm644 data/shell-completions/zsh/_meson /usr/share/zsh/site-functions/_meson
The meaning of the install parameters:
-w dist
Puts the created wheels into the dist directory.
--find-links dist
Installs wheels from the dist
directory.
The Coreutils package contains the basic utility programs needed by every operating system.
POSIX requires that programs from Coreutils recognize character boundaries correctly even in multibyte locales. The following patch fixes this non-compliance and other internationalization-related bugs.
patch -Np1 -i ../coreutils-9.5-i18n-2.patch
Many bugs have been found in this patch. When reporting new bugs to the Coreutils maintainers, please check first to see if those bugs are reproducible without this patch.
Now prepare Coreutils for compilation:
autoreconf -fiv
FORCE_UNSAFE_CONFIGURE=1 ./configure \
            --prefix=/usr            \
            --enable-no-install-program=kill,uptime
The meaning of the configure options:
The patch for internationalization has modified the build system, so the configuration files must be regenerated.
FORCE_UNSAFE_CONFIGURE=1
This environment variable allows the package to be built by the root user.
--enable-no-install-program=kill,uptime
The purpose of this switch is to prevent Coreutils from installing programs that will be installed by other packages.
Compile the package:
make
Skip down to “Install the package” if not running the test suite.
Now the test suite is ready to be run. First, run the tests that are meant to be run as user root:
make NON_ROOT_USERNAME=tester check-root
We're going to run the remainder of the tests as the tester user. Certain tests require that the user be a member of more than one group. So that these tests are not skipped, add a temporary group and make the user tester a part of it:
groupadd -g 102 dummy -U tester
Fix some of the permissions so that the non-root user can compile and run the tests:
chown -R tester .
Now run the tests (using /dev/null for the standard input, or two tests may be broken if building LFS in a graphical terminal or a session in SSH or GNU Screen, because the standard input is connected to a PTY from the host distro and the device node for such a PTY cannot be accessed from the LFS chroot environment):
su tester -c "PATH=$PATH make -k RUN_EXPENSIVE_TESTS=yes check" \
   < /dev/null
Remove the temporary group:
groupdel dummy
Two tests, tests/cp/preserve-mode.sh and tests/mv/acl.sh, are known to fail in the chroot environment, but pass in a complete system.
Install the package:
make install
Move programs to the locations specified by the FHS:
mv -v /usr/bin/chroot /usr/sbin
mv -v /usr/share/man/man1/chroot.1 /usr/share/man/man8/chroot.8
sed -i 's/"1"/"8"/' /usr/share/man/man8/chroot.8
Is an actual command, /usr/bin/[; it is a synonym for the test command |
|
Encodes and decodes data according to the base32 specification (RFC 4648) |
|
Encodes and decodes data according to the base64 specification (RFC 4648) |
|
Prints or checks BLAKE2 (512-bit) checksums |
|
Strips any path and a given suffix from a file name |
|
Encodes or decodes data using various algorithms |
|
Concatenates files to standard output |
|
Changes security context for files and directories |
|
Changes the group ownership of files and directories |
|
Changes the permissions of each file to the given mode; the mode can be either a symbolic representation of the changes to be made, or an octal number representing the new permissions |
|
Changes the user and/or group ownership of files and directories |
|
Runs a command with the specified directory as the
|
|
Prints the Cyclic Redundancy Check (CRC) checksum and the byte counts of each specified file |
|
Compares two sorted files, outputting in three columns the lines that are unique and the lines that are common |
|
Copies files |
|
Splits a given file into several new files, separating them according to given patterns or line numbers, and outputting the byte count of each new file |
|
Prints sections of lines, selecting the parts according to given fields or positions |
|
Displays the current date and time in the given format, or sets the system date and time |
|
Copies a file using the given block size and count, while optionally performing conversions on it |
|
Reports the amount of disk space available (and used) on all mounted file systems, or only on the file systems holding the selected files |
|
Lists the contents of each given directory (the same as the ls command) |
|
Outputs commands to set the |
|
Extracts the directory portion(s) of the given name(s) |
|
Reports the amount of disk space used by the current directory, by each of the given directories (including all subdirectories) or by each of the given files |
|
Displays the given strings |
|
Runs a command in a modified environment |
|
Converts tabs to spaces |
|
Evaluates expressions |
|
Prints the prime factors of the specified integers |
|
Does nothing, unsuccessfully; it always exits with a status code indicating failure |
|
Reformats the paragraphs in the given files |
|
Wraps the lines in the given files |
|
Reports a user's group memberships |
|
Prints the first ten lines (or the given number of lines) of each given file |
|
Reports the numeric identifier (in hexadecimal) of the host |
|
Reports the effective user ID, group ID, and group memberships of the current user or specified user |
|
Copies files while setting their permission modes and, if possible, their owner and group |
|
Joins the lines that have identical join fields from two separate files |
|
Creates a hard link (with the given name) to a file |
|
Makes hard links or soft (symbolic) links between files |
|
Reports the current user's login name |
|
Lists the contents of each given directory |
|
Reports or checks Message Digest 5 (MD5) checksums |
|
Creates directories with the given names |
|
Creates First-In, First-Outs (FIFOs), "named pipes" in UNIX parlance, with the given names |
|
Creates device nodes with the given names; a device node is a character special file, a block special file, or a FIFO |
|
Creates temporary files in a secure manner; it is used in scripts |
|
Moves or renames files or directories |
|
Runs a program with modified scheduling priority |
|
Numbers the lines from the given files |
|
Runs a command immune to hangups, with its output redirected to a log file |
|
Prints the number of processing units available to a process |
|
Converts numbers to or from human-readable strings |
|
Dumps files in octal and other formats |
|
Merges the given files, joining sequentially corresponding lines side by side, separated by tab characters |
|
Checks if file names are valid or portable |
|
Is a lightweight finger client; it reports some information about the given users |
|
Paginates and columnates files for printing |
|
Prints the environment |
|
Prints the given arguments according to the given format, much like the C printf function |
|
Produces a permuted index from the contents of the given files, with each keyword in its context |
|
Reports the name of the current working directory |
|
Reports the value of the given symbolic link |
|
Prints the resolved path |
|
Removes files or directories |
|
Removes directories if they are empty |
|
Runs a command with specified security context |
|
Prints a sequence of numbers within a given range and with a given increment |
|
Prints or checks 160-bit Secure Hash Algorithm 1 (SHA1) checksums |
|
Prints or checks 224-bit Secure Hash Algorithm checksums |
|
Prints or checks 256-bit Secure Hash Algorithm checksums |
|
Prints or checks 384-bit Secure Hash Algorithm checksums |
|
Prints or checks 512-bit Secure Hash Algorithm checksums |
|
Overwrites the given files repeatedly with complex patterns, making it difficult to recover the data |
|
Shuffles lines of text |
|
Pauses for the given amount of time |
|
Sorts the lines from the given files |
|
Splits the given file into pieces, by size or by number of lines |
|
Displays file or filesystem status |
|
Runs commands with altered buffering operations for its standard streams |
|
Sets or reports terminal line settings |
|
Prints checksum and block counts for each given file |
|
Flushes file system buffers; it forces changed blocks to disk and updates the super block |
|
Concatenates the given files in reverse |
|
Prints the last ten lines (or the given number of lines) of each given file |
|
Reads from standard input while writing both to standard output and to the given files |
|
Compares values and checks file types |
|
Runs a command with a time limit |
|
Changes file timestamps, setting the access and modification times of the given files to the current time; files that do not exist are created with zero length |
|
Translates, squeezes, and deletes the given characters from standard input |
|
Does nothing, successfully; it always exits with a status code indicating success |
|
Shrinks or expands a file to the specified size |
|
Performs a topological sort; it writes a completely ordered list according to the partial ordering in a given file |
|
Reports the file name of the terminal connected to standard input |
|
Reports system information |
|
Converts spaces to tabs |
|
Discards all but one of successive identical lines |
|
Removes the given file |
|
Reports the names of the users currently logged on |
|
Is the same as ls -l |
|
Reports the number of lines, words, and bytes for each given file, as well as grand totals when more than one file is given |
|
Reports who is logged on |
|
Reports the user name associated with the current effective user ID |
|
Repeatedly outputs |
|
Library used by stdbuf |
Check is a unit testing framework for C.
Prepare Check for compilation:
./configure --prefix=/usr --disable-static
Build the package:
make
Compilation is now complete. To run the Check test suite, issue the following command:
make check
Install the package:
make docdir=/usr/share/doc/check-0.15.2 install
The Diffutils package contains programs that show the differences between files or directories.
Prepare Diffutils for compilation:
./configure --prefix=/usr
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
The Gawk package contains programs for manipulating text files.
First, ensure some unneeded files are not installed:
sed -i 's/extras//' Makefile.in
Prepare Gawk for compilation:
./configure --prefix=/usr
Compile the package:
make
To test the results, issue:
chown -R tester .
su tester -c "PATH=$PATH make check"
Install the package:
rm -f /usr/bin/gawk-5.3.1
make install
The meaning of the command:
rm -f /usr/bin/gawk-5.3.1
The building system will not recreate the hard link gawk-5.3.1 if it already exists. Remove it to ensure that the hard link previously installed in Section 6.9, “Gawk-5.3.1,” is updated here.
The installation process already created awk as a symlink to gawk. Create its man page as a symlink as well:
ln -sv gawk.1 /usr/share/man/man1/awk.1
If desired, install the documentation:
install -vDm644 doc/{awkforai.txt,*.{eps,pdf,jpg}} -t /usr/share/doc/gawk-5.3.1
The Findutils package contains programs to find files. Programs are provided to search through all the files in a directory tree and to create, maintain, and search a database (often faster than the recursive find, but unreliable unless the database has been updated recently). Findutils also supplies the xargs program, which can be used to run a specified command on each file selected by a search.
Prepare Findutils for compilation:
./configure --prefix=/usr --localstatedir=/var/lib/locate
The meaning of the configure options:
--localstatedir
This option moves the locate database to /var/lib/locate, which is the FHS-compliant location.
Compile the package:
make
To test the results, issue:
chown -R tester .
su tester -c "PATH=$PATH make check"
Install the package:
make install
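As an optional sanity check of the newly installed tools, find and xargs can be combined on any convenient directory; the path used here is only an example:
find /usr/bin -name 'find*' | xargs ls -l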
Searches given directory trees for files matching the specified criteria |
|
Searches through a database of file names and reports the names that contain a given string or match a given pattern |
|
Updates the locate database; it scans the entire file system (including other file systems that are currently mounted, unless told not to) and puts every file name it finds into the database |
|
Can be used to apply a given command to a list of files |
The Groff package contains programs for processing and formatting text and images.
Groff expects the environment variable PAGE to contain the default paper size. For users in the United States, PAGE=letter is appropriate. Elsewhere, PAGE=A4 may be more suitable. While the default paper size is configured during compilation, it can be overridden later by echoing either “A4” or “letter” to the /etc/papersize file.
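For example, a user who needs A4 paper could later override the compiled-in default with:
echo "A4" > /etc/papersize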
Prepare Groff for compilation:
PAGE=<paper_size> ./configure --prefix=/usr
Build the package:
make
To test the results, issue:
make check
Install the package:
make install
Reads a troff font file and adds some additional font-metric information that is used by the groff system |
|
Creates a font file for use with groff and grops |
|
Groff preprocessor for producing chemical structure diagrams |
|
Compiles descriptions of equations embedded within troff input files into commands that are understood by troff |
|
Converts a troff EQN (equation) into a cropped image |
|
Marks differences between groff/nroff/troff files |
|
Transforms sheet music written in the lilypond language into the groff language |
|
Preprocessor for groff, allowing the insertion of perl code into groff files |
|
Preprocessor for groff, allowing the insertion of Pinyin (Mandarin Chinese spelled with the Roman alphabet) into groff files. |
|
Converts a grap program file into a cropped bitmap image (grap is an old Unix programming language for creating diagrams) |
|
A groff preprocessor for gremlin files |
|
A driver for groff that produces TeX dvi format output files |
|
A front end to the groff document formatting system; normally, it runs the troff program and a post-processor appropriate for the selected device |
|
Displays groff files and man pages on X and tty terminals |
|
Reads files and guesses which of the groff options
|
|
Is a groff driver for Canon CAPSL printers (LBP-4 and LBP-8 series laser printers) |
|
Is a driver for groff that produces output in PCL5 format suitable for an HP LaserJet 4 printer |
|
Translates the output of GNU troff to PDF |
|
Translates the output of GNU troff to PostScript |
|
Translates the output of GNU troff into a form suitable for typewriter-like devices |
|
Creates a font file for use with groff -Tlj4 from an HP-tagged font metric file |
|
Creates an inverted index for the bibliographic databases with a specified file for use with refer, lookbib, and lkbib |
|
Searches bibliographic databases for references that contain specified keys and reports any references found |
|
Prints a prompt on the standard error (unless the standard input is not a terminal), reads a line containing a set of keywords from the standard input, searches the bibliographic databases in a specified file for references containing those keywords, prints any references found on the standard output, and repeats this process until the end of input |
|
A simple preprocessor for groff |
|
Formats equations for American Standard Code for Information Interchange (ASCII) output |
|
A script that emulates the nroff command using groff |
|
Is a wrapper around groff that facilitates the production of PDF documents from files formatted with the mom macros. |
|
Creates pdf documents using groff |
|
Translates a PostScript font in |
|
Compiles descriptions of pictures embedded within troff or TeX input files into commands understood by TeX or troff |
|
Converts a PIC diagram into a cropped image |
|
Translates the output of GNU troff to HTML |
|
Converts encoding of input files to something GNU troff understands |
|
Translates the output of GNU troff to HTML |
|
Copies the contents of a file to the standard output, except that lines between .[ and .] are interpreted as citations, and lines between .R1 and .R2 are interpreted as commands for how citations are to be processed |
|
Transforms roff files into DVI format |
|
Transforms roff files into HTML format |
|
Transforms roff files into PDFs |
|
Transforms roff files into ps files |
|
Transforms roff files into text files |
|
Transforms roff files into other formats |
|
Reads files and replaces lines of the form .so file by the contents of the mentioned file |
|
Compiles descriptions of tables embedded within troff input files into commands that are understood by troff |
|
Creates a font file for use with groff -Tdvi |
|
Is highly compatible with Unix troff; it should usually be invoked using the groff command, which will also run preprocessors and post-processors in the appropriate order and with the appropriate options |
The GRUB package contains the GRand Unified Bootloader.
If your system has UEFI support and you wish to boot LFS with UEFI, you need to install GRUB with UEFI support (and its dependencies) by following the instructions on the BLFS page. You may skip this package, or install this package and the BLFS GRUB for UEFI package without conflict (the BLFS page provides instructions for both cases).
Unset any environment variables which may affect the build:
unset {C,CPP,CXX,LD}FLAGS
Don't try “tuning” this package with custom compilation flags. This package is a bootloader. The low-level operations in the source code may be broken by aggressive optimization.
Add a file missing from the release tarball:
echo depends bli part_gpt > grub-core/extra_deps.lst
Prepare GRUB for compilation:
./configure --prefix=/usr     \
            --sysconfdir=/etc \
            --disable-efiemu  \
            --disable-werror
The meaning of the new configure options:
--disable-werror
This allows the build to complete with warnings introduced by more recent versions of Flex.
--disable-efiemu
This option minimizes what is built by disabling a feature and eliminating some test programs not needed for LFS.
Compile the package:
make
The test suite for this package is not recommended. Most of the tests depend on packages that are not available in the limited LFS environment. To run the tests anyway, run make check.
Install the package, and move the Bash completion support file to the location recommended by the Bash completion maintainers:
make install
mv -v /etc/bash_completion.d/grub /usr/share/bash-completion/completions
Making your LFS system bootable with GRUB will be discussed in Section 10.4, “Using GRUB to Set Up the Boot Process.”
Is a helper program for grub-install |
|
Is a tool to edit the environment block |
|
Checks to see if the given file is of the specified type |
|
Is a tool to debug the file system driver |
|
Glues 32-bit and 64-bit binaries into a single file (for Apple machines) |
|
Installs GRUB on your drive |
|
Is a script that converts an xkb layout into one recognized by GRUB |
|
Is the Mac-style bless for HFS or HFS+ file systems (bless is peculiar to Apple machines; it makes a device bootable) |
|
Converts a GRUB Legacy |
|
Generates a |
|
Makes a bootable image of GRUB |
|
Generates a GRUB keyboard layout file |
|
Prepares a GRUB netboot directory |
|
Generates an encrypted PBKDF2 password for use in the boot menu |
|
Makes a system pathname relative to its root |
|
Makes a bootable image of GRUB suitable for a floppy disk, CDROM/DVD, or a USB drive |
|
Generates a standalone image |
|
Is a helper program that prints the path to a GRUB device |
|
Probes device information for a given path or device |
|
Sets the default boot entry for GRUB for the next boot only |
|
Renders Apple .disk_label for Apple Macs |
|
Checks the GRUB configuration script for syntax errors |
|
Sets the default boot entry for GRUB |
|
Is a helper program for grub-setup |
|
Transforms a syslinux config file into grub.cfg format |
The Gzip package contains programs for compressing and decompressing files.
Prepare Gzip for compilation:
./configure --prefix=/usr
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
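If desired, the wrapper scripts that operate directly on compressed files can be tried with a throw-away file; this example is purely illustrative:
echo "hello world" > /tmp/demo.txt
gzip /tmp/demo.txt            # replaces demo.txt with demo.txt.gz
zcat /tmp/demo.txt.gz         # prints the original contents
zgrep hello /tmp/demo.txt.gz  # greps inside the compressed file
rm /tmp/demo.txt.gz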
Decompresses gzipped files |
|
Creates self-decompressing executable files |
|
Compresses the given files using Lempel-Ziv (LZ77) coding |
|
Decompresses compressed files |
|
Decompresses the given gzipped files to standard output |
|
Runs cmp on gzipped files |
|
Runs diff on gzipped files |
|
Runs egrep on gzipped files |
|
Runs fgrep on gzipped files |
|
Forces a |
|
Runs grep on gzipped files |
|
Runs less on gzipped files |
|
Runs more on gzipped files |
|
Re-compresses files from compress format
to gzip
format— |
The IPRoute2 package contains programs for basic and advanced IPV4-based networking.
The arpd program included in this package will not be built since it depends on Berkeley DB, which is not installed in LFS. However, a directory and a man page for arpd will still be installed. Prevent this by running the commands shown below.
sed -i /ARPD/d Makefile
rm -fv man/man8/arpd.8
Compile the package:
make NETNS_RUN_DIR=/run/netns
This package does not have a working test suite.
Install the package:
make SBINDIR=/usr/sbin install
If desired, install the documentation:
install -vDm644 COPYING README* -t /usr/share/doc/iproute2-6.12.0
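Once installed, the central ip program can be exercised with a few read-only queries; inside the chroot the output reflects the host's running kernel and nothing is changed:
ip link show
ip addr show
ip route show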
Configures network bridges |
|
Connection status utility |
|
Generic netlink utility front end |
|
Shows interface statistics, including the number of packets transmitted and received, by interface |
|
The main executable. It has several different functions, including these:
ip link ip addr allows users to look at addresses and their properties, add new addresses, and delete old ones ip neighbor allows users to look at neighbor bindings and their properties, add new neighbor entries, and delete old ones ip rule allows users to look at the routing policies and change them ip route allows users to look at the routing table and change routing table rules ip tunnel allows users to look at the IP tunnels and their properties, and change them ip maddr allows users to look at the multicast addresses and their properties, and change them ip mroute allows users to set, change, or delete the multicast routing ip monitor allows users to continuously monitor the state of devices, addresses and routes |
|
Provides Linux network statistics; it is a generalized and more feature-complete replacement for the old rtstat program |
|
Displays network statistics |
|
A component of ip route, for listing the routing tables |
|
Displays the contents of |
|
Route monitoring utility |
|
Converts the output of ip -o into a readable form |
|
Route status utility |
|
Similar to the netstat command; shows active connections |
|
Traffic control for Quality of Service (QoS) and Class of Service (CoS) implementations tc qdisc allows users to set up the queueing discipline tc class allows users to set up classes based on the queueing discipline scheduling tc filter allows users to set up the QoS/CoS packet filtering tc monitor can be used to view changes made to Traffic Control in the kernel. |
The Kbd package contains key-table files, console fonts, and keyboard utilities.
The behavior of the backspace and delete keys is not consistent across the keymaps in the Kbd package. The following patch fixes this issue for i386 keymaps:
patch -Np1 -i ../kbd-2.7-backspace-1.patch
After patching, the backspace key generates the character with code 127, and the delete key generates a well-known escape sequence.
Remove the redundant resizecons program (it requires the defunct svgalib to provide the video mode files - for normal use setfont sizes the console appropriately) together with its manpage.
sed -i '/RESIZECONS_PROGS=/s/yes/no/' configure
sed -i 's/resizecons.8 //' docs/man/man8/Makefile.in
Prepare Kbd for compilation:
./configure --prefix=/usr --disable-vlock
The meaning of the configure option:
--disable-vlock
This option prevents the vlock utility from being built because it requires the PAM library, which isn't available in the chroot environment.
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
For some languages (e.g., Belarusian) the Kbd package doesn't provide a useful keymap: the stock “by” keymap assumes the ISO-8859-5 encoding, but the CP1251 keymap is normally used. Users of such languages have to download working keymaps separately.
If desired, install the documentation:
cp -R -v docs/doc -T /usr/share/doc/kbd-2.7
Changes the foreground virtual terminal |
|
Deallocates unused virtual terminals |
|
Dumps the keyboard translation tables |
|
Prints the number of the active virtual terminal |
|
Prints the kernel scancode-to-keycode mapping table |
|
Obtains information about the status of a console |
|
Reports or sets the keyboard mode |
|
Sets the keyboard repeat and delay rates |
|
Loads the keyboard translation tables |
|
Loads the kernel unicode-to-font mapping table |
|
An obsolete program that used to load a user-defined output character mapping table into the console driver; this is now done by setfont |
|
Starts a program on a new virtual terminal (VT) |
|
Adds a Unicode character table to a console font |
|
Extracts the embedded Unicode character table from a console font |
|
Removes the embedded Unicode character table from a console font |
|
Handles Unicode character tables for console fonts |
|
Changes the Enhanced Graphic Adapter (EGA) and Video Graphics Array (VGA) fonts on the console |
|
Loads kernel scancode-to-keycode mapping table entries; this is useful if there are unusual keys on the keyboard |
|
Sets the keyboard flags and Light Emitting Diodes (LEDs) |
|
Defines the keyboard meta-key handling |
|
Sets the console color map in all virtual terminals |
|
Shows the current EGA/VGA console screen font |
|
Reports the scancodes, keycodes, and ASCII codes of the keys pressed on the keyboard |
|
Puts the keyboard and console in UNICODE mode [Don't use this program unless your keymap file is in the ISO-8859-1 encoding. For other encodings, this utility produces incorrect results.] |
|
Reverts keyboard and console from UNICODE mode |
The Libpipeline package contains a library for manipulating pipelines of subprocesses in a flexible and convenient way.
Prepare Libpipeline for compilation:
./configure --prefix=/usr
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
The Make package contains a program for controlling the generation of executables and other non-source files of a package from source files.
Prepare Make for compilation:
./configure --prefix=/usr
Compile the package:
make
To test the results, issue:
chown -R tester . su tester -c "PATH=$PATH make check"
Install the package:
make install
The Patch package contains a program for modifying or creating files by applying a “patch” file typically created by the diff program.
Prepare Patch for compilation:
./configure --prefix=/usr
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
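A minimal illustration of the diff/patch round trip described above, using throw-away files whose names are arbitrary (diff exits with a non-zero status when the files differ; this is expected):
echo "old line" > /tmp/original.txt
echo "new line" > /tmp/modified.txt
diff -u /tmp/original.txt /tmp/modified.txt > /tmp/demo.patch
patch /tmp/original.txt < /tmp/demo.patch
cat /tmp/original.txt
rm /tmp/original.txt /tmp/modified.txt /tmp/demo.patch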
The Tar package provides the ability to create tar archives as well as perform various other kinds of archive manipulation. Tar can be used on previously created archives to extract files, to store additional files, or to update or list files which were already stored.
Prepare Tar for compilation:
FORCE_UNSAFE_CONFIGURE=1  \
./configure --prefix=/usr
The meaning of the configure option:
FORCE_UNSAFE_CONFIGURE=1
This forces the test for mknod to be run as root. It is generally considered dangerous to run this test as the root user, but as it is being run on a system that has only been partially built, overriding it is OK.
Compile the package:
make
To test the results, issue:
make check
One test, capabilities: binary store/restore, is known to fail if it is run because LFS lacks selinux, but will be skipped if the host kernel does not support extended attributes or security labels on the filesystem used for building LFS.
Install the package:
make install
make -C doc install-html docdir=/usr/share/doc/tar-1.35
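The three basic modes of operation mentioned above (create, list, and extract) can be tried on a scratch archive; this example is purely illustrative:
echo "sample data" > /tmp/demo.txt
tar -C /tmp -cf /tmp/demo.tar demo.txt   # create an archive
tar -tf /tmp/demo.tar                    # list its contents
tar -C /tmp -xf /tmp/demo.tar            # extract the file again
rm /tmp/demo.txt /tmp/demo.tar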
The Texinfo package contains programs for reading, writing, and converting info pages.
Prepare Texinfo for compilation:
./configure --prefix=/usr
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
Optionally, install the components belonging in a TeX installation:
make TEXMF=/usr/share/texmf install-tex
The meaning of the make parameter:
TEXMF=/usr/share/texmf
The TEXMF makefile variable holds the location of the root of the TeX tree if, for example, a TeX package will be installed later.
The Info documentation system uses a plain text file to hold its list of menu entries. The file is located at /usr/share/info/dir. Unfortunately, due to occasional problems in the Makefiles of various packages, it can sometimes get out of sync with the info pages installed on the system. If the /usr/share/info/dir file ever needs to be recreated, the following optional commands will accomplish the task:
pushd /usr/share/info
  rm -v dir
  for f in *
    do install-info $f dir 2>/dev/null
  done
popd
Used to read info pages which are similar to man pages, but often go much deeper than just explaining all the available command line options [For example, compare man bison and info bison.] |
|
Used to install info pages; it updates entries in the info index file |
|
Translates the given Texinfo source documents into info pages, plain text, or HTML |
|
Used to format the given Texinfo document into a Portable Document Format (PDF) file |
|
Converts Pod to Texinfo format |
|
Translate Texinfo source documentation to various other formats |
|
Used to format the given Texinfo document into a device-independent file that can be printed |
|
Used to format the given Texinfo document into a Portable Document Format (PDF) file |
|
Used to sort Texinfo index files |
The Vim package contains a powerful text editor.
If you prefer another editor—such as Emacs, Joe, or Nano—please refer to https://www.linuxfromscratch.org/blfs/view/svn/postlfs/editors.html for suggested installation instructions.
First, change the default location of the vimrc configuration file to /etc:
echo '#define SYS_VIMRC_FILE "/etc/vimrc"' >> src/feature.h
Prepare Vim for compilation:
./configure --prefix=/usr
Compile the package:
make
To prepare the tests, ensure that user tester can write to the source tree:
chown -R tester .
Now run the tests as user tester:
su tester -c "TERM=xterm-256color LANG=en_US.UTF-8 make -j1 test" \
   &> vim-test.log
The test suite outputs a lot of binary data to the screen. This can cause issues with the settings of the current terminal (especially while we are overriding the TERM variable to satisfy some assumptions of the test suite). The problem can be avoided by redirecting the output to a log file as shown above. A successful test will result in the words ALL DONE in the log file at completion.
Install the package:
make install
Many users reflexively type vi instead of vim. To allow execution of vim when users habitually enter vi, create a symlink for both the binary and the man page in the provided languages:
ln -sv vim /usr/bin/vi
for L in /usr/share/man/{,*/}man1/vim.1; do
    ln -sv vim.1 $(dirname $L)/vi.1
done
By default, Vim's documentation is installed in /usr/share/vim. The following symlink allows the documentation to be accessed via /usr/share/doc/vim-9.1.0927, making it consistent with the location of documentation for other packages:
ln -sv ../vim/vim91/doc /usr/share/doc/vim-9.1.0927
If an X Window System is going to be installed on the LFS system, it may be necessary to recompile Vim after installing X. Vim comes with a GUI version of the editor that requires X and some additional libraries to be installed. For more information on this process, refer to the Vim documentation and the Vim installation page in the BLFS book at https://www.linuxfromscratch.org/blfs/view/svn/postlfs/vim.html.
By default, vim runs in vi-incompatible mode. This may be new to users who have used other editors in the past. The “nocompatible” setting is included below to highlight the fact that a new behavior is being used. It also reminds those who would change to “compatible” mode that it should be the first setting in the configuration file. This is necessary because it changes other settings, and overrides must come after this setting. Create a default vim configuration file by running the following:
cat > /etc/vimrc << "EOF"
" Begin /etc/vimrc
" Ensure defaults are set before customizing settings, not after
source $VIMRUNTIME/defaults.vim
let skip_defaults_vim=1
set nocompatible
set backspace=2
set mouse=
syntax on
if (&term == "xterm") || (&term == "putty")
set background=dark
endif
" End /etc/vimrc
EOF
The set nocompatible setting makes vim behave in a more useful way (the default) than the vi-compatible manner. Remove the “no” to keep the old vi behavior. The set backspace=2 setting allows backspacing over line breaks, autoindents, and the start of an insert. The syntax on parameter enables vim's syntax highlighting. The set mouse= setting enables proper pasting of text with the mouse when working in chroot or over a remote connection. Finally, the if statement with the set background=dark setting corrects vim's guess about the background color of some terminal emulators. This gives the highlighting a better color scheme for use on the black background of these programs.
Documentation for other available options can be obtained by running the following command:
vim -c ':options'
By default, vim only installs spell-checking files for the English language. To install spell-checking files for your preferred language, copy the .spl and, optionally, the .sug files for your language and character encoding from runtime/spell into /usr/share/vim/vim91/spell/.
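For example, assuming the desired spell files are present in runtime/spell, replace <ll> and <ENC> with the language code and character encoding and run:
install -v -d /usr/share/vim/vim91/spell
cp -v runtime/spell/<ll>.<ENC>.spl /usr/share/vim/vim91/spell/
cp -v runtime/spell/<ll>.<ENC>.sug /usr/share/vim/vim91/spell/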
To use these spell-checking files, some configuration in /etc/vimrc is needed, e.g.:
set spelllang=en,ru
set spell
For more information, see runtime/spell/README.txt.
Starts vim in ex mode |
|
Is a restricted version of view; no shell commands can be started and view cannot be suspended |
|
Is a restricted version of vim; no shell commands can be started and vim cannot be suspended |
|
Link to vim |
|
Starts vim in read-only mode |
|
Is the editor |
|
Edits two or three versions of a file with vim and shows differences |
|
Teaches the basic keys and commands of vim |
|
Creates a hex dump of the given file; it can also perform the inverse operation, so it can be used for binary patching |
MarkupSafe is a Python module that implements an XML/HTML/XHTML Markup safe string.
Compile MarkupSafe with the following command:
pip3 wheel -w dist --no-cache-dir --no-build-isolation --no-deps $PWD
This package does not come with a test suite.
Install the package:
pip3 install --no-index --no-user --find-links dist Markupsafe
Jinja2 is a Python module that implements a simple pythonic template language.
Build the package:
pip3 wheel -w dist --no-cache-dir --no-build-isolation --no-deps $PWD
Install the package:
pip3 install --no-index --no-user --find-links dist Jinja2
The Udev package contains programs for dynamic creation of device nodes.
Udev is part of the systemd-257 package. Use the systemd-257.tar.xz file as the source tarball.
Remove two unneeded groups, render and sgx, from the default udev rules:
sed -e 's/GROUP="render"/GROUP="video"/' \ -e 's/GROUP="sgx", //' \ -i rules.d/50-udev-default.rules.in
Remove one udev rule requiring a full Systemd installation:
sed -i '/systemd-sysctl/s/^/#/' rules.d/99-systemd.rules.in
Adjust the hardcoded paths to network configuration files for the standalone udev installation:
sed -e '/NETWORK_DIRS/s/systemd/udev/' \
    -i src/libsystemd/sd-network/network-util.h
Prepare Udev for compilation:
mkdir -p build
cd       build

meson setup ..                  \
      --prefix=/usr             \
      --buildtype=release       \
      -D mode=release           \
      -D dev-kvm-mode=0660      \
      -D link-udev-shared=false \
      -D logind=false           \
      -D vconsole=false
The meaning of the meson options:
--buildtype=release
This switch overrides the default buildtype (“debug”), which produces unoptimized binaries.
-D mode=release
Disable some features considered experimental by upstream.
-D dev-kvm-mode=0660
The default udev rule would allow all users to access /dev/kvm. The editors consider it dangerous. This option overrides it.
-D link-udev-shared=false
This option prevents udev from linking to the internal systemd shared library, libsystemd-shared. This library is designed to be shared by many Systemd components and is overkill for a udev-only installation.
-D logind=false -D vconsole=false
These options prevent the generation of several udev rule files belonging to the other Systemd components that we won't install.
Get the list of the shipped udev helpers and save it into an environment variable (exporting it is not strictly necessary, but it makes building as a regular user or using a package manager easier):
export udev_helpers=$(grep "'name' :" ../src/udev/meson.build | \
                      awk '{print $3}' | tr -d ",'" | grep -v 'udevadm')
Only build the components needed for udev:
ninja udevadm systemd-hwdb \
      $(ninja -n | grep -Eo '(src/(lib)?udev|rules.d|hwdb.d)/[^ ]*') \
      $(realpath libudev.so --relative-to .) \
      $udev_helpers
Install the package:
install -vm755 -d {/usr/lib,/etc}/udev/{hwdb.d,rules.d,network}
install -vm755 -d /usr/{lib,share}/pkgconfig
install -vm755 udevadm /usr/bin/
install -vm755 systemd-hwdb /usr/bin/udev-hwdb
ln -svfn ../bin/udevadm /usr/sbin/udevd
cp -av libudev.so{,*[0-9]} /usr/lib/
install -vm644 ../src/libudev/libudev.h /usr/include/
install -vm644 src/libudev/*.pc /usr/lib/pkgconfig/
install -vm644 src/udev/*.pc /usr/share/pkgconfig/
install -vm644 ../src/udev/udev.conf /etc/udev/
install -vm644 rules.d/* ../rules.d/README /usr/lib/udev/rules.d/
install -vm644 $(find ../rules.d/*.rules \
                      -not -name '*power-switch*') /usr/lib/udev/rules.d/
install -vm644 hwdb.d/* ../hwdb.d/{*.hwdb,README} /usr/lib/udev/hwdb.d/
install -vm755 $udev_helpers /usr/lib/udev
install -vm644 ../network/99-default.link /usr/lib/udev/network
Install some custom rules and support files useful in an LFS environment:
tar -xvf ../../udev-lfs-20230818.tar.xz
make -f udev-lfs-20230818/Makefile.lfs install
Install the man pages:
tar -xf ../../systemd-man-pages-257.tar.xz \
    --no-same-owner --strip-components=1   \
    -C /usr/share/man --wildcards '*/udev*' '*/libudev*' \
                                  '*/systemd.link.5' \
                                  '*/systemd-'{hwdb,udevd.service}.8

sed 's|systemd/network|udev/network|' \
    /usr/share/man/man5/systemd.link.5 \
    > /usr/share/man/man5/udev.link.5

sed 's/systemd\(\\\?-\)/udev\1/' /usr/share/man/man8/systemd-hwdb.8 \
    > /usr/share/man/man8/udev-hwdb.8

sed 's|lib.*udevd|sbin/udevd|' \
    /usr/share/man/man8/systemd-udevd.service.8 \
    > /usr/share/man/man8/udevd.8

rm /usr/share/man/man*/systemd*
Finally, unset the udev_helpers variable:
unset udev_helpers
Information about hardware devices is maintained in the /etc/udev/hwdb.d and /usr/lib/udev/hwdb.d directories. Udev needs that information to be compiled into a binary database, /etc/udev/hwdb.bin. Create the initial database:
udev-hwdb update
This command needs to be run each time the hardware information is updated.
Generic udev administration tool: controls the udevd daemon, provides info from the Udev database, monitors uevents, waits for uevents to finish, tests Udev configuration, and triggers uevents for a given device |
|
A daemon that listens for uevents on the netlink socket, creates devices and runs the configured external programs in response to these uevents |
|
Updates or queries the hardware database. |
|
A library interface to udev device information |
|
Contains Udev configuration files, device permissions, and rules for device naming |
The Man-DB package contains programs for finding and viewing man pages.
Prepare Man-DB for compilation:
./configure --prefix=/usr                         \
            --docdir=/usr/share/doc/man-db-2.13.0 \
            --sysconfdir=/etc                     \
            --disable-setuid                      \
            --enable-cache-owner=bin              \
            --with-browser=/usr/bin/lynx          \
            --with-vgrind=/usr/bin/vgrind         \
            --with-grap=/usr/bin/grap             \
            --with-systemdtmpfilesdir=            \
            --with-systemdsystemunitdir=
The meaning of the configure options:
--disable-setuid
This disables making the man program setuid to user man.
--enable-cache-owner=bin
This changes ownership of the system-wide cache files to user bin.
--with-...
These three parameters are used to set some default programs. lynx is a text-based web browser (see BLFS for installation instructions), vgrind converts program sources to Groff input, and grap is useful for typesetting graphs in Groff documents. The vgrind and grap programs are not normally needed for viewing manual pages. They are not part of LFS or BLFS, but you should be able to install them yourself after finishing LFS if you wish to do so.
--with-systemd...
These parameters prevent installing unneeded systemd directories and files.
Compile the package:
make
To test the results, issue:
make check
Install the package:
make install
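If desired, perform a brief functional check of the installed tools; these commands only build and read the whatis database and the installed manual pages:
mandb -c     # create the whatis database from scratch
whatis man
man -w man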
The following table shows the character set that Man-DB assumes manual pages installed under /usr/share/man/<ll> will be encoded with. In addition to this, Man-DB correctly determines if manual pages installed in that directory are UTF-8 encoded.
Table 8.1. Expected character encoding of legacy 8-bit manual pages
Language (code) | Encoding | Language (code) | Encoding |
---|---|---|---|
Danish (da) | ISO-8859-1 | Croatian (hr) | ISO-8859-2 |
German (de) | ISO-8859-1 | Hungarian (hu) | ISO-8859-2 |
English (en) | ISO-8859-1 | Japanese (ja) | EUC-JP |
Spanish (es) | ISO-8859-1 | Korean (ko) | EUC-KR |
Estonian (et) | ISO-8859-1 | Lithuanian (lt) | ISO-8859-13 |
Finnish (fi) | ISO-8859-1 | Latvian (lv) | ISO-8859-13 |
French (fr) | ISO-8859-1 | Macedonian (mk) | ISO-8859-5 |
Irish (ga) | ISO-8859-1 | Polish (pl) | ISO-8859-2 |
Galician (gl) | ISO-8859-1 | Romanian (ro) | ISO-8859-2 |
Indonesian (id) | ISO-8859-1 | Greek (el) | ISO-8859-7 |
Icelandic (is) | ISO-8859-1 | Slovak (sk) | ISO-8859-2 |
Italian (it) | ISO-8859-1 | Slovenian (sl) | ISO-8859-2 |
Norwegian Bokmal (nb) | ISO-8859-1 | Serbian Latin (sr@latin) | ISO-8859-2 |
Dutch (nl) | ISO-8859-1 | Serbian (sr) | ISO-8859-5 |
Norwegian Nynorsk (nn) | ISO-8859-1 | Turkish (tr) | ISO-8859-9 |
Norwegian (no) | ISO-8859-1 | Ukrainian (uk) | KOI8-U |
Portuguese (pt) | ISO-8859-1 | Vietnamese (vi) | TCVN5712-1 |
Swedish (sv) | ISO-8859-1 | Simplified Chinese (zh_CN) | GBK |
Belarusian (be) | CP1251 | Simplified Chinese, Singapore (zh_SG) | GBK |
Bulgarian (bg) | CP1251 | Traditional Chinese, Hong Kong (zh_HK) | BIG5HKSCS |
Czech (cs) | ISO-8859-2 | Traditional Chinese (zh_TW) | BIG5 |
Manual pages in languages not in the list are not supported.
Dumps the whatis database contents in human-readable form |
|
Searches the whatis database and displays the short descriptions of system commands that contain a given string |
|
Creates or updates the pre-formatted manual pages |
|
Displays one-line summary information about a given manual page |
|
Formats and displays the requested manual page |
|
Converts manual pages to another encoding |
|
Creates or updates the whatis database |
|
Displays the contents of $MANPATH or (if $MANPATH is not set) a suitable search path based on the settings in man.conf and the user's environment |
|
Searches the whatis database and displays the short descriptions of system commands that contain the given keyword as a separate word |
|
Contains run-time support for man |
|
Contains run-time support for man |
The Procps-ng package contains programs for monitoring processes.
Prepare Procps-ng for compilation:
./configure --prefix=/usr                           \
            --docdir=/usr/share/doc/procps-ng-4.0.4 \
            --disable-static                        \
            --disable-kill
The meaning of the configure option:
--disable-kill
This switch disables building the kill command; it will be installed from the Util-linux package.
Compile the package:
make
To run the test suite, run:
chown -R tester .
su tester -c "PATH=$PATH make check"
One test named ps with output flag bsdtime,cputime,etime,etimes is known to fail if the host kernel is not built with CONFIG_BSD_PROCESS_ACCT enabled.
Install the package:
make install
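If desired, a few of the installed monitoring tools can be exercised immediately; inside the chroot the figures reflect the host's running kernel:
free -h
uptime
ps aux | head -n 5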
Reports the amount of free and used memory (both physical and swap memory) in the system |
|
Looks up processes based on their name and other attributes |
|
Reports the PIDs of the given programs |
|
Signals processes based on their name and other attributes |
|
Reports the memory map of the given process |
|
Lists the current running processes |
|
Reports the current working directory of a process |
|
Displays detailed kernel slab cache information in real time |
|
Modifies kernel parameters at run time |
|
Prints a graph of the current system load average |
|
Displays a list of the most CPU intensive processes; it provides an ongoing look at processor activity in real time |
|
Reports how long the system has been running, how many users are logged on, and the system load averages |
|
Reports virtual memory statistics, giving information about processes, memory, paging, block Input/Output (IO), traps, and CPU activity |
|
Shows which users are currently logged on, where, and since when |
|
Runs a given command repeatedly, displaying the first screen-full of its output; this allows a user to watch the output change over time |
|
Contains the functions used by most programs in this package |
The Util-linux package contains miscellaneous utility programs. Among them are utilities for handling file systems, consoles, partitions, and messages.
Prepare Util-linux for compilation:
./configure --bindir=/usr/bin     \
            --libdir=/usr/lib     \
            --runstatedir=/run    \
            --sbindir=/usr/sbin   \
            --disable-chfn-chsh   \
            --disable-login       \
            --disable-nologin     \
            --disable-su          \
            --disable-setpriv     \
            --disable-runuser     \
            --disable-pylibmount  \
            --disable-liblastlog2 \
            --disable-static      \
            --without-python      \
            --without-systemd     \
            --without-systemdsystemunitdir        \
            ADJTIME_PATH=/var/lib/hwclock/adjtime \
            --docdir=/usr/share/doc/util-linux-2.40.2
The --disable and --without options prevent warnings about building components that either require packages not in LFS, or are inconsistent with programs installed by other packages.
Compile the package:
make
If desired, create a dummy /etc/fstab file to satisfy two tests and run the test suite as a non-root user:
Running the test suite as the root user can be harmful to your system. To run it, the CONFIG_SCSI_DEBUG option for the kernel must be available in the currently running system and must be built as a module. Building it into the kernel will prevent booting. For complete coverage, other BLFS packages must be installed. If desired, this test can be run by booting into the completed LFS system and running:
bash tests/run.sh --srcdir=$PWD --builddir=$PWD
touch /etc/fstab
chown -R tester .
su tester -c "make -k check"
The hardlink tests will fail if the host's kernel does not have the option CONFIG_CRYPTO_USER_API_HASH enabled or does not have any options providing a SHA256 implementation (for example, CONFIG_CRYPTO_SHA256, or CONFIG_CRYPTO_SHA256_SSSE3 if the CPU supports Supplemental SSE3) enabled. In addition, the lsfd: inotify test will fail if the kernel option CONFIG_NETLINK_DIAG is not enabled.
Install the package:
make install
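If desired, some of the installed utilities can be exercised right away with read-only queries; inside the chroot the output depends on the host's devices and mounts:
lsblk
findmnt | head -n 5
blkid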
Informs the Linux kernel of new partitions |
|
Opens a tty port, prompts for a login name, and then invokes the login program |
|
Discards sectors on a device |
|
A command line utility to locate and print block device attributes |
|
Is used to manage zoned storage block devices |
|
Allows users to call block device ioctls from the command line |
|
Displays a simple calendar |
|
Manipulates the partition table of the given device |
|
Modifies the state of CPUs |
|
Configures memory |
|
Displays and adjusts OOM-killer scores, used to determine which process to kill first when Linux is Out Of Memory |
|
Manipulates real-time attributes of a process |
|
Filters out reverse line feeds |
|
Filters nroff output for terminals that lack some capabilities, such as overstriking and half-lines |
|
Filters out the given columns |
|
Formats a given file into multiple columns |
|
Sets the function of the Ctrl+Alt+Del key combination to a hard or a soft reset |
|
Asks the Linux kernel to remove a partition |
|
Dumps the kernel boot messages |
|
Ejects removable media |
|
Preallocates space to a file |
|
Manipulates the partition table of the given device |
|
Counts pages of file contents in core |
|
Finds a file system, either by label or Universally Unique Identifier (UUID) |
|
Is a command line interface to the libmount library for working with mountinfo, fstab and mtab files |
|
Acquires a file lock and then executes a command with the lock held |
|
Is used to check, and optionally repair, file systems |
|
Performs a consistency check on the Cramfs file system on the given device |
|
Performs a consistency check on the Minix file system on the given device |
|
Is a very simple wrapper around FIFREEZE/FITHAW ioctl kernel driver operations |
|
Discards unused blocks on a mounted filesystem |
|
Parses options in the given command line |
|
Consolidates duplicate files by creating hard links |
|
Dumps the given file in hexadecimal, decimal, octal, or ascii |
|
Reads or sets the system's hardware clock, also called the Real-Time Clock (RTC) or Basic Input-Output System (BIOS) clock |
|
A symbolic link to setarch |
|
Gets or sets the io scheduling class and priority for a program |
|
Creates various IPC resources |
|
Removes the given Inter-Process Communication (IPC) resource |
|
Provides IPC status information |
|
Displays kernel interrupt counter information in top(1) style view |
|
Reports the size of an iso9660 file system |
|
Sends signals to processes |
|
Shows which users last logged in (and out),
searching back through the |
|
Shows the failed login attempts, as logged in
|
|
Attaches a line discipline to a serial line |
|
A symbolic link to setarch |
|
A symbolic link to setarch |
|
Enters the given message into the system log |
|
Displays lines that begin with the given string |
|
Sets up and controls loop devices |
|
Lists information about all or selected block devices in a tree-like format |
|
Prints CPU architecture information |
|
Displays information about open files; replaces lsof |
|
Prints information on IPC facilities currently employed in the system |
|
Displays kernel interrupt counter information |
|
Lists local system locks |
|
Lists information about users, groups and system accounts |
|
Lists the ranges of available memory with their online status |
|
Lists namespaces |
|
Generates magic cookies (128-bit random hexadecimal numbers) for xauth |
|
Controls whether other users can send messages to the current user's terminal |
|
Builds a file system on a device (usually a hard disk partition) |
|
Creates a Santa Cruz Operations (SCO) bfs file system |
|
Creates a cramfs file system |
|
Creates a Minix file system |
|
Initializes the given device or file to be used as a swap area |
|
A filter for paging through text one screen at a time |
|
Attaches the file system on the given device to a specified directory in the file-system tree |
|
Checks if the directory is a mountpoint |
|
Shows the symbolic links in the given paths |
|
Runs a program with namespaces of other processes |
|
Tells the kernel about the presence and numbering of on-disk partitions |
|
Makes the given file system the new root file system of the current process |
|
Gets and sets a process's resource limits |
|
Reads kernel profiling information |
|
Renames the given files, replacing a given string with another |
|
Alters the priority of running processes |
|
Asks the Linux kernel to resize a partition |
|
Reverses the lines of a given file |
|
Tool for enabling and disabling wireless devices |
|
Used to enter a system sleep state until the specified wakeup time |
|
Makes a typescript of a terminal session |
|
Re-runs session typescripts using timing information |
|
Plays back typescripts using timing information |
|
Changes reported architecture in a new program environment, and sets personality flags |
|
Runs the given program in a new session |
|
Sets terminal attributes |
|
A disk partition table manipulator |
|
Allows |
|
Makes changes to the swap area's UUID and label |
|
Disables devices and files for paging and swapping |
|
Enables devices and files for paging and swapping, and lists the devices and files currently in use |
|
Switches to another filesystem as the root of the mount tree |
|
Retrieves or sets a process's CPU affinity |
|
Manipulates the utilization clamping attributes of the system or a process |
|
A filter for translating underscores into escape sequences indicating underlining for the terminal in use |
|
Disconnects a file system from the system's file tree |
|
A symbolic link to setarch |
|
Runs a program with some namespaces unshared from parent |
|
Displays the content of the given login file in a user-friendly format |
|
A daemon used by the UUID library to generate time-based UUIDs in a secure and guaranteed-unique fashion |
|
Creates new UUIDs. Each new UUID is a random number likely to be unique among all UUIDs created, on the local system and on other systems, in the past and in the future, with extremely high probability (2^128 UUIDs are possible) |
|
A utility to parse unique identifiers |
|
Displays the contents of a file or, by default, its standard input, on the terminals of all currently logged in users |
|
Shows hardware watchdog status |
|
Reports the location of the binary, source, and man page files for the given command |
|
Wipes a filesystem signature from a device |
|
A symbolic link to setarch |
|
A program to set up and control zram (compressed ram disk) devices |
|
Contains routines for device identification and token extraction |
|
Contains routines for manipulating partition tables |
|
Contains routines for block device mounting and unmounting |
|
Contains routines for aiding screen output in tabular form |
|
Contains routines for generating unique identifiers for objects that may be accessible beyond the local system |
The E2fsprogs package contains the utilities for handling the ext2 file system. It also supports the ext3 and ext4 journaling file systems.
The E2fsprogs documentation recommends that the package be built in a subdirectory of the source tree:
mkdir -v build
cd build
Prepare E2fsprogs for compilation:
../configure --prefix=/usr       \
             --sysconfdir=/etc   \
             --enable-elf-shlibs \
             --disable-libblkid  \
             --disable-libuuid   \
             --disable-uuidd     \
             --disable-fsck
The meaning of the configure options:
--enable-elf-shlibs
This creates the shared libraries which some programs in this package use.
--disable-*
These prevent building and installing the libuuid and libblkid libraries, the uuidd daemon, and the fsck wrapper; util-linux installs more recent versions.
Compile the package:
make
To run the tests, issue:
make check
One test named m_assume_storage_prezeroed is known to fail.
Install the package:
make install
Remove useless static libraries:
rm -fv /usr/lib/{libcom_err,libe2p,libext2fs,libss}.a
This package installs a gzipped .info file but doesn't update the system-wide dir file. Unzip this file and then update the system dir file using the following commands:
gunzip -v /usr/share/info/libext2fs.info.gz
install-info --dir-file=/usr/share/info/dir /usr/share/info/libext2fs.info
If desired, create and install some additional documentation by issuing the following commands:
makeinfo -o doc/com_err.info ../lib/et/com_err.texinfo
install -v -m644 doc/com_err.info /usr/share/info
install-info --dir-file=/usr/share/info/dir /usr/share/info/com_err.info
/etc/mke2fs.conf contains the default values of various command line options of mke2fs. You may edit the file to make the default values suitable for your needs. For example, some utilities (not in LFS or BLFS) cannot recognize an ext4 file system with the metadata_csum_seed feature enabled. If you need such a utility, you may remove the feature from the default ext4 feature list with the command:
sed 's/metadata_csum_seed,//' -i /etc/mke2fs.conf
Read the man page mke2fs.conf(5) for details.
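The effect of the configured defaults can be examined without touching a real disk by creating a small ext4 file system inside an ordinary file; this example is purely illustrative and uses a throw-away image:
dd if=/dev/zero of=/tmp/test.img bs=1M count=16
mkfs.ext4 -F /tmp/test.img                    # -F allows operating on a regular file
dumpe2fs -h /tmp/test.img | grep -i features
rm /tmp/test.img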
Searches a device (usually a disk partition) for bad blocks |
|
Changes the attributes of files on |
|
An error table compiler; it converts a table of
error-code names and messages into a C source file
suitable for use with the |
|
A file system debugger; it can be used to examine
and change the state of |
|
Prints the super block and blocks group information for the file system present on a given device |
|
Reports free space fragmentation information |
|
Is used to check and optionally repair |
|
Is used to save critical |
|
Displays or changes the file system label on the
|
|
Checks MMP (Multiple Mount Protection) status of an
|
|
Checks the contents of a mounted |
|
Checks all mounted |
|
Replays the undo log for an |
|
|
|
Online defragmenter for |
|
Reports on how badly fragmented a particular file might be |
|
By default checks |
|
By default checks |
|
By default checks |
|
Saves the output of a command in a log file |
|
Lists the attributes of files on a second extended file system |
|
Converts a table of command names and help messages
into a C source file suitable for use with the
|
|
Creates an |
|
By default creates |
|
By default creates |
|
By default creates |
|
Creates a |
|
Can be used to enlarge or shrink |
|
Adjusts tunable file system parameters on
|
|
The common error display routine |
|
Used by dumpe2fs, chattr, and lsattr |
|
Contains routines to enable user-level programs to
manipulate |
|
Used by debugfs |
The Sysklogd package contains programs for logging system messages, such as those emitted by the kernel when unusual things happen.
Prepare the package for compilation:
./configure --prefix=/usr      \
            --sysconfdir=/etc  \
            --runstatedir=/run \
            --without-logger   \
            --docdir=/usr/share/doc/sysklogd-2.6.2
Compile the package:
make
This package does not come with a test suite.
Install the package:
make install
Create a new /etc/syslog.conf file by running the following:
cat > /etc/syslog.conf << "EOF"
# Begin /etc/syslog.conf
auth,authpriv.* -/var/log/auth.log
*.*;auth,authpriv.none -/var/log/sys.log
daemon.* -/var/log/daemon.log
kern.* -/var/log/kern.log
mail.* -/var/log/mail.log
user.* -/var/log/user.log
*.emerg *
# Do not open any internet ports.
secure_mode 2
# End /etc/syslog.conf
EOF
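Once the system is booted and syslogd is running (the boot scripts set up in a later chapter start it), this configuration can be exercised with the logger program from Util-linux, for example:
logger -p user.info "Test message from logger"
tail -n 1 /var/log/user.log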
The SysVinit package contains programs for controlling the startup, running, and shutdown of the system.
First, apply a patch that removes several programs installed by other packages, clarifies a message, and fixes a compiler warning:
patch -Np1 -i ../sysvinit-3.11-consolidated-1.patch
Compile the package:
make
This package does not come with a test suite.
Install the package:
make install
bootlogd: Logs boot messages to a log file
fstab-decode: Runs a command with fstab-encoded arguments
halt: Normally invokes shutdown with the -h option, except when already in run level 0; then it tells the kernel to halt the system
init: The first process to be started when the kernel has initialized the hardware; it takes over the boot process and starts all the processes specified in its configuration file
killall5: Sends a signal to all processes, except the processes in its own session; it will not kill its parent shell
poweroff: Tells the kernel to halt the system and switch off the computer (see halt)
reboot: Tells the kernel to reboot the system (see halt)
runlevel: Reports the previous and the current run-level, as noted in the last run-level record in /run/utmp
shutdown: Brings the system down in a secure way, signaling all processes and notifying all logged-in users
telinit: Tells init which run-level to change to
Most programs and libraries are, by default, compiled with debugging symbols included (with gcc's -g option). This means that when debugging a program or library that was compiled with debugging information, the debugger can provide not only memory addresses, but also the names of the routines and variables.
The inclusion of these debugging symbols enlarges a program or library significantly. Here are two examples of the amount of space these symbols occupy:
A bash binary with debugging symbols: 1200 KB
A bash binary without debugging symbols: 480 KB (60% smaller)
Glibc and GCC files (/lib and /usr/lib) with debugging symbols: 87 MB
Glibc and GCC files without debugging symbols: 16 MB (82% smaller)
Sizes will vary depending on which compiler and C library were used, but a program that has been stripped of debugging symbols is usually some 50% to 80% smaller than its unstripped counterpart. Because most users will never use a debugger on their system software, a lot of disk space can be regained by removing these symbols. The next section shows how to strip all debugging symbols from the programs and libraries.
This section is optional. If the intended user is not a programmer and does not plan to do any debugging of the system software, the system's size can be decreased by some 2 GB by removing the debugging symbols, and some unnecessary symbol table entries, from binaries and libraries. This causes no real inconvenience for a typical Linux user.
Most people who use the commands mentioned below do not experience any difficulties. However, it is easy to make a mistake and render the new system unusable. So before running the strip commands, it is a good idea to make a backup of the LFS system in its current state.
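A minimal sketch of such a backup is shown below. It assumes it is run as root on the host system (outside the chroot environment), that the LFS partition is mounted at $LFS, and that the virtual kernel file systems have been unmounted first; the archive name is arbitrary.
cd $LFS
tar -cJpf $HOME/lfs-before-strip.tar.xz .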
A strip command with the --strip-unneeded option removes all debug symbols from a binary or library. It also removes all symbol table entries not needed by the linker (for static libraries) or dynamic linker (for dynamically linked binaries and shared libraries).
The debugging symbols from selected libraries are compressed with Zlib and preserved in separate files. That debugging information is needed to run regression tests with valgrind or gdb later, in BLFS.
Note that strip will overwrite the binary or library file it is processing. This can crash the processes using code or data from the file. If the process running strip is affected, the binary or library being stripped can be destroyed; this can make the system completely unusable. To avoid this problem we copy some libraries and binaries into /tmp, strip them there, then reinstall them with the install command. (The related entry in Section 8.2.1, “Upgrade Issues” gives the rationale for using the install command here.)
The ELF loader's name is ld-linux-x86-64.so.2 on 64-bit systems and ld-linux.so.2 on 32-bit systems. The construct below selects the correct name for the current architecture, excluding anything ending with g, in case the commands below have already been run.
If there is any package whose version is different from the version specified by the book (either following a security advisory or satisfying personal preference), it may be necessary to update the library file name in save_usrlib or online_usrlib. Failing to do so may render the system completely unusable.
save_usrlib="$(cd /usr/lib; ls ld-linux*[^g]) libc.so.6
             libthread_db.so.1
             libquadmath.so.0.0.0
             libstdc++.so.6.0.33
             libitm.so.1.0.0
             libatomic.so.1.2.0"

cd /usr/lib

for LIB in $save_usrlib; do
    objcopy --only-keep-debug --compress-debug-sections=zlib $LIB $LIB.dbg
    cp $LIB /tmp/$LIB
    strip --strip-unneeded /tmp/$LIB
    objcopy --add-gnu-debuglink=$LIB.dbg /tmp/$LIB
    install -vm755 /tmp/$LIB /usr/lib
    rm /tmp/$LIB
done

online_usrbin="bash find strip"
online_usrlib="libbfd-2.43.1.so
               libsframe.so.1.0.0
               libhistory.so.8.2
               libncursesw.so.6.5
               libm.so.6
               libreadline.so.8.2
               libz.so.1.3.1
               libzstd.so.1.5.6
               $(cd /usr/lib; find libnss*.so* -type f)"

for BIN in $online_usrbin; do
    cp /usr/bin/$BIN /tmp/$BIN
    strip --strip-unneeded /tmp/$BIN
    install -vm755 /tmp/$BIN /usr/bin
    rm /tmp/$BIN
done

for LIB in $online_usrlib; do
    cp /usr/lib/$LIB /tmp/$LIB
    strip --strip-unneeded /tmp/$LIB
    install -vm755 /tmp/$LIB /usr/lib
    rm /tmp/$LIB
done

for i in $(find /usr/lib -type f -name \*.so* ! -name \*dbg) \
         $(find /usr/lib -type f -name \*.a)                 \
         $(find /usr/{bin,sbin,libexec} -type f); do
    case "$online_usrbin $online_usrlib $save_usrlib" in
        *$(basename $i)* )
            ;;
        * ) strip --strip-unneeded $i
            ;;
    esac
done

unset BIN LIB save_usrlib online_usrbin online_usrlib
A large number of files will be flagged as errors because their file format is not recognized. These warnings can be safely ignored. They indicate that those files are scripts, not binaries.
Finally, clean up some extra files left over from running tests:
rm -rf /tmp/{*,.*}
There are also several files in the /usr/lib and /usr/libexec directories with a file name extension of .la. These are "libtool archive" files. On a modern Linux system the libtool .la files are only useful for libltdl. No libraries in LFS are expected to be loaded by libltdl, and it's known that some .la files can break BLFS package builds. Remove those files now:
find /usr/lib /usr/libexec -name \*.la -delete
For more information about libtool archive files, see the BLFS section "About Libtool Archive (.la) files".
The compiler built in Chapter 6 and Chapter 7 is still partially installed and not needed anymore. Remove it with:
find /usr -depth -name $(uname -m)-lfs-linux-gnu\* | xargs rm -rf
Finally, remove the temporary 'tester' user account created at the beginning of the previous chapter.
userdel -r tester
Booting a Linux system involves several tasks. The process must mount both virtual and real file systems, initialize devices, check file systems for integrity, mount and activate any swap partitions or files, set the system clock, bring up networking, start any daemons required by the system, and accomplish any other custom tasks specified by the user. This process must be organized to ensure the tasks are performed in the correct order and executed as quickly as possible.
System V is the classic boot process that has been used in Unix and Unix-like systems such as Linux since about 1983. It consists of a small program, init, that sets up basic processes such as login (via getty) and runs a script. This script, usually named rc, controls the execution of a set of additional scripts that perform the tasks required to initialize the system.
The init
program is controlled by the /etc/inittab
file and is organized into run
levels that can be chosen by the user. In LFS, they are used
as follows:
0 — halt
1 — Single user mode
2 — User definable
3 — Full multiuser mode
4 — User definable
5 — Full multiuser mode with display manager
6 — reboot
The usual default run level is 3 or 5.
Established, well understood system.
Easy to customize.
May be slower to boot. A medium speed base LFS system takes 8-12 seconds where the boot time is measured from the first kernel message to the login prompt. Network connectivity is typically established about 2 seconds after the login prompt.
Serial processing of boot tasks. This is related to the previous point. A delay in any process, such as a file system check, will delay the entire boot process.
Does not directly support advanced features like control groups (cgroups) and per-user fair share scheduling.
Adding scripts requires manual, static sequencing decisions.
The LFS-Bootscripts package contains a set of scripts to start/stop the LFS system at bootup/shutdown. The configuration files and procedures needed to customize the boot process are described in the following sections.
Install the package:
make install
checkfs: Checks the integrity of the file systems before they are mounted (with the exception of journaling and network-based file systems)
cleanfs: Removes files that should not be preserved between reboots, such as those in /run/ and /var/lock/
console: Loads the correct keymap table for the desired keyboard layout; it also sets the screen font
functions: Contains common functions, such as error and status checking, that are used by several bootscripts
halt: Halts the system
ifdown: Stops a network device
ifup: Initializes a network device
localnet: Sets up the system's hostname and local loopback device
modules: Loads kernel modules listed in /etc/sysconfig/modules
mountfs: Mounts all file systems, except those that are marked noauto, or are network based
mountvirtfs: Mounts virtual kernel file systems, such as proc
network: Sets up network interfaces, such as network cards, and sets up the default gateway (where applicable)
rc: The master run-level control script; it is responsible for running all the other bootscripts one-by-one, in a sequence determined by the names of the symbolic links to those other bootscripts
reboot: Reboots the system
sendsignals: Makes sure every process is terminated before the system reboots or halts
setclock: Resets the system clock to local time if the hardware clock is not set to UTC
static: Provides the functionality needed to assign a static Internet Protocol (IP) address to a network interface
swap: Enables and disables swap files and partitions
sysctl: Loads system configuration values from /etc/sysctl.conf
sysklogd: Starts and stops the system and kernel log daemons
template: A template to create custom bootscripts for other daemons
udev: Prepares the /dev directory and starts udevd
udev_retry: Retries failed udev uevents, and copies generated rules files from /run/udev to /etc/udev/rules.d if required
In Chapter 8, we installed the udev daemon when udev was built. Before we go into the details regarding how udev works, a brief history of previous methods of handling devices is in order.
Linux systems in general traditionally used a static device
creation method, whereby a great many device nodes were created
under /dev
(sometimes literally
thousands of nodes), regardless of whether the corresponding
hardware devices actually existed. This was typically done via
a MAKEDEV script,
which contained a number of calls to the mknod program with the
relevant major and minor device numbers for every possible
device that might exist in the world.
Using the udev method, device nodes are only created for those
devices which are detected by the kernel. These device nodes
are created each time the system boots; they are stored in a
devtmpfs
file system (a virtual
file system that resides entirely in system memory). Device
nodes do not require much space, so the memory that is used is
negligible.
In February 2000, a new filesystem called devfs
was merged into the 2.3.46 kernel
and was made available during the 2.4 series of stable
kernels. Although it was present in the kernel source itself,
this method of creating devices dynamically never received
overwhelming support from the core kernel developers.
The main problem with the approach adopted by devfs
was the way it handled device
detection, creation, and naming. The latter issue, that of
device node naming, was perhaps the most critical. It is
generally accepted that if device names are configurable, the
device naming policy should be chosen by system
administrators, and not imposed on them by the developer(s).
The devfs
file system also
suffered from race conditions that were inherent in its
design; these could not be fixed without a substantial
revision of the kernel. devfs
was marked as deprecated for a long time, and was finally
removed from the kernel in June, 2006.
With the development of the unstable 2.5 kernel tree, later released as the 2.6 series of stable kernels, a new virtual filesystem called sysfs came to be. The job of sysfs is to provide information about the system's hardware configuration to userspace processes. With this userspace-visible representation, it became possible to develop a userspace replacement for devfs.
The sysfs filesystem was mentioned briefly above. One may wonder how sysfs knows about the devices present on a system and what device numbers should be used for them. Drivers that have been compiled into the kernel register their objects in sysfs (devtmpfs internally) as they are detected by the kernel. For drivers compiled as modules, registration happens when the module is loaded. Once the sysfs filesystem is mounted (on /sys), data which the drivers have registered with sysfs are available to userspace processes and to udevd for processing (including modifications to device nodes).
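To get a feel for what the drivers register, you can browse sysfs directly once it is mounted; for example (the entries and attribute values shown will of course depend on your hardware):
ls /sys/class/net              # one entry per registered network interface
cat /sys/class/net/*/address   # the MAC address attribute exported by each driver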
Device files are created by the kernel in the devtmpfs file system. Any driver that wishes to register a device node will use the devtmpfs (via the driver core) to do it. When a devtmpfs instance is mounted on /dev, the device node will initially be exposed to userspace with a fixed name, permissions, and owner.
A short time later, the kernel will send a uevent to udevd. Based on the rules specified in the files within the /etc/udev/rules.d, /usr/lib/udev/rules.d, and /run/udev/rules.d directories, udevd will create additional symlinks to the device node, or change its permissions, owner, or group, or modify the internal udevd database entry (name) for that object.
The rules in these three directories are numbered and all three directories are merged together. If udevd can't find a rule for the device it is creating, it will leave the permissions and ownership at whatever devtmpfs used initially.
Device drivers compiled as modules may have aliases built into them. Aliases are visible in the output of the modinfo program and are usually related to the bus-specific identifiers of devices supported by a module. For example, the snd-fm801 driver supports PCI devices with vendor ID 0x1319 and device ID 0x0801, and has an alias of pci:v00001319d00000801sv*sd*bc04sc01i*.
For most devices, the bus driver exports the alias of the driver that would handle the device via sysfs. E.g., the /sys/bus/pci/devices/0000:00:0d.0/modalias file might contain the string pci:v00001319d00000801sv00001319sd00001319bc04sc01i00.
The default rules provided with udev will cause
udevd to call
out to /sbin/modprobe with the
contents of the MODALIAS
uevent
environment variable (which should be the same as the
contents of the modalias
file
in sysfs), thus loading all modules whose aliases match
this string after wildcard expansion.
In this example, this means that, in addition to snd-fm801, the obsolete (and unwanted) forte driver will be loaded if it is available. See below for ways in which the loading of unwanted drivers can be prevented.
The kernel itself is also able to load modules for network protocols, filesystems, and NLS support on demand.
There are a few possible problems when it comes to automatically creating device nodes.
Udev will only load a module if it has a bus-specific alias and the bus driver properly exports the necessary aliases to sysfs. In other cases, one should arrange module loading by other means. With Linux-6.12.5, udev is known to load properly-written drivers for INPUT, IDE, PCI, USB, SCSI, SERIO, and FireWire devices.
To determine if the device driver you require has the
necessary support for udev, run modinfo with the module
name as the argument. Now try locating the device directory
under /sys/bus
and check
whether there is a modalias
file there.
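As an illustration, the two checks can be performed as follows, reusing the snd-fm801 module and the PCI device path from the example above (substitute your own module name and device path):
modinfo -F alias snd-fm801                      # aliases built into the module, if any
cat /sys/bus/pci/devices/0000:00:0d.0/modalias  # alias exported by the bus driver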
If the modalias file exists in sysfs, so the driver supports the device and can talk to it directly, but the driver does not provide the alias, it is a bug in the driver. Load the driver without the help from udev and expect the issue to be fixed later.
If there is no modalias file in the relevant directory under /sys/bus, this means that the kernel developers have not yet added modalias support to this bus type. With Linux-6.12.5, this is the case with ISA busses. Expect this issue to be fixed in later kernel versions.
Udev is not intended to load “wrapper” drivers such as snd-pcm-oss and non-hardware drivers such as loop at all.
If the “wrapper” module only enhances the functionality provided by some other module (e.g., snd-pcm-oss enhances the functionality of snd-pcm by making the sound cards available to OSS applications), configure modprobe to load the wrapper after udev loads the wrapped module. To do this, add a “softdep” line to the corresponding /etc/modprobe.d/<filename>.conf file. For example:
softdep snd-pcm post: snd-pcm-oss
Note that the “softdep” command also allows pre: dependencies, or a mixture of both pre: and post: dependencies. See the modprobe.d(5) manual page for more information on “softdep” syntax and capabilities.
If the module in question is not a wrapper and is useful by
itself, configure the modules bootscript to
load this module on system boot. To do this, add the module
name to the /etc/sysconfig/modules
file on a separate
line. This works for wrapper modules too, but is suboptimal
in that case.
Either don't build the module, or blacklist it in a
/etc/modprobe.d/blacklist.conf
file as
done with the forte
module in the example below:
blacklist forte
Blacklisted modules can still be loaded manually with the explicit modprobe command.
This usually happens if a rule unexpectedly matches a device. For example, a poorly-written rule can match both a SCSI disk (as desired) and the corresponding SCSI generic device (incorrectly) by vendor. Find the offending rule and make it more specific, with the help of the udevadm info command.
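For example, to inspect the attributes a rule could match against for a given device node, a command like the following can be used (here /dev/sda is just a placeholder for the device in question):
udevadm info --attribute-walk --name=/dev/sda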
This may be another manifestation of the previous problem.
If not, and your rule uses sysfs
attributes, it may be a kernel
timing issue, to be fixed in later kernels. For now, you
can work around it by creating a rule that waits for the
used sysfs
attribute and
appending it to the /etc/udev/rules.d/10-wait_for_sysfs.rules
file (create this file if it does not exist). Please notify
the LFS Development list if you do so and it helps.
First, be certain that the driver is built into the kernel or already loaded as a module, and that udev isn't creating a misnamed device.
If a kernel driver does not export its data to sysfs, udev lacks the information needed to create a device node. This is most likely to happen with third party drivers from outside the kernel tree. Create a static device node in /usr/lib/udev/devices with the appropriate major/minor numbers (see the file devices.txt inside the kernel documentation or the documentation provided by the third party driver vendor). The static device node will be copied to /dev by udev.
This is due to the fact that udev, by design, handles uevents and loads modules in parallel, and thus in an unpredictable order. This will never be “fixed.” You should not rely upon the kernel device names being stable. Instead, create your own rules that make symlinks with stable names based on some stable attributes of the device, such as a serial number or the output of various *_id utilities installed by udev. See Section 9.4, “Managing Devices” and Section 9.5, “General Network Configuration” for examples.
Additional helpful documentation is available at the following sites:
A Userspace Implementation of devfs
http://www.kroah.com/linux/talks/ols_2003_udev_paper/Reprint-Kroah-Hartman-OLS2003.pdf
The sysfs Filesystem
https://www.kernel.org/pub/linux/kernel/people/mochel/doc/papers/ols-2005/mochel.pdf
Udev, by default, names network devices according to Firmware/BIOS data or physical characteristics like the bus, slot, or MAC address. The purpose of this naming convention is to ensure that network devices are named consistently, not based on when the network card was discovered. In older versions of Linux—on a computer with two network cards made by Intel and Realtek, for instance—the network card manufactured by Intel might have become eth0 while the Realtek card became eth1. After a reboot, the cards would sometimes get renumbered the other way around.
In the new naming scheme, typical network device names are something like enp5s0 or wlp3s0. If this naming convention is not desired, the traditional naming scheme, or a custom scheme, can be implemented.
The traditional naming scheme using eth0, eth1, etc. can be
restored by adding net.ifnames=0
on the
kernel command line. This is most appropriate for systems
that have just one ethernet device of a particular type.
Laptops often have two ethernet connections named eth0 and
wlan0; such laptops can also use this method. The command
line is in the GRUB configuration file. See Section 10.4.4,
“Creating the GRUB Configuration File.”
The naming scheme can be customized by creating custom udev rules. A script has been included that generates the initial rules. Generate these rules by running:
bash /usr/lib/udev/init-net-rules.sh
Now, inspect the /etc/udev/rules.d/70-persistent-net.rules
file, to find out which name was assigned to which network
device:
cat /etc/udev/rules.d/70-persistent-net.rules
In some cases, such as when MAC addresses have been assigned to a network card manually, or in a virtual environment such as Qemu or Xen, the network rules file may not be generated because addresses are not consistently assigned. In these cases, this method cannot be used.
The file begins with a comment block, followed by two lines for each NIC. The first line for each NIC is a commented description showing its hardware IDs (e.g. its PCI vendor and device IDs, if it's a PCI card), along with its driver (in parentheses, if the driver can be found). Neither the hardware ID nor the driver is used to determine which name to give an interface; this information is only for reference. The second line is the udev rule that matches this NIC and actually assigns it a name.
All udev rules are made up of several keywords, separated by commas and optional whitespace. Here are the keywords, and an explanation of each one:
SUBSYSTEM=="net"
- This
tells udev to ignore devices that are not network
cards.
ACTION=="add"
- This
tells udev to ignore this rule for a uevent that
isn't an add ("remove" and "change" uevents also
happen, but don't need to rename network interfaces).
DRIVERS=="?*"
- This
exists so that udev will ignore VLAN or bridge
sub-interfaces (because these sub-interfaces do not
have drivers). These sub-interfaces are skipped
because the name that would be assigned would collide
with the parent devices.
ATTR{address}
- The
value of this keyword is the NIC's MAC address.
ATTR{type}=="1"
- This
ensures the rule only matches the primary interface
in the case of certain wireless drivers which create
multiple virtual interfaces. The secondary interfaces
are skipped for the same reason that VLAN and bridge
sub-interfaces are skipped: there would be a name
collision otherwise.
NAME
- The value of this
keyword is the name that udev will assign to this
interface.
The value of NAME
is the
important part. Make sure you know which name has been
assigned to each of your network cards before proceeding,
and be sure to use that NAME
value when creating your network configuration files.
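Put together, a complete rule line in the generated file looks roughly like the following sketch (the MAC address here is made up; the real file contains the address of your NIC):
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", NAME="eth0"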
Even if the custom udev rule file is created, udev may still assign one or more alternative names for a NIC based on physical characteristics. If a custom udev rule would rename some NIC using a name already assigned as an alternative name of another NIC, this udev rule will fail. If this issue happens, you may create the /etc/udev/network/99-default.link configuration file with an empty alternative name assignment policy, overriding the default configuration file /usr/lib/udev/network/99-default.link:
sed -e '/^AlternativeNamesPolicy/s/=.*$/=/' \
    /usr/lib/udev/network/99-default.link   \
    > /etc/udev/network/99-default.link
Some software that you may want to install later (e.g., various media players) expects the /dev/cdrom and /dev/dvd symlinks to exist, and to point to a CD-ROM or DVD-ROM device. Also, it may be convenient to put references to those symlinks into /etc/fstab. Udev comes with a script that will generate rules files to create these symlinks for you, depending on the capabilities of each device, but you need to decide which of two modes of operation you wish to have the script use.
First, the script can operate in “by-path” mode (used by default for USB and FireWire devices), where the rules it creates depend on the physical path to the CD or DVD device. Second, it can operate in “by-id” mode (default for IDE and SCSI devices), where the rules it creates depend on identification strings stored on the CD or DVD device itself. The path is determined by udev's path_id script, and the identification strings are read from the hardware by its ata_id or scsi_id programs, depending on which type of device you have.
There are advantages to each approach; the correct approach depends on what kinds of device changes may happen. If you expect the physical path to the device (that is, the ports and/or slots that it plugs into) to change, for example because you plan on moving the drive to a different IDE port or a different USB connector, then you should use the “by-id” mode. On the other hand, if you expect the device's identification to change, for example because it may die, and you intend to replace it with a different device that plugs into the same connectors, then you should use the “by-path” mode.
If either type of change is possible with your drive, then choose a mode based on the type of change you expect to happen more often.
External devices (for example, a USB-connected CD drive) should not use by-path persistence, because each time the device is plugged into a new external port, its physical path will change. All externally-connected devices will have this problem if you write udev rules to recognize them by their physical path; the problem is not limited to CD and DVD drives.
If you wish to see the values that the udev scripts will use, then for the appropriate CD-ROM device, find the corresponding directory under /sys (e.g., this can be /sys/block/hdd) and run a command similar to the following:
udevadm test /sys/block/hdd
Look at the lines containing the output of various *_id programs. The “by-id” mode will use the ID_SERIAL value if it exists and is not empty, otherwise it will use a combination of ID_MODEL and ID_REVISION. The “by-path” mode will use the ID_PATH value.
If the default mode is not suitable for your situation, then the following modification can be made to the /etc/udev/rules.d/83-cdrom-symlinks.rules file (where mode is one of “by-id” or “by-path”):
sed -e 's/"write_cd_rules"/"write_cd_rules mode"/' \
    -i /etc/udev/rules.d/83-cdrom-symlinks.rules
Note that it is not necessary to create the rules files or
symlinks at this time because you have bind-mounted the
host's /dev
directory into the
LFS system and we assume the symlinks exist on the host. The
rules and symlinks will be created the first time you boot
your LFS system.
However, if you have multiple CD-ROM devices, then the
symlinks generated at that time may point to different
devices than they point to on your host because devices are
not discovered in a predictable order. The assignments
created when you first boot the LFS system will be stable, so
this is only an issue if you need the symlinks on both
systems to point to the same device. If you need that, then
inspect (and possibly edit) the generated /etc/udev/rules.d/70-persistent-cd.rules
file after booting, to make sure the assigned symlinks match
your needs.
As explained in Section 9.3,
“Overview of Device and Module Handling,” the order in
which devices with the same function appear in /dev
is essentially random. E.g., if you
have a USB web camera and a TV tuner, sometimes /dev/video0
refers to the camera and
/dev/video1
refers to the
tuner, and sometimes after a reboot the order changes. For
all classes of hardware except sound cards and network cards,
this is fixable by creating udev rules to create persistent
symlinks. The case of network cards is covered separately in
Section 9.5,
“General Network Configuration,” and sound card
configuration can be found in
BLFS.
For each of your devices that is likely to have this problem (even if the problem doesn't exist in your current Linux distribution), find the corresponding directory under /sys/class or /sys/block. For video devices, this may be /sys/class/video4linux/videoX. Figure out the attributes that identify the device uniquely (usually, vendor and product IDs and/or serial numbers work):
udevadm info -a -p /sys/class/video4linux/video0
Then write rules that create the symlinks, e.g.:
cat > /etc/udev/rules.d/83-duplicate_devs.rules << "EOF"
# Persistent symlinks for webcam and tuner
KERNEL=="video*", ATTRS{idProduct}=="1910", ATTRS{idVendor}=="0d81", SYMLINK+="webcam"
KERNEL=="video*", ATTRS{device}=="0x036f", ATTRS{vendor}=="0x109e", SYMLINK+="tvtuner"
EOF
The result is that /dev/video0
and /dev/video1
devices still
refer randomly to the tuner and the web camera (and thus
should never be used directly), but there are symlinks
/dev/tvtuner
and /dev/webcam
that always point to the
correct device.
The files in /etc/sysconfig/ usually determine which interfaces are brought up and down by the network script. This directory should contain a file for each interface to be configured, such as ifconfig.xyz, where “xyz” describes the network card. The interface name (e.g. eth0) is usually appropriate. Each file contains the attributes of one interface, such as its IP address(es), subnet masks, and so forth. The stem of the filename must be ifconfig.
If the procedure in the previous section was not used, udev will assign network card interface names based on system physical characteristics such as enp2s1. If you are not sure what your interface name is, you can always run ip link or ls /sys/class/net after you have booted your system.
The interface names depend on the implementation and configuration of the udev daemon running on the system. The udev daemon for LFS (installed in Section 8.76, “Udev from Systemd-257”) will not run until the LFS system is booted. So the interface names in the LFS system cannot always be determined by running those commands on the host distro, even in the chroot environment.
The following command creates a sample file for the eth0 device with a static IP address:
cd /etc/sysconfig/
cat > ifconfig.eth0 << "EOF"
ONBOOT=yes
IFACE=eth0
SERVICE=ipv4-static
IP=192.168.1.2
GATEWAY=192.168.1.1
PREFIX=24
BROADCAST=192.168.1.255
EOF
The values in italics must be changed in each file, to set the interfaces up correctly.
If the ONBOOT variable is set to yes, the System V network script will bring up the Network Interface Card (NIC) during the system boot process. If set to anything besides yes, the NIC will be ignored by the network script and will not be started automatically. Interfaces can be manually started or stopped with the ifup and ifdown commands.
The IFACE
variable defines the
interface name, for example, eth0. It is required for all
network device configuration files. The filename extension
must match this value.
The SERVICE
variable defines the
method used for obtaining the IP address. The LFS-Bootscripts
package has a modular IP assignment format, and creating
additional files in the /lib/services/
directory allows other IP
assignment methods. This is commonly used for Dynamic Host
Configuration Protocol (DHCP), which is addressed in the BLFS
book.
The GATEWAY
variable should
contain the default gateway IP address, if one is present. If
not, then comment out the variable entirely.
The PREFIX
variable specifies the
number of bits used in the subnet. Each segment of an IP
address is 8 bits. If the subnet's netmask is 255.255.255.0,
then it is using the first three segments (24 bits) to
specify the network number. If the netmask is
255.255.255.240, the subnet is using the first 28 bits.
Prefixes longer than 24 bits are commonly used by DSL and
cable-based Internet Service Providers (ISPs). In this
example (PREFIX=24), the netmask is 255.255.255.0. Adjust the
PREFIX
variable according to your
specific subnet. If omitted, the PREFIX defaults to 24.
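Once the interface has been brought up, the prefix can be verified directly with the ip utility from the IPRoute2 package; the address is reported in CIDR form, for example:
ip addr show eth0
# ... inet 192.168.1.2/24 brd 192.168.1.255 scope global eth0 ...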
For more information see the ifup man page.
The system will need some means of obtaining Domain Name Service (DNS) name resolution to resolve Internet domain names to IP addresses, and vice versa. This is best achieved by placing the IP address of the DNS server, available from the ISP or network administrator, into /etc/resolv.conf. Create the file by running the following:
cat > /etc/resolv.conf << "EOF"
# Begin /etc/resolv.conf
domain <Your Domain Name>
nameserver <IP address of your primary nameserver>
nameserver <IP address of your secondary nameserver>
# End /etc/resolv.conf
EOF
The domain
statement can be
omitted or replaced with a search
statement. See the man page for
resolv.conf for more details.
Replace <IP address of the
nameserver>
with the IP address of the DNS
most appropriate for the setup. There will often be more than
one entry (requirements demand secondary servers for fallback
capability). If you only need or want one DNS server, remove
the second nameserver
line from the file. The IP address may also be a router on
the local network.
The Google Public IPv4 DNS addresses are 8.8.8.8 and 8.8.4.4.
During the boot process, the file /etc/hostname
is used for establishing the
system's hostname.
Create the /etc/hostname
file
and enter a hostname by running:
echo "<lfs>
" > /etc/hostname
<lfs>
needs
to be replaced with the name given to the computer. Do not
enter the Fully Qualified Domain Name (FQDN) here. That
information goes in the /etc/hosts
file.
Decide on a fully-qualified domain name (FQDN), and possible
aliases for use in the /etc/hosts
file. If using static IP
addresses, you'll also need to decide on an IP address. The
syntax for a hosts file entry is:
IP_address myhost.example.org aliases
Unless the computer is to be visible to the Internet (i.e., there is a registered domain and a valid block of assigned IP addresses—most users do not have this), make sure that the IP address is in the private network IP address range. Valid ranges are:
Private Network Address Range Normal Prefix
10.0.0.1 - 10.255.255.254 8
172.x.0.1 - 172.x.255.254 16
192.168.y.1 - 192.168.y.254 24
x can be any number in the range 16-31. y can be any number in the range 0-255.
A valid private IP address could be 192.168.1.2.
If the computer is to be visible to the Internet, a valid FQDN can be the domain name itself, or a string resulting from concatenating a prefix (often the hostname) and the domain name with a “.” character. And, you need to contact the domain provider to resolve the FQDN to your public IP address.
Even if the computer is not visible to the Internet, an FQDN is still needed for certain programs, such as MTAs, to operate properly. A special FQDN, localhost.localdomain, can be used for this purpose.
Create the /etc/hosts
file by
running:
cat > /etc/hosts << "EOF"
# Begin /etc/hosts
127.0.0.1 localhost.localdomain localhost
127.0.1.1 <FQDN> <HOSTNAME>
<192.168.1.2> <FQDN> <HOSTNAME> [alias1] [alias2 ...]
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
# End /etc/hosts
EOF
The <192.168.1.2>, <FQDN>, and <HOSTNAME> values need to be changed for specific uses or requirements (if assigned an IP address by a network/system administrator and the machine will be connected to an existing network). The optional alias name(s) can be omitted.
This version of LFS uses a special booting facility named SysVinit, based on a series of run levels. The boot procedure can be quite different from one system to another; the fact that things worked one way in a particular Linux distribution does not guarantee they will work the same way in LFS. LFS has its own way of doing things, but it does respect generally accepted standards.
There is an alternative boot procedure called systemd. We will not discuss that boot process any further here. For a detailed description visit https://www.linux.com/training-tutorials/understanding-and-using-systemd/.
SysVinit (which will be referred to as “init” from now on) uses a run levels scheme. There are seven run levels, numbered 0 to 6. (Actually, there are more run levels, but the others are for special cases and are generally not used. See init(8) for more details.) Each one of the seven corresponds to actions the computer is supposed to perform when it starts up or shuts down. The default run level is 3. Here are the descriptions of the different run levels as they are implemented in LFS:
0: halt the computer
1: single-user mode
2: reserved for customization, otherwise the same as 3
3: multi-user mode with networking
4: reserved for customization, otherwise the same as 3
5: same as 4, it is usually used for GUI login (like GNOME's gdm or LXDE's lxdm)
6: reboot the computer
Classically, run level 2 above was defined as “multi-user mode without networking,” but this was only the case many years ago when multiple users could connect to a system via serial ports. In today's environment it makes no sense, and we now say it is “reserved.”
During kernel initialization, the first program that is run (if not overridden on the command line) is init. This program reads the initialization file /etc/inittab. Create this file with:
cat > /etc/inittab << "EOF"
# Begin /etc/inittab
id:3:initdefault:
si::sysinit:/etc/rc.d/init.d/rc S
l0:0:wait:/etc/rc.d/init.d/rc 0
l1:S1:wait:/etc/rc.d/init.d/rc 1
l2:2:wait:/etc/rc.d/init.d/rc 2
l3:3:wait:/etc/rc.d/init.d/rc 3
l4:4:wait:/etc/rc.d/init.d/rc 4
l5:5:wait:/etc/rc.d/init.d/rc 5
l6:6:wait:/etc/rc.d/init.d/rc 6
ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -r now
su:S06:once:/sbin/sulogin
s1:1:respawn:/sbin/sulogin
1:2345:respawn:/sbin/agetty --noclear tty1 9600
2:2345:respawn:/sbin/agetty tty2 9600
3:2345:respawn:/sbin/agetty tty3 9600
4:2345:respawn:/sbin/agetty tty4 9600
5:2345:respawn:/sbin/agetty tty5 9600
6:2345:respawn:/sbin/agetty tty6 9600
# End /etc/inittab
EOF
An explanation of this initialization file is in the man page
for inittab. In LFS,
the key command is rc. The initialization file
above instructs rc to run all the scripts
starting with an S in the /etc/rc.d/rcS.d
directory followed by all
the scripts starting with an S in the /etc/rc.d/rc?.d
directory where the
question mark is specified by the initdefault value.
As a convenience, the rc script reads a library of functions in /lib/lsb/init-functions. This library also reads an optional configuration file, /etc/sysconfig/rc.site. Any of the system configuration parameters described in subsequent sections can be placed in this file, allowing consolidation of all system parameters in this one file.
As a debugging convenience, the functions script also logs all output to /run/var/bootlog. Since the /run directory is a tmpfs, this file is not persistent across boots; however, it is appended to the more permanent file /var/log/boot.log at the end of the boot process.
Changing run levels is done with init <runlevel>, where <runlevel> is the target run level. For example, to reboot the computer, a user could issue the init 6 command, which is an alias for the reboot command. Likewise, init 0 is an alias for the halt command.
There are a number of directories under /etc/rc.d that look like rc?.d (where ? is the number of the run level) and rcS.d, all containing a number of symbolic links. Some links begin with a K; the others begin with an S, and all of them have two numbers following the initial letter. The K means to stop (kill) a service and the S means to start a service. The numbers determine the order in which the scripts are run, from 00 to 99—the smaller the number, the sooner the script runs. When init switches to another run level, the appropriate services are either started or stopped, depending on the run level chosen.
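For example, to hook a custom script into this scheme, symlinks following the same naming convention can be created by hand. In the sketch below, mydaemon is a hypothetical script created from the template in /etc/rc.d/init.d, and the numbers merely illustrate the ordering convention:
ln -sv ../init.d/mydaemon /etc/rc.d/rc3.d/S85mydaemon   # start late in run level 3
ln -sv ../init.d/mydaemon /etc/rc.d/rc0.d/K15mydaemon   # stop early when halting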
The real scripts are in /etc/rc.d/init.d. They do the actual work, and the symlinks all point to them. K links and S links point to the same script in /etc/rc.d/init.d. This is because the scripts can be called with different parameters like start, stop, restart, reload, and status. When a K link is encountered, the appropriate script is run with the stop argument. When an S link is encountered, the appropriate script is run with the start argument.
These are descriptions of what the arguments make the scripts do:
start: The service is started.
stop: The service is stopped.
restart: The service is stopped and then started again.
reload: The configuration of the service is updated. This is used after the configuration file of a service was modified, when the service does not need to be restarted.
status: Tells if the service is running and with which PIDs.
Feel free to modify the way the boot process works (after all, it is your own LFS system). The files given here are an example of how it can be done.
The /etc/rc.d/init.d/udev initscript starts udevd, triggers any "coldplug" devices that have already been created by the kernel, and waits for any rules to complete. The script also unsets the uevent handler from the default of /sbin/hotplug. This is done because the kernel no longer needs to call an external binary. Instead, udevd will listen on a netlink socket for uevents that the kernel raises.
The /etc/rc.d/init.d/udev_retry
script takes care of re-triggering events for subsystems
whose rules may rely on file systems that are not mounted
until the mountfs script is run (in
particular, /usr
and
/var
may cause this). This
script runs after the mountfs script, so those
rules (if re-triggered) should succeed the second time
around. It is configured by the /etc/sysconfig/udev_retry
file; any words
in this file other than comments are considered subsystem
names to trigger at retry time. To find the subsystem of a
device, use udevadm info
--attribute-walk <device> where
<device> is an absolute path in /dev or /sys, such as
/dev/sr0, or /sys/class/rtc.
For information on kernel module loading and udev, see Section 9.3.2.3, “Module Loading.”
The setclock
script reads the time from the hardware clock, also known as
the BIOS or Complementary Metal Oxide Semiconductor (CMOS)
clock. If the hardware clock is set to UTC, this script will
convert the hardware clock's time to the local time using the
/etc/localtime
file (which
tells the hwclock program which time
zone to use). There is no way to detect whether or not the
hardware clock is set to UTC, so this must be configured
manually.
The setclock program is run via udev when the kernel detects the hardware capability upon boot. It can also be run manually with the stop parameter to store the system time to the CMOS clock.
If you cannot remember whether or not the hardware clock is
set to UTC, find out by running the hwclock --localtime --show
command. This will display what the current time is according
to the hardware clock. If this time matches whatever your
watch says, then the hardware clock is set to local time. If
the output from hwclock is not local time,
chances are it is set to UTC time. Verify this by adding or
subtracting the proper number of hours for your time zone to
the time shown by hwclock. For example, if
you are currently in the MST time zone, which is also known
as GMT -0700, add seven hours to the local time.
Change the value of the UTC
variable below to a value of 0
(zero) if the hardware clock
is NOT set to UTC
time.
Create a new file /etc/sysconfig/clock
by running the
following:
cat > /etc/sysconfig/clock << "EOF"
# Begin /etc/sysconfig/clock
UTC=1
# Set this to any options you might need to give to hwclock,
# such as machine hardware clock type for Alphas.
CLOCKPARAMS=
# End /etc/sysconfig/clock
EOF
A good hint explaining how to deal with time on LFS is
available at
https://www.linuxfromscratch.org/hints/downloads/files/time.txt.
It explains issues such as time zones, UTC, and the
TZ
environment variable.
The CLOCKPARAMS and UTC parameters may also be set in the
/etc/sysconfig/rc.site
file.
This section discusses how to configure the console bootscript that sets up the keyboard map, console font, and console kernel log level. If non-ASCII characters (e.g., the copyright sign, the British pound sign, and the Euro symbol) will not be used and the keyboard is a U.S. one, much of this section can be skipped. Without the configuration file (or equivalent settings in rc.site), the console bootscript will do nothing.
The console
script reads the /etc/sysconfig/console
file for
configuration information. Decide which keymap and screen
font will be used. Various language-specific HOWTOs can also
help with this; see https://tldp.org/HOWTO/HOWTO-INDEX/other-lang.html.
If still in doubt, look in the /usr/share/keymaps
and /usr/share/consolefonts
directories for
valid keymaps and screen fonts. Read the loadkeys(1)
and setfont(8)
manual pages to determine the correct arguments for these
programs.
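For example, the available keymaps and console fonts can be browsed with commands such as:
find /usr/share/keymaps -name '*.map.gz' | less
ls /usr/share/consolefonts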
The /etc/sysconfig/console file should contain lines of the form VARIABLE=value. The following variables are recognized:
LOGLEVEL
This variable specifies the log level for kernel messages sent to the console as set by dmesg -n. Valid levels are from 1 (no messages) to 8. The default level is 7, which is quite verbose.
KEYMAP
This variable specifies the arguments for the loadkeys program, typically the name of the keymap to load, e.g., it. If this variable is not set, the bootscript will not run the loadkeys program, and the default kernel keymap will be used. Note that a few keymaps have multiple versions with the same name (cz and its variants in qwerty/ and qwertz/, es in olpc/ and qwerty/, and trf in fgGIod/ and qwerty/). In these cases the parent directory should also be specified (e.g. qwerty/es) to ensure the proper keymap is loaded.
KEYMAP_CORRECTIONS
This (rarely used) variable specifies the arguments for the second call to the loadkeys program. This is useful if the stock keymap is not completely satisfactory and a small adjustment has to be made. E.g., to include the Euro sign into a keymap that normally doesn't have it, set this variable to euro2.
FONT
This variable specifies the arguments for the setfont program. Typically, this includes the font name, -m, and the name of the application character map to load. E.g., in order to load the “lat1-16” font together with the “8859-1” application character map (appropriate in the USA), set this variable to lat1-16 -m 8859-1. In UTF-8 mode, the kernel uses the application character map to convert 8-bit key codes to UTF-8. Therefore the argument of the "-m" parameter should be set to the encoding of the composed key codes in the keymap.
UNICODE
Set this variable to 1, yes, or true in order to put the console into UTF-8 mode. This is useful in UTF-8 based locales and harmful otherwise.
LEGACY_CHARSET
For many keyboard layouts, there is no stock Unicode keymap in the Kbd package. The console bootscript will convert an available keymap to UTF-8 on the fly if this variable is set to the encoding of the available non-UTF-8 keymap.
Some examples:
We'll use C.UTF-8 as the locale for interactive sessions in the Linux console in Section 9.7, “Configuring the System Locale,” so we should set UNICODE to 1. And the console fonts shipped by the Kbd package containing the glyphs for all characters from the program messages in the C.UTF-8 locale are LatArCyrHeb*.psfu.gz, LatGrkCyr*.psfu.gz, Lat2-Terminus16.psfu.gz, and pancyrillic.f16.psfu.gz in /usr/share/consolefonts (the other shipped console fonts lack glyphs of some characters like the Unicode left/right quotation marks and the Unicode English dash). So set one of them, for example Lat2-Terminus16.psfu.gz, as the default console font:
cat > /etc/sysconfig/console << "EOF"
# Begin /etc/sysconfig/console
UNICODE="1"
FONT="Lat2-Terminus16"
# End /etc/sysconfig/console
EOF
For a non-Unicode setup, only the KEYMAP and FONT variables are generally needed. E.g., for a Polish setup, one would use:
cat > /etc/sysconfig/console << "EOF"
# Begin /etc/sysconfig/console
KEYMAP="pl2"
FONT="lat2a-16 -m 8859-2"
# End /etc/sysconfig/console
EOF
As mentioned above, it is sometimes necessary to adjust a stock keymap slightly. The following example adds the Euro symbol to the German keymap:
cat > /etc/sysconfig/console << "EOF"
# Begin /etc/sysconfig/console
KEYMAP="de-latin1"
KEYMAP_CORRECTIONS="euro2"
FONT="lat0-16 -m 8859-15"
UNICODE="1"
# End /etc/sysconfig/console
EOF
The following is a Unicode-enabled example for Bulgarian, where a stock UTF-8 keymap exists:
cat > /etc/sysconfig/console << "EOF"
# Begin /etc/sysconfig/console
UNICODE="1"
KEYMAP="bg_bds-utf8"
FONT="LatArCyrHeb-16"
# End /etc/sysconfig/console
EOF
Due to the use of a 512-glyph LatArCyrHeb-16 font in the previous example, bright colors are no longer available on the Linux console unless a framebuffer is used. If one wants to have bright colors without a framebuffer and can live without characters not belonging to one's language, it is still possible to use a language-specific 256-glyph font, as illustrated below:
cat > /etc/sysconfig/console << "EOF"
# Begin /etc/sysconfig/console
UNICODE="1"
KEYMAP="bg_bds-utf8"
FONT="cyr-sun16"
# End /etc/sysconfig/console
EOF
The following example illustrates keymap autoconversion from ISO-8859-15 to UTF-8 and enabling dead keys in Unicode mode:
cat > /etc/sysconfig/console << "EOF"
# Begin /etc/sysconfig/console
UNICODE="1"
KEYMAP="de-latin1"
KEYMAP_CORRECTIONS="euro2"
LEGACY_CHARSET="iso-8859-15"
FONT="LatArCyrHeb-16 -m 8859-15"
# End /etc/sysconfig/console
EOF
Some keymaps have dead keys (i.e., keys that don't produce a character by themselves, but put an accent on the character produced by the next key) or define composition rules (such as: “press Ctrl+. A E to get Æ” in the default keymap). Linux-6.12.5 interprets dead keys and composition rules in the keymap correctly only when the source characters to be composed together are not multibyte. This deficiency doesn't affect keymaps for European languages, because there accents are added to unaccented ASCII characters, or two ASCII characters are composed together. However, in UTF-8 mode it is a problem; e.g., for the Greek language, where one sometimes needs to put an accent on the letter α. The solution is either to avoid the use of UTF-8, or to install the X window system, which doesn't have this limitation in its input handling.
For Chinese, Japanese, Korean, and some other languages, the Linux console cannot be configured to display the needed characters. Users who need such languages should install the X Window System, fonts that cover the necessary character ranges, and the proper input method (e.g., SCIM supports a wide variety of languages).
The /etc/sysconfig/console
file only controls the Linux text console localization. It
has nothing to do with setting the proper keyboard layout
and terminal fonts in the X Window System, with ssh
sessions, or with a serial console. In such situations,
limitations mentioned in the last two list items above do
not apply.
At times, it is desirable to create files at boot time. For
instance, the /tmp/.ICE-unix
directory is often needed. This can be done by creating an
entry in the /etc/sysconfig/createfiles
configuration
script. The format of this file is embedded in the comments
of the default configuration file.
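As a sketch, an entry creating that directory might look like the line below; the exact field syntax is documented in the comments of the default /etc/sysconfig/createfiles file, so verify it there before relying on this example:
/tmp/.ICE-unix dir 1777 root root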
The sysklogd
script invokes the
syslogd program
as a part of System V initialization. The -m 0
option turns off the
periodic timestamp mark that syslogd writes to the log
files every 20 minutes by default. If you want to turn on
this periodic timestamp mark, edit /etc/sysconfig/rc.site
and define the
variable SYSKLOGD_PARMS to the desired value. For instance,
to remove all parameters, set the variable to a null value:
SYSKLOGD_PARMS=
See man
syslogd
for more options.
The optional /etc/sysconfig/rc.site file contains settings that are automatically set for each SystemV boot script. It can alternatively set the values specified in the hostname, console, and clock files in the /etc/sysconfig/ directory. If the associated variables are present in both these separate files and rc.site, the values in the script-specific files take effect.
rc.site
also contains
parameters that can customize other aspects of the boot
process. Setting the IPROMPT variable will enable selective
running of bootscripts. Other options are described in the
file comments. The default version of the file is as follows:
# rc.site
# Optional parameters for boot scripts.

# Distro Information
# These values, if specified here, override the defaults
#DISTRO="Linux From Scratch" # The distro name
#DISTRO_CONTACT="lfs-dev@lists.linuxfromscratch.org" # Bug report address
#DISTRO_MINI="LFS" # Short name used in filenames for distro config

# Define custom colors used in messages printed to the screen
# Please consult `man console_codes` for more information
# under the "ECMA-48 Set Graphics Rendition" section
#
# Warning: when switching from a 8bit to a 9bit font,
# the linux console will reinterpret the bold (1;) to
# the top 256 glyphs of the 9bit font. This does
# not affect framebuffer consoles
# These values, if specified here, override the defaults
#BRACKET="\\033[1;34m" # Blue
#FAILURE="\\033[1;31m" # Red
#INFO="\\033[1;36m"    # Cyan
#NORMAL="\\033[0;39m"  # Grey
#SUCCESS="\\033[1;32m" # Green
#WARNING="\\033[1;33m" # Yellow

# Use a colored prefix
# These values, if specified here, override the defaults
#BMPREFIX=" "
#SUCCESS_PREFIX="${SUCCESS} * ${NORMAL} "
#FAILURE_PREFIX="${FAILURE}*****${NORMAL} "
#WARNING_PREFIX="${WARNING} *** ${NORMAL} "

# Manually set the right edge of message output (characters)
# Useful when resetting console font during boot to override
# automatic screen width detection
#COLUMNS=120

# Interactive startup
#IPROMPT="yes" # Whether to display the interactive boot prompt
#itime="3"     # The amount of time (in seconds) to display the prompt

# The total length of the distro welcome string, without escape codes
#wlen=$(echo "Welcome to ${DISTRO}" | wc -c )
#welcome_message="Welcome to ${INFO}${DISTRO}${NORMAL}"

# The total length of the interactive string, without escape codes
#ilen=$(echo "Press 'I' to enter interactive startup" | wc -c )
#i_message="Press '${FAILURE}I${NORMAL}' to enter interactive startup"

# Set scripts to skip the file system check on reboot
#FASTBOOT=yes

# Skip reading from the console
#HEADLESS=yes

# Write out fsck progress if yes
#VERBOSE_FSCK=no

# Speed up boot without waiting for settle in udev
#OMIT_UDEV_SETTLE=y

# Speed up boot without waiting for settle in udev_retry
#OMIT_UDEV_RETRY_SETTLE=yes

# Skip cleaning /tmp if yes
#SKIPTMPCLEAN=no

# For setclock
#UTC=1
#CLOCKPARAMS=

# For consolelog (Note that the default, 7=debug, is noisy)
#LOGLEVEL=7

# For network
#HOSTNAME=mylfs

# Delay between TERM and KILL signals at shutdown
#KILLDELAY=3

# Optional sysklogd parameters
#SYSKLOGD_PARMS="-m 0"

# Console parameters
#UNICODE=1
#KEYMAP="de-latin1"
#KEYMAP_CORRECTIONS="euro2"
#FONT="lat0-16 -m 8859-15"
#LEGACY_CHARSET=
The LFS boot scripts boot and shut down a system in a
fairly efficient manner, but there are a few tweaks you can
make in the rc.site file to improve speed even more, and to
adjust messages according to your preferences. To do this,
adjust the settings in the /etc/sysconfig/rc.site
file above.
During the boot script udev, there is a call to udev settle that requires some time to complete. This time may or may not be required depending on the devices in the system. If you only have simple partitions and a single ethernet card, the boot process will probably not need to wait for this command. To skip it, set the variable OMIT_UDEV_SETTLE=y.
The boot script udev_retry
also runs udev settle by
default. This command is only needed if the
/var
directory is
separately mounted, because the clock needs the
/var/lib/hwclock/adjtime
file.
Other customizations may also need to wait for udev
to complete, but in many installations it is not
necessary. Skip the command by setting the variable
OMIT_UDEV_RETRY_SETTLE=y.
By default, the file system checks are silent. This can appear to be a delay during the bootup process. To turn on the fsck output, set the variable VERBOSE_FSCK=y.
When rebooting, you may want to skip the filesystem check, fsck, completely. To do this, either create the file /fastboot or reboot the system with the command /sbin/shutdown -f -r now. On the other hand, you can force all file systems to be checked by creating /forcefsck or running shutdown with the -F parameter instead of -f.
Setting the variable FASTBOOT=y will disable fsck during the boot process until it is removed. This is not recommended on a permanent basis.
Normally, all files in the /tmp
directory are deleted at boot
time. Depending on the number of files or directories
present, this can cause a noticeable delay in the
boot process. To skip removing these files set the
variable SKIPTMPCLEAN=y.
During shutdown, the init program sends a TERM signal to each program it has started (e.g. agetty), waits for a set time (default 3 seconds), then sends each process a KILL signal and waits again. This process is repeated in the sendsignals script for any processes that are not shut down by their own scripts. The delay for init can be set by passing a parameter. For example to remove the delay in init, pass the -t0 parameter when shutting down or rebooting (e.g. /sbin/shutdown -t0 -r now). The delay for the sendsignals script can be skipped by setting the parameter KILLDELAY=0.
Some environment variables are necessary for native language support. Setting them properly results in:
The output of programs being translated into your native language
The correct classification of characters into letters, digits and other classes. This is necessary for bash to properly accept non-ASCII characters in command lines in non-English locales
The correct alphabetical sorting order for the country
The appropriate default paper size
The correct formatting of monetary, time, and date values
Replace <ll> below with the two-letter code for your desired language (e.g., en) and <CC> with the two-letter code for the appropriate country (e.g., GB). <charmap> should be replaced with the canonical charmap for your chosen locale. Optional modifiers such as @euro may also be present.
The list of all locales supported by Glibc can be obtained by running the following command:
locale -a
Charmaps can have a number of aliases, e.g., ISO-8859-1 is also referred to as iso8859-1 and iso88591. Some applications cannot handle the various synonyms correctly (e.g., they require that UTF-8 is written as UTF-8, not utf8), so it is safest in most cases to choose the canonical name for a particular locale. To determine the canonical name, run the following command, where <locale name> is the output given by locale -a for your preferred locale (en_GB.iso88591 in our example).
LC_ALL=<locale name> locale charmap
For the en_GB.iso88591
locale, the
above command will print:
ISO-8859-1
This results in a final locale setting of en_GB.ISO-8859-1. It is important that the locale found using the heuristic above is tested before it is added to the Bash startup files:
LC_ALL=<locale name> locale language
LC_ALL=<locale name> locale charmap
LC_ALL=<locale name> locale int_curr_symbol
LC_ALL=<locale name> locale int_prefix
The above commands should print the language name, the character encoding used by the locale, the local currency, and the prefix to dial before the telephone number in order to get into the country. If any of the commands above fail with a message similar to the one shown below, this means that your locale was either not installed in Chapter 8 or is not supported by the default installation of Glibc.
locale: Cannot set LC_* to default locale: No such file or directory
If this happens, you should either install the desired locale using the localedef command, or consider choosing a different locale. Further instructions assume that there are no such error messages from Glibc.
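As a sketch of the first option, a missing en_GB.ISO-8859-1 locale could be compiled with localedef like this (substitute the input locale and charmap you actually want):

localedef -i en_GB -f ISO-8859-1 en_GB.ISO-8859-1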
Other packages can also function incorrectly (but may not necessarily display any error messages) if the locale name does not meet their expectations. In those cases, investigating how other Linux distributions support your locale might provide some useful information.
The shell program /bin/bash (hereafter
referred to as “the
shell”) uses a collection of startup files to
help create the environment to run in. Each file has a specific
use and may affect login and interactive environments
differently. The files in the /etc
directory provide global settings. If
equivalent files exist in the home directory, they may override
the global settings.
An interactive login shell is started after a successful login,
using /bin/login,
by reading the /etc/passwd
file.
An interactive non-login shell is started at the command-line
(e.g. [prompt]$
/bin/bash). A non-interactive
shell is usually present when a shell script is running. It is
non-interactive because it is processing a script and not
waiting for user input between commands.
Once the proper locale settings have been determined, create the /etc/profile file to set the desired locale, but set the C.UTF-8 locale instead if running in the Linux console (to prevent programs from outputting characters that the Linux console is unable to render):
cat > /etc/profile << "EOF"
# Begin /etc/profile
for i in $(locale); do
unset ${i%=*}
done
if [[ "$TERM" = linux ]]; then
export LANG=C.UTF-8
else
export LANG=<ll>_<CC>.<charmap><@modifiers>
fi
# End /etc/profile
EOF
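For example, a user who settled on the en_GB.ISO-8859-1 locale from the worked example above would end up with this line in the else branch:

export LANG=en_GB.ISO-8859-1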
The C (default) and en_US (the recommended one for United States English users) locales are different. C uses the US-ASCII 7-bit character set, and treats bytes with the high bit set as invalid characters. That's why, e.g., the ls command substitutes them with question marks in that locale. Also, an attempt to send mail with such characters from Mutt or Pine results in non-RFC-conforming messages being sent (the charset in the outgoing mail is indicated as unknown 8-bit). It's suggested that you use the C locale only if you are certain that you will never need 8-bit characters.
The inputrc
file is the
configuration file for the readline library, which provides
editing capabilities while the user is entering a line from the
terminal. It works by translating keyboard inputs into specific
actions. Readline is used by bash and most other shells as well
as many other applications.
Most people do not need user-specific functionality so the
command below creates a global /etc/inputrc
used by everyone who logs in. If
you later decide you need to override the defaults on a per
user basis, you can create a .inputrc
file in the user's home directory
with the modified mappings.
For more information on how to edit the inputrc
file, see info bash under the
Readline Init File
section. info
readline is also a good source of information.
Below is a generic global inputrc
along with comments to explain what the various options do.
Note that comments cannot be on the same line as commands.
Create the file using the following command:
cat > /etc/inputrc << "EOF"
# Begin /etc/inputrc
# Modified by Chris Lynn <roryo@roryo.dynup.net>
# Allow the command prompt to wrap to the next line
set horizontal-scroll-mode Off
# Enable 8-bit input
set meta-flag On
set input-meta On
# Turns off 8th bit stripping
set convert-meta Off
# Keep the 8th bit for display
set output-meta On
# none, visible or audible
set bell-style none
# All of the following map the escape sequence of the value
# contained in the 1st argument to the readline specific functions
"\eOd": backward-word
"\eOc": forward-word
# for linux console
"\e[1~": beginning-of-line
"\e[4~": end-of-line
"\e[5~": beginning-of-history
"\e[6~": end-of-history
"\e[3~": delete-char
"\e[2~": quoted-insert
# for xterm
"\eOH": beginning-of-line
"\eOF": end-of-line
# for Konsole
"\e[H": beginning-of-line
"\e[F": end-of-line
# End /etc/inputrc
EOF
The shells
file contains a list
of login shells on the system. Applications use this file to
determine whether a shell is valid. For each shell a single
line should be present, consisting of the shell's path relative
to the root of the directory structure (/).
For example, this file is consulted by chsh to determine whether an unprivileged user may change the login shell for her own account. If the command name is not listed, the user will be denied the ability to change shells.
It is a requirement for applications such as GDM, which does not populate the face browser if it can't find /etc/shells, or FTP daemons, which traditionally disallow access to users with shells not included in this file.
cat > /etc/shells << "EOF"
# Begin /etc/shells
/bin/sh
/bin/bash
# End /etc/shells
EOF
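As a sketch of the chsh behaviour described above, an unprivileged user could switch their login shell to one of the listed entries like this (the change is refused if the target shell is not in /etc/shells):

chsh -s /bin/bash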
It is time to make the LFS system bootable. This chapter
discusses creating the /etc/fstab
file, building a kernel for the new LFS system, and installing
the GRUB boot loader so that the LFS system can be selected for
booting at startup.
The /etc/fstab
file is used by
some programs to determine where file systems are to be mounted
by default, in which order, and which must be checked (for
integrity errors) prior to mounting. Create a new file systems
table like this:
cat > /etc/fstab << "EOF"
# Begin /etc/fstab
# file system  mount-point    type     options             dump  fsck
#                                                                order

/dev/<xxx>     /              <fff>    defaults            1     1
/dev/<yyy>     swap           swap     pri=1               0     0
proc           /proc          proc     nosuid,noexec,nodev 0     0
sysfs          /sys           sysfs    nosuid,noexec,nodev 0     0
devpts         /dev/pts       devpts   gid=5,mode=620      0     0
tmpfs          /run           tmpfs    defaults            0     0
devtmpfs       /dev           devtmpfs mode=0755,nosuid    0     0
tmpfs          /dev/shm       tmpfs    nosuid,nodev        0     0
cgroup2        /sys/fs/cgroup cgroup2  nosuid,noexec,nodev 0     0
# End /etc/fstab
EOF
Replace <xxx>, <yyy>, and <fff> with the values appropriate for the system, for example, sda2, sda5, and ext4. For details on the six fields in this file, see fstab(5).
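For instance, with the example values above, the first two entries might read (sda2, sda5, and ext4 are only illustrations; use your own partitions and filesystem type):

/dev/sda2      /              ext4     defaults            1     1
/dev/sda5      swap           swap     pri=1               0     0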
Filesystems with MS-DOS or Windows origin (i.e. vfat, ntfs, smbfs, cifs, iso9660, udf) need a special option, utf8, in order for non-ASCII characters in file names to be interpreted properly. For non-UTF-8 locales, the value of iocharset should be set to be the same as the character set of the locale, adjusted in such a way that the kernel understands it. This works if the relevant character set definition (found under File systems -> Native Language Support when configuring the kernel) has been compiled into the kernel or built as a module. However, if the character set of the locale is UTF-8, the corresponding option iocharset=utf8 would make the file system case sensitive. To fix this, use the special option utf8 instead of iocharset=utf8 for UTF-8 locales. The “codepage” option is also needed for vfat and smbfs filesystems. It should be set to the codepage number used under MS-DOS in your country. For example, in order to mount USB flash drives, a ru_RU.KOI8-R user would need the following in the options portion of its mount line in /etc/fstab:
noauto,user,quiet,showexec,codepage=866,iocharset=koi8r
The corresponding options fragment for ru_RU.UTF-8 users is:
noauto,user,quiet,showexec,codepage=866,utf8
Note that using iocharset
is the
default for iso8859-1
(which keeps
the file system case insensitive), and the utf8
option tells the kernel to convert the
file names using UTF-8 so they can be interpreted in the UTF-8
locale.
It is also possible to specify default codepage and iocharset
values for some filesystems during kernel configuration. The
relevant parameters are named “Default NLS Option” (CONFIG_NLS_DEFAULT)
, “Default Remote NLS
Option” (CONFIG_SMB_NLS_DEFAULT
), “Default codepage for
FAT” (CONFIG_FAT_DEFAULT_CODEPAGE
), and “Default iocharset for
FAT” (CONFIG_FAT_DEFAULT_IOCHARSET
). There is no way
to specify these settings for the ntfs filesystem at kernel
compilation time.
It is possible to make the ext3 filesystem reliable across
power failures for some hard disk types. To do this, add the
barrier=1
mount option to the
appropriate entry in /etc/fstab
.
To check if the disk drive supports this option, run
hdparm on the applicable disk drive. For example, if:
hdparm -I /dev/sda | grep NCQ
returns non-empty output, the option is supported.
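As a sketch of the fstab change described above, a root filesystem on ext3 with barriers enabled might look like this (sda2 and ext3 are only illustrative):

/dev/sda2      /              ext3     defaults,barrier=1  1     1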
Note: Logical Volume Management (LVM) based partitions cannot
use the barrier
option.
The Linux package contains the Linux kernel.
Building the kernel involves a few steps—configuration,
compilation, and installation. Read the README
file in the kernel source tree for
alternative methods to the way this book configures the
kernel.
Building the linux kernel for the first time is one of the most challenging tasks in LFS. Getting it right depends on the specific hardware for the target system and your specific needs. There are almost 12,000 configuration items that are available for the kernel although only about a third of them are needed for most computers. The LFS editors recommend that users not familiar with this process follow the procedures below fairly closely. The objective is to get an initial system to a point where you can log in at the command line when you reboot later in Section 11.3, “Rebooting the System.” At this point optimization and customization is not a goal.
For general information on kernel configuration see https://www.linuxfromscratch.org/hints/downloads/files/kernel-configuration.txt. Additional information about configuring and building the kernel can be found at https://anduin.linuxfromscratch.org/LFS/kernel-nutshell/. These references are a bit dated, but still give a reasonable overview of the process.
If all else fails, you can ask for help on the lfs-support mailing list. Note that subscribing is required in order for the list to avoid spam.
Prepare for compilation by running the following command:
make mrproper
This ensures that the kernel tree is absolutely clean. The kernel team recommends that this command be issued prior to each kernel compilation. Do not rely on the source tree being clean after un-tarring.
There are several ways to configure the kernel options. Usually, this is done through a menu-driven interface, for example:
make menuconfig
The meaning of optional make environment variables:
LANG=<host_LANG_value>
LC_ALL=
This establishes the locale setting to the one used on the host. This may be needed for a proper menuconfig ncurses interface line drawing on a UTF-8 linux text console.
If used, be sure to replace <host_LANG_value> with the value of the $LANG variable from your host. You can alternatively use the host's value of $LC_ALL or $LC_CTYPE instead.
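For example, on a host whose $LANG is en_US.UTF-8 (en_US.UTF-8 is only an assumed host value), the configuration interface could be started like this:

make LANG=en_US.UTF-8 LC_ALL= menuconfig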
This launches an ncurses menu-driven interface. For other (graphical) interfaces, type make help.
A good starting place for setting up the kernel configuration is to run make defconfig. This will set the base configuration to a good state that takes your current system architecture into account.
Be sure to enable/disable/set the following features or the system might not work correctly or boot at all:
General setup --->
  [ ] Compile the kernel with warnings as errors                     [WERROR]
  CPU/Task time and stats accounting --->
    [*] Pressure stall information tracking                          [PSI]
    [ ]   Require boot parameter to enable pressure stall information tracking
                                                                      [PSI_DEFAULT_DISABLED]
  < > Enable kernel headers through /sys/kernel/kheaders.tar.xz      [IKHEADERS]
  [*] Control Group support --->                                      [CGROUPS]
    [*] Memory controller                                             [MEMCG]
  [ ] Configure standard kernel features (expert users) --->          [EXPERT]
Processor type and features --->
  [*] Build a relocatable kernel                                      [RELOCATABLE]
  [*]   Randomize the address of the kernel image (KASLR)             [RANDOMIZE_BASE]
General architecture-dependent options --->
  [*] Stack Protector buffer overflow detection                       [STACKPROTECTOR]
  [*]   Strong Stack Protector                                        [STACKPROTECTOR_STRONG]
Device Drivers --->
  Generic Driver Options --->
    [ ] Support for uevent helper                                     [UEVENT_HELPER]
    [*] Maintain a devtmpfs filesystem to mount at /dev               [DEVTMPFS]
    [*]   Automount devtmpfs at /dev, after the kernel mounted the rootfs
                                                                      [DEVTMPFS_MOUNT]
  Firmware Drivers --->
    [*] Mark VGA/VBE/EFI FB as generic system framebuffer             [SYSFB_SIMPLEFB]
  Graphics support --->
    <*> Direct Rendering Manager (XFree86 4.1.0 and higher DRI support) --->
                                                                      [DRM]
    [*]   Display a user-friendly message when a kernel panic occurs  [DRM_PANIC]
    (kmsg)  Panic screen formatter                                    [DRM_PANIC_SCREEN]
    [*]   Enable legacy fbdev support for your modesetting driver     [DRM_FBDEV_EMULATION]
    <*>   Simple framebuffer driver                                   [DRM_SIMPLEDRM]
    Console display driver support --->
      [*] Framebuffer Console support                                 [FRAMEBUFFER_CONSOLE]
Enable some additional features if you are building a 64-bit system. If you are using menuconfig, enable them in this order: CONFIG_PCI_MSI first, then CONFIG_IRQ_REMAP, and finally CONFIG_X86_X2APIC, because an option only shows up after its dependencies are selected.
Processor type and features --->
  [*] Support x2apic                                                  [X86_X2APIC]
Device Drivers --->
  [*] PCI support --->                                                [PCI]
    [*] Message Signaled Interrupts (MSI and MSI-X)                   [PCI_MSI]
  [*] IOMMU Hardware Support --->                                     [IOMMU_SUPPORT]
    [*] Support for Interrupt Remapping                               [IRQ_REMAP]
If you are building a 32-bit system running on hardware with more than 4 GB of RAM, adjust the configuration so the kernel will be able to use up to 64 GB of physical RAM:
Processor type and features --->
  High Memory Support --->
    (X) 64GB                                                          [HIGHMEM64G]
If the partition for the LFS system is on an NVMe SSD (i.e. the device node for the partition is /dev/nvme* instead of /dev/sd*), enable NVMe support or the LFS system won't boot:
Device Drivers --->
  NVME Support --->
    <*> NVM Express block device                                      [BLK_DEV_NVME]
There are several other options that may be desired depending on the requirements for the system. For a list of options needed for BLFS packages, see the BLFS Index of Kernel Settings.
If your host hardware is using UEFI and you wish to boot the LFS system with it, you should adjust some kernel configuration following the BLFS page even if you'll use the UEFI bootloader from the host distro.
The rationale for the above configuration items:
Randomize the address of the kernel image (KASLR)
Enable ASLR for kernel image, to mitigate some attacks based on fixed addresses of sensitive data or code in the kernel.
Compile the kernel with warnings as errors
This may cause building failure if the compiler and/or configuration are different from those of the kernel developers.
Enable kernel headers through /sys/kernel/kheaders.tar.xz
This will require cpio for building the kernel. cpio is not installed by LFS.
Configure standard kernel features (expert users)
This will make some options show up in the configuration interface but changing those options may be dangerous. Do not use this unless you know what you are doing.
Strong Stack Protector
Enable SSP for the kernel. We've enabled it for the entire userspace by configuring GCC with --enable-default-ssp, but the kernel does not use the GCC default setting for SSP. We enable it explicitly here.
Support for uevent helper
Having this option set may interfere with device management when using Udev.
Maintain a devtmpfs
This will create automated device nodes which are populated by the kernel, even without Udev running. Udev then runs on top of this, managing permissions and adding symlinks. This configuration item is required for all users of Udev.
Automount devtmpfs at /dev
This will mount the kernel view of the devices on /dev upon switching to root filesystem just before starting init.
Display a user-friendly message when a kernel panic occurs
This will make the kernel correctly display the message in case a kernel panic happens and a running DRM driver supports doing so. Without this, it would be more difficult to diagnose a panic: if no DRM driver is running, we'd be on the VGA console, which can only hold 24 lines, and the relevant kernel message is often flushed away; if a DRM driver is running, the display is often completely messed up on panic. As of Linux-6.12, none of the dedicated drivers for mainstream GPU models really supports this, but it's supported by the “Simple framebuffer driver” which runs on the VESA (or EFI) framebuffer before the dedicated GPU driver is loaded. If the dedicated GPU driver is built as a module (instead of as a part of the kernel image) and no initramfs is used, this functionality will work just fine before the root file system is mounted, and that is already enough to provide information about most LFS configuration errors causing a panic (for example, an incorrect root= setting in Section 10.4, “Using GRUB to Set Up the Boot Process”).
Panic screen formatter
Set this to kmsg to make sure the last lines of kernel messages are displayed when a kernel panic happens. The default, user, would make the kernel show only a “user friendly” panic message which is not helpful for diagnosis. The third choice, qr_code, would make the kernel compress the last kernel message lines into a QR code and display it. The QR code can hold more message lines than plain text and it can be decoded with an external device (like a smart phone), but it requires a Rust compiler that LFS does not provide.
Mark VGA/VBE/EFI FB as generic system framebuffer and Simple framebuffer driver
These allow using the VESA framebuffer (or the EFI framebuffer if booting the LFS system via UEFI) as a DRM device. The VESA framebuffer will be set up by GRUB (or the EFI framebuffer will be set up by the UEFI firmware), so the DRM panic handler can function before the GPU-specific DRM driver is loaded.
Enable legacy fbdev support for your modesetting driver and Framebuffer Console support
These are needed to display the Linux console on a GPU
driven by a DRI (Direct Rendering Infrastructure)
driver. As CONFIG_DRM
(Direct Rendering Manager) is enabled, we should enable
these two options as well or we'll see a blank screen
once the DRI driver is loaded.
Support x2apic
Support running the interrupt controller of 64-bit x86 processors in x2APIC mode. x2APIC may be enabled by firmware on 64-bit x86 systems, and a kernel without this option enabled will panic on boot if x2APIC is enabled by firmware. This option has no effect, but also does no harm if x2APIC is disabled by the firmware.
Alternatively, make
oldconfig may be more appropriate in some
situations. See the README
file
for more information.
If desired, skip kernel configuration by copying the kernel
config file, .config
, from the
host system (assuming it is available) to the unpacked
linux-6.12.5
directory.
However, we do not recommend this option. It is often better
to explore all the configuration menus and create the kernel
configuration from scratch.
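If you do copy a configuration anyway, a minimal sketch of the idea follows; the path /boot/config-$(uname -r) is only an assumption about where the host distro keeps its kernel config, and the file must be reachable from inside the chroot environment:

cp -v /boot/config-$(uname -r) /sources/linux-6.12.5/.config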
Compile the kernel image and modules:
make
If using kernel modules, module configuration in /etc/modprobe.d
may be required.
Information pertaining to modules and kernel configuration is
located in Section 9.3,
“Overview of Device and Module Handling” and in the
kernel documentation in the linux-6.12.5/Documentation
directory. Also,
modprobe.d(5)
may be of interest.
Unless module support has been disabled in the kernel configuration, install the modules with:
make modules_install
After kernel compilation is complete, additional steps are
required to complete the installation. Some files need to be
copied to the /boot
directory.
If you've decided to use a separate /boot
partition for the LFS system (maybe
sharing a /boot
partition
with the host distro), the files copied below should go
there. The easiest way to do that is to create the entry
for /boot
in /etc/fstab
first (read the previous
section for details), then issue the following command as
the root
user in the
chroot environment:
mount /boot
The path to the device node is omitted in the command
because mount
can read it from /etc/fstab
.
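A sketch of such an /etc/fstab entry, assuming the boot partition is /dev/sda1 and formatted as ext4 (both values, and the fsck order of 2, are only examples):

/dev/sda1      /boot          ext4     defaults            1     2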
The path to the kernel image may vary depending on the platform being used. The filename below can be changed to suit your taste, but the stem of the filename should be vmlinuz to be compatible with the automatic setup of the boot process described in the next section. The following command assumes an x86 architecture:
cp -iv arch/x86/boot/bzImage /boot/vmlinuz-6.12.5-lfs-r12.2-59
System.map
is a symbol file for
the kernel. It maps the function entry points of every
function in the kernel API, as well as the addresses of the
kernel data structures for the running kernel. It is used as
a resource when investigating kernel problems. Issue the
following command to install the map file:
cp -iv System.map /boot/System.map-6.12.5
The kernel configuration file .config
produced by the make menuconfig step above
contains all the configuration selections for the kernel that
was just compiled. It is a good idea to keep this file for
future reference:
cp -iv .config /boot/config-6.12.5
Install the documentation for the Linux kernel:
cp -r Documentation -T /usr/share/doc/linux-6.12.5
It is important to note that the files in the kernel source directory are not owned by root. Whenever a package is unpacked as user root (like we did inside chroot), the files have the user and group IDs of whatever they were on the packager's computer. This is usually not a problem for any other package to be installed because the source tree is removed after the installation. However, the Linux source tree is often retained for a long time. Because of this, there is a chance that whatever user ID the packager used will be assigned to somebody on the machine. That person would then have write access to the kernel source.
In many cases, the configuration of the kernel will need to be updated for packages that will be installed later in BLFS. Unlike other packages, it is not necessary to remove the kernel source tree after the newly built kernel is installed.
If the kernel source tree is going to be retained, run
chown -R 0:0
on the linux-6.12.5
directory
to ensure all files are owned by user root.
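For example, run the following from the directory containing the unpacked source tree:

chown -R 0:0 linux-6.12.5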
If you are updating the configuration and rebuilding the
kernel from a retained kernel source tree, normally you
should not run
the make
mrproper command. The command would purge
the .config
file and all the
.o
files from the previous
build. Although it is easy to restore .config from the copy in /boot, purging all the .o files is still a waste: for a simple configuration change, often only a few .o files need to be (re)built, and the kernel build system will correctly skip the other .o files if they are not purged.
On the other hand, if you've upgraded GCC, you should run
make clean to
purge all the .o
files from
the previous build, or the new build may fail.
Some kernel documentation recommends creating a symlink
from /usr/src/linux
pointing
to the kernel source directory. This is specific to kernels
prior to the 2.6 series and must
not be created on an LFS system as it can cause
problems for packages you may wish to build once your base
LFS system is complete.
Most of the time Linux modules are loaded automatically, but sometimes they need specific direction. The program that loads modules, modprobe or insmod, uses /etc/modprobe.d/usb.conf for this purpose.
This file needs to be created so that if the USB drivers
(ehci_hcd, ohci_hcd and uhci_hcd) have been built as modules,
they will be loaded in the correct order; ehci_hcd needs to
be loaded prior to ohci_hcd and uhci_hcd in order to avoid a
warning being output at boot time.
Create a new file /etc/modprobe.d/usb.conf
by running the
following:
install -v -m755 -d /etc/modprobe.d
cat > /etc/modprobe.d/usb.conf << "EOF"
# Begin /etc/modprobe.d/usb.conf
install ohci_hcd /sbin/modprobe ehci_hcd ; /sbin/modprobe -i ohci_hcd ; true
install uhci_hcd /sbin/modprobe ehci_hcd ; /sbin/modprobe -i uhci_hcd ; true
# End /etc/modprobe.d/usb.conf
EOF
config-6.12.5
    Contains all the configuration selections for the kernel

vmlinuz-6.12.5-lfs-r12.2-59
    The engine of the Linux system. When turning on the computer, the kernel is the first part of the operating system that gets loaded. It detects and initializes all components of the computer's hardware, then makes these components available as a tree of files to the software and turns a single CPU into a multitasking machine capable of running scores of programs seemingly at the same time

System.map-6.12.5
    A list of addresses and symbols; it maps the entry points and addresses of all the functions and data structures in the kernel
If your system has UEFI support and you wish to boot LFS with
UEFI, you should skip the instructions in this page but still
learn the syntax of grub.cfg
and the method to specify a partition in the file from this
page, and configure GRUB with UEFI support using the
instructions provided in
the BLFS page.
Configuring GRUB incorrectly can render your system inoperable without an alternate boot device such as a CD-ROM or bootable USB drive. This section is not required to boot your LFS system. You may just want to modify your current boot loader, e.g. Grub-Legacy, GRUB2, or LILO.
Ensure that an emergency boot disk is ready to “rescue” the
computer if the computer becomes unusable (un-bootable). If
you do not already have a boot device, you can create one. In
order for the procedure below to work, you need to jump ahead
to BLFS and install xorriso
from the
libisoburn package.
cd /tmp
grub-mkrescue --output=grub-img.iso
xorriso -as cdrecord -v dev=/dev/cdrw blank=as_needed grub-img.iso
GRUB uses its own naming structure for drives and partitions
in the form of (hdn,m), where n is the hard drive number and
m is the partition
number. The hard drive numbers start from zero, but the
partition numbers start from one for normal partitions (from
five for extended partitions). Note that this is different
from earlier versions where both numbers started from zero.
For example, partition sda1
is
(hd0,1) to GRUB and
sdb3
is (hd1,3). In contrast to Linux,
GRUB does not consider CD-ROM drives to be hard drives. For
example, if using a CD on hdb
and a second hard drive on hdc
,
that second hard drive would still be (hd1).
GRUB works by writing data to the first physical track of the hard disk. This area is not part of any file system. The programs there access GRUB modules in the boot partition. The default location is /boot/grub/.
The location of the boot partition is a choice of the user
that affects the configuration. One recommendation is to have
a separate small (suggested size is 200 MB) partition just
for boot information. That way each build, whether LFS or
some commercial distro, can access the same boot files and
access can be made from any booted system. If you choose to
do this, you will need to mount the separate partition, move
all files in the current /boot
directory (e.g. the Linux kernel you just built in the
previous section) to the new partition. You will then need to
unmount the partition and remount it as /boot
. If you do this, be sure to update
/etc/fstab
.
Leaving /boot
on the current
LFS partition will also work, but configuration for multiple
systems is more difficult.
Using the above information, determine the appropriate
designator for the root partition (or boot partition, if a
separate one is used). For the following example, it is
assumed that the root (or separate boot) partition is
sda2
.
Install the GRUB files into /boot/grub
and set up the boot track:
The following command will overwrite the current boot loader. Do not run the command if this is not desired, for example, if using a third party boot manager to manage the Master Boot Record (MBR).
grub-install /dev/sda
If the system has been booted using UEFI, grub-install will try to install files for the x86_64-efi target, but those files have not been installed in Chapter 8. If this is the case, add --target=i386-pc to the command above.
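That is, in that situation the install command would become (still assuming the boot drive is /dev/sda):

grub-install --target=i386-pc /dev/sda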
Generate /boot/grub/grub.cfg
:
cat > /boot/grub/grub.cfg << "EOF"
# Begin /boot/grub/grub.cfg
set default=0
set timeout=5
insmod part_gpt
insmod ext2
set root=(hd0,2)
set gfxpayload=1024x768x32
menuentry "GNU/Linux, Linux 6.12.5-lfs-r12.2-59" {
linux /boot/vmlinuz-6.12.5-lfs-r12.2-59 root=/dev/sda2 ro
}
EOF
The insmod
commands load the GRUB
modules named part_gpt
and
ext2
. Despite the naming,
ext2
actually supports
ext2
, ext3
, and ext4
filesystems. The grub-install command has
embedded some modules into the main GRUB image (installed into the MBR or
the GRUB BIOS partition) to access the other modules (in
/boot/grub/i386-pc
) without a
chicken-or-egg issue, so with a typical configuration these
two modules are already embedded and those two insmod commands will do
nothing. But they do no harm anyway, and they may be needed
with some rare configurations.
The set gfxpayload=1024x768x32 command sets the resolution and color depth of the VESA framebuffer to be passed to the kernel. It's necessary for the kernel SimpleDRM driver to use the VESA framebuffer. You can use a different resolution or color depth value which better suits for your monitor.
From GRUB's perspective, the kernel files are relative to the partition used. If you used a separate /boot partition, remove /boot from the above linux line. You will also need to change the set root line to point to the boot partition.
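For example, if a separate /boot partition is used and it happens to be the first partition on the first disk (an assumption for illustration only), the relevant lines would become:

set root=(hd0,1)
linux /vmlinuz-6.12.5-lfs-r12.2-59 root=/dev/sda2 ro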
The GRUB designator for a partition may change if you add or remove some disks (including removable disks like USB thumb devices). The change may cause boot failure because grub.cfg refers to some “old” designators. If you wish to avoid such a problem, you may use the UUID of a partition and the UUID of a filesystem instead of a GRUB designator to specify a device. Run lsblk -o UUID,PARTUUID,PATH,MOUNTPOINT to show the UUIDs of your filesystems (in the UUID column) and partitions (in the PARTUUID column). Then replace set root=(hdx,y) with search --set=root --fs-uuid <UUID of the filesystem where the kernel is installed>, and replace root=/dev/sda2 with root=PARTUUID=<UUID of the partition where LFS is built>.
Note that the UUID of a partition is completely different from the UUID of the filesystem in this partition. Some online resources may instruct you to use root=UUID=<filesystem UUID> instead of root=PARTUUID=<partition UUID>, but doing so will require an initramfs, which is beyond the scope of LFS.
The name of the device node for a partition in /dev may also change (this is less likely than a GRUB designator change). You can also replace paths to device nodes like /dev/sda1 with PARTUUID=<partition UUID> in /etc/fstab, to avoid a potential boot failure in case the device node name has changed.
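A sketch of what the adjusted grub.cfg fragment might look like follows; the UUID values here are made up for illustration and must be replaced with the ones reported by lsblk on your system:

search --set=root --fs-uuid 54b934a9-302d-415e-ac11-4988408eb0a8
menuentry "GNU/Linux, Linux 6.12.5-lfs-r12.2-59" {
  linux /boot/vmlinuz-6.12.5-lfs-r12.2-59 root=PARTUUID=9a1a27ab-e54e-4299-b496-b0ed0e1e9f2d ro
}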
GRUB is an extremely powerful program and it provides a tremendous number of options for booting from a wide variety of devices, operating systems, and partition types. There are also many options for customization such as graphical splash screens, playing sounds, mouse input, etc. The details of these options are beyond the scope of this introduction.
There is a command, grub-mkconfig, that can write a configuration file automatically. It uses a set of scripts in /etc/grub.d/ and will destroy any customizations that you make. These scripts are designed primarily for non-source distributions and are not recommended for LFS. If you install a commercial Linux distribution, there is a good chance that this program will be run. Be sure to back up your grub.cfg file.
Well done! The new LFS system is installed! We wish you much success with your shiny new custom-built Linux system.
It may be a good idea to create an /etc/lfs-release
file. By having this file,
it is very easy for you (and for us if you need to ask for help
at some point) to find out which LFS version is installed on
the system. Create this file by running:
echo r12.2-59 > /etc/lfs-release
Two files describing the installed system may be used by packages that can be installed on the system later, either in binary form or by building them.
The first one shows the status of your new system with respect to the Linux Standards Base (LSB). To create this file, run:
cat > /etc/lsb-release << "EOF"
DISTRIB_ID="Linux From Scratch"
DISTRIB_RELEASE="r12.2-59"
DISTRIB_CODENAME="<your name here>"
DISTRIB_DESCRIPTION="Linux From Scratch"
EOF
The second one contains roughly the same information, and is used by systemd and some graphical desktop environments. To create this file, run:
cat > /etc/os-release << "EOF"
NAME="Linux From Scratch"
VERSION="r12.2-59"
ID=lfs
PRETTY_NAME="Linux From Scratch r12.2-59"
VERSION_CODENAME="<your name here>"
HOME_URL="https://www.linuxfromscratch.org/lfs/"
RELEASE_TYPE="development"
EOF
Be sure to customize the fields 'DISTRIB_CODENAME' and 'VERSION_CODENAME' to make the system uniquely yours.
Now that you have finished the book, do you want to be counted as an LFS user? Head over to https://www.linuxfromscratch.org/cgi-bin/lfscounter.php and register as an LFS user by entering your name and the first LFS version you have used.
Let's reboot into LFS now.
Now that all of the software has been installed, it is time to reboot your computer. However, there are still a few things to check. Here are some suggestions:
Install any firmware needed if the kernel driver for your hardware requires some firmware files to function properly.
Ensure a password is set for the root
user.
A review of the following configuration files is also appropriate at this point.
/etc/bashrc
/etc/dircolors
/etc/fstab
/etc/hosts
/etc/inputrc
/etc/profile
/etc/resolv.conf
/etc/vimrc
/root/.bash_profile
/root/.bashrc
/etc/sysconfig/ifconfig.eth0
Now that we have said that, let's move on to booting our shiny new LFS installation for the first time! First exit from the chroot environment:
logout
Then unmount the virtual file systems:
umount -v $LFS/dev/pts
mountpoint -q $LFS/dev/shm && umount -v $LFS/dev/shm
umount -v $LFS/dev
umount -v $LFS/run
umount -v $LFS/proc
umount -v $LFS/sys
If multiple partitions were created, unmount the other partitions before unmounting the main one, like this:
umount -v $LFS/home
umount -v $LFS
Unmount the LFS file system itself:
umount -v $LFS
Now, reboot the system.
Assuming the GRUB boot loader was set up as outlined earlier, the menu is set to boot LFS r12.2-59 automatically.
When the reboot is complete, the LFS system is ready for use. What you will see is a simple “login: ” prompt. At this point, you can proceed to the BLFS Book where you can add more software to suit your needs.
If your reboot is not successful, it is time to troubleshoot. For hints on solving initial booting problems, see https://www.linuxfromscratch.org/lfs/troubleshooting.html.
Thank you for reading this LFS book. We hope that you have found this book helpful and have learned more about the system creation process.
Now that the LFS system is installed, you may be wondering “What next?” To answer that question, we have compiled a list of resources for you.
Maintenance
Bugs and security notices are reported regularly for all software. Since an LFS system is compiled from source, it is up to you to keep abreast of such reports. There are several online resources that track such reports, some of which are shown below:
LFS Security Advisories
This is a list of security vulnerabilities discovered in the LFS book after it has been published.
Open Source Security Mailing List
This is a mailing list for discussion of security flaws, concepts, and practices in the Open Source community.
LFS Hints
The LFS Hints are a collection of educational documents submitted by volunteers in the LFS community. The hints are available at https://www.linuxfromscratch.org/hints/downloads/files/.
Mailing lists
There are several LFS mailing lists you may subscribe to if you are in need of help, want to stay current with the latest developments, want to contribute to the project, and more. See Chapter 1 - Mailing Lists for more information.
The Linux Documentation Project
The goal of The Linux Documentation Project (TLDP) is to collaborate on all of the issues of Linux documentation. The TLDP features a large collection of HOWTOs, guides, and man pages. It is located at https://tldp.org/.
Now that LFS is complete and you have a bootable system, what do you do? The next step is to decide how to use it. Generally, there are two broad categories to consider: workstation or server. Indeed, these categories are not mutually exclusive. The applications needed for each category can be combined onto a single system, but let's look at them separately for now.
A server is the simpler category. Generally this consists of a web server such as the Apache HTTP Server and a database server such as MariaDB. However other services are possible. The operating system embedded in a single use device falls into this category.
On the other hand, a workstation is much more complex. It generally requires a graphical user environment such as LXDE, XFCE, KDE, or Gnome based on a basic graphical environment and several graphical based applications such as the Firefox web browser, Thunderbird email client, or LibreOffice office suite. These applications require many (several hundred depending on desired capabilities) more packages of support applications and libraries.
In addition to the above, there is a set of applications for system management for all kinds of systems. These applications are all in the BLFS book. Not all packages are needed in every environment. For example, dhcpcd is not normally appropriate for a server, and wireless_tools is normally only useful for a laptop system.
When you initially boot into LFS, you have all the internal tools to build additional packages. Unfortunately, the user environment is quite sparse. There are a couple of ways to improve this:
This method provides a complete graphical environment where a full featured browser and copy/paste capabilities are available. This method allows using applications like the host's version of wget to download package sources to a location available when working in the chroot environment.
In order to properly build packages in chroot, you will also need to remember to mount the virtual file systems if they are not already mounted. One way to do this is to create a script on the HOST system:
cat > ~/mount-virt.sh << "EOF"
#!/bin/bash
function mountbind
{
if ! mountpoint $LFS/$1 >/dev/null; then
$SUDO mount --bind /$1 $LFS/$1
echo $LFS/$1 mounted
else
echo $LFS/$1 already mounted
fi
}
function mounttype
{
if ! mountpoint $LFS/$1 >/dev/null; then
$SUDO mount -t $2 $3 $4 $5 $LFS/$1
echo $LFS/$1 mounted
else
echo $LFS/$1 already mounted
fi
}
if [ $EUID -ne 0 ]; then
SUDO=sudo
else
SUDO=""
fi
if [ x$LFS == x ]; then
echo "LFS not set"
exit 1
fi
mountbind dev
mounttype dev/pts devpts devpts -o gid=5,mode=620
mounttype proc proc proc
mounttype sys sysfs sysfs
mounttype run tmpfs run
if [ -h $LFS/dev/shm ]; then
install -v -d -m 1777 $LFS$(realpath /dev/shm)
else
mounttype dev/shm tmpfs tmpfs -o nosuid,nodev
fi
#mountbind usr/src
#mountbind boot
#mountbind home
EOF
Note that the last three commands in the script are commented out. These are useful if those directories are mounted as separate partitions on the host system and will be mounted when booting the completed LFS/BLFS system.
The script can be run with bash ~/mount-virt.sh as
either a regular user (recommended) or as root
. If run as a regular user, sudo is
required on the host system.
Another issue pointed out by the script is where to store downloaded package files. This location is arbitrary. It can be in a regular user's home directory such as ~/sources or in a global location like /usr/src. Our recommendation is not to mix BLFS sources and LFS sources in (from the chroot environment) /sources. In any case, the packages must be accessible inside the chroot environment.
A last convenience feature presented here is to streamline the process of entering the chroot environment. This can be done with an alias placed in a user's ~/.bashrc file on the host system:
alias lfs='sudo /usr/sbin/chroot /mnt/lfs /usr/bin/env -i HOME=/root TERM="$TERM" PS1="\u:\w\\\\$ "
PATH=/usr/bin:/usr/sbin /bin/bash --login'
This alias is a little tricky because of the quoting and levels of backslash characters. It must be all on a single line. The above command has been split in two for presentation purposes.
This method also provides a full graphical environment, but first requires installing sshd on the LFS system, usually in chroot. It also requires a second computer. This method has the advantage of being simple by not requiring the complexity of the chroot environment. It also uses your LFS built kernel for all additional packages and still provides a complete system for installing packages.
You may use the scp command to upload the package sources to be built onto the LFS system. If you want to download the sources onto the LFS system directly instead, install libtasn1, p11-kit, make-ca, and wget in chroot (or upload their sources using scp after booting the LFS system).
This method requires installing libtasn1, p11-kit, make-ca, wget, gpm, and links (or lynx) in chroot and then rebooting into the new LFS system. At this point the default system has six virtual consoles. Switching consoles is as easy as using the Alt+Fx key combinations where Fx is between F1 and F6. The Alt+← and Alt+→ combinations also will change the console.
At this point you can log into two different virtual consoles and run the links or lynx browser in one console and bash in the other. GPM then allows copying commands from the browser with the left mouse button, switching consoles, and pasting into the other console.
As a side note, switching of virtual consoles can also be done from an X Window instance with the Ctrl+Alt+Fx key combination, but the mouse copy operation does not work between the graphical interface and a virtual console. You can return to the X Window display with the Ctrl+Alt+Fx combination, where Fx is usually F1 but may be F7.
ABI       Application Binary Interface
ALFS      Automated Linux From Scratch
API       Application Programming Interface
ASCII     American Standard Code for Information Interchange
BIOS      Basic Input/Output System
BLFS      Beyond Linux From Scratch
BSD       Berkeley Software Distribution
chroot    change root
CMOS      Complementary Metal Oxide Semiconductor
COS       Class Of Service
CPU       Central Processing Unit
CRC       Cyclic Redundancy Check
CVS       Concurrent Versions System
DHCP      Dynamic Host Configuration Protocol
DNS       Domain Name Service
EGA       Enhanced Graphics Adapter
ELF       Executable and Linkable Format
EOF       End of File
EQN       equation
ext2      second extended file system
ext3      third extended file system
ext4      fourth extended file system
FAQ       Frequently Asked Questions
FHS       Filesystem Hierarchy Standard
FIFO      First-In, First Out
FQDN      Fully Qualified Domain Name
FTP       File Transfer Protocol
GB        Gigabytes
GCC       GNU Compiler Collection
GID       Group Identifier
GMT       Greenwich Mean Time
HTML      Hypertext Markup Language
IDE       Integrated Drive Electronics
IEEE      Institute of Electrical and Electronic Engineers
IO        Input/Output
IP        Internet Protocol
IPC       Inter-Process Communication
IRC       Internet Relay Chat
ISO       International Organization for Standardization
ISP       Internet Service Provider
KB        Kilobytes
LED       Light Emitting Diode
LFS       Linux From Scratch
LSB       Linux Standard Base
MB        Megabytes
MBR       Master Boot Record
MD5       Message Digest 5
NIC       Network Interface Card
NLS       Native Language Support
NNTP      Network News Transport Protocol
NPTL      Native POSIX Threading Library
OSS       Open Sound System
PCH       Pre-Compiled Headers
PCRE      Perl Compatible Regular Expression
PID       Process Identifier
PTY       pseudo terminal
QOS       Quality Of Service
RAM       Random Access Memory
RPC       Remote Procedure Call
RTC       Real Time Clock
SBU       Standard Build Unit
SCO       The Santa Cruz Operation
SHA1      Secure-Hash Algorithm 1
TLDP      The Linux Documentation Project
TFTP      Trivial File Transfer Protocol
TLS       Thread-Local Storage
UID       User Identifier
umask     user file-creation mask
USB       Universal Serial Bus
UTC       Coordinated Universal Time
UUID      Universally Unique Identifier
VC        Virtual Console
VGA       Video Graphics Array
VT        Virtual Terminal
We would like to thank the following people and organizations for their contributions to the Linux From Scratch Project.
Gerard Beekmans <gerard AT linuxfromscratch D0T org> – LFS Creator
Bruce Dubbs <bdubbs AT linuxfromscratch D0T org> – LFS Managing Editor
Jim Gifford <jim AT linuxfromscratch D0T org> – CLFS Project Co-Leader
Pierre Labastie <pierre AT linuxfromscratch D0T org> – BLFS Editor and ALFS Lead
DJ Lucas <dj AT linuxfromscratch D0T org> – LFS and BLFS Editor
Ken Moffat <ken AT linuxfromscratch D0T org> – BLFS Editor
Countless other people on the various LFS and BLFS mailing lists who helped make this book possible by giving their suggestions, testing the book, and submitting bug reports, instructions, and their experiences with installing various packages.
Manuel Canales Esparcia <macana AT macana-es D0T com> – Spanish LFS translation project
Johan Lenglet <johan AT linuxfromscratch D0T org> – French LFS translation project until 2008
Jean-Philippe Mengual <jmengual AT linuxfromscratch D0T org> – French LFS translation project 2008-2016
Julien Lepiller <jlepiller AT linuxfromscratch D0T org> – French LFS translation project 2017-present
Anderson Lizardo <lizardo AT linuxfromscratch D0T org> – Portuguese LFS translation project historical
Jamenson Espindula <jafesp AT gmail D0T com> – Portuguese LFS translation project 2022-present
Thomas Reitelbach <tr AT erdfunkstelle D0T de> – German LFS translation project
Scott Kveton <scott AT osuosl D0T org> – lfs.oregonstate.edu mirror
William Astle <lost AT l-w D0T net> – ca.linuxfromscratch.org mirror
Eujon Sellers <jpolen@rackspace.com> – lfs.introspeed.com mirror
Justin Knierim <tim@idge.net> – lfs-matrix.net mirror
Manuel Canales Esparcia <manuel AT linuxfromscratch D0T org> – lfsmirror.lfs-es.info mirror
Luis Falcon <Luis Falcon> – torredehanoi.org mirror
Guido Passet <guido AT primerelay D0T net> – nl.linuxfromscratch.org mirror
Bastiaan Jacques <baafie AT planet D0T nl> – lfs.pagefault.net mirror
Sven Cranshoff <sven D0T cranshoff AT lineo D0T be> – lfs.lineo.be mirror
Scarlet Belgium – lfs.scarlet.be mirror
Sebastian Faulborn <info AT aliensoft D0T org> – lfs.aliensoft.org mirror
Stuart Fox <stuart AT dontuse D0T ms> – lfs.dontuse.ms mirror
Ralf Uhlemann <admin AT realhost D0T de> – lfs.oss-mirror.org mirror
Antonin Sprinzl <Antonin D0T Sprinzl AT tuwien D0T ac D0T at> – at.linuxfromscratch.org mirror
Fredrik Danerklint <fredan-lfs AT fredan D0T org> – se.linuxfromscratch.org mirror
Franck <franck AT linuxpourtous D0T com> – lfs.linuxpourtous.com mirror
Philippe Baque <baque AT cict D0T fr> – lfs.cict.fr mirror
Vitaly Chekasin <gyouja AT pilgrims D0T ru> – lfs.pilgrims.ru mirror
Benjamin Heil <kontakt AT wankoo D0T org> – lfs.wankoo.org mirror
Anton Maisak <info AT linuxfromscratch D0T org D0T ru> – linuxfromscratch.org.ru mirror
Satit Phermsawang <satit AT wbac D0T ac D0T th> – lfs.phayoune.org mirror
Shizunet Co.,Ltd. <info AT shizu-net D0T jp> – lfs.mirror.shizu-net.jp mirror
Jason Andrade <jason AT dstc D0T edu D0T au> – au.linuxfromscratch.org mirror
Christine Barczak <theladyskye AT linuxfromscratch D0T org> – LFS Book Editor
Archaic <archaic@linuxfromscratch.org> – LFS Technical Writer/Editor, HLFS Project Leader, BLFS Editor, Hints and Patches Project Maintainer
Matthew Burgess <matthew AT linuxfromscratch D0T org> – LFS Project Leader, LFS Technical Writer/Editor
Nathan Coulson <nathan AT linuxfromscratch D0T org> – LFS-Bootscripts Maintainer
Timothy Bauscher
Robert Briggs
Ian Chilton
Jeroen Coumans <jeroen AT linuxfromscratch D0T org> – Website Developer, FAQ Maintainer
Manuel Canales Esparcia <manuel AT linuxfromscratch D0T org> – LFS/BLFS/HLFS XML and XSL Maintainer
Alex Groenewoud – LFS Technical Writer
Marc Heerdink
Jeremy Huntwork <jhuntwork AT linuxfromscratch D0T org> – LFS Technical Writer, LFS LiveCD Maintainer
Bryan Kadzban <bryan AT linuxfromscratch D0T org> – LFS Technical Writer
Mark Hymers
Seth W. Klein – FAQ maintainer
Nicholas Leippe <nicholas AT linuxfromscratch D0T org> – Wiki Maintainer
Anderson Lizardo <lizardo AT linuxfromscratch D0T org> – Website Backend-Scripts Maintainer
Randy McMurchy <randy AT linuxfromscratch D0T org> – BLFS Project Leader, LFS Editor
Dan Nicholson <dnicholson AT linuxfromscratch D0T org> – LFS and BLFS Editor
Alexander E. Patrakov <alexander AT linuxfromscratch D0T org> – LFS Technical Writer, LFS Internationalization Editor, LFS Live CD Maintainer
Simon Perreault
Scot Mc Pherson <scot AT linuxfromscratch D0T org> – LFS NNTP Gateway Maintainer
Douglas R. Reno <renodr AT linuxfromscratch D0T org> – Systemd Editor
Ryan Oliver <ryan AT linuxfromscratch D0T org> – CLFS Project Co-Leader
Greg Schafer <gschafer AT zip D0T com D0T au> – LFS Technical Writer and Architect of the Next Generation 64-bit-enabling Build Method
Jesse Tie-Ten-Quee – LFS Technical Writer
James Robertson <jwrober AT linuxfromscratch D0T org> – Bugzilla Maintainer
Tushar Teredesai <tushar AT linuxfromscratch D0T org> – BLFS Book Editor, Hints and Patches Project Leader
Jeremy Utley <jeremy AT linuxfromscratch D0T org> – LFS Technical Writer, Bugzilla Maintainer, LFS-Bootscripts Maintainer
Zack Winkles <zwinkles AT gmail D0T com> – LFS Technical Writer
Every package built in LFS relies on one or more other packages in order to build and install properly. Some packages even participate in circular dependencies, that is, the first package depends on the second which in turn depends on the first. Because of these dependencies, the order in which packages are built in LFS is very important. The purpose of this page is to document the dependencies of each package built in LFS.
For each package that is built, there are three, and sometimes up to five types of dependencies listed below. The first lists what other packages need to be available in order to compile and install the package in question. The second lists the packages that must be available when any programs or libraries from the package are used at runtime. The third lists what packages, in addition to those on the first list, need to be available in order to run the test suites. The fourth list of dependencies are packages that require this package to be built and installed in its final location before they are built and installed.
The last list of dependencies are optional packages that are not addressed in LFS, but could be useful to the user. These packages may have additional mandatory or optional dependencies of their own. For these dependencies, the recommended practice is to install them after completion of the LFS book and then go back and rebuild the LFS package. In several cases, re-installation is addressed in BLFS.
The scripts in this appendix are listed by the directory where
they normally reside. The order is /etc/rc.d/init.d
, /etc/sysconfig
, /etc/sysconfig/network-devices
, and
/etc/sysconfig/network-devices/services
.
Within each section, the files are listed in the order they are
normally called.
The rc
script is the first
script called by init and
initiates the boot process.
#!/bin/bash ######################################################################## # Begin rc # # Description : Main Run Level Control Script # # Authors : Gerard Beekmans - gerard AT linuxfromscratch D0T org # : DJ Lucas - dj AT linuxfromscratch D0T org # Updates : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # : Pierre Labastie - pierre AT linuxfromscratch D0T org # # Version : LFS 7.0 # # Notes : Updates March 24th, 2022: new semantics of S/K files # - Instead of testing that S scripts were K scripts in the # previous runlevel, test that they were not S scripts # - Instead of testing that K scripts were S scripts in the # previous runlevel, test that they were not K scripts # - S scripts in runlevel 0 or 6 are now run with # "script start" (was "script stop" previously). ######################################################################## . /lib/lsb/init-functions print_error_msg() { log_failure_msg # $i is set when called MSG="FAILURE:\n\nYou should not be reading this error message.\n\n" MSG="${MSG}It means that an unforeseen error took place in\n" MSG="${MSG}${i},\n" MSG="${MSG}which exited with a return value of ${error_value}.\n" MSG="${MSG}If you're able to track this error down to a bug in one of\n" MSG="${MSG}the files provided by the ${DISTRO_MINI} book,\n" MSG="${MSG}please be so kind to inform us at ${DISTRO_CONTACT}.\n" log_failure_msg "${MSG}" log_info_msg "Press Enter to continue..." wait_for_user } check_script_status() { # $i is set when called if [ ! -f ${i} ]; then log_warning_msg "${i} is not a valid symlink." SCRIPT_STAT="1" fi if [ ! -x ${i} ]; then log_warning_msg "${i} is not executable, skipping." SCRIPT_STAT="1" fi } run() { if [ -z $interactive ]; then ${1} ${2} return $? fi while true; do read -p "Run ${1} ${2} (Yes/no/continue)? " -n 1 runit echo case ${runit} in c | C) interactive="" ${i} ${2} ret=${?} break; ;; n | N) return 0 ;; y | Y) ${i} ${2} ret=${?} break ;; esac done return $ret } # Read any local settings/overrides [ -r /etc/sysconfig/rc.site ] && source /etc/sysconfig/rc.site DISTRO=${DISTRO:-"Linux From Scratch"} DISTRO_CONTACT=${DISTRO_CONTACT:-"lfs-dev@lists.linuxfromscratch.org (Registration required)"} DISTRO_MINI=${DISTRO_MINI:-"LFS"} IPROMPT=${IPROMPT:-"no"} # These 3 signals will not cause our script to exit trap "" INT QUIT TSTP [ "${1}" != "" ] && runlevel=${1} if [ "${runlevel}" == "" ]; then echo "Usage: ${0} <runlevel>" >&2 exit 1 fi previous=${PREVLEVEL} [ "${previous}" == "" ] && previous=N if [ ! -d /etc/rc.d/rc${runlevel}.d ]; then log_info_msg "/etc/rc.d/rc${runlevel}.d does not exist.\n" exit 1 fi if [ "$runlevel" == "6" -o "$runlevel" == "0" ]; then IPROMPT="no"; fi # Note: In ${LOGLEVEL:-7}, it is ':' 'dash' '7', not minus 7 if [ "$runlevel" == "S" ]; then [ -r /etc/sysconfig/console ] && source /etc/sysconfig/console dmesg -n "${LOGLEVEL:-7}" fi if [ "${IPROMPT}" == "yes" -a "${runlevel}" == "S" ]; then # The total length of the distro welcome string, without escape codes wlen=${wlen:-$(echo "Welcome to ${DISTRO}" | wc -c )} welcome_message=${welcome_message:-"Welcome to ${INFO}${DISTRO}${NORMAL}"} # The total length of the interactive string, without escape codes ilen=${ilen:-$(echo "Press 'I' to enter interactive startup" | wc -c )} i_message=${i_message:-"Press '${FAILURE}I${NORMAL}' to enter interactive startup"} # dcol and icol are spaces before the message to center the message # on screen. 
itime is the amount of wait time for the user to press a key wcol=$(( ( ${COLUMNS} - ${wlen} ) / 2 )) icol=$(( ( ${COLUMNS} - ${ilen} ) / 2 )) itime=${itime:-"3"} echo -e "\n\n" echo -e "\\033[${wcol}G${welcome_message}" echo -e "\\033[${icol}G${i_message}${NORMAL}" echo "" read -t "${itime}" -n 1 interactive 2>&1 > /dev/null fi # Make lower case [ "${interactive}" == "I" ] && interactive="i" [ "${interactive}" != "i" ] && interactive="" # Read the state file if it exists from runlevel S [ -r /run/interactive ] && source /run/interactive # Stop all services marked as K, except if marked as K in the previous # runlevel: it is the responsibility of the script to not try to kill # a non running service if [ "${previous}" != "N" ]; then for i in $(ls -v /etc/rc.d/rc${runlevel}.d/K* 2> /dev/null) do check_script_status if [ "${SCRIPT_STAT}" == "1" ]; then SCRIPT_STAT="0" continue fi suffix=${i#/etc/rc.d/rc${runlevel}.d/K[0-9][0-9]} [ -e /etc/rc.d/rc${previous}.d/K[0-9][0-9]$suffix ] && continue run ${i} stop error_value=${?} if [ "${error_value}" != "0" ]; then print_error_msg; fi done fi if [ "${previous}" == "N" ]; then export IN_BOOT=1; fi if [ "$runlevel" == "6" -a -n "${FASTBOOT}" ]; then touch /fastboot fi # Start all services marked as S in this runlevel, except if marked as # S in the previous runlevel # it is the responsibility of the script to not try to start an already running # service for i in $( ls -v /etc/rc.d/rc${runlevel}.d/S* 2> /dev/null) do if [ "${previous}" != "N" ]; then suffix=${i#/etc/rc.d/rc${runlevel}.d/S[0-9][0-9]} [ -e /etc/rc.d/rc${previous}.d/S[0-9][0-9]$suffix ] && continue fi check_script_status if [ "${SCRIPT_STAT}" == "1" ]; then SCRIPT_STAT="0" continue fi run ${i} start error_value=${?} if [ "${error_value}" != "0" ]; then print_error_msg; fi done # Store interactive variable on switch from runlevel S and remove if not if [ "${runlevel}" == "S" -a "${interactive}" == "i" ]; then echo "interactive=\"i\"" > /run/interactive else rm -f /run/interactive 2> /dev/null fi # Copy the boot log on initial boot only if [ "${previous}" == "N" -a "${runlevel}" != "S" ]; then cat $BOOTLOG >> /var/log/boot.log # Mark the end of boot echo "--------" >> /var/log/boot.log # Remove the temporary file rm -f $BOOTLOG 2> /dev/null fi # End rc
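Near its top, rc sources /etc/sysconfig/rc.site (when readable) so that local settings can override the defaults used here and in the other scripts. A minimal, purely illustrative rc.site, with example values only:

# /etc/sysconfig/rc.site (example values)
DISTRO="Linux From Scratch"   # name used in boot messages
IPROMPT="yes"                 # offer the interactive-startup prompt
itime="5"                     # seconds to wait for the 'I' key
VERBOSE_FSCK="yes"            # let checkfs display fsck output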
#!/bin/sh ######################################################################## # # Begin /lib/lsb/init-funtions # # Description : Run Level Control Functions # # Authors : Gerard Beekmans - gerard AT linuxfromscratch D0T org # : DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # # Version : LFS 7.0 # # Notes : With code based on Matthias Benkmann's simpleinit-msb # http://winterdrache.de/linux/newboot/index.html # # The file should be located in /lib/lsb # ######################################################################## ## Environmental setup # Setup default values for environment umask 022 export PATH="/bin:/usr/bin:/sbin:/usr/sbin" ## Set color commands, used via echo # Please consult `man console_codes for more information # under the "ECMA-48 Set Graphics Rendition" section # # Warning: when switching from a 8bit to a 9bit font, # the linux console will reinterpret the bold (1;) to # the top 256 glyphs of the 9bit font. This does # not affect framebuffer consoles NORMAL="\\033[0;39m" # Standard console grey SUCCESS="\\033[1;32m" # Success is green WARNING="\\033[1;33m" # Warnings are yellow FAILURE="\\033[1;31m" # Failures are red INFO="\\033[1;36m" # Information is light cyan BRACKET="\\033[1;34m" # Brackets are blue # Use a colored prefix BMPREFIX=" " SUCCESS_PREFIX="${SUCCESS} * ${NORMAL} " FAILURE_PREFIX="${FAILURE}*****${NORMAL} " WARNING_PREFIX="${WARNING} *** ${NORMAL} " SKIP_PREFIX="${INFO} S ${NORMAL}" SUCCESS_SUFFIX="${BRACKET}[${SUCCESS} OK ${BRACKET}]${NORMAL}" FAILURE_SUFFIX="${BRACKET}[${FAILURE} FAIL ${BRACKET}]${NORMAL}" WARNING_SUFFIX="${BRACKET}[${WARNING} WARN ${BRACKET}]${NORMAL}" SKIP_SUFFIX="${BRACKET}[${INFO} SKIP ${BRACKET}]${NORMAL}" BOOTLOG=/run/bootlog KILLDELAY=3 SCRIPT_STAT="0" # Set any user specified environment variables e.g. HEADLESS [ -r /etc/sysconfig/rc.site ] && . /etc/sysconfig/rc.site # If HEADLESS is set, use that. # If file descriptor 1 or 2 (stdout and stderr) is not open or # does not refer to a terminal, consider the script headless. [ ! -t 1 -o ! -t 2 ] && HEADLESS=${HEADLESS:-yes} if [ "x$HEADLESS" != "xyes" ] then ## Screen Dimensions # Find current screen size if [ -z "${COLUMNS}" ]; then COLUMNS=$(stty size) COLUMNS=${COLUMNS##* } fi else COLUMNS=80 fi # When using remote connections, such as a serial port, stty size returns 0 if [ "${COLUMNS}" = "0" ]; then COLUMNS=80 fi ## Measurements for positioning result messages COL=$((${COLUMNS} - 8)) WCOL=$((${COL} - 2)) ## Set Cursor Position Commands, used via echo SET_COL="\\033[${COL}G" # at the $COL char SET_WCOL="\\033[${WCOL}G" # at the $WCOL char CURS_UP="\\033[1A\\033[0G" # Up one line, at the 0'th char CURS_ZERO="\\033[0G" ################################################################################ # start_daemon() # # Usage: start_daemon [-f] [-n nicelevel] [-p pidfile] pathname [args...] # # # # Purpose: This runs the specified program as a daemon # # # # Inputs: -f: (force) run the program even if it is already running. # # -n nicelevel: specify a nice level. See 'man nice(1)'. # # -p pidfile: use the specified file to determine PIDs. 
# # pathname: the complete path to the specified program # # args: additional arguments passed to the program (pathname) # # # # Return values (as defined by LSB exit codes): # # 0 - program is running or service is OK # # 1 - generic or unspecified error # # 2 - invalid or excessive argument(s) # # 5 - program is not installed # ################################################################################ start_daemon() { local force="" local nice="0" local pidfile="" local pidlist="" local retval="" # Process arguments while true do case "${1}" in -f) force="1" shift 1 ;; -n) nice="${2}" shift 2 ;; -p) pidfile="${2}" shift 2 ;; -*) return 2 ;; *) program="${1}" break ;; esac done # Check for a valid program if [ ! -e "${program}" ]; then return 5; fi # Execute if [ -z "${force}" ]; then if [ -z "${pidfile}" ]; then # Determine the pid by discovery pidlist=`pidofproc "${1}"` retval="${?}" else # The PID file contains the needed PIDs # Note that by LSB requirement, the path must be given to pidofproc, # however, it is not used by the current implementation or standard. pidlist=`pidofproc -p "${pidfile}" "${1}"` retval="${?}" fi # Return a value ONLY # It is the init script's (or distribution's functions) responsibility # to log messages! case "${retval}" in 0) # Program is already running correctly, this is a # successful start. return 0 ;; 1) # Program is not running, but an invalid pid file exists # remove the pid file and continue rm -f "${pidfile}" ;; 3) # Program is not running and no pidfile exists # do nothing here, let start_deamon continue. ;; *) # Others as returned by status values shall not be interpreted # and returned as an unspecified error. return 1 ;; esac fi # Do the start! nice -n "${nice}" "${@}" } ################################################################################ # killproc() # # Usage: killproc [-p pidfile] pathname [signal] # # # # Purpose: Send control signals to running processes # # # # Inputs: -p pidfile, uses the specified pidfile # # pathname, pathname to the specified program # # signal, send this signal to pathname # # # # Return values (as defined by LSB exit codes): # # 0 - program (pathname) has stopped/is already stopped or a # # running program has been sent specified signal and stopped # # successfully # # 1 - generic or unspecified error # # 2 - invalid or excessive argument(s) # # 5 - program is not installed # # 7 - program is not running and a signal was supplied # ################################################################################ killproc() { local pidfile local program local prefix local progname local signal="-TERM" local fallback="-KILL" local nosig local pidlist local retval local pid local delay="30" local piddead local dtime # Process arguments while true; do case "${1}" in -p) pidfile="${2}" shift 2 ;; *) program="${1}" if [ -n "${2}" ]; then signal="${2}" fallback="" else nosig=1 fi # Error on additional arguments if [ -n "${3}" ]; then return 2 else break fi ;; esac done # Check for a valid program if [ ! -e "${program}" ]; then return 5; fi # Check for a valid signal check_signal "${signal}" if [ "${?}" -ne "0" ]; then return 2; fi # Get a list of pids if [ -z "${pidfile}" ]; then # determine the pid by discovery pidlist=`pidofproc "${1}"` retval="${?}" else # The PID file contains the needed PIDs # Note that by LSB requirement, the path must be given to pidofproc, # however, it is not used by the current implementation or standard. 
pidlist=`pidofproc -p "${pidfile}" "${1}"` retval="${?}" fi # Return a value ONLY # It is the init script's (or distribution's functions) responsibility # to log messages! case "${retval}" in 0) # Program is running correctly # Do nothing here, let killproc continue. ;; 1) # Program is not running, but an invalid pid file exists # Remove the pid file. progname=${program##*/} if [[ -e "/run/${progname}.pid" ]]; then pidfile="/run/${progname}.pid" rm -f "${pidfile}" fi # This is only a success if no signal was passed. if [ -n "${nosig}" ]; then return 0 else return 7 fi ;; 3) # Program is not running and no pidfile exists # This is only a success if no signal was passed. if [ -n "${nosig}" ]; then return 0 else return 7 fi ;; *) # Others as returned by status values shall not be interpreted # and returned as an unspecified error. return 1 ;; esac # Perform different actions for exit signals and control signals check_sig_type "${signal}" if [ "${?}" -eq "0" ]; then # Signal is used to terminate the program # Account for empty pidlist (pid file still exists and no # signal was given) if [ "${pidlist}" != "" ]; then # Kill the list of pids for pid in ${pidlist}; do kill -0 "${pid}" 2> /dev/null if [ "${?}" -ne "0" ]; then # Process is dead, continue to next and assume all is well continue else kill "${signal}" "${pid}" 2> /dev/null # Wait up to ${delay}/10 seconds to for "${pid}" to # terminate in 10ths of a second while [ "${delay}" -ne "0" ]; do kill -0 "${pid}" 2> /dev/null || piddead="1" if [ "${piddead}" = "1" ]; then break; fi sleep 0.1 delay="$(( ${delay} - 1 ))" done # If a fallback is set, and program is still running, then # use the fallback if [ -n "${fallback}" -a "${piddead}" != "1" ]; then kill "${fallback}" "${pid}" 2> /dev/null sleep 1 # Check again, and fail if still running kill -0 "${pid}" 2> /dev/null && return 1 fi fi done fi # Check for and remove stale PID files. if [ -z "${pidfile}" ]; then # Find the basename of $program prefix=`echo "${program}" | sed 's/[^/]*$//'` progname=`echo "${program}" | sed "s@${prefix}@@"` if [ -e "/run/${progname}.pid" ]; then rm -f "/run/${progname}.pid" 2> /dev/null fi else if [ -e "${pidfile}" ]; then rm -f "${pidfile}" 2> /dev/null; fi fi # For signals that do not expect a program to exit, simply # let kill do its job, and evaluate kill's return for value else # check_sig_type - signal is not used to terminate program for pid in ${pidlist}; do kill "${signal}" "${pid}" if [ "${?}" -ne "0" ]; then return 1; fi done fi } ################################################################################ # pidofproc() # # Usage: pidofproc [-p pidfile] pathname # # # # Purpose: This function returns one or more pid(s) for a particular daemon # # # # Inputs: -p pidfile, use the specified pidfile instead of pidof # # pathname, path to the specified program # # # # Return values (as defined by LSB status codes): # # 0 - Success (PIDs to stdout) # # 1 - Program is dead, PID file still exists (remaining PIDs output) # # 3 - Program is not running (no output) # ################################################################################ pidofproc() { local pidfile local program local prefix local progname local pidlist local lpids local exitstatus="0" # Process arguments while true; do case "${1}" in -p) pidfile="${2}" shift 2 ;; *) program="${1}" if [ -n "${2}" ]; then # Too many arguments # Since this is status, return unknown return 4 else break fi ;; esac done # If a PID file is not specified, try and find one. 
if [ -z "${pidfile}" ]; then # Get the program's basename prefix=`echo "${program}" | sed 's/[^/]*$//'` if [ -z "${prefix}" ]; then progname="${program}" else progname=`echo "${program}" | sed "s@${prefix}@@"` fi # If a PID file exists with that name, assume that is it. if [ -e "/run/${progname}.pid" ]; then pidfile="/run/${progname}.pid" fi fi # If a PID file is set and exists, use it. if [ -n "${pidfile}" -a -e "${pidfile}" ]; then # Use the value in the first line of the pidfile pidlist=`/bin/head -n1 "${pidfile}"` else # Use pidof pidlist=`pidof "${program}"` fi # Figure out if all listed PIDs are running. for pid in ${pidlist}; do kill -0 ${pid} 2> /dev/null if [ "${?}" -eq "0" ]; then lpids="${lpids}${pid} " else exitstatus="1" fi done if [ -z "${lpids}" -a ! -f "${pidfile}" ]; then return 3 else echo "${lpids}" return "${exitstatus}" fi } ################################################################################ # statusproc() # # Usage: statusproc [-p pidfile] pathname # # # # Purpose: This function prints the status of a particular daemon to stdout # # # # Inputs: -p pidfile, use the specified pidfile instead of pidof # # pathname, path to the specified program # # # # Return values: # # 0 - Status printed # # 1 - Input error. The daemon to check was not specified. # ################################################################################ statusproc() { local pidfile local pidlist if [ "${#}" = "0" ]; then echo "Usage: statusproc [-p pidfle] {program}" exit 1 fi # Process arguments while true; do case "${1}" in -p) pidfile="${2}" shift 2 ;; *) if [ -n "${2}" ]; then echo "Too many arguments" return 1 else break fi ;; esac done if [ -n "${pidfile}" ]; then pidlist=`pidofproc -p "${pidfile}" $@` else pidlist=`pidofproc $@` fi # Trim trailing blanks pidlist=`echo "${pidlist}" | sed -r 's/ +$//'` base="${1##*/}" if [ -n "${pidlist}" ]; then /bin/echo -e "${INFO}${base} is running with Process" \ "ID(s) ${pidlist}.${NORMAL}" else if [ -n "${base}" -a -e "/run/${base}.pid" ]; then /bin/echo -e "${WARNING}${1} is not running but" \ "/run/${base}.pid exists.${NORMAL}" else if [ -n "${pidfile}" -a -e "${pidfile}" ]; then /bin/echo -e "${WARNING}${1} is not running" \ "but ${pidfile} exists.${NORMAL}" else /bin/echo -e "${INFO}${1} is not running.${NORMAL}" fi fi fi } ################################################################################ # timespec() # # # # Purpose: An internal utility function to format a timestamp # # a boot log file. Sets the STAMP variable. # # # # Return value: Not used # ################################################################################ timespec() { STAMP="$(echo `date +"%b %d %T %:z"` `hostname`) " return 0 } ################################################################################ # log_success_msg() # # Usage: log_success_msg ["message"] # # # # Purpose: Print a successful status message to the screen and # # a boot log file. 
# # # # Inputs: $@ - Message # # # # Return values: Not used # ################################################################################ log_success_msg() { if [ "x$HEADLESS" != "xyes" ] then /bin/echo -n -e "${BMPREFIX}${@}" /bin/echo -e "${CURS_ZERO}${SUCCESS_PREFIX}${SET_COL}${SUCCESS_SUFFIX}" else logmessage=`echo "${@}" | sed 's/\\\033[^a-zA-Z]*.//g'` /bin/echo -e "${logmessage} OK" fi # Strip non-printable characters from log file logmessage=`echo "${@}" | sed 's/\\\033[^a-zA-Z]*.//g'` timespec /bin/echo -e "${STAMP} ${logmessage} OK" >> ${BOOTLOG} return 0 } log_success_msg2() { if [ "x$HEADLESS" != "xyes" ] then /bin/echo -n -e "${BMPREFIX}${@}" /bin/echo -e "${CURS_ZERO}${SUCCESS_PREFIX}${SET_COL}${SUCCESS_SUFFIX}" else echo " OK" fi echo " OK" >> ${BOOTLOG} return 0 } ################################################################################ # log_failure_msg() # # Usage: log_failure_msg ["message"] # # # # Purpose: Print a failure status message to the screen and # # a boot log file. # # # # Inputs: $@ - Message # # # # Return values: Not used # ################################################################################ log_failure_msg() { if [ "x$HEADLESS" != "xyes" ] then /bin/echo -n -e "${BMPREFIX}${@}" /bin/echo -e "${CURS_ZERO}${FAILURE_PREFIX}${SET_COL}${FAILURE_SUFFIX}" else logmessage=`echo "${@}" | sed 's/\\\033[^a-zA-Z]*.//g'` /bin/echo -e "${logmessage} FAIL" fi # Strip non-printable characters from log file timespec logmessage=`echo "${@}" | sed 's/\\\033[^a-zA-Z]*.//g'` /bin/echo -e "${STAMP} ${logmessage} FAIL" >> ${BOOTLOG} return 0 } log_failure_msg2() { if [ "x$HEADLESS" != "xyes" ] then /bin/echo -n -e "${BMPREFIX}${@}" /bin/echo -e "${CURS_ZERO}${FAILURE_PREFIX}${SET_COL}${FAILURE_SUFFIX}" else echo "FAIL" fi echo "FAIL" >> ${BOOTLOG} return 0 } ################################################################################ # log_warning_msg() # # Usage: log_warning_msg ["message"] # # # # Purpose: Print a warning status message to the screen and # # a boot log file. # # # # Return values: Not used # ################################################################################ log_warning_msg() { if [ "x$HEADLESS" != "xyes" ] then /bin/echo -n -e "${BMPREFIX}${@}" /bin/echo -e "${CURS_ZERO}${WARNING_PREFIX}${SET_COL}${WARNING_SUFFIX}" else logmessage=`echo "${@}" | sed 's/\\\033[^a-zA-Z]*.//g'` /bin/echo -e "${logmessage} WARN" fi # Strip non-printable characters from log file logmessage=`echo "${@}" | sed 's/\\\033[^a-zA-Z]*.//g'` timespec /bin/echo -e "${STAMP} ${logmessage} WARN" >> ${BOOTLOG} return 0 } log_skip_msg() { if [ "x$HEADLESS" != "xyes" ] then /bin/echo -n -e "${BMPREFIX}${@}" /bin/echo -e "${CURS_ZERO}${SKIP_PREFIX}${SET_COL}${SKIP_SUFFIX}" else logmessage=`echo "${@}" | sed 's/\\\033[^a-zA-Z]*.//g'` /bin/echo "SKIP" fi # Strip non-printable characters from log file logmessage=`echo "${@}" | sed 's/\\\033[^a-zA-Z]*.//g'` /bin/echo "SKIP" >> ${BOOTLOG} return 0 } ################################################################################ # log_info_msg() # # Usage: log_info_msg message # # # # Purpose: Print an information message to the screen and # # a boot log file. Does not print a trailing newline character. 
# # # # Return values: Not used # ################################################################################ log_info_msg() { if [ "x$HEADLESS" != "xyes" ] then /bin/echo -n -e "${BMPREFIX}${@}" else logmessage=`echo "${@}" | sed 's/\\\033[^a-zA-Z]*.//g'` /bin/echo -n -e "${logmessage}" fi # Strip non-printable characters from log file logmessage=`echo "${@}" | sed 's/\\\033[^a-zA-Z]*.//g'` timespec /bin/echo -n -e "${STAMP} ${logmessage}" >> ${BOOTLOG} return 0 } log_info_msg2() { if [ "x$HEADLESS" != "xyes" ] then /bin/echo -n -e "${@}" else logmessage=`echo "${@}" | sed 's/\\\033[^a-zA-Z]*.//g'` /bin/echo -n -e "${logmessage}" fi # Strip non-printable characters from log file logmessage=`echo "${@}" | sed 's/\\\033[^a-zA-Z]*.//g'` /bin/echo -n -e "${logmessage}" >> ${BOOTLOG} return 0 } ################################################################################ # evaluate_retval() # # Usage: Evaluate a return value and print success or failure as appropriate # # # # Purpose: Convenience function to terminate an info message # # # # Return values: Not used # ################################################################################ evaluate_retval() { local error_value="${?}" if [ ${error_value} = 0 ]; then log_success_msg2 else log_failure_msg2 fi } ################################################################################ # check_signal() # # Usage: check_signal [ -{signal} ] # # # # Purpose: Check for a valid signal. This is not defined by any LSB draft, # # however, it is required to check the signals to determine if the # # signals chosen are invalid arguments to the other functions. # # # # Inputs: Accepts a single string value in the form of -{signal} # # # # Return values: # # 0 - Success (signal is valid # # 1 - Signal is not valid # ################################################################################ check_signal() { local valsig # Add error handling for invalid signals valsig=" -ALRM -HUP -INT -KILL -PIPE -POLL -PROF -TERM -USR1 -USR2" valsig="${valsig} -VTALRM -STKFLT -PWR -WINCH -CHLD -URG -TSTP -TTIN" valsig="${valsig} -TTOU -STOP -CONT -ABRT -FPE -ILL -QUIT -SEGV -TRAP" valsig="${valsig} -SYS -EMT -BUS -XCPU -XFSZ -0 -1 -2 -3 -4 -5 -6 -8 -9" valsig="${valsig} -11 -13 -14 -15 " echo "${valsig}" | grep -- " ${1} " > /dev/null if [ "${?}" -eq "0" ]; then return 0 else return 1 fi } ################################################################################ # check_sig_type() # # Usage: check_signal [ -{signal} | {signal} ] # # # # Purpose: Check if signal is a program termination signal or a control signal # # This is not defined by any LSB draft, however, it is required to # # check the signals to determine if they are intended to end a # # program or simply to control it. 
# # # # Inputs: Accepts a single string value in the form or -{signal} or {signal} # # # # Return values: # # 0 - Signal is used for program termination # # 1 - Signal is used for program control # ################################################################################ check_sig_type() { local valsig # The list of termination signals (limited to generally used items) valsig=" -ALRM -INT -KILL -TERM -PWR -STOP -ABRT -QUIT -2 -3 -6 -9 -14 -15 " echo "${valsig}" | grep -- " ${1} " > /dev/null if [ "${?}" -eq "0" ]; then return 0 else return 1 fi } ################################################################################ # wait_for_user() # # # # Purpose: Wait for the user to respond if not a headless system # # # ################################################################################ wait_for_user() { # Wait for the user by default [ "${HEADLESS=0}" = "0" ] && read ENTER return 0 } ################################################################################ # is_true() # # # # Purpose: Utility to test if a variable is true | yes | 1 # # # ################################################################################ is_true() { [ "$1" = "1" ] || [ "$1" = "yes" ] || [ "$1" = "true" ] || [ "$1" = "y" ] || [ "$1" = "t" ] } # End /lib/lsb/init-functions
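Any custom boot script that sources /lib/lsb/init-functions can use these helpers in the usual pattern: announce the action, run it, then let evaluate_retval print OK or FAIL. A minimal sketch, in which /usr/sbin/foo is only a placeholder daemon path:

. /lib/lsb/init-functions

log_info_msg "Starting foo daemon..."
start_daemon /usr/sbin/foo    # killproc and statusproc take the same path on stop/status
evaluate_retval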
#!/bin/sh ######################################################################## # Begin mountvirtfs # # Description : Ensure proc, sysfs, run, and dev are mounted # # Authors : Gerard Beekmans - gerard AT linuxfromscratch D0T org # DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # Xi Ruoyao - xry111@xry111.site # # Version : LFS 12.0 # ######################################################################## ### BEGIN INIT INFO # Provides: mountvirtfs # Required-Start: $first # Should-Start: # Required-Stop: # Should-Stop: # Default-Start: S # Default-Stop: # Short-Description: Mounts various special fs needed at start # Description: Mounts /sys and /proc virtual (kernel) filesystems. # Mounts /run (tmpfs) and /dev (devtmpfs). # This is done only if they are not already mounted. # with the kernel config proposed in the book, dev # should be automatically mounted by the kernel. # X-LFS-Provided-By: LFS ### END INIT INFO . /lib/lsb/init-functions case "${1}" in start) # Make sure /run is available before logging any messages if ! mountpoint /run >/dev/null; then mount /run || failed=1 fi mkdir -p /run/lock chmod 1777 /run/lock log_info_msg "Mounting virtual file systems: ${INFO}/run" if ! mountpoint /proc >/dev/null; then log_info_msg2 " ${INFO}/proc" mount -o nosuid,noexec,nodev /proc || failed=1 fi if ! mountpoint /sys >/dev/null; then log_info_msg2 " ${INFO}/sys" mount -o nosuid,noexec,nodev /sys || failed=1 fi if ! mountpoint /dev >/dev/null; then log_info_msg2 " ${INFO}/dev" mount -o mode=0755,nosuid /dev || failed=1 fi mkdir -p /dev/shm log_info_msg2 " ${INFO}/dev/shm" mount -o nosuid,nodev /dev/shm || failed=1 mkdir -p /sys/fs/cgroup log_info_msg2 " ${INFO}/sys/fs/cgroup" mount -o nosuid,noexec,nodev /sys/fs/cgroup || failed=1 (exit ${failed}) evaluate_retval if [ "${failed}" = 1 ]; then exit 1 fi log_info_msg "Create symlinks in /dev targeting /proc: ${INFO}/dev/stdin" ln -sf /proc/self/fd/0 /dev/stdin || failed=1 log_info_msg2 " ${INFO}/dev/stdout" ln -sf /proc/self/fd/1 /dev/stdout || failed=1 log_info_msg2 " ${INFO}/dev/stderr" ln -sf /proc/self/fd/2 /dev/stderr || failed=1 log_info_msg2 " ${INFO}/dev/fd" ln -sfn /proc/self/fd /dev/fd || failed=1 if [ -e /proc/kcore ]; then log_info_msg2 " ${INFO}/dev/core" ln -sf /proc/kcore /dev/core || failed=1 fi (exit ${failed}) evaluate_retval exit $failed ;; *) echo "Usage: ${0} {start}" exit 1 ;; esac # End mountvirtfs
#!/bin/sh ######################################################################## # Begin modules # # Description : Module auto-loading script # # Authors : Zack Winkles # DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # # Version : LFS 7.0 # ######################################################################## ### BEGIN INIT INFO # Provides: modules # Required-Start: mountvirtfs # Should-Start: # Required-Stop: # Should-Stop: # Default-Start: S # Default-Stop: # Short-Description: Loads required modules. # Description: Loads modules listed in /etc/sysconfig/modules. # X-LFS-Provided-By: LFS ### END INIT INFO # Assure that the kernel has module support. [ -e /proc/modules ] || exit 0 . /lib/lsb/init-functions case "${1}" in start) # Exit if there's no modules file or there are no # valid entries [ -r /etc/sysconfig/modules ] || exit 0 grep -E -qv '^($|#)' /etc/sysconfig/modules || exit 0 log_info_msg "Loading modules:" # Only try to load modules if the user has actually given us # some modules to load. while read module args; do # Ignore comments and blank lines. case "$module" in ""|"#"*) continue ;; esac # Attempt to load the module, passing any arguments provided. modprobe ${module} ${args} >/dev/null # Print the module name if successful, otherwise take note. if [ $? -eq 0 ]; then log_info_msg2 " ${module}" else failedmod="${failedmod} ${module}" fi done < /etc/sysconfig/modules # Print a message about successfully loaded modules on the correct line. log_success_msg2 # Print a failure message with a list of any modules that # may have failed to load. if [ -n "${failedmod}" ]; then log_failure_msg "Failed to load modules:${failedmod}" exit 1 fi ;; *) echo "Usage: ${0} {start}" exit 1 ;; esac exit 0 # End modules
#!/bin/sh ######################################################################## # Begin udev # # Description : Udev cold-plugging script # # Authors : Zack Winkles, Alexander E. Patrakov # DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # Xi Ruoyao - xry111@xry111.site # # Version : LFS 12.0 # ######################################################################## ### BEGIN INIT INFO # Provides: udev $time # Required-Start: localnet # Should-Start: modules # Required-Stop: # Should-Stop: # Default-Start: S # Default-Stop: # Short-Description: Populates /dev with device nodes. # Description: Mounts a tempfs on /dev and starts the udevd daemon. # Device nodes are created as defined by udev. # X-LFS-Provided-By: LFS ### END INIT INFO . /lib/lsb/init-functions case "${1}" in start) log_info_msg "Populating /dev with device nodes... " if ! grep -q '[[:space:]]sysfs' /proc/mounts; then log_failure_msg2 msg="FAILURE:\n\nUnable to create " msg="${msg}devices without a SysFS filesystem\n\n" msg="${msg}After you press Enter, this system " msg="${msg}will be halted and powered off.\n\n" log_info_msg "$msg" log_info_msg "Press Enter to continue..." wait_for_user /etc/rc.d/init.d/halt start fi # Start the udev daemon to continually watch for, and act on, # uevents SYSTEMD_LOG_TARGET=kmsg /sbin/udevd --daemon # Now traverse /sys in order to "coldplug" devices that have # already been discovered /bin/udevadm trigger --action=add --type=subsystems /bin/udevadm trigger --action=add --type=devices /bin/udevadm trigger --action=change --type=devices # Now wait for udevd to process the uevents we triggered if ! is_true "$OMIT_UDEV_SETTLE"; then /bin/udevadm settle fi # If any LVM based partitions are on the system, ensure they # are activated so they can be used. if [ -x /sbin/vgchange ]; then /sbin/vgchange -a y >/dev/null; fi log_success_msg2 ;; *) echo "Usage ${0} {start}" exit 1 ;; esac exit 0 # End udev
#!/bin/sh ######################################################################## # Begin swap # # Description : Swap Control Script # # Authors : Gerard Beekmans - gerard AT linuxfromscratch D0T org # DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # # Version : LFS 7.0 # ######################################################################## ### BEGIN INIT INFO # Provides: swap # Required-Start: udev # Should-Start: modules # Required-Stop: localnet # Should-Stop: $local_fs # Default-Start: S # Default-Stop: 0 6 # Short-Description: Activates and deactivates swap partitions. # Description: Activates and deactivates swap partitions defined in # /etc/fstab. # X-LFS-Provided-By: LFS ### END INIT INFO . /lib/lsb/init-functions case "${1}" in start) log_info_msg "Activating all swap files/partitions..." swapon -a evaluate_retval ;; stop) log_info_msg "Deactivating all swap files/partitions..." swapoff -a evaluate_retval ;; restart) ${0} stop sleep 1 ${0} start ;; status) log_success_msg "Retrieving swap status." swapon -s ;; *) echo "Usage: ${0} {start|stop|restart|status}" exit 1 ;; esac exit 0 # End swap
#!/bin/sh ######################################################################## # Begin setclock # # Description : Setting Linux Clock # # Authors : Gerard Beekmans - gerard AT linuxfromscratch D0T org # DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # # Version : LFS 7.0 # ######################################################################## ### BEGIN INIT INFO # Provides: # Required-Start: # Should-Start: modules # Required-Stop: # Should-Stop: $syslog # Default-Start: S # Default-Stop: # Short-Description: Stores and restores time from the hardware clock # Description: On boot, system time is obtained from hwclock. The # hardware clock can also be set on shutdown. # X-LFS-Provided-By: LFS ### END INIT INFO . /lib/lsb/init-functions [ -r /etc/sysconfig/clock ] && . /etc/sysconfig/clock case "${UTC}" in yes|true|1) CLOCKPARAMS="${CLOCKPARAMS} --utc" ;; no|false|0) CLOCKPARAMS="${CLOCKPARAMS} --localtime" ;; esac case ${1} in start) hwclock --hctosys ${CLOCKPARAMS} >/dev/null ;; stop) log_info_msg "Setting hardware clock..." hwclock --systohc ${CLOCKPARAMS} >/dev/null evaluate_retval ;; *) echo "Usage: ${0} {start|stop}" exit 1 ;; esac exit 0
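setclock sources /etc/sysconfig/clock to learn whether the hardware clock keeps UTC or local time, and to pick up any extra hwclock options. A minimal example file (the values shown are only an illustration):

# /etc/sysconfig/clock
UTC=1                # use 0, no, or false if the hardware clock keeps local time
# CLOCKPARAMS=       # optional extra parameters passed to hwclock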
#!/bin/sh ######################################################################## # Begin checkfs # # Description : File System Check # # Authors : Gerard Beekmans - gerard AT linuxfromscratch D0T org # A. Luebke - luebke@users.sourceforge.net # DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # # Version : LFS 7.0 # # Based on checkfs script from LFS-3.1 and earlier. # # From man fsck # 0 - No errors # 1 - File system errors corrected # 2 - System should be rebooted # 4 - File system errors left uncorrected # 8 - Operational error # 16 - Usage or syntax error # 32 - Fsck canceled by user request # 128 - Shared library error # ######################################################################### ### BEGIN INIT INFO # Provides: checkfs # Required-Start: udev swap # Should-Start: # Required-Stop: # Should-Stop: # Default-Start: S # Default-Stop: # Short-Description: Checks local filesystems before mounting. # Description: Checks local filesystems before mounting. # X-LFS-Provided-By: LFS ### END INIT INFO . /lib/lsb/init-functions case "${1}" in start) if [ -f /fastboot ]; then msg="/fastboot found, will omit " msg="${msg} file system checks as requested.\n" log_info_msg "${msg}" exit 0 fi log_info_msg "Mounting root file system in read-only mode... " mount -n -o remount,ro / >/dev/null if [ ${?} != 0 ]; then log_failure_msg2 msg="\n\nCannot check root " msg="${msg}filesystem because it could not be mounted " msg="${msg}in read-only mode.\n\n" msg="${msg}After you press Enter, this system will be " msg="${msg}halted and powered off.\n\n" log_failure_msg "${msg}" log_info_msg "Press Enter to continue..." wait_for_user /etc/rc.d/init.d/halt start else log_success_msg2 fi if [ -f /forcefsck ]; then msg="/forcefsck found, forcing file" msg="${msg} system checks as requested." log_success_msg "$msg" options="-f" else options="" fi log_info_msg "Checking file systems..." # Note: -a option used to be -p; but this fails e.g. on fsck.minix if is_true "$VERBOSE_FSCK"; then fsck ${options} -a -A -C -T else fsck ${options} -a -A -C -T >/dev/null fi error_value=${?} if [ "${error_value}" = 0 ]; then log_success_msg2 fi if [ "${error_value}" = 1 ]; then msg="\nWARNING:\n\nFile system errors " msg="${msg}were found and have been corrected.\n" msg="${msg} You may want to double-check that " msg="${msg}everything was fixed properly." log_warning_msg "$msg" fi if [ "${error_value}" = 2 -o "${error_value}" = 3 ]; then msg="\nWARNING:\n\nFile system errors " msg="${msg}were found and have been " msg="${msg}corrected, but the nature of the " msg="${msg}errors require this system to be rebooted.\n\n" msg="${msg}After you press enter, " msg="${msg}this system will be rebooted\n\n" log_failure_msg "$msg" log_info_msg "Press Enter to continue..." wait_for_user reboot -f fi if [ "${error_value}" -gt 3 -a "${error_value}" -lt 16 ]; then msg="\nFAILURE:\n\nFile system errors " msg="${msg}were encountered that could not be " msg="${msg}fixed automatically.\nThis system " msg="${msg}cannot continue to boot and will " msg="${msg}therefore be halted until those " msg="${msg}errors are fixed manually by a " msg="${msg}System Administrator.\n\n" msg="${msg}After you press Enter, this system will be " msg="${msg}halted and powered off.\n\n" log_failure_msg "$msg" log_info_msg "Press Enter to continue..." wait_for_user /etc/rc.d/init.d/halt start fi if [ "${error_value}" -ge 16 ]; then msg="FAILURE:\n\nUnexpected failure " msg="${msg}running fsck. 
Exited with error " msg="${msg} code: ${error_value}.\n" log_info_msg $msg exit ${error_value} fi exit 0 ;; *) echo "Usage: ${0} {start}" exit 1 ;; esac # End checkfs
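checkfs honours two flag files in the root directory: /fastboot makes it skip the file system checks, and /forcefsck makes it pass -f to fsck for a full check. Either can be created by hand before rebooting; mountfs removes both on the next boot:

touch /forcefsck     # force a full check on the next boot
touch /fastboot      # or: skip the checks on the next boot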
#!/bin/sh ######################################################################## # Begin mountfs # # Description : File System Mount Script # # Authors : Gerard Beekmans - gerard AT linuxfromscratch D0T org # DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # # Version : LFS 7.0 # ######################################################################## ### BEGIN INIT INFO # Provides: $local_fs # Required-Start: udev checkfs # Should-Start: modules # Required-Stop: localnet # Should-Stop: # Default-Start: S # Default-Stop: 0 6 # Short-Description: Mounts/unmounts local filesystems defined in /etc/fstab. # Description: Remounts root filesystem read/write and mounts all # remaining local filesystems defined in /etc/fstab on # start. Remounts root filesystem read-only and unmounts # remaining filesystems on stop. # X-LFS-Provided-By: LFS ### END INIT INFO . /lib/lsb/init-functions case "${1}" in start) log_info_msg "Remounting root file system in read-write mode..." mount --options remount,rw / >/dev/null evaluate_retval # Remove fsck-related file system watermarks. rm -f /fastboot /forcefsck # Make sure /dev/pts exists mkdir -p /dev/pts # This will mount all filesystems that do not have _netdev in # their option list. _netdev denotes a network filesystem. log_info_msg "Mounting remaining file systems..." failed=0 mount --all --test-opts no_netdev >/dev/null || failed=1 evaluate_retval exit $failed ;; stop) # Don't unmount virtual file systems like /run log_info_msg "Unmounting all other currently mounted file systems..." # Ensure any loop devices are removed losetup -D umount --all --detach-loop --read-only \ --types notmpfs,nosysfs,nodevtmpfs,noproc,nodevpts >/dev/null evaluate_retval # Make sure / is mounted read only (umount bug) mount --options remount,ro / # Make all LVM volume groups unavailable, if appropriate # This fails if swap or / are on an LVM partition #if [ -x /sbin/vgchange ]; then /sbin/vgchange -an > /dev/null; fi if [ -r /etc/mdadm.conf ]; then log_info_msg "Mark arrays as clean..." mdadm --wait-clean --scan evaluate_retval fi ;; *) echo "Usage: ${0} {start|stop}" exit 1 ;; esac # End mountfs
#!/bin/sh ######################################################################## # Begin udev_retry # # Description : Udev cold-plugging script (retry) # # Authors : Alexander E. Patrakov # DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # Bryan Kadzban - # # Version : LFS 7.0 # ######################################################################## ### BEGIN INIT INFO # Provides: udev_retry # Required-Start: udev # Should-Start: $local_fs cleanfs # Required-Stop: # Should-Stop: # Default-Start: S # Default-Stop: # Short-Description: Replays failed uevents and creates additional devices. # Description: Replays any failed uevents that were skipped due to # slow hardware initialization, and creates those needed # device nodes # X-LFS-Provided-By: LFS ### END INIT INFO . /lib/lsb/init-functions case "${1}" in start) log_info_msg "Retrying failed uevents, if any..." rundir=/run/udev # From Debian: "copy the rules generated before / was mounted # read-write": for file in ${rundir}/tmp-rules--*; do dest=${file##*tmp-rules--} [ "$dest" = '*' ] && break cat $file >> /etc/udev/rules.d/$dest rm -f $file done # Re-trigger the uevents that may have failed, # in hope they will succeed now /bin/sed -e 's/#.*$//' /etc/sysconfig/udev_retry | /bin/grep -v '^$' | \ while read line ; do for subsystem in $line ; do /bin/udevadm trigger --subsystem-match=$subsystem --action=add done done # Now wait for udevd to process the uevents we triggered if ! is_true "$OMIT_UDEV_RETRY_SETTLE"; then /bin/udevadm settle fi evaluate_retval ;; *) echo "Usage ${0} {start}" exit 1 ;; esac exit 0 # End udev_retry
#!/bin/sh ######################################################################## # Begin cleanfs # # Description : Clean file system # # Authors : Gerard Beekmans - gerard AT linuxfromscratch D0T org # DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # # Version : LFS 7.0 # ######################################################################## ### BEGIN INIT INFO # Provides: cleanfs # Required-Start: $local_fs # Should-Start: # Required-Stop: # Should-Stop: # Default-Start: S # Default-Stop: # Short-Description: Cleans temporary directories early in the boot process. # Description: Cleans temporary directories /run, /var/lock, and # optionally, /tmp. cleanfs also creates /run/utmp # and any files defined in /etc/sysconfig/createfiles. # X-LFS-Provided-By: LFS ### END INIT INFO . /lib/lsb/init-functions # Function to create files/directory on boot. create_files() { # Input to file descriptor 9 and output to stdin (redirection) exec 9>&0 < /etc/sysconfig/createfiles while read name type perm usr grp dtype maj min junk do # Ignore comments and blank lines. case "${name}" in ""|\#*) continue ;; esac # Ignore existing files. if [ ! -e "${name}" ]; then # Create stuff based on its type. case "${type}" in dir) mkdir "${name}" ;; file) :> "${name}" ;; dev) case "${dtype}" in char) mknod "${name}" c ${maj} ${min} ;; block) mknod "${name}" b ${maj} ${min} ;; pipe) mknod "${name}" p ;; *) log_warning_msg "\nUnknown device type: ${dtype}" ;; esac ;; *) log_warning_msg "\nUnknown type: ${type}" continue ;; esac # Set up the permissions, too. chown ${usr}:${grp} "${name}" chmod ${perm} "${name}" fi done # Close file descriptor 9 (end redirection) exec 0>&9 9>&- return 0 } case "${1}" in start) log_info_msg "Cleaning file systems:" if [ "${SKIPTMPCLEAN}" = "" ]; then log_info_msg2 " /tmp" cd /tmp && find . -xdev -mindepth 1 ! -name lost+found -delete || failed=1 fi > /run/utmp if grep -q '^utmp:' /etc/group ; then chmod 664 /run/utmp chgrp utmp /run/utmp fi (exit ${failed}) evaluate_retval if grep -E -qv '^(#|$)' /etc/sysconfig/createfiles 2>/dev/null; then log_info_msg "Creating files and directories... " create_files # Always returns 0 evaluate_retval fi exit $failed ;; *) echo "Usage: ${0} {start}" exit 1 ;; esac # End cleanfs
#!/bin/sh ######################################################################## # Begin console # # Description : Sets keymap and screen font # # Authors : Gerard Beekmans - gerard AT linuxfromscratch D0T org # Alexander E. Patrakov # DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # # Version : LFS 7.0 # ######################################################################## ### BEGIN INIT INFO # Provides: console # Required-Start: $local_fs # Should-Start: udev_retry # Required-Stop: # Should-Stop: # Default-Start: S # Default-Stop: # Short-Description: Sets up a localised console. # Description: Sets up fonts and language settings for the user's # local as defined by /etc/sysconfig/console. # X-LFS-Provided-By: LFS ### END INIT INFO . /lib/lsb/init-functions # Native English speakers probably don't have /etc/sysconfig/console at all [ -r /etc/sysconfig/console ] && . /etc/sysconfig/console failed=0 case "${1}" in start) # See if we need to do anything if [ -z "${KEYMAP}" ] && [ -z "${KEYMAP_CORRECTIONS}" ] && [ -z "${FONT}" ] && [ -z "${LEGACY_CHARSET}" ] && ! is_true "${UNICODE}"; then exit 0 fi # There should be no bogus failures below this line! log_info_msg "Setting up Linux console..." # Figure out if a framebuffer console is used [ -d /sys/class/graphics/fbcon ] && use_fb=1 || use_fb=0 # Figure out the command to set the console into the # desired mode is_true "${UNICODE}" && MODE_COMMAND="echo -en '\033%G' && kbd_mode -u" || MODE_COMMAND="echo -en '\033%@\033(K' && kbd_mode -a" # On framebuffer consoles, font has to be set for each vt in # UTF-8 mode. This doesn't hurt in non-UTF-8 mode also. ! is_true "${use_fb}" || [ -z "${FONT}" ] || MODE_COMMAND="${MODE_COMMAND} && setfont ${FONT}" # Apply that command to all consoles mentioned in # /etc/inittab. Important: in the UTF-8 mode this should # happen before setfont, otherwise a kernel bug will # show up and the unicode map of the font will not be # used. for TTY in `grep '^[^#].*respawn:/sbin/agetty' /etc/inittab | grep -o '\btty[[:digit:]]*\b'` do openvt -f -w -c ${TTY#tty} -- \ /bin/sh -c "${MODE_COMMAND}" || failed=1 done # Set the font (if not already set above) and the keymap [ "${use_fb}" == "1" ] || [ -z "${FONT}" ] || setfont $FONT || failed=1 [ -z "${KEYMAP}" ] || loadkeys ${KEYMAP} >/dev/null 2>&1 || failed=1 [ -z "${KEYMAP_CORRECTIONS}" ] || loadkeys ${KEYMAP_CORRECTIONS} >/dev/null 2>&1 || failed=1 # Convert the keymap from $LEGACY_CHARSET to UTF-8 [ -z "$LEGACY_CHARSET" ] || dumpkeys -c "$LEGACY_CHARSET" | loadkeys -u >/dev/null 2>&1 || failed=1 # If any of the commands above failed, the trap at the # top would set $failed to 1 ( exit $failed ) evaluate_retval exit $failed ;; *) echo "Usage: ${0} {start}" exit 1 ;; esac # End console
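Users who need a non-default keymap or font create /etc/sysconfig/console; the console script exits immediately if the file defines nothing. The variable names below are the ones the script reads, while the values are purely illustrative:

# /etc/sysconfig/console (example values)
UNICODE="1"
KEYMAP="de-latin1"
FONT="lat1-16 -m 8859-1"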
#!/bin/sh ######################################################################## # Begin localnet # # Description : Loopback device # # Authors : Gerard Beekmans - gerard AT linuxfromscratch D0T org # DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # # Version : LFS 7.0 # ######################################################################## ### BEGIN INIT INFO # Provides: localnet # Required-Start: mountvirtfs # Should-Start: modules # Required-Stop: # Should-Stop: # Default-Start: S # Default-Stop: 0 6 # Short-Description: Starts the local network. # Description: Sets the hostname of the machine and starts the # loopback interface. # X-LFS-Provided-By: LFS ### END INIT INFO . /lib/lsb/init-functions [ -r /etc/sysconfig/network ] && . /etc/sysconfig/network [ -r /etc/hostname ] && HOSTNAME=`cat /etc/hostname` case "${1}" in start) log_info_msg "Bringing up the loopback interface..." ip addr add 127.0.0.1/8 label lo dev lo ip link set lo up evaluate_retval log_info_msg "Setting hostname to ${HOSTNAME}..." hostname ${HOSTNAME} evaluate_retval ;; stop) log_info_msg "Bringing down the loopback interface..." ip link set lo down evaluate_retval ;; restart) ${0} stop sleep 1 ${0} start ;; status) echo "Hostname is: $(hostname)" ip link show lo ;; *) echo "Usage: ${0} {start|stop|restart|status}" exit 1 ;; esac exit 0 # End localnet
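localnet obtains the hostname from /etc/hostname (HOSTNAME may also be set in /etc/sysconfig/network). Creating the file is a one-liner; the name "lfs" is only an example:

echo "lfs" > /etc/hostname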
#!/bin/sh ######################################################################## # Begin sysctl # # Description : File uses /etc/sysctl.conf to set kernel runtime # parameters # # Authors : Nathan Coulson (nathan AT linuxfromscratch D0T org) # Matthew Burgress (matthew AT linuxfromscratch D0T org) # DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # # Version : LFS 7.0 # ######################################################################## ### BEGIN INIT INFO # Provides: sysctl # Required-Start: mountvirtfs # Should-Start: console # Required-Stop: # Should-Stop: # Default-Start: S # Default-Stop: # Short-Description: Makes changes to the proc filesystem # Description: Makes changes to the proc filesystem as defined in # /etc/sysctl.conf. See 'man sysctl(8)'. # X-LFS-Provided-By: LFS ### END INIT INFO . /lib/lsb/init-functions case "${1}" in start) if [ -f "/etc/sysctl.conf" ]; then log_info_msg "Setting kernel runtime parameters..." sysctl -q -p evaluate_retval fi ;; status) sysctl -a ;; *) echo "Usage: ${0} {start|status}" exit 1 ;; esac exit 0 # End sysctl
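The sysctl script simply applies whatever key = value pairs are present in /etc/sysctl.conf by running sysctl -q -p. A single illustrative entry (the chosen parameter and value are only an example, not a recommendation):

# /etc/sysctl.conf
# Example only: enable packet forwarding between interfaces
net.ipv4.ip_forward = 1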
#!/bin/sh ######################################################################## # Begin sysklogd # # Description : Sysklogd loader # # Authors : Gerard Beekmans - gerard AT linuxfromscratch D0T org # DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org LFS12.1 # Remove kernel log daemon. The functionality has been # merged with syslogd. # # Version : LFS 7.0 # ######################################################################## ### BEGIN INIT INFO # Provides: $syslog # Required-Start: $first localnet # Should-Start: # Required-Stop: $local_fs # Should-Stop: sendsignals # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Starts system log daemon. # Description: Starts system log daemon. # /etc/fstab. # X-LFS-Provided-By: LFS ### END INIT INFO . /lib/lsb/init-functions case "${1}" in start) log_info_msg "Starting system log daemon..." parms=${SYSKLOGD_PARMS-'-m 0'} start_daemon /sbin/syslogd $parms evaluate_retval ;; stop) log_info_msg "Stopping system log daemon..." killproc /sbin/syslogd evaluate_retval ;; reload) log_info_msg "Reloading system log daemon config file..." pid=`pidofproc syslogd` kill -HUP "${pid}" evaluate_retval ;; restart) ${0} stop sleep 1 ${0} start ;; status) statusproc /sbin/syslogd ;; *) echo "Usage: ${0} {start|stop|reload|restart|status}" exit 1 ;; esac exit 0 # End sysklogd
#!/bin/sh ######################################################################## # Begin network # # Description : Network Control Script # # Authors : Gerard Beekmans - gerard AT linuxfromscratch D0T org # Nathan Coulson - nathan AT linuxfromscratch D0T org # Kevin P. Fleming - kpfleming@linuxfromscratch.org # DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # # Version : LFS 7.0 # ######################################################################## ### BEGIN INIT INFO # Provides: $network # Required-Start: $local_fs localnet swap # Should-Start: $syslog firewalld iptables nftables # Required-Stop: $local_fs localnet swap # Should-Stop: $syslog firewalld iptables nftables # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Starts and configures network interfaces. # Description: Starts and configures network interfaces. # X-LFS-Provided-By: LFS ### END INIT INFO case "${1}" in start) # if the default route exists, network is already configured if ip route | grep -q "^default"; then return 0; fi # Start all network interfaces for file in /etc/sysconfig/ifconfig.* do interface=${file##*/ifconfig.} # Skip if $file is * (because nothing was found) if [ "${interface}" = "*" ]; then continue; fi /sbin/ifup ${interface} done ;; stop) # Unmount any network mounted file systems umount --all --force --types nfs,cifs,nfs4 # Reverse list net_files="" for file in /etc/sysconfig/ifconfig.* do net_files="${file} ${net_files}" done # Stop all network interfaces for file in ${net_files} do interface=${file##*/ifconfig.} # Skip if $file is * (because nothing was found) if [ "${interface}" = "*" ]; then continue; fi # See if interface exists if [ ! -e /sys/class/net/$interface ]; then continue; fi # Is interface UP? ip link show $interface 2>/dev/null | grep -q "state UP" if [ $? -ne 0 ]; then continue; fi /sbin/ifdown ${interface} done ;; restart) ${0} stop sleep 1 ${0} start ;; *) echo "Usage: ${0} {start|stop|restart}" exit 1 ;; esac exit 0 # End network
#!/bin/sh ######################################################################## # Begin sendsignals # # Description : Sendsignals Script # # Authors : Gerard Beekmans - gerard AT linuxfromscratch D0T org # DJ Lucas - dj AT linuxfromscratch D0T org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # # Version : LFS 7.0 # ######################################################################## ### BEGIN INIT INFO # Provides: sendsignals # Required-Start: # Should-Start: # Required-Stop: $local_fs swap localnet # Should-Stop: # Default-Start: # Default-Stop: 0 6 # Short-Description: Attempts to kill remaining processes. # Description: Attempts to kill remaining processes. # X-LFS-Provided-By: LFS ### END INIT INFO . /lib/lsb/init-functions case "${1}" in stop) omit=$(pidof mdmon) [ -n "$omit" ] && omit="-o $omit" log_info_msg "Sending all processes the TERM signal..." killall5 -15 $omit error_value=${?} sleep ${KILLDELAY} if [ "${error_value}" = 0 -o "${error_value}" = 2 ]; then log_success_msg else log_failure_msg fi log_info_msg "Sending all processes the KILL signal..." killall5 -9 $omit error_value=${?} sleep ${KILLDELAY} if [ "${error_value}" = 0 -o "${error_value}" = 2 ]; then log_success_msg else log_failure_msg fi ;; *) echo "Usage: ${0} {stop}" exit 1 ;; esac exit 0 # End sendsignals
#!/bin/sh
########################################################################
# Begin reboot
#
# Description : Reboot Scripts
#
# Authors     : Gerard Beekmans - gerard AT linuxfromscratch D0T org
#               DJ Lucas - dj AT linuxfromscratch D0T org
# Updates     : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org
#             : Pierre Labastie - pierre AT linuxfromscratch D0T org
#
# Version     : LFS 7.0
#
# Notes       : Update March 24th, 2022: change "stop" to "start".
#               Add the $last facility to Required-start
#
########################################################################

### BEGIN INIT INFO
# Provides:            reboot
# Required-Start:      $last
# Should-Start:
# Required-Stop:
# Should-Stop:
# Default-Start:       6
# Default-Stop:
# Short-Description:   Reboots the system.
# Description:         Reboots the System.
# X-LFS-Provided-By:   LFS
### END INIT INFO

. /lib/lsb/init-functions

case "${1}" in
   start)
      log_info_msg "Restarting system..."
      reboot -d -f -i
      ;;

   *)
      echo "Usage: ${0} {start}"
      exit 1
      ;;
esac

# End reboot
#!/bin/sh
########################################################################
# Begin halt
#
# Description : Halt Script
#
# Authors     : Gerard Beekmans - gerard AT linuxfromscratch D0T org
#               DJ Lucas - dj AT linuxfromscratch D0T org
# Update      : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org
#             : Pierre Labastie - pierre AT linuxfromscratch D0T org
#
# Version     : LFS 7.0
#
# Notes       : Update March 24th, 2022: change "stop" to "start".
#               Add the $last facility to Required-start
#
########################################################################

### BEGIN INIT INFO
# Provides:            halt
# Required-Start:      $last
# Should-Start:
# Required-Stop:
# Should-Stop:
# Default-Start:       0
# Default-Stop:
# Short-Description:   Halts the system.
# Description:         Halts the System.
# X-LFS-Provided-By:   LFS
### END INIT INFO

case "${1}" in
   start)
      halt -d -f -i -p
      ;;

   *)
      echo "Usage: ${0} {start}"
      exit 1
      ;;
esac

# End halt
#!/bin/sh
########################################################################
# Begin scriptname
#
# Description :
#
# Authors     :
#
# Version     : LFS x.x
#
# Notes       :
#
########################################################################

### BEGIN INIT INFO
# Provides:            template
# Required-Start:
# Should-Start:
# Required-Stop:
# Should-Stop:
# Default-Start:
# Default-Stop:
# Short-Description:
# Description:
# X-LFS-Provided-By:
### END INIT INFO

. /lib/lsb/init-functions

case "${1}" in
   start)
      log_info_msg "Starting..."

      # if it is possible to use start_daemon
      start_daemon fully_qualified_path

      # if it is not possible to use start_daemon
      # (command to start the daemon is not simple enough)
      if ! pidofproc daemon_name_as_reported_by_ps >/dev/null; then
         command_to_start_the_service
      fi

      evaluate_retval
      ;;

   stop)
      log_info_msg "Stopping..."

      # if it is possible to use killproc
      killproc fully_qualified_path

      # if it is not possible to use killproc
      # (the daemon shouldn't be stopped by killing it)
      if pidofproc daemon_name_as_reported_by_ps >/dev/null; then
         command_to_stop_the_service
      fi

      evaluate_retval
      ;;

   restart)
      ${0} stop
      sleep 1
      ${0} start
      ;;

   *)
      echo "Usage: ${0} {start|stop|restart}"
      exit 1
      ;;
esac

exit 0

# End scriptname
########################################################################
# Begin /etc/sysconfig/modules
#
# Description : Module auto-loading configuration
#
# Authors     :
#
# Version     : 00.00
#
# Notes       : The syntax of this file is as follows:
#               <module> [<arg1> <arg2> ...]
#
#               Each module should be on its own line, and any options
#               that you want passed to the module should follow it.
#               The line delimiter is either a space or a tab.
########################################################################

# End /etc/sysconfig/modules
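For example, two hypothetical entries, one with options and one without, would look like this (the module names and option are illustrative only):

loop max_loop=16
snd-hda-intel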
########################################################################
# Begin /etc/sysconfig/createfiles
#
# Description : Createfiles script config file
#
# Authors     :
#
# Version     : 00.00
#
# Notes       : The syntax of this file is as follows:
#               if type is equal to "file" or "dir"
#               <filename> <type> <permissions> <user> <group>
#               if type is equal to "dev"
#               <filename> <type> <permissions> <user> <group> <devtype>
#               <major> <minor>
#
#               <filename> is the name of the file which is to be created
#               <type> is either file, dir, or dev.
#                      file creates a new file
#                      dir creates a new directory
#                      dev creates a new device
#               <devtype> is either block, char or pipe
#                      block creates a block device
#                      char creates a character device
#                      pipe creates a pipe, this will ignore the <major>
#                      and <minor> fields
#               <major> and <minor> are the major and minor numbers used
#               for the device.
########################################################################

# End /etc/sysconfig/createfiles
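Illustrative entries, one per supported type (the specific paths, owners, and permissions are examples only):

/tmp/.ICE-unix  dir  1777 root root
/var/log/wtmp   file 664  root utmp
/dev/null       dev  666  root root char 1 3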
########################################################################
# Begin /etc/sysconfig/udev_retry
#
# Description : udev_retry script configuration
#
# Authors     :
#
# Version     : 00.00
#
# Notes       : Each subsystem that may need to be re-triggered after
#               mountfs runs should be listed in this file. Probable
#               subsystems to be listed here are rtc (due to
#               /var/lib/hwclock/adjtime) and sound (due to both
#               /var/lib/alsa/asound.state and /usr/sbin/alsactl).
#               Entries are whitespace-separated.
########################################################################

rtc

# End /etc/sysconfig/udev_retry
#!/bin/sh ######################################################################## # Begin /sbin/ifup # # Description : Interface Up # # Authors : Nathan Coulson - nathan AT linuxfromscratch D0T org # Kevin P. Fleming - kpfleming@linuxfromscratch.org # Update : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org # DJ Lucas - dj AT linuxfromscratch D0T org # # Version : LFS 7.7 # # Notes : The IFCONFIG variable is passed to the SERVICE script # in the /lib/services directory, to indicate what file the # service should source to get interface specifications. # ######################################################################## up() { log_info_msg "Bringing up the ${1} interface..." if ip link show $1 > /dev/null 2>&1; then link_status=`ip link show $1` if [ -n "${link_status}" ]; then if ! echo "${link_status}" | grep -q UP; then ip link set $1 up fi fi else log_failure_msg "Interface ${IFACE} doesn't exist." exit 1 fi evaluate_retval } RELEASE="7.7" USAGE="Usage: $0 [ -hV ] [--help] [--version] interface" VERSTR="LFS ifup, version ${RELEASE}" while [ $# -gt 0 ]; do case "$1" in --help | -h) help="y"; break ;; --version | -V) echo "${VERSTR}"; exit 0 ;; -*) echo "ifup: ${1}: invalid option" >&2 echo "${USAGE}" >& 2 exit 2 ;; *) break ;; esac done if [ -n "$help" ]; then echo "${VERSTR}" echo "${USAGE}" echo cat << HERE_EOF ifup is used to bring up a network interface. The interface parameter, e.g. eth0 or eth0:2, must match the trailing part of the interface specifications file, e.g. /etc/sysconfig/ifconfig.eth0:2. HERE_EOF exit 0 fi file=/etc/sysconfig/ifconfig.${1} # Skip backup files [ "${file}" = "${file%""~""}" ] || exit 0 . /lib/lsb/init-functions if [ ! -r "${file}" ]; then log_failure_msg "Unable to bring up ${1} interface! ${file} is missing or cannot be accessed." exit 1 fi . $file if [ "$IFACE" = "" ]; then log_failure_msg "Unable to bring up ${1} interface! ${file} does not define an interface [IFACE]." exit 1 fi # Do not process this service if started by boot, and ONBOOT # is not set to yes if [ "${IN_BOOT}" = "1" -a "${ONBOOT}" != "yes" ]; then exit 0 fi # Bring up the interface if [ "$VIRTINT" != "yes" ]; then up ${IFACE} fi for S in ${SERVICE}; do if [ ! -x "/lib/services/${S}" ]; then MSG="\nUnable to process ${file}. Either " MSG="${MSG}the SERVICE '${S} was not present " MSG="${MSG}or cannot be executed." log_failure_msg "$MSG" exit 1 fi done #if [ "${SERVICE}" = "wpa" ]; then log_success_msg; fi # Create/configure the interface for S in ${SERVICE}; do IFCONFIG=${file} /lib/services/${S} ${IFACE} up done # Set link up virtual interfaces if [ "${VIRTINT}" == "yes" ]; then up ${IFACE} fi # Bring up any additional interface components for I in $INTERFACE_COMPONENTS; do up $I; done # Set MTU if requested. Check if MTU has a "good" value. if test -n "${MTU}"; then if [[ ${MTU} =~ ^[0-9]+$ ]] && [[ $MTU -ge 68 ]] ; then for I in $IFACE $INTERFACE_COMPONENTS; do ip link set dev $I mtu $MTU; done else log_info_msg2 "Invalid MTU $MTU" fi fi # Set the route default gateway if requested if [ -n "${GATEWAY}" ]; then if ip route | grep -q default; then log_warning_msg "Gateway already setup; skipping." else log_info_msg "Adding default gateway ${GATEWAY} to the ${IFACE} interface..." ip route add default via ${GATEWAY} dev ${IFACE} evaluate_retval fi fi # End /sbin/ifup
#!/bin/bash
########################################################################
# Begin /sbin/ifdown
#
# Description : Interface Down
#
# Authors     : Nathan Coulson - nathan AT linuxfromscratch D0T org
#               Kevin P. Fleming - kpfleming@linuxfromscratch.org
# Update      : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org
#
# Version     : LFS 7.0
#
# Notes       : the IFCONFIG variable is passed to the scripts found
#               in the /lib/services directory, to indicate what file the
#               service should source to get interface specifications.
#
########################################################################

RELEASE="7.0"

USAGE="Usage: $0 [ -hV ] [--help] [--version] interface"
VERSTR="LFS ifdown, version ${RELEASE}"

while [ $# -gt 0 ]; do
   case "$1" in
      --help | -h)     help="y"; break ;;

      --version | -V)  echo "${VERSTR}"; exit 0 ;;

      -*)              echo "ifdown: ${1}: invalid option" >&2
                       echo "${USAGE}" >&2
                       exit 2 ;;

      *)               break ;;
   esac
done

if [ -n "$help" ]; then
   echo "${VERSTR}"
   echo "${USAGE}"
   echo
   cat << HERE_EOF
ifdown is used to bring down a network interface.  The interface
parameter, e.g. eth0 or eth0:2, must match the trailing part of the
interface specifications file, e.g. /etc/sysconfig/ifconfig.eth0:2.

HERE_EOF
   exit 0
fi

file=/etc/sysconfig/ifconfig.${1}

# Skip backup files
[ "${file}" = "${file%""~""}" ] || exit 0

. /lib/lsb/init-functions

if [ ! -r "${file}" ]; then
   log_warning_msg "${file} is missing or cannot be accessed."
   exit 1
fi

. ${file}

if [ "$IFACE" = "" ]; then
   log_failure_msg "${file} does not define an interface [IFACE]."
   exit 1
fi

# We only need the first service to bring down the interface
S=`echo ${SERVICE} | cut -f1 -d" "`

if ip link show ${IFACE} > /dev/null 2>&1; then
   if [ -n "${S}" -a -x "/lib/services/${S}" ]; then
      IFCONFIG=${file} /lib/services/${S} ${IFACE} down
   else
      MSG="Unable to process ${file}.  Either "
      MSG="${MSG}the SERVICE variable was not set "
      MSG="${MSG}or the specified service cannot be executed."
      log_failure_msg "$MSG"
      exit 1
   fi
else
   log_warning_msg "Interface ${1} doesn't exist."
fi

# Leave the interface up if there are additional interfaces in the device
link_status=`ip link show ${IFACE} 2>/dev/null`

if [ -n "${link_status}" ]; then
   if [ "$(echo "${link_status}" | grep UP)" != "" ]; then
      if [ "$(ip addr show ${IFACE} | grep 'inet ')" == "" ]; then
         log_info_msg "Bringing down the ${IFACE} interface..."
         ip link set ${IFACE} down
         evaluate_retval
      fi
   fi
fi

# End /sbin/ifdown
#!/bin/sh
########################################################################
# Begin /lib/services/ipv4-static
#
# Description : IPV4 Static Boot Script
#
# Authors     : Nathan Coulson - nathan AT linuxfromscratch D0T org
#               Kevin P. Fleming - kpfleming@linuxfromscratch.org
# Update      : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org
#
# Version     : LFS 7.0
#
########################################################################

. /lib/lsb/init-functions
. ${IFCONFIG}

if [ -z "${IP}" ]; then
   log_failure_msg "\nIP variable missing from ${IFCONFIG}, cannot continue."
   exit 1
fi

if [ -z "${PREFIX}" -a -z "${PEER}" ]; then
   log_warning_msg "\nPREFIX variable missing from ${IFCONFIG}, assuming 24."
   PREFIX=24
   args="${args} ${IP}/${PREFIX}"

elif [ -n "${PREFIX}" -a -n "${PEER}" ]; then
   log_failure_msg "\nPREFIX and PEER both specified in ${IFCONFIG}, cannot continue."
   exit 1

elif [ -n "${PREFIX}" ]; then
   args="${args} ${IP}/${PREFIX}"

elif [ -n "${PEER}" ]; then
   args="${args} ${IP} peer ${PEER}"
fi

if [ -n "${LABEL}" ]; then
   args="${args} label ${LABEL}"
fi

if [ -n "${BROADCAST}" ]; then
   args="${args} broadcast ${BROADCAST}"
fi

case "${2}" in
   up)
      if [ "$(ip addr show ${1} 2>/dev/null | grep ${IP}/)" = "" ]; then
         log_info_msg "Adding IPv4 address ${IP} to the ${1} interface..."
         ip addr add ${args} dev ${1}
         evaluate_retval
      else
         log_warning_msg "Cannot add IPv4 address ${IP} to ${1}. Already present."
      fi
   ;;

   down)
      if [ "$(ip addr show ${1} 2>/dev/null | grep ${IP}/)" != "" ]; then
         log_info_msg "Removing IPv4 address ${IP} from the ${1} interface..."
         ip addr del ${args} dev ${1}
         evaluate_retval
      fi

      if [ -n "${GATEWAY}" ]; then
         # Only remove the gateway if there are no remaining ipv4 addresses
         if [ "$(ip addr show ${1} 2>/dev/null | grep 'inet ')" = "" ]; then
            log_info_msg "Removing default gateway..."
            ip route del default
            evaluate_retval
         fi
      fi
   ;;

   *)
      echo "Usage: ${0} [interface] {up|down}"
      exit 1
   ;;
esac

# End /lib/services/ipv4-static
#!/bin/sh
########################################################################
# Begin /lib/services/ipv4-static-route
#
# Description : IPV4 Static Route Script
#
# Authors     : Kevin P. Fleming - kpfleming@linuxfromscratch.org
#               DJ Lucas - dj AT linuxfromscratch D0T org
# Update      : Bruce Dubbs - bdubbs AT linuxfromscratch D0T org
#
# Version     : LFS 7.0
#
########################################################################

. /lib/lsb/init-functions
. ${IFCONFIG}

case "${TYPE}" in
   ("" | "network")
      need_ip=1
      need_gateway=1
   ;;

   ("default")
      need_gateway=1
      args="${args} default"
      desc="default"
   ;;

   ("host")
      need_ip=1
   ;;

   ("unreachable")
      need_ip=1
      args="${args} unreachable"
      desc="unreachable "
   ;;

   (*)
      log_failure_msg "Unknown route type (${TYPE}) in ${IFCONFIG}, cannot continue."
      exit 1
   ;;
esac

if [ -n "${GATEWAY}" ]; then
   MSG="The GATEWAY variable cannot be set in ${IFCONFIG} for static routes.\n"
   log_failure_msg "$MSG Use STATIC_GATEWAY only, cannot continue"
   exit 1
fi

if [ -n "${need_ip}" ]; then
   if [ -z "${IP}" ]; then
      log_failure_msg "IP variable missing from ${IFCONFIG}, cannot continue."
      exit 1
   fi

   if [ -z "${PREFIX}" ]; then
      log_failure_msg "PREFIX variable missing from ${IFCONFIG}, cannot continue."
      exit 1
   fi

   args="${args} ${IP}/${PREFIX}"
   desc="${desc}${IP}/${PREFIX}"
fi

if [ -n "${need_gateway}" ]; then
   if [ -z "${STATIC_GATEWAY}" ]; then
      log_failure_msg "STATIC_GATEWAY variable missing from ${IFCONFIG}, cannot continue."
      exit 1
   fi

   args="${args} via ${STATIC_GATEWAY}"
fi

if [ -n "${SOURCE}" ]; then
   args="${args} src ${SOURCE}"
fi

case "${2}" in
   up)
      log_info_msg "Adding '${desc}' route to the ${1} interface..."
      ip route add ${args} dev ${1}
      evaluate_retval
   ;;

   down)
      log_info_msg "Removing '${desc}' route from the ${1} interface..."
      ip route del ${args} dev ${1}
      evaluate_retval
   ;;

   *)
      echo "Usage: ${0} [interface] {up|down}"
      exit 1
   ;;
esac

# End /lib/services/ipv4-static-route
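The ipv4-static-route service reads its parameters through the same IFCONFIG mechanism. Purely as an illustration of the variables the script above checks (TYPE, IP, PREFIX, STATIC_GATEWAY, and optionally SOURCE), a hypothetical additional specifications file might look like the following; the file name and the addresses are assumptions for the sake of the example, not part of the book's instructions.

# Hypothetical /etc/sysconfig/ifconfig.eth0.route -- example values only
ONBOOT=yes
IFACE=eth0
SERVICE=ipv4-static-route
TYPE=network
IP=192.168.2.0
PREFIX=24
STATIC_GATEWAY=192.168.1.1

Running ifup eth0.route would source this file and end up executing ip route add 192.168.2.0/24 via 192.168.1.1 dev eth0. Note that GATEWAY must not be set in such a file; the script rejects it and requires STATIC_GATEWAY instead.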
The rules in this appendix are listed for convenience. Installation is normally done via instructions in Section 8.76, “Udev from Systemd-257.”
# /etc/udev/rules.d/55-lfs.rules: Rule definitions for LFS.

# Core kernel devices

# This causes the system clock to be set as soon as /dev/rtc becomes available.
SUBSYSTEM=="rtc", ACTION=="add", MODE="0644", RUN+="/etc/rc.d/init.d/setclock start"
KERNEL=="rtc", ACTION=="add", MODE="0644", RUN+="/etc/rc.d/init.d/setclock start"
This book is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 2.0 License.
Computer instructions may be extracted from the book under the MIT License.
Creative Commons Legal Code
Attribution-NonCommercial-ShareAlike 2.0
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE LEGAL SERVICES. DISTRIBUTION OF THIS LICENSE DOES NOT CREATE AN ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE INFORMATION PROVIDED, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM ITS USE.
License
THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE COMMONS PUBLIC LICENSE ("CCPL" OR "LICENSE"). THE WORK IS PROTECTED BY COPYRIGHT AND/OR OTHER APPLICABLE LAW. ANY USE OF THE WORK OTHER THAN AS AUTHORIZED UNDER THIS LICENSE OR COPYRIGHT LAW IS PROHIBITED.
BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE TO BE BOUND BY THE TERMS OF THIS LICENSE. THE LICENSOR GRANTS YOU THE RIGHTS CONTAINED HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND CONDITIONS.
Definitions
"Collective Work" means a work, such as a periodical issue, anthology or encyclopedia, in which the Work in its entirety in unmodified form, along with a number of other contributions, constituting separate and independent works in themselves, are assembled into a collective whole. A work that constitutes a Collective Work will not be considered a Derivative Work (as defined below) for the purposes of this License.
"Derivative Work" means a work based upon the Work or upon the Work and other pre-existing works, such as a translation, musical arrangement, dramatization, fictionalization, motion picture version, sound recording, art reproduction, abridgment, condensation, or any other form in which the Work may be recast, transformed, or adapted, except that a work that constitutes a Collective Work will not be considered a Derivative Work for the purpose of this License. For the avoidance of doubt, where the Work is a musical composition or sound recording, the synchronization of the Work in timed-relation with a moving image ("synching") will be considered a Derivative Work for the purpose of this License.
"Licensor" means the individual or entity that offers the Work under the terms of this License.
"Original Author" means the individual or entity who created the Work.
"Work" means the copyrightable work of authorship offered under the terms of this License.
"You" means an individual or entity exercising rights under this License who has not previously violated the terms of this License with respect to the Work, or who has received express permission from the Licensor to exercise rights under this License despite a previous violation.
"License Elements" means the following high-level license attributes as selected by Licensor and indicated in the title of this License: Attribution, Noncommercial, ShareAlike.
Fair Use Rights. Nothing in this license is intended to reduce, limit, or restrict any rights arising from fair use, first sale or other limitations on the exclusive rights of the copyright owner under copyright law or other applicable laws.
License Grant. Subject to the terms and conditions of this License, Licensor hereby grants You a worldwide, royalty-free, non-exclusive, perpetual (for the duration of the applicable copyright) license to exercise the rights in the Work as stated below:
to reproduce the Work, to incorporate the Work into one or more Collective Works, and to reproduce the Work as incorporated in the Collective Works;
to create and reproduce Derivative Works;
to distribute copies or phonorecords of, display publicly, perform publicly, and perform publicly by means of a digital audio transmission the Work including as incorporated in Collective Works;
to distribute copies or phonorecords of, display publicly, perform publicly, and perform publicly by means of a digital audio transmission Derivative Works;
The above rights may be exercised in all media and formats whether now known or hereafter devised. The above rights include the right to make such modifications as are technically necessary to exercise the rights in other media and formats. All rights not expressly granted by Licensor are hereby reserved, including but not limited to the rights set forth in Sections 4(e) and 4(f).
Restrictions. The license granted in Section 3 above is expressly made subject to and limited by the following restrictions:
You may distribute, publicly display, publicly perform, or publicly digitally perform the Work only under the terms of this License, and You must include a copy of, or the Uniform Resource Identifier for, this License with every copy or phonorecord of the Work You distribute, publicly display, publicly perform, or publicly digitally perform. You may not offer or impose any terms on the Work that alter or restrict the terms of this License or the recipients' exercise of the rights granted hereunder. You may not sublicense the Work. You must keep intact all notices that refer to this License and to the disclaimer of warranties. You may not distribute, publicly display, publicly perform, or publicly digitally perform the Work with any technological measures that control access or use of the Work in a manner inconsistent with the terms of this License Agreement. The above applies to the Work as incorporated in a Collective Work, but this does not require the Collective Work apart from the Work itself to be made subject to the terms of this License. If You create a Collective Work, upon notice from any Licensor You must, to the extent practicable, remove from the Collective Work any reference to such Licensor or the Original Author, as requested. If You create a Derivative Work, upon notice from any Licensor You must, to the extent practicable, remove from the Derivative Work any reference to such Licensor or the Original Author, as requested.
You may distribute, publicly display, publicly perform, or publicly digitally perform a Derivative Work only under the terms of this License, a later version of this License with the same License Elements as this License, or a Creative Commons iCommons license that contains the same License Elements as this License (e.g. Attribution-NonCommercial-ShareAlike 2.0 Japan). You must include a copy of, or the Uniform Resource Identifier for, this License or other license specified in the previous sentence with every copy or phonorecord of each Derivative Work You distribute, publicly display, publicly perform, or publicly digitally perform. You may not offer or impose any terms on the Derivative Works that alter or restrict the terms of this License or the recipients' exercise of the rights granted hereunder, and You must keep intact all notices that refer to this License and to the disclaimer of warranties. You may not distribute, publicly display, publicly perform, or publicly digitally perform the Derivative Work with any technological measures that control access or use of the Work in a manner inconsistent with the terms of this License Agreement. The above applies to the Derivative Work as incorporated in a Collective Work, but this does not require the Collective Work apart from the Derivative Work itself to be made subject to the terms of this License.
You may not exercise any of the rights granted to You in Section 3 above in any manner that is primarily intended for or directed toward commercial advantage or private monetary compensation. The exchange of the Work for other copyrighted works by means of digital file-sharing or otherwise shall not be considered to be intended for or directed toward commercial advantage or private monetary compensation, provided there is no payment of any monetary compensation in connection with the exchange of copyrighted works.
If you distribute, publicly display, publicly perform, or publicly digitally perform the Work or any Derivative Works or Collective Works, You must keep intact all copyright notices for the Work and give the Original Author credit reasonable to the medium or means You are utilizing by conveying the name (or pseudonym if applicable) of the Original Author if supplied; the title of the Work if supplied; to the extent reasonably practicable, the Uniform Resource Identifier, if any, that Licensor specifies to be associated with the Work, unless such URI does not refer to the copyright notice or licensing information for the Work; and in the case of a Derivative Work, a credit identifying the use of the Work in the Derivative Work (e.g., "French translation of the Work by Original Author," or "Screenplay based on original Work by Original Author"). Such credit may be implemented in any reasonable manner; provided, however, that in the case of a Derivative Work or Collective Work, at a minimum such credit will appear where any other comparable authorship credit appears and in a manner at least as prominent as such other comparable authorship credit.
For the avoidance of doubt, where the Work is a musical composition:
Performance Royalties Under Blanket Licenses. Licensor reserves the exclusive right to collect, whether individually or via a performance rights society (e.g. ASCAP, BMI, SESAC), royalties for the public performance or public digital performance (e.g. webcast) of the Work if that performance is primarily intended for or directed toward commercial advantage or private monetary compensation.
Mechanical Rights and Statutory Royalties. Licensor reserves the exclusive right to collect, whether individually or via a music rights agency or designated agent (e.g. Harry Fox Agency), royalties for any phonorecord You create from the Work ("cover version") and distribute, subject to the compulsory license created by 17 USC Section 115 of the US Copyright Act (or the equivalent in other jurisdictions), if Your distribution of such cover version is primarily intended for or directed toward commercial advantage or private monetary compensation.
Webcasting Rights and Statutory Royalties. For the avoidance of doubt, where the Work is a sound recording, Licensor reserves the exclusive right to collect, whether individually or via a performance-rights society (e.g. SoundExchange), royalties for the public digital performance (e.g. webcast) of the Work, subject to the compulsory license created by 17 USC Section 114 of the US Copyright Act (or the equivalent in other jurisdictions), if Your public digital performance is primarily intended for or directed toward commercial advantage or private monetary compensation.
Representations, Warranties and Disclaimer
UNLESS OTHERWISE MUTUALLY AGREED TO BY THE PARTIES IN WRITING, LICENSOR OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTIBILITY, FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS, WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.
Limitation on Liability. EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE LAW, IN NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES ARISING OUT OF THIS LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Termination
This License and the rights granted hereunder will terminate automatically upon any breach by You of the terms of this License. Individuals or entities who have received Derivative Works or Collective Works from You under this License, however, will not have their licenses terminated provided such individuals or entities remain in full compliance with those licenses. Sections 1, 2, 5, 6, 7, and 8 will survive any termination of this License.
Subject to the above terms and conditions, the license granted here is perpetual (for the duration of the applicable copyright in the Work). Notwithstanding the above, Licensor reserves the right to release the Work under different license terms or to stop distributing the Work at any time; provided, however that any such election will not serve to withdraw this License (or any other license that has been, or is required to be, granted under the terms of this License), and this License will continue in full force and effect unless terminated as stated above.
Miscellaneous
Each time You distribute or publicly digitally perform the Work or a Collective Work, the Licensor offers to the recipient a license to the Work on the same terms and conditions as the license granted to You under this License.
Each time You distribute or publicly digitally perform a Derivative Work, Licensor offers to the recipient a license to the original Work on the same terms and conditions as the license granted to You under this License.
If any provision of this License is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this License, and without further action by the parties to this agreement, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable.
No term or provision of this License shall be deemed waived and no breach consented to unless such waiver or consent shall be in writing and signed by the party to be charged with such waiver or consent.
This License constitutes the entire agreement between the parties with respect to the Work licensed here. There are no understandings, agreements or representations with respect to the Work not specified here. Licensor shall not be bound by any additional provisions that may appear in any communication from You. This License may not be modified without the mutual written agreement of the Licensor and You.
Creative Commons is not a party to this License, and makes no warranty whatsoever in connection with the Work. Creative Commons will not be liable to You or any party on any legal theory for any damages whatsoever, including without limitation any general, special, incidental or consequential damages arising in connection to this license. Notwithstanding the foregoing two (2) sentences, if Creative Commons has expressly identified itself as the Licensor hereunder, it shall have all rights and obligations of Licensor.
Except for the limited purpose of indicating to the public that the Work is licensed under the CCPL, neither party will use the trademark "Creative Commons" or any related trademark or logo of Creative Commons without the prior written consent of Creative Commons. Any permitted use will be in compliance with Creative Commons' then-current trademark usage guidelines, as may be published on its website or otherwise made available upon request from time to time.
Creative Commons may be contacted at http://creativecommons.org/.
Copyright © 1999-2024 Gerard Beekmans
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.