Toolchain technical notes

This section attempts to explain some of the rationale and technical details behind the overall build method. It's not essential that you understand everything here immediately. Most of it will make sense once you have performed an actual build. Feel free to refer back here at any time.

The overall goal of Chapter 5 is to provide a sane, temporary environment that we can chroot into, and from which we can produce a clean, trouble-free build of the target LFS system in Chapter 6. Along the way, we attempt to divorce ourselves from the host system as much as possible, and in so doing build a self-contained and self-hosted toolchain. It should be noted that the build process has been designed to minimize the risks for new readers and provide maximum educational value at the same time. In other words, more advanced techniques could be used to build the system.

Important

Before continuing, you really should be aware of the name of your working platform, often also referred to as the target triplet. For many folks the target triplet will probably be i686-pc-linux-gnu. A simple way to determine your target triplet is to run the config.guess script that comes with the source for many packages. Unpack the Binutils sources and run the script: ./config.guess and note the output.
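For example, on a typical 32-bit x86 host the script might report the following (your output will almost certainly differ in its details):

./config.guess
i686-pc-linux-gnu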

You'll also need to be aware of the name of your platform's dynamic linker, often also referred to as the dynamic loader, not to be confused with the standard linker ld that is part of Binutils. The dynamic linker is provided by Glibc and has the job of finding and loading the shared libraries needed by a program, preparing the program to run, and then running it. For most folks the name of the dynamic linker will be ld-linux.so.2. On platforms that are less prevalent, the name might be ld.so.1, and newer 64-bit platforms might even have something completely different. You should be able to determine the name of your platform's dynamic linker by looking in the /lib directory on your host system. A sure-fire way is to inspect a random binary from your host system by running: readelf -l <name of binary> | grep interpreter and noting the output. The authoritative reference covering all platforms is in the shlib-versions file in the root of the Glibc source tree.
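For example, inspecting Bash on an i686 host might look like this (both the binary chosen and the interpreter path shown are illustrative):

readelf -l /bin/bash | grep interpreter
      [Requesting program interpreter: /lib/ld-linux.so.2]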

Some key technical points of how the Chapter 5 build method works:

Binutils is installed first because the ./configure runs of both GCC and Glibc perform various feature tests on the assembler and linker to determine which software features to enable or disable. This is more important than one might first realize. An incorrectly configured GCC or Glibc can result in a subtly broken toolchain where the impact of such breakage might not show up until near the end of the build of a whole distribution. Thankfully, a test suite failure will usually alert us before too much time is wasted.

Binutils installs its assembler and linker into two locations, /tools/bin and /tools/$TARGET_TRIPLET/bin. In reality, the tools in one location are hard linked to the other. An important facet of the linker is its library search order. Detailed information can be obtained from ld by passing it the --verbose flag. For example: ld --verbose | grep SEARCH will show you the current search paths and their order. You can see what files are actually linked by ld by compiling a dummy program and passing the --verbose switch to the linker. For example: gcc dummy.c -Wl,--verbose 2>&1 | grep succeeded will show you all the files successfully opened during the linking.
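As a sketch, assuming a dummy.c containing nothing more than a minimal main function, the two commands could be used as follows; each command is shown followed by a line of abridged, illustrative output (your paths will reflect your own setup):

echo 'int main(){}' > dummy.c
ld --verbose | grep SEARCH
SEARCH_DIR("/tools/i686-pc-linux-gnu/lib"); SEARCH_DIR("/tools/lib");
gcc dummy.c -Wl,--verbose 2>&1 | grep succeeded
attempt to open /tools/lib/crt1.o succeeded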

The next package installed is GCC and during its run of ./configure you'll see, for example:

checking what assembler to use... /tools/i686-pc-linux-gnu/bin/as
checking what linker to use... /tools/i686-pc-linux-gnu/bin/ld

This is important for the reasons mentioned above. It also demonstrates that GCC's configure script does not search the PATH directories to find which tools to use. However, during the actual operation of gcc itself, the same search paths are not necessarily used. You can find out which standard linker gcc will use by running: gcc -print-prog-name=ld. Detailed information can be obtained from gcc by passing it the -v flag while compiling a dummy program. For example: gcc -v dummy.c will show you detailed information about the preprocessor, compilation and assembly stages, including gcc's include search paths and their order.
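For example, on a completed /tools toolchain the following exchange might be seen (the exact path depends on your target triplet):

gcc -print-prog-name=ld
/tools/i686-pc-linux-gnu/bin/ld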

The next package installed is Glibc. The most important considerations for building Glibc are the compiler, binary tools and kernel headers. The compiler is generally no problem as Glibc will always use the gcc found in a PATH directory. The binary tools and kernel headers can be a little more troublesome. Therefore we take no risks and use the available configure switches to enforce the correct selections. After the run of ./configure you can check the contents of the config.make file in the glibc-build directory for all the important details. You'll note some interesting items like the use of CC="gcc -B/tools/bin/" to control which binary tools are used, and also the use of the -nostdinc and -isystem flags to control the compiler's include search path. These items help to highlight an important aspect of the Glibc package: it is very self-sufficient in terms of its build machinery and generally does not rely on toolchain defaults.
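For example, a quick way to confirm the compiler setting is the following (output abridged; the exact flags recorded depend on the configure switches used):

grep ^CC config.make
CC = gcc -B/tools/bin/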

After the Glibc installation, we make some adjustments to ensure that searching and linking take place only within our /tools prefix. We install an adjusted ld, which has a hard-wired search path limited to /tools/lib. Then we amend gcc's specs file to point to our new dynamic linker in /tools/lib. This last step is vital to the whole process. As mentioned above, a hard-wired path to a dynamic linker is embedded into every ELF shared executable. You can inspect this by running: readelf -l <name of binary> | grep interpreter. By amending gcc's specs file, we are ensuring that every program compiled from here through the end of this chapter will use our new dynamic linker in /tools/lib.
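The amendment itself boils down to rewriting the dynamic linker's path inside the specs file. A minimal sketch of the idea, assuming the platform's dynamic linker is named ld-linux.so.2 (the book gives the exact command to run at the appropriate point):

gcc -dumpspecs | sed 's@/lib/ld-linux.so.2@/tools/lib/ld-linux.so.2@g' > \
    `dirname $(gcc -print-libgcc-file-name)`/specs

Here gcc -dumpspecs prints the compiler's built-in specs, sed rewrites every reference to the host's dynamic linker, and the result is written to the location where gcc looks for an external specs file.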

The need to use the new dynamic linker is also the reason why we apply the Specs patch for the second pass of GCC. Failure to do so will result in the GCC programs themselves having the name of the dynamic linker from the host system's /lib directory embedded into them, which would defeat our goal of getting away from the host.

During the second pass of Binutils, we are able to utilize the --with-lib-path configure switch to control ld's library search path. From this point onwards, the core toolchain is self-contained and self-hosted. The remainder of the Chapter 5 packages all build against the new Glibc in /tools and all is well.
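A sketch of the relevant part of that configure invocation (the version number is a placeholder, and other switches used in the book are omitted here):

../binutils-x.y.z/configure --prefix=/tools --with-lib-path=/tools/lib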

Upon entering the chroot environment in Chapter 6, the first major package we install is Glibc, due to its self-sufficient nature that we mentioned above. Once this Glibc is installed into /usr, we perform a quick changeover of the toolchain defaults, then proceed for real in building the rest of the target LFS system.

Notes on static linking

Most programs have to perform, beside their specific task, many rather common and sometimes trivial operations. These include allocating memory, searching directories, reading and writing files, string handling, pattern matching, arithmetic and many other tasks. Instead of obliging each program to reinvent the wheel, the GNU system provides all these basic functions in ready-made libraries. The major library on any Linux system is Glibc.

There are two primary ways of linking the functions from a library to a program that uses them: statically or dynamically. When a program is linked statically, the code of the functions it uses is copied into the executable, resulting in a rather bulky program. When a program is dynamically linked, what is included is a reference to the dynamic linker, the name of the library, and the name of the function, resulting in a much smaller executable. (A third way is to use the programming interface of the dynamic linker. See the dlopen man page for more information.)
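You can observe the difference directly by building the same trivial program both ways (the file name is arbitrary, and no sizes are shown here because they vary widely between systems):

cat > hello.c << "EOF"
#include <stdio.h>
int main() { printf("Hello\n"); return 0; }
EOF
gcc hello.c -o hello-dynamic
gcc -static hello.c -o hello-static
ls -l hello-dynamic hello-static

The statically linked binary will be considerably larger, because it carries copies of the Glibc routines it uses, while the dynamically linked one merely records a reference to the dynamic linker and the libraries to be loaded at run time.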

Dynamic linking is the default on Linux and has three major advantages over static linking. First, you need only one copy of the executable library code on your hard disk, instead of having many copies of the same code included in a whole bunch of programs -- thus saving disk space. Second, when several programs use the same library function at the same time, only one copy of the function's code is required in memory -- thus saving memory space. Third, when a library function gets a bug fixed or is otherwise improved, you only need to recompile this one library, instead of having to recompile all the programs that make use of the improved function.

If dynamic linking has several advantages, why then do we statically link the first two packages in this chapter? The reasons are threefold: historical, educational, and technical. Historical, because earlier versions of LFS statically linked every program in this chapter. Educational, because knowing the difference is useful. Technical, because we gain an element of independence from the host in doing so, meaning that those programs can be used independently of the host system. However, it's worth noting that an overall successful LFS build can still be achieved when the first two packages are built dynamically.