LFS 6 System Won't Boot

Mark Olbert molbert at iterobiopharm.com
Sat Jan 26 11:00:47 PST 2008


Dan,

Thanks for the quick reply.

I've done some more spelunking:

1) The inittab file seems to be in good order. Its entries match (allowing for differences in terminology) the working inittab on the Slackware distro that I keep on a separate partition for emergencies.

2) I put some echo statements in /etc/rc.d/init.d/rc to watch what was happening. After running the last script in /etc/rc.d/rcsysinit.d it exits normally. But I also had it display the running processes before it exited (via ps ax), and I noticed that I have two init processes running. Here's a snip:

PID  TTY  STAT  TIME  COMMAND
1    ?    ?     0:00  init boot
...
973  ?    Ss+   0:00  init boot
976  ?    S+    0:00  /bin/sh /etc/rc.d/init.d/rc sysinit

I don't know much at all about init, but it seems odd to me that I have two copies of it running. Then again, maybe it just spawns processes to do its work.

3) My kernel is 2.6.10 and my udev is 030 (I think; at least, the source tarball on the system, which I believe is the same one I used when I built the LFS system years ago, is udev-030.tar.bz2; I may have the file extension wrong, but that's not important).

Regarding your comment about the udev rules and the kernel, is there a way to configure udev to log what the kernel tells it? All I can say is that the rules file I saw when I first encountered this problem had no entries for hd or sd, and no sd nodes got created (an hdb node was created, I think, for the DVD drive). After I added the Slackware rules (and some associated scripts), the hd and sd rules were present and the nodes got created.
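
One thing I plan to try, assuming my version of udev supports it, is
turning on udev's own logging via /etc/udev/udev.conf, so that what the
kernel tells it ends up in syslog:

# /etc/udev/udev.conf
# older udev releases take a yes/no switch here; newer ones take a
# syslog priority such as "info" or "debug"
udev_log="yes"

If that works, each event udev handles should get logged via syslog
(in /var/log/sys.log on my system, or wherever syslog is pointed), which
would at least show whether the sd* events are arriving at all.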

I also downloaded the udev rules file from an LFS archive dating from around the time I built the system (that file does have hd and sd rules in it). But no sda nodes got created with it, and I don't know why. Here's the relevant section (udev-config-4.rules, LFS v6.1.1):

# Storage/memory devices

KERNEL=="fd[0-9]*",                 GROUP="floppy"
KERNEL=="ram[0-9]*",                GROUP="disk"
KERNEL=="raw[0-9]*", NAME="raw/%k", GROUP="disk"
KERNEL=="hd*",                      GROUP="disk"
KERNEL=="sd[a-z]",                  GROUP="disk"
KERNEL=="sd[a-z][0-9]*",            GROUP="disk"
KERNEL=="sd[a-i][a-z]",             GROUP="disk"
KERNEL=="sd[a-i][a-z][0-9]*",       GROUP="disk"
KERNEL=="s[grt][0-9]*",             GROUP="disk"
KERNEL=="scd[0-9]*",                GROUP="cdrom"
KERNEL=="dasd[0-9]*",               GROUP="disk"
KERNEL=="ataraid[0-9]*",            GROUP="disk"
KERNEL=="loop[0-9]*",               GROUP="disk"
KERNEL=="md[0-9]*",                 GROUP="disk"
KERNEL=="dm-*",                     GROUP="disk",   MODE="0640"
KERNEL=="ht[0-9]*",                 GROUP="tape"
KERNEL=="nht[0-9]*",                GROUP="tape"
KERNEL=="pt[0-9]*",                 GROUP="tape"
KERNEL=="npt[0-9]*",                GROUP="tape"
KERNEL=="st[0-9]*",                 GROUP="tape"
KERNEL=="nst[0-9]*",                GROUP="tape"
KERNEL=="iseries/vcd*",             GROUP="disk"
KERNEL=="iseries/vd*",              GROUP="disk"
- Mark
    

----- Original Message ----
From: Dan Nicholson <dbn.lists at gmail.com>
To: LFS Support List <lfs-support at linuxfromscratch.org>
Sent: Saturday, January 26, 2008 10:32:06 AM
Subject: Re: LFS 6 System Won't Boot


On Jan 26, 2008 10:06 AM, Mark Olbert <molbert at iterobiopharm.com> wrote:
> I have an oddball problem that is causing me to rip out hair :)
>
> My LFS 6 system, which has performed like a champ for years, suddenly
> stopped booting the other day. Initially the problem was that it
> couldn't "see" /dev/sda1, /dev/sda3 and /dev/sda4 (which is how my SATA
> drive shows up in the system), so when it came time to mount the
> filesystems it tossed off some error messages, told me to hit enter,
> and froze.
>
> I traced this particular problem to some kind of corruption in the
> udev rules: apparently the rules that described how to set up /dev/sd*
> went away (I'm not sure how that happened; it's possible it could have
> been my doing, but I tend to be pretty careful about making backup
> copies of files).

It's actually the kernel that decides that the devices are named sd*,
so all the udev rules should be doing for you is setting the group
ownership and permissions of the nodes. If they're not being created,
I'd say that the kernel isn't telling udev about them at all.

On a guess, it sounds like a hardware problem with your SATA drive (or
maybe the controller). Does dmesg say anything interesting? Could you
try booting from a LiveCD and seeing what happens?
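
As a rough first pass (assuming sysfs is mounted at /sys, which the LFS
bootscripts do), I'd check whether the kernel is detecting the drive at
all, independent of udev:

# any sign of the SATA controller and disk?
dmesg | grep -i -e ata -e scsi -e 'sd[a-z]'

# block devices the kernel knows about, with or without /dev nodes
ls /sys/block
cat /proc/partitions

If sda and its partitions show up in /proc/partitions but not in /dev,
the problem is on the udev side; if they don't show up at all, udev
never had a chance.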


> Unfortunately, I didn't have the udev config rules file from when I
> built the system several years ago. But I was able to scrounge some
> rules from a working Slackware (2.6.10) distro that I keep on the same
> drive, in a different partition, for emergencies like this. Copying the
> Slackware rules into the LFS rules folder solved the initial problem.

That sounds strange. What kernel and udev were you running?

> I'd appreciate any suggestions as to how to resolve this. In
> particular, can someone explain to me the "normal" sequence of events
> after /etc/rc.d/rcsysinit.d is processed? I thought that the boot
> sequence then started working its way through the shell scripts in
> /etc/rc.d/rc3.d. But I put some echo statements in the first such
> script (sysklogd), and the file is never entered. So the boot sequence
> must go someplace else after /etc/rc.d/rcsysinit.d. But I don't know
> where!

It's all defined in /etc/inittab. The si: entry says that on sysinit,
init executes /etc/rc.d/init.d/rc with the sysinit argument. After
sysinit is done, init should go to whatever runlevel you specified on
the kernel command line, or to the initdefault in /etc/inittab.
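
For reference, the relevant entries in a stock LFS inittab look roughly
like this (a sketch; your exact file may differ):

# runlevel to enter when none is given on the kernel command line
id:3:initdefault:

# run once at boot, before entering any runlevel
si::sysinit:/etc/rc.d/init.d/rc sysinit

# entering runlevel 3 runs the scripts in /etc/rc.d/rc3.d
l3:3:wait:/etc/rc.d/init.d/rc 3

So if sysinit completes but nothing in rc3.d ever runs, the l3 entry
(or whichever runlevel you boot into) is the next thing I'd look at.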

--
Dan