strange badblocks problem

Bill's LFS Login lfsbill at
Tue Jan 13 15:54:00 PST 2004

On Tue, 13 Jan 2004, marnixk wrote:

> Bill's LFS Login wrote:
> > Yep. Take the car out of reverse!  ;) When the partition is *bigger*,
> > no problems. The FS can be created with *fewer* sectors than the
> > partition has and there will be no problem, only the reverse causes the
> > problem. The kernel protects against accessing *outside* the partition.
> You've lost me. The partition (/dev/hda9) with the errors has a block size
> of 4k (tune2fs) and a block count of 734965. The blocking factor is 8 so
> the filesystem occupies 5879720 sectors, right? Now sfdisk -l -uS on that
> drive shows that /dev/hda9 is 5879727 sectors. If I understand your post
> correctly then this should not cause problems because the partition is bigger
> than the FS. However, this partition is the one with the problems... Am I
> insane? Maybe the protection by the kernel is failing due to some other
> issue after all?
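For what it's worth, the arithmetic in the quoted mail is easy to check mechanically; a minimal shell sketch using only the numbers quoted above (the variable names are mine):

```shell
# Numbers from the mail: tune2fs reports 734965 blocks of 4K;
# sfdisk -l -uS reports a 5879727-sector partition.
blocks=734965
sectors_per_block=8                  # 4096 / 512
part_sectors=5879727

fs_sectors=$((blocks * sectors_per_block))
echo "FS spans $fs_sectors sectors"                    # 5879720
echo "spare sectors: $((part_sectors - fs_sectors))"   # 7
```

So on the raw counts the FS fits with 7 sectors to spare; the question is where the FS's first sector actually sits.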

First, if the last block starts on sector 5879721, and you have 8
sectors per "block", you will fail when trying to read that last 4K
block. My previous comments always assumed that the partition is large
enough to hold all the sectors the file system references (that is
what I meant by "if the partition is larger...").

Traditionally, many FSs I've dealt with do *not* use the first sector
as a true part of the file system; it held a boot block. The super
block began in sector 1, then came the various i-node lists, and so
on. I don't recall whether ext2/3 also skips sector 0, but I believe
it does (ext2 reserves the start of the partition for a boot block and
puts the superblock after it). That would put the last block's start
at sector 5879721 or thereabouts (I really haven't taken the time to
learn all its details).

So it seems likely to me that if you reduce the blocksize/count combo or
increase partition size just a little you'll be OK.

Re "You've lost me":

Now, we're both lost then. IIRC, on your previous post, you ref'd
another system that apparently had one more block in the partition than
the calculated FS size. IIRC, you asked why that one wasn't failing.
That's what I was referring to. Its combo of blocksize and partition
size was big enough to hold all the sectors, and it may even have had
more sectors than needed (I don't recall). That's OK.

On the example you mention above, you *may* be right. The problem can
be the one I mentioned (a block that starts on sector *1 and occupies
8 sectors extending beyond the partition), or it can come from two
other places I can think of right away.

FS corruption: a pointer references an invalid block, that is, one
outside the partition. An fsck should detect this and allow you to
correct it. I presume this is unlikely because you've already run
fsck.

Utilities (or any program) like dd will try to read the number of
blocks specified and will accept a block size on the command line. If
its combination of bs= (or any of the pertinent block-size specs) and
count= (in conjunction with any skip= or seek=, as appropriate) causes
it to attempt a read *after* the return indicates end-of-file, you can
also get this problem.
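The dd behavior in question is easy to see with an ordinary file (a sketch; the file name and sizes are mine, chosen for illustration -- note that on a block device a read past the partition end produces an I/O error rather than this quiet short count):

```shell
# Make a 1000-byte file, then ask dd for four 512-byte blocks (2048
# bytes). dd stops at end-of-file and delivers a short count.
head -c 1000 /dev/zero > /tmp/short.img
copied=$(dd if=/tmp/short.img bs=512 count=4 2>/dev/null | wc -c)
echo "$copied"     # 1000, not 2048
```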

Another uncertainty is the counts you show. Do this:

  dd if=/dev/hda9 of=/dev/null bs=512

It will be slow, but will show the number of sectors in the partition.
To speed it up, you could add skip=<a large number of sectors> but then
you need to do an extra math step.
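A sketch of that skip= variant (the variable names and the example figure of 727 sectors are mine, not measured): skip most of the partition so dd only has to read the tail, then add the skip back onto whatever dd copies.

```shell
skip=5879000                         # anything safely below the end
# copied=$(dd if=/dev/hda9 bs=512 skip=$skip 2>/dev/null | wc -c)
# partition sectors = skip + copied/512; e.g. if dd copies 727
# sectors after the skip:
total=$((skip + 727))
echo "$total"                        # 5879727, matching sfdisk -l -uS
# blockdev --getsz /dev/hda9 (from util-linux) reports the same
# figure without reading the device at all.
```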

You can then calculate and see if we have been misled by the old
"some things count from one, some things count from zero" routine.

I also don't think this is the problem.

With the count from above, you should be able to do the calculations and
confirm if it is really just a "block" extending beyond the partition.

And I really think it is. But I am often wrong. So, the "acid test":

  dd if=/dev/hda9 of=/dev/null bs=4096 skip=734964

Further confirmation can be gained by converting everything to sectors
and attempting to read the last two "blocks". I expect you will get 7
or fewer sectors back.
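Converted to sectors, that last-two-blocks read might look like this (a sketch; the dd line is commented out since it targets the disk from the mail, and the sector number assumes the FS begins at sector 0 of the partition):

```shell
# FS blocks 734963 and 734964 (counting from 0) are the last two 4K
# blocks; at 8 sectors each, they begin at this sector:
start=$(( (734965 - 2) * 8 ))
echo "$start"                        # 5879704
# dd if=/dev/hda9 of=/dev/null bs=512 skip=$start count=16
# dd's "records in/out" report shows how many of the 16 sectors were
# actually readable.
```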

> Marnix


NOTE: I'm on a new ISP, if I'm in your address book ...
Bill Maltby
Fix line above & use it to mail me direct.

More information about the lfs-support mailing list