strange badblocks problem

Bill's LFS Login lfsbill at nospam.dot
Tue Jan 13 09:52:11 PST 2004


On Tue, 13 Jan 2004, marnixk wrote:

> Bill's LFS Login wrote:
>
> > From your post, I believe you really have the answer already, but just
> > didn't recognize it.<snip>

> I have found the thread ("errors using dd for backups") and indeed I see
> what you mean about having the error already...
>
> > IIRC, it has to do with partitioning tool (partitions in sectors, blocks,
> > tracks or cyls) and block sizes. When the partition end contains the
> > *start* of a block that encompasses *multiple* sectors and one or more
> > of the remaining sectors would be beyond the end of the partition, you
> > get these failures.
>
> Strange thing is that I have never seen this problem before and I have never
> before minded the block sizes with respect to the sector size when
> partitioning systems. But then again I have not used dd and badblocks that
> often either.
> However, on one of my other Gentoo systems I have a partition with a block
> size of 4K and tune2fs shows a block count of 2682847. The blocking factor
> being 8 (am I right?) I get a total of 21462776 sectors occupied by the
> filesystem. However, using sfdisk -l -uS on that disk shows a sector size
> of 512 bytes and 21462777 sectors on the partition. So this partition is
> one sector bigger than the filesystem. On this drive I have no problems
> using badblocks or dd. So why does it work correctly on one machine and not
> on the other? Or am I missing something?

Yep. Take the car out of reverse!  ;) When the partition is *bigger*,
there's no problem. The FS can safely be created with *fewer* sectors
than the partition has; only the reverse causes these failures, because
the kernel refuses any access *outside* the partition.
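
A quick check you can do on any disk (off the top of my head, so verify
on your versions of the tools; /dev/hda1 is just an example device):

  # FS size in 512-byte sectors = block count * (block size / 512)
  $ tune2fs -l /dev/hda1 | egrep -i 'block (count|size)'
  $ echo $((2682847 * (4096 / 512)))   # numbers from your working disk
  21462776

  # partition size in sectors, as the kernel sees it
  $ sfdisk -l -uS /dev/hda

If the FS's sector count is <= the partition's, you're fine; if it is
larger, the tail of the FS is unreachable and dd/badblocks error out
there.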

>
> On a side note, are only programs like dd and badblocks (that access the
> disk directly) affected by this problem, or could I lose data because some
> app is trying to write on one of those sectors outside the fs? If not, then
> I am less concerned about all this...

Any application *could* attempt this. The same calls used by dd and the
*fs* utilities are available to any program. But generally, it would be
foolish for an application that does not need these low-level calls to
use them, so most apps I know of that are not hardware- or
geometry-dependent don't. Wisely.
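
For instance, nothing stops an ordinary command from reading the device
node raw, exactly as dd does (example device name):

  # any program with read permission on /dev/hda1 can do this; the
  # kernel, not the program, enforces the partition boundary
  $ dd if=/dev/hda1 of=/dev/null bs=4k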

> > From that "skimming", I would guess you had a 2K block size (4 sectors)
> > and the partition does not end on a multiple of 4 sectors.
>
> Instead I have a 4K block size in the specific partition (8 sectors), but
> the idea is the same I guess.

Yes. With 8 sectors per block, that would mean, say, the first 5
sectors of the last block were inside the partition and the last 3 were
outside.
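
Compare your working disk, where the arithmetic comes out safely (a
sketch using the numbers you posted):

  $ echo $((21462777 / 8))   # full 4K blocks that fit in the partition
  2682847
  $ echo $((21462777 % 8))   # sectors left over past the last full block
  1

mke2fs sized the FS to the 2682847 full blocks, so that one leftover
sector just sits unused -- FS smaller than partition, hence no errors.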

>
> > If you can repartition so that the number of sectors is evenly divisible
> > by the sectors per block, your problem should go away.
>
> I am going to try this:
> Repartition the drive and make sure the size of each partition (in sectors)
> is evenly divisible by 8. (this would mean that blocksizes in the
> filesystems of 1K, 2K and 4K would work ok, would it not?)

Yes. Sometimes that may be hard to do though. It depends on sectors per
track, and most partitioning software will prefer (for maybe good
historical reasons) to start/stop partitions on cyl or track boundaries.

With sfdisk, this can be overcome, but I'd advise that it is not worth
the trouble.
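
If you ever do want to force it, sfdisk takes exact sector counts, but
it rewrites the partition table, so triple-check before feeding it
anything real (illustrative numbers only):

  # input lines are start,size,id -- size chosen as a multiple of 8
  $ sfdisk -uS /dev/hda << EOF
  63,21462776,83
  EOF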

Also, let me suggest this. Don't worry too much about making the
partition match all those factors. Instead, make the file system
specifying the number of blocks that will fit inside the partition: just
take the number of sectors, divide by sectors-per-block and throw away
the remainder. Generally this loses only a few sectors (often only some
30K out of gigs). You could also reduce the blocking factor, which
leaves even fewer sectors unused.
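
For example (a sketch; device and numbers are just your working disk's
figures, adjust to suit):

  SECTORS=21462777                  # from sfdisk -l -uS
  BLOCKS=$((SECTORS / 8))           # 8 sectors per 4K block; truncates
  mke2fs -b 4096 /dev/hda1 $BLOCKS  # explicit block count after device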

Of course there is a performance trade-off. For a typical ext2 system
that is not dedicated to some specific high-performance requirement with
very large block sizes, you can actually gain space, because the
general-purpose FSs tend to waste a lot of block space (e.g. 4K blocks
holding many files that use less than 1K). I have found through sloppy
experimentation that 1K blocks give the most space for typical things
like /etc, most /usr directories, etc. It is counter-intuitive because
we think of all the i-node waste. But the i-node count can also be
reduced to free up that space (an FS holding larger files generally
needs fewer i-nodes).

On FSs where the file-size distribution is skewed to the larger side,
I've had good results with 2K block sizes.
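
If you want to play with that, both knobs are on mke2fs (a sketch; -i
is bytes-per-inode, so a *larger* value means *fewer* i-nodes):

  # 1K blocks for small-file trees, one i-node per 8K of space
  mke2fs -b 1024 -i 8192 /dev/hdaX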

>
> Although I still am concerned that there is more to it than this, since the
> other machine does not show this behaviour and the number of sectors in
> that partition is also not evenly divisible by 8. And if there indeed is
> more to it, then what will go wrong next... Well, maybe I am just being
> paranoid...

Nah! Like I said above, you're stuck in reverse is all. Just keep in
mind that larger partition with smaller FS is OK, the reverse is not.

>
> Many thanks to you for your reply and the aforementioned thread!
>
> Marnix

Forge on!

-- 
NOTE: I'm on a new ISP, if I'm in your address book ...
Bill Maltby
lfsbillATearthlinkDOTnet
Fix line above & use it to mail me direct.


