Partition Sizes, AGAIN!

Mike McCarty Mike.McCarty at sbcglobal.net
Thu Mar 4 11:13:58 PST 2010


I'm a laid off engineer on a zero income budget, so $$$ are a prime
consideration. I bought a basic system for $1 at a swap meet. It
had no hard drive, and a burnt out power supply, and only 64M of RAM.
I've added another 128M of RAM from a junker, and transplanted a PS
from another machine, and added a 40G WD HD.

I've added a couple of CD-ROM drives, one of which can write, and so
I've got a base system up. It boots the LiveCD (6.3) and I've used it to
build LFS 6.4, which it boots and runs on another drive temporarily.

Now, I'm looking for partitioning info on this very minimal system.

Here are my initial thoughts...

prtn	size	mount point
----	----	-----------
hda1	100M	/boot
hda2	10G	/	(main)
hda3	10G	/	(build)
hda5	20G	/home

The two / partitions are (main) for day-to-day use and (build) for
building the "next" system. Is 10G enough for a real desktop system?
If not, then as an alternative, I could add a 100G external USB
drive for /home, and make the two / partitions 20G apiece.
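For concreteness, here's a sketch of what /etc/fstab on the (main)
system might look like under that first layout. Everything here is an
assumption on my part: ext3 throughout, device names as in the table,
and no swap entry because the table doesn't carve one out (with only
192M of RAM you may well want one).

```
# Hypothetical /etc/fstab for the proposed layout -- filesystem
# types, options, and device names are illustrative, not prescriptive.
/dev/hda1   /boot       ext3    defaults         0 2
/dev/hda2   /           ext3    defaults         0 1
/dev/hda5   /home       ext3    defaults         0 2
# hda3 (the build root) gets its own fstab when the next system
# boots from it; until then it could be mounted by hand, e.g.:
# /dev/hda3  /mnt/build  ext3   defaults,noauto  0 0
proc        /proc       proc    defaults         0 0
```

(hda4 would be the extended partition containing hda5, which is why
the numbering skips from 3 to 5.)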

I'd need to add a faster USB port, as the ports on that machine
are only USB 1.1, which is slow for a disc.

Does the build partition really need to be 10G? Would 5G be enough
to build a new BLFS with smallish desktop, like fluxbox, not a big
GNOME or KDE? If so, then /home could grow by another 10G, which
would be nice.

Failing one of those, then I could add another 40G drive, but that
would require using a really iffy hard disc. I was given it a couple
of weeks ago, and SMART is reporting lots of reallocation and it fails
the extended test with uncorrectable errors. So, while mkfs would
presumably skip the bad regions, I suspect this disc is on its very
last legs and is doomed to fail soon.
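On the SMART point, the reallocated and pending sector counts are the
attributes worth watching. A minimal sketch of pulling them out of
smartctl-style attribute output; the device name /dev/hdb and the
canned sample lines are assumptions standing in for the real disc
(normally the output of `smartctl -A /dev/hdb` would be piped in):

```shell
#!/bin/sh
# Sketch: extract the raw reallocated/pending sector counts from
# smartctl-style SMART attribute lines. The sample below is canned
# data, an assumption, so the parsing can be shown without the disc.
sample='  5 Reallocated_Sector_Ct   0x0033   089   089   036    Pre-fail  Always       -       412
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       27'

# The raw value is the last field on each attribute line.
realloc=$(printf '%s\n' "$sample" | awk '/Reallocated_Sector_Ct/ {print $NF}')
pending=$(printf '%s\n' "$sample" | awk '/Current_Pending_Sector/ {print $NF}')

echo "reallocated=$realloc pending=$pending"
if [ "$realloc" -gt 0 ] || [ "$pending" -gt 0 ]; then
    echo "disc is remapping sectors - treat it as expendable"
fi
```

One correction on the mkfs point: a plain mkfs won't skip anything by
itself; for ext2/ext3 it's mke2fs's -c option (which runs badblocks
underneath) that finds and marks the bad regions.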

Mike
-- 
p="p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
Oppose globalization and One World Governments like the UN.
This message made from 100% recycled bits.
You have found the bank of Larn.
I speak only for myself, and I am unanimous in that!


