Quote:

The standard way is to have a smallish RAID1 partition for /boot (or the rootfs), and a RAID5 partition for the rest. On each drive. Just like in the article I linked earlier.


That article also suggested going with a rather complex LVM setup on top of the RAID5, and then a complex partitioning setup on top of that. This also implies having the swap partition there, on the LVM on the RAID5. One plus with this is that swap won't go bonkers if a disk fails, thus avoiding another source of system crashes.
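For reference, creating the two arrays with mdadm might look roughly like this. The device names and the three-drive layout (a small first partition and a large second partition on each drive) are just my assumptions; adjust to suit your own disks:

  # mirror the small first partitions for /boot
  mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
  # stripe-with-parity the large second partitions for everything else
  mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2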

EDIT: LVM (Logical Volume Manager) is just a way of grouping things together into a new virtual "block device (disk)", which can then be partitioned etc., just like a real drive (and more). The reason you need it here is that the RAID drivers in Linux don't currently support being partitioned themselves. So by assigning LVM to manage the entire RAID, one can then partition the LVM instead of the lower-level RAID, and it works. Hack. /EDIT.
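To illustrate what "assigning LVM to manage the entire RAID" means in practice (here /dev/md1 is the RAID5 array from above, and the volume group name vg0 is just a placeholder):

  pvcreate /dev/md1        # mark the whole RAID5 array as an LVM physical volume
  vgcreate vg0 /dev/md1    # create a volume group on it; the "partitions" are then carved out of vg0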

Personally, I'd partition each drive identically: smallish RAID1 partition, largish RAID5 partition. Put /boot on the RAID1, use LVM on the RAID5. Then partition the LVM with a smallish swap area, a pair of small/medium O/S partitions, and a massive data partition.
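A rough sketch of carving that out of the volume group, with made-up names and sizes (pick your own):

  lvcreate -L 2G  -n swap vg0
  lvcreate -L 10G -n os1  vg0
  lvcreate -L 10G -n os2  vg0
  lvcreate -l 100%FREE -n data vg0   # whatever space is left, for the big data area

  mkswap    /dev/vg0/swap
  mkfs.ext3 /dev/vg0/os1             # likewise for os2 and data
  mkfs.ext3 /dev/md0                 # the RAID1 /boot partition itself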

With the two O/S partitions, you can experiment with a newer O/S version in the future, and yet still have your original working O/S version intact until all is well with the new one.

Cheers


Edited by mlord (14/04/2007 16:10)