Squeezebox Setup

Reinstalling the OS

One of the reasons for the slightly more complex layout of this setup is that, by keeping the OS on one drive and the data separate, you can simply ditch the OS, reinstall, and re-attach the data. With a normal data partition you could just remount the partition (either with something like mount /dev/sdb1 /mnt, or by editing fstab). However, as both RAID and LVM are being used here, it is a little more complex than normal. The first thing to do, before anything else, is to make sure you have backups of the LVM config and the RAID setup. To back up the LVM config do:

[root@tranquilpc ~]# vgcfgbackup
Volume group "VolGroup00" successfully backed up.
Volume group "raiddata" successfully backed up.
[root@tranquilpc ~]# ls /etc/lvm/backup/
raiddata  VolGroup00

Which creates two files in the backup directory. VolGroup00 is the main system drive, so this can be ignored, but raiddata is the LVM config for the RAID disks. Incidentally, this is the reason for having two volume groups rather than one volume group with two logical volumes – it makes this bit a lot easier. Take these two files and store them on another computer. The RAID config is in /etc/mdadm/mdadm.conf – this should be backed up to a separate computer too.

[root@tranquilpc ~]# ls /etc/mdadm/
mdadm.conf
[root@tranquilpc ~]# cat /etc/mdadm/mdadm.conf
DEVICE partitions
ARRAY /dev/md0 level=raid5 num-devices=2 spares=1 UUID=5c713030:1e6b4f73:26ef991b:66021bdf
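
As a concrete example, one way to get both backups onto another machine is scp. The hostname and destination path below are just placeholders for wherever you keep your backups:

[root@tranquilpc ~]# scp /etc/lvm/backup/raiddata /etc/mdadm/mdadm.conf me@backuphost:squeezebox-backup/   # placeholder host and path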

At this point you can power down, unplug the two RAID disks (the easiest way of preventing mistakes!) and install the new OS.  Once that is up and running we can start to reconnect the RAID array.  First the RAID config – for mdadm.conf you will probably need to create the mdadm directory too:

[root@tranquilpc ~]# cd /etc
[root@tranquilpc etc]# mkdir mdadm
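
Then copy the backed-up mdadm.conf into it. If you stored it on another machine as above, something along these lines should do it (again, the host and path are just the placeholders from earlier):

[root@tranquilpc etc]# scp me@backuphost:squeezebox-backup/mdadm.conf /etc/mdadm/mdadm.conf   # placeholder host and path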

If you didn't back the file up you should be able to re-create it by doing:

[root@tranquilpc etc]# echo "DEVICE partitions" > /etc/mdadm/mdadm.conf
[root@tranquilpc etc]# mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Next we need to assemble the array. The -A flag puts mdadm in assemble mode, and -s tells it to scan the config file for the arrays to assemble:

[root@tranquilpc ~]# mdadm -A -s

You can see if all looks good by checking the /proc/mdstat file.

[root@tranquilpc ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[1] sdb1[0]
976759936 blocks level 5, 64k chunk, algorithm 2 [2/2] [UU]

unused devices: <none>
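
If you want more detail than /proc/mdstat gives you, mdadm can also report on the array directly:

[root@tranquilpc ~]# mdadm --detail /dev/md0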

Before starting on the LVM restore we should back up what we have first:

[root@tranquilpc ~]# vgcfgbackup
Volume group "VolGroup00" successfully backed up.

For the LVM config, first copy the volume group config for your RAID disks back into /etc/lvm/backup.  You should only copy the file for your RAID disks (raiddata in my case), not the system volume group (if you restored that you'd break your new install).  If you have a name conflict (for example your new system is on VolGroup00, as was your old RAID volume group) then you can rename the file to something else, and edit it to change all the references to the new name.  If you don't have the old config backed up you can get it back with:

[root@tranquilpc ~]# dd if=/dev/md0 bs=512 count=255 skip=1 of=/tmp/md0-raw-start

This will create a file in /tmp called md0-raw-start – it will contain a lot of binary rubbish, but you should be able to edit it down to recover the LVM config.
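
One way to sift the readable text out of that dump is strings, which just pulls out the printable chunks; you'll still need to trim the result down to the most recent metadata block by hand:

[root@tranquilpc ~]# strings /tmp/md0-raw-start > /tmp/md0-metadata.txt

However you got hold of it, the raiddata config file needs to end up back in /etc/lvm/backup. If it's sitting on the backup machine from earlier, something like this will fetch it (placeholder host and path again):

[root@tranquilpc ~]# scp me@backuphost:squeezebox-backup/raiddata /etc/lvm/backup/raiddata   # placeholder host and path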

Then to restore the lvm config we use the vgcfgrestore command:

[root@tranquilpc backup]# vgcfgrestore -f raiddata raiddata
Restored volume group raiddata

We can check it’s worked with vgscan:

[root@tranquilpc backup]# vgscan
Reading all physical volumes.  This may take a while...
Found volume group "VolGroup00" using metadata type lvm2
Found volume group "raiddata" using metadata type lvm2

And also vgdisplay:

[root@tranquilpc backup]# vgdisplay
--- Volume group ---
VG Name               VolGroup00
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  3
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                2
Open LV               2
Max PV                0
Cur PV                1
Act PV                1
VG Size               37.16 GB
PE Size               32.00 MB
Total PE              1189
Alloc PE / Size       1189 / 37.16 GB
Free  PE / Size       0 / 0
VG UUID               k7tifx-TZAd-H2zx-rey8-dlq1-Rp3u-DgMkbf

--- Volume group ---
VG Name               raiddata
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  3
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               1
Max PV                0
Cur PV                1
Act PV                1
VG Size               931.51 GB
PE Size               4.00 MB
Total PE              238466
Alloc PE / Size       238466 / 931.51 GB
Free  PE / Size       0 / 0
VG UUID               Xy1xg2-cGjS-yC1A-5nkJ-w1bH-w0KO-z1zAby

We can check all the physical volumes are OK too:

[root@tranquilpc backup]# pvscan
PV /dev/sda2   VG VolGroup00   lvm2 [37.16 GB / 0    free]
PV /dev/md0    VG raiddata     lvm2 [931.51 GB / 0    free]
Total: 2 [968.66 GB] / in use: 2 [968.66 GB] / in no VG: 0 [0   ]

Next we need to make the volume group active:

[root@tranquilpc backup]# vgchange raiddata -a y
1 logical volume(s) in volume group "raiddata" now active

Which means the logical volumes should now be visible as well:

[root@tranquilpc backup]# lvdisplay
--- Logical volume ---
LV Name                /dev/VolGroup00/LogVol00
VG Name                VolGroup00
LV UUID                OJKpvD-sDx5-8f13-9xYs-1fYk-00H5-efeXvX
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                33.22 GB
Current LE             1063
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:0

--- Logical volume ---
LV Name                /dev/VolGroup00/LogVol01
VG Name                VolGroup00
LV UUID                K5fmWK-Guv0-3ePh-bgcX-wOBo-Y3jE-1MjrXL
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                3.94 GB
Current LE             126
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:1

--- Logical volume ---
LV Name                /dev/raiddata/lvm0
VG Name                raiddata
LV UUID                4IzYlz-91cz-lmjN-svaG-l2nP-MCxK-tCk7I3
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                931.51 GB
Current LE             238466
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:2

These can now be mounted following the instructions in the mounting the RAID array section of this site.
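
For reference, the quick version looks something like this (the mount point is just an example, and it assumes the filesystem on the logical volume is intact):

[root@tranquilpc ~]# mkdir -p /mnt/raid   # example mount point
[root@tranquilpc ~]# mount /dev/raiddata/lvm0 /mnt/raid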

Finally – this page borrows heavily from a Linux Journal article by Richard Bullington-McGuire – many thanks to him.  If you get stuck it's definitely worth having a read of his article too.

