Squeezebox Setup

LVMing the RAID

Now that we have the RAID array we could just format it directly (these days ext3 can be resized without too much effort, so that may be worth considering), but to give as much flexibility as possible I’m going to put LVM on top of the array. So at the start we have /dev/md0 with no partitions or data on it. First check it’s all OK:
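That flexibility is what pays off later: if the array grows, or another physical volume is added to the volume group, the logical volume and filesystem can be extended in place. A rough sketch only, assuming the raiddata/lvm0 names used in this walkthrough and that the volume group actually has free extents:

```shell
# Sketch only -- assumes the VG has free extents to grow into
lvextend -l +100%FREE /dev/raiddata/lvm0   # grow the LV into all remaining free space
resize2fs /dev/raiddata/lvm0               # grow ext3 to match the new LV size
```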

[root@tranquilpc etc]# mdadm --detail /dev/md0
Version : 00.90.03
Creation Time : Mon Jun 8 18:29:25 2009
Raid Level : raid5
Array Size : 976759936 (931.51 GiB 1000.20 GB)
Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Mon Jun 8 19:24:38 2009
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 64K

Rebuild Status : 13% complete

UUID : 5c713030:1e6b4f73:26ef991b:66021bdf
Events : 0.14

Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
2 8 33 1 spare rebuilding /dev/sdc1
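While the rebuild runs you can keep an eye on progress without re-running mdadm; /proc/mdstat gives a one-line summary per array:

```shell
cat /proc/mdstat               # shows rebuild percentage and estimated finish time
watch -n 10 cat /proc/mdstat   # or keep the view refreshing every 10 seconds
```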

So my array is still rebuilding, but that’s fine for formatting and so on. Let’s create the physical volume and volume group:

[root@tranquilpc etc]# pvcreate /dev/md0
Physical volume "/dev/md0" successfully created

[root@tranquilpc etc]# vgcreate raiddata /dev/md0
Volume group "raiddata" successfully created
[root@tranquilpc etc]# vgdisplay raiddata
--- Volume group ---
VG Name raiddata
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 931.51 GB
PE Size 4.00 MB
Total PE 238466
Alloc PE / Size 0 / 0
Free PE / Size 238466 / 931.51 GB
VG UUID Xy1xg2-cGjS-yC1A-5nkJ-w1bH-w0KO-z1zAby

The first command creates the physical volume, the second the volume group. Finally, vgdisplay shows what we’ve got. The important number in that output is the 238466 free PEs (physical extents). We can feed this number into the logical volume creation to ensure we use all the available space (more recent versions of LVM also accept lvcreate -l 100%FREE, which avoids the manual count):

[root@tranquilpc etc]# lvcreate -l 238466 raiddata -n lvm0
Logical volume "lvm0" created

[root@tranquilpc etc]# lvdisplay raiddata
--- Logical volume ---
LV Name /dev/raiddata/lvm0
VG Name raiddata
LV UUID 4IzYlz-91cz-lmjN-svaG-l2nP-MCxK-tCk7I3
LV Write Access read/write
LV Status available
# open 0
LV Size 931.51 GB
Current LE 238466
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
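The numbers tie up: 238466 extents at 4 MB each is exactly the 931.51 GB reported for both the VG and the LV. A quick sanity check of that arithmetic (plain awk, nothing system-specific):

```shell
# 238466 physical extents x 4 MB each, converted to GB (1 GB = 1024 MB here)
awk 'BEGIN { printf "%.2f GB\n", 238466 * 4 / 1024 }'
# -> 931.51 GB
```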

Finally we need to put a filesystem on there. As it’s a data drive we don’t need to reserve any space for the superuser (the default is 5%); the -m 0 flag sets that reservation to zero.
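To put that 5% in context, here’s what the default reservation would have cost on this filesystem (block count and block size taken from the mke2fs output that follows):

```shell
# 244189184 blocks of 4096 bytes; the default reserve is 5% of that
awk 'BEGIN {
    bytes = 244189184 * 4096
    printf "%.1f GiB reserved by default\n", bytes * 0.05 / (1024 ^ 3)
}'
# -> 46.6 GiB reserved by default
```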

[root@tranquilpc etc]# mkfs.ext3 -m 0 /dev/raiddata/lvm0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
122109952 inodes, 244189184 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
7453 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848

Writing inode tables: 290/7453
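Once the format finishes, the volume just needs mounting — something like mkdir -p /mnt/raid followed by mount /dev/raiddata/lvm0 /mnt/raid — and an /etc/fstab entry along these lines makes it permanent (the /mnt/raid mount point is my assumption; pick whatever suits):

```
# /etc/fstab -- example entry; the mount point is an assumption
/dev/raiddata/lvm0  /mnt/raid  ext3  defaults  0  2
```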

