On CURRENT back in February, some changes were committed to gvinum (the GEOM frontend for vinum) that made the interface not suck balls. Before, you had to weave some weird magic configuration file and sacrifice your first male child:
$ gvinum printconfig
drive gvinumdrive3 device /dev/ad8
drive gvinumdrive2 device /dev/ad6
drive gvinumdrive1 device /dev/ad3
drive gvinumdrive0 device /dev/ad1
volume gv0
plex name gv0.p0 org raid5 512s vol gv0
sd name gv0.p0.s3 drive gvinumdrive3 len 6s driveoffset 265s plex gv0.p0 plexoffset 1536s
sd name gv0.p0.s2 drive gvinumdrive2 len 6s driveoffset 265s plex gv0.p0 plexoffset 1024s
sd name gv0.p0.s1 drive gvinumdrive1 len 6s driveoffset 265s plex gv0.p0 plexoffset 512s
sd name gv0.p0.s0 drive gvinumdrive0 len 6s driveoffset 265s plex gv0.p0 plexoffset 0s
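For the record, the old workflow was roughly this: write that kind of description into a file by hand and feed it to gvinum create. A minimal sketch, assuming a file called raid5.conf and the same four drives (the stripe size and the "length 0" shorthand for "use the whole drive" are going off the vinum docs, so treat the details as illustrative):

$ cat raid5.conf
drive d0 device /dev/ad1
drive d1 device /dev/ad3
drive d2 device /dev/ad6
drive d3 device /dev/ad8
volume gv0
  plex org raid5 512s
    sd length 0 drive d0
    sd length 0 drive d1
    sd length 0 drive d2
    sd length 0 drive d3
$ gvinum create raid5.conf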
But in the new release of gvinum, instead of writing that pig-disgusting configuration file and passing it to gvinum create, you can simply invoke it with
$ gvinum raid5 -n gv0 ad1 ad3 ad6 ad8
and boom, there’s your new volume to run newfs on or whatever.
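In case it's not obvious, the volume shows up under /dev/gvinum/, so finishing the job is just something like this (mount point is whatever you like):

$ newfs /dev/gvinum/gv0
$ mount /dev/gvinum/gv0 /mnt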
For some reason though, the RAID5 volume is horribly slow for me:
$ dd if=/dev/zero of=test bs=1m count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 9.002813 secs (11647204 bytes/sec)
11 MB/s write speed is paltry compared to the 33 MB/s that both my laptop's single drive and the gmirror array in the same machine get. This might be a problem with the shitty on-board Intel controller I'm using, or it might be a problem with the disks (I'm using different disks for the RAID1 and RAID5 volumes), though read speeds on the RAID5 are fine. I've only been eyeballing speeds with systat, which isn't really accurate for benchmarking; I'll have to tinker with it more later.
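If I want numbers that aren't eyeballed from systat, something like this should give a rough comparison; diskinfo -t runs a simple transfer benchmark against a raw disk, and the dd line is just a sequential read off the volume (device names as above):

$ diskinfo -t /dev/ad1
$ dd if=/dev/gvinum/gv0 of=/dev/null bs=1m count=1000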
I figure I can test the disks by pulling a drive off the RAID5, snapping the RAID1, and doing tests on one disk of each type. I don't think there's a way to simulate a disk failure, though: I first tried gvinum detach gv0.p0.s0, but it just spat an error code back at me. Next I took one of the disks and overwrote the partition table, thinking gvinum might be using it. It doesn't :D
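Two things I still want to try, going off gvinum(8); no idea yet whether either actually works here:

$ gvinum detach -f gv0.p0.s0        # force the detach instead of letting it refuse
$ gvinum setstate down gv0.p0.s0    # or just mark the subdisk down by hand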
So now I’m rebuilding the volume with gvinum rebuildparity gv0.p0, getting all kinds of parity errors to the syslog etc; we’ll have to see what happens when it finishes running. I suspect that the only way for me to break the volume is to physically remove the disk or something which is bleh :(