Scientific Linux Forum.org




> RAID5 on LVM, Problem creating RAID 5 LV
LinuxLearner
 Posted: Jan 12 2014, 08:39 PM


SLF Newbie


Group: Members
Posts: 4
Member No.: 2881
Joined: 8-January 14

I'm new to Linux systems administration and am preparing to rebuild my old Intel Core2Quad Q6600 machine with SL6 on 3x WD 750 GB drives.

I am trying to prepare by determining the optimum storage settings: splitting the disks into partitions and using different RAID modes for different areas of the disks. The idea is to configure the disks with various block sizes, from 4 KiB up to the largest available in each software layer (RAID/LVM/FS), and to run tests with dd, fio, or similar benchmarking software to determine the optimum settings.
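
For example, once a candidate layout is built and a filesystem is mounted on it, I expect the tests to look something like this (the paths, sizes, and job parameters here are placeholders, not final values):

=====
# Sequential write throughput with dd (placeholder path and size):
dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=1024 oflag=direct

# Random read with fio (placeholder job parameters):
fio --name=randread --filename=/mnt/test/fiofile --size=1G \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio --runtime=60
=====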

As this is a training exercise for myself and not a business server, I am not concerned about splitting different aspects of the operating system, applications, and data over different RAIDed disk sets. This machine has only three physical disks, and as far as I can tell it is difficult to structure these creatively.

I am doing my best to work within the limitations of this machine, so I have provisionally decided to structure the three disks as follows:

GPT (all three disks; 1 MiB offset before the first partition)

Part  Size    RAID  Disks  Capacity  Purpose
1     1 GiB   None  1,2,3  3x1 GiB   /boot (cloned to disks 2,3)
2     3 GiB   0     1,2,3  9 GiB     swap/hibernation
3     16 GiB  5     1,2,3  32 GiB    System 1 (SL7.x)
4     16 GiB  5     1,2,3  32 GiB    System 2 (SL6.x)
5+    as required for storage of files, data, virtual machines, etc.
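
For repeatability, I intend to script the partitioning rather than do it interactively; a rough sketch for one disk (offsets derived from the sizes above, still subject to change) would be:

=====
# Rough sketch for one of the three disks (repeat for sdb, sdc, sdd):
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart boot    1MiB     1025MiB    # partition 1: /boot
parted -s /dev/sdb mkpart swap    1025MiB  4097MiB    # partition 2: swap/hibernation
parted -s /dev/sdb mkpart system1 4097MiB  20481MiB   # partition 3: System 1
parted -s /dev/sdb mkpart system2 20481MiB 36865MiB   # partition 4: System 2
parted -s /dev/sdb set 3 lvm on                       # mark the LVM partitions
parted -s /dev/sdb set 4 lvm on
=====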

I have learned that LVM is capable of RAID5, although there is very little information on it beyond the manual pages. I would prefer to use LVM RAID5 rather than mdraid RAID5 with LVM on top, as this reduces the need for alignment.
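
For comparison, the mdraid-plus-LVM approach I would rather avoid looks roughly like this (device names purely illustrative):

=====
# Build the md array first, then layer LVM on top of it:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb3 /dev/sdc3 /dev/sdd3
pvcreate /dev/md0
vgcreate vg_system1 /dev/md0
lvcreate -l100%FREE -n lv_system1 vg_system1
=====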

My initial attempts to build test logical volumes on the partitions reserved for System 1, using an SL6.4 Live DVD, have proven less than successful.

Considering that the Live DVD might have missing components or unpatched issues, I have simulated the same setup on a virtual machine with a full installation of SL6.4, using 3x 16 GB disks. The result is the same.
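
(The three virtual disks were created and attached along these lines; the VM and controller names are illustrative:)

=====
# Create a 16 GB virtual disk and attach it to the test VM (x3):
VBoxManage createhd --filename disk1.vdi --size 16384
VBoxManage storageattach "SL64test" --storagectl "SATA" \
    --port 1 --device 0 --type hdd --medium disk1.vdi
=====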

The output from my attempt is as follows:

=====================================================
[root@localhost ~]# parted
GNU Parted 2.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) select /dev/sdb
Using /dev/sdb
(parted) mklabel gpt
(parted) mkpart primary 1 16384
(parted) set 1 lvm on
(parted) toggle 1 lvm
(parted) p
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdb: 16384MiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 1.00MiB 16384MiB 16383MiB primary

(parted) select /dev/sdc
Using /dev/sdc
(parted) mklabel gpt
(parted) mkpart primary 1 16384
(parted) set 1 lvm on
(parted) toggle 1 lvm
(parted) p
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 16384MiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 1.00MiB 16384MiB 16383MiB primary

(parted) select /dev/sdd
Using /dev/sdd
(parted) mklabel gpt
(parted) mkpart primary 1 16384
(parted) set 1 lvm on
(parted) toggle 1 lvm
(parted) p
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdd: 16384MiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number Start End Size File system Name Flags
1 1.00MiB 16384MiB 16383MiB primary

(parted) q
Information: You may need to update /etc/fstab.

[root@localhost ~]# pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1
Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdc1" successfully created
Physical volume "/dev/sdd1" successfully created

[root@localhost ~]# vgcreate vg_system1 -s4k /dev/sdb1 /dev/sdc1 /dev/sdd1
Volume group "vg_system1" successfully created

[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup 1 2 0 wz--n- 11.51g 0
vg_system1 3 0 0 wz--n- 47.99g 47.99g

[root@localhost ~]# lvcreate --type raid5 -i2 -I4k -l66%FREE -n lv_system1 vg_system1
Using reduced mirror region size of 16 sectors
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 0: Input/output error
/dev/vg_system1/lv_system1: write failed after 0 of 4096 at 0: Input/output error
Logical volume "lv_system1" created
[root@localhost ~]#
=====================================================
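
Despite the errors, the LV does appear to exist; I have been inspecting it afterwards with commands along these lines (output omitted for brevity):

=====
# Show the RAID5 LV, its hidden sub-LVs, sync progress and backing devices:
lvs -a -o name,attr,copy_percent,devices vg_system1

# Query the device-mapper raid target directly:
dmsetup status vg_system1-lv_system1
=====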

I really have no clue why I am getting these errors when creating the logical volume:

Using reduced mirror region size of 16 sectors
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 0: Input/output error
/dev/vg_system1/lv_system1: write failed after 0 of 4096 at 0: Input/output error

This seems to happen regardless of the size specified for the Logical Volume or the block size used.

Does anyone have any suggestions as to what I am doing wrong, or whether anything is wrong with LVM itself? I have had no luck googling the issue.

If I cannot get this to work, I will have to fall back to using mdraid, which is far less flexible.

Thanks and Regards

LinuxLearner
Bluejay
 Posted: Jan 13 2014, 02:31 PM


SLF Member

Group: Members
Posts: 62
Member No.: 42
Joined: 13-April 11

QUOTE (LinuxLearner @ Jan 12 2014, 03:39 PM)
The output from my attempt is as follows:


Your log shows you partitioning /dev/sdc twice and /dev/sdd once. Did /dev/sdb get partitioned as required?

LinuxLearner
 Posted: Jan 13 2014, 06:47 PM


SLF Newbie


Group: Members
Posts: 4
Member No.: 2881
Joined: 8-January 14

QUOTE (Bluejay @ Jan 13 2014, 02:31 PM)
QUOTE (LinuxLearner @ Jan 12 2014, 03:39 PM)
The output from my attempt is as follows:


Your log shows you partitioning /dev/sdc twice and /dev/sdd once. Did /dev/sdb get partitioned as required?



My fault,

I copied and pasted the wrong sections. I can confirm that sdb, sdc, and sdd are all prepared in exactly the same way.

Further, I have noted that, as far as I can tell, while the faulty logical volume exists, most pv/vg/lv commands display similar errors while still apparently completing their respective functions.

If I drop the LV, the errors cease.
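
(By "drop" I mean simply removing the volume, e.g.:)

=====
# Removing the RAID5 LV makes the errors stop:
lvremove vg_system1/lv_system1
=====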

======================================================
[root@localhost ~]# pvs
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 34011873280: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 34011938816: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 0: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 4096: Input/output error
PV VG Fmt Attr PSize PFree
/dev/sda2 VolGroup lvm2 a-- 11.51g 0
/dev/sdb1 vg_system1 lvm2 a-- 16.00g 163.82m
/dev/sdc1 vg_system1 lvm2 a-- 16.00g 163.82m
/dev/sdd1 vg_system1 lvm2 a-- 16.00g 163.82m
[root@localhost ~]# pvscan
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 34011873280: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 34011938816: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 0: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 4096: Input/output error
PV /dev/sdb1 VG vg_system1 lvm2 [16.00 GiB / 163.82 MiB free]
PV /dev/sdc1 VG vg_system1 lvm2 [16.00 GiB / 163.82 MiB free]
PV /dev/sdd1 VG vg_system1 lvm2 [16.00 GiB / 163.82 MiB free]
PV /dev/sda2 VG VolGroup lvm2 [11.51 GiB / 0 free]
Total: 4 [59.50 GiB] / in use: 4 [59.50 GiB] / in no VG: 0 [0 ]
[root@localhost ~]#
[root@localhost ~]# vgs
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 34011873280: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 34011938816: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 0: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 4096: Input/output error
VG #PV #LV #SN Attr VSize VFree
VolGroup 1 2 0 wz--n- 11.51g 0
vg_system1 3 1 0 wz--n- 47.99g 491.45m
[root@localhost ~]# vgscan
Reading all physical volumes. This may take a while...
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 34011873280: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 34011938816: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 0: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 4096: Input/output error
Found volume group "vg_system1" using metadata type lvm2
Found volume group "VolGroup" using metadata type lvm2
[root@localhost ~]#
[root@localhost ~]# lvs
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 34011873280: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 34011938816: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 0: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 4096: Input/output error
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
lv_root VolGroup -wi-ao--- 9.51g
lv_swap VolGroup -wi-ao--- 2.00g
lv_system1 vg_system1 rwi-a-r-- 31.68g 100.00
[root@localhost ~]# lvscan
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 34011873280: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 34011938816: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 0: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 4096: Input/output error
ACTIVE '/dev/vg_system1/lv_system1' [31.68 GiB] inherit
ACTIVE '/dev/VolGroup/lv_root' [9.51 GiB] inherit
ACTIVE '/dev/VolGroup/lv_swap' [2.00 GiB] inherit
[root@localhost ~]#
======================================================

I note that lvs reports different attributes (rwi-a-r--) for the RAID5 LV than for the standard LVs created as part of the installation process (-wi-ao---).
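
As I understand it, the leading "r" in the attribute string just marks the volume type as raid, so that difference at least looks expected. The segment type can also be confirmed explicitly:

=====
# Show the segment type alongside the attributes:
lvs -o lv_name,lv_attr,segtype
=====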

Again, sorry for the duff output.

Regards

LinuxLearner
LinuxLearner
 Posted: Jan 23 2014, 10:27 PM


SLF Newbie


Group: Members
Posts: 4
Member No.: 2881
Joined: 8-January 14

Following my previous post "RAID5 on LVM", in the absence of any guidance, and as I am getting desperate to resolve this, I have attempted to make some progress on my own.

I have downloaded the SL6.5-rc1 Live DVD and booted from it to determine whether its more up-to-date LVM2 resolves any of the issues seen with both the SL6.4 Live DVD and a (virtual) system built from the standard SL6.4 installation media.
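
For reference, the LVM version shipped on each medium can be checked from the running system with:

=====
# Report the LVM2 tool, library and driver versions in use:
lvm version
=====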

To recap, the issue is the errors displayed on creation of a RAID5 LV, and on subsequent use of most pv/vg/lv commands while one or more RAID5 LVs exist on the system:

creation:
=====
[root@localhost ~]# lvcreate --type raid5 -i2 -I4k -l66%FREE -n lv_system1 vg_system1
Using reduced mirror region size of 16 sectors
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 0: Input/output error
/dev/vg_system1/lv_system1: write failed after 0 of 4096 at 0: Input/output error
Logical volume "lv_system1" created
=====

commands (same errors on most pv/vg/lv commands):
=====
[root@localhost ~]# lvscan
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 34011873280: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 34011938816: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 0: Input/output error
/dev/vg_system1/lv_system1: read failed after 0 of 4096 at 4096: Input/output error
ACTIVE '/dev/vg_system1/lv_system1' [31.68 GiB] inherit
ACTIVE '/dev/VolGroup/lv_root' [9.51 GiB] inherit
ACTIVE '/dev/VolGroup/lv_swap' [2.00 GiB] inherit
=====

I can confirm that, as far as I can tell, executing commands in the presence of a RAID5 LV no longer generates the above errors in SL6.5.

Unfortunately, creating a RAID5 LV still generates the same messages in SL6.5 as in SL6.4.

As a consequence, in the absence of SL6.6 or SL7.0, I am forced to assume that it may be necessary to upgrade the LVM2 package from the lvm2-2.02.100-8 shipped with SL6.5 to a later version, such as the lvm2-2.02.103-6 present in the RHEL7 Beta, or the latest upstream version, which I understand is lvm2-2.02.105.

I have tried to decode the information on the LVM2 website, but as a layperson I haven't the first clue how or where the issues I experienced in SL6.4 were partially resolved in the updates applied to SL6.5, or which version might contain the remaining fixes, should the issue have been resolved at some point.

I therefore assume that, to potentially fix this issue, I need to upgrade to a newer version of LVM2, and as such need to locate suitable RPM repositories with newer versions of LVM2 (including the associated dependencies).

Does anyone have any advice on how best to approach this? There are no updates available in the SL repositories as far as I can tell, so I do not know whether it is possible or safe to attach to other repositories, such as RHEL6.5 or the RHEL7 Beta, and update from them.
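
So far I have only got as far as confirming what is installed and what the configured repositories can offer, along these lines:

=====
# Currently installed package:
rpm -q lvm2

# All versions the configured repositories can provide:
yum --showduplicates list available lvm2
=====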

At present I am using the SL6.4/SL6.5-rc1 Live DVDs to test my system with LVM RAID, to try to determine the best sizings for PV/VG/LV and RAID prior to installing SL6.4 or SL6.5, and hopefully soon SL7.0.

I look forward to being able to properly evaluate SL in my first installation with the minimum of delay and hope these issues will not be a common occurrence!

Thank you in advance for any assistance you may be able to provide.
redman
 Posted: Jan 24 2014, 03:47 PM


Retired SLF Administrator

Group: Admins
Posts: 1276
Member No.: 2
Joined: 8-April 11

QUOTE (LinuxLearner @ Jan 24 2014, 12:27 AM)
Following my previous post "RAID5 on LVM", in the absence of any guidance, and as I am getting desperate to resolve this, I have attempted to make some progress on my own.

Please keep things in a single thread; that makes it easier for others to read and follow. ;)

--------------------
"Sometimes the best helping hand you can give is a good, firm push."