* [linux-lvm] Can't mount LVM RAID5 drives
@ 2014-04-04 21:32 Ryan Davis
  2014-04-07 13:22 ` Peter Rajnoha
  0 siblings, 1 reply; 13+ messages in thread
From: Ryan Davis @ 2014-04-04 21:32 UTC (permalink / raw)
  To: linux-lvm


Hi,

 

I have 3 drives in a RAID 5 configuration as an LVM volume.  These disks
contain /home.

After performing a shutdown and moving the computer, I can't get the drives
to mount automatically.

 

This is all new to me, so I am not sure if this is an LVM issue, but any help
is appreciated.  lvs shows I have a mapped device present without tables.

When I try to mount the volume on /home, this happens:

 

[root@hobbes ~]# mount -t ext4 /dev/vg_data/lv_home /home
mount: wrong fs type, bad option, bad superblock on /dev/vg_data/lv_home,
       missing codepage or other error
       (could this be the IDE device where you in fact use
       ide-scsi so that sr0 or sda or so is needed?)
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

 

[root@hobbes ~]# dmesg | tail
EXT4-fs (dm-0): unable to read superblock

[root@hobbes ~]# fsck.ext4 -v /dev/sdc1
e4fsck 1.41.12 (17-May-2010)
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/sdc1

 

The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e4fsck with an alternate superblock:
    e4fsck -b 8193 <device>

 

[root@hobbes ~]# mke2fs -n /dev/sdc1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
488292352 inodes, 976555199 blocks
48827759 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
29803 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000, 214990848, 512000000, 550731776, 644972544
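
(Since /dev/sdc1 is the LVM physical volume rather than the filesystem itself,
I gather any superblock check should really be run against /dev/vg_data/lv_home.
If I understand the output above correctly, a read-only check against one of the
backup superblocks would look something like this, assuming the LV's table can
actually be loaded and using the 4096-byte block size shown above:)

e2fsck -n -b 32768 -B 4096 /dev/vg_data/lv_home   # -n: report only, no changes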

 

Is the superblock issue causing the LVM issues?

Thanks for any input you might have.

 

Here is some useful output from the system.

 

 

Here are some of the packages installed

# rpm -qa | egrep -i '(kernel|lvm2|device-mapper)'
device-mapper-1.02.67-2.el5
kernel-devel-2.6.18-348.18.1.el5
device-mapper-event-1.02.67-2.el5
kernel-headers-2.6.18-371.6.1.el5
lvm2-2.02.88-12.el5
device-mapper-1.02.67-2.el5
kernel-devel-2.6.18-371.3.1.el5
device-mapper-multipath-0.4.7-59.el5
kernel-2.6.18-371.6.1.el5
kernel-devel-2.6.18-371.6.1.el5
kernel-2.6.18-371.3.1.el5
kernel-2.6.18-348.18.1.el5
lvm2-cluster-2.02.88-9.el5_10.2

 

# uname -a
Linux hobbes 2.6.18-371.6.1.el5 #1 SMP Wed Mar 12 20:03:51 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux

 

 

LVM info:

# vgs
  VG      #PV #LV #SN Attr   VSize VFree
  vg_data   1   1   0 wz--n- 3.64T    0

# lvs
  LV      VG      Attr   LSize Origin Snap%  Move Log Copy%  Convert
  lv_home vg_data -wi-d- 3.64T

 

Looks like I have a mapped device present without a table (the 'd' attribute in lvs).
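
(If I understand the lvs attribute flags correctly, that trailing 'd' means the
device-mapper node exists but has no table loaded; dmsetup can confirm that, as
the output further below also shows:)

dmsetup table vg_data-lv_home   # empty output would mean no table is loaded
dmsetup info vg_data-lv_home    # look at the "Tables present" line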

 

# pvs
  PV         VG      Fmt  Attr PSize PFree
  /dev/sdc1  vg_data lvm2 a--  3.64T    0

# ls /dev/vg_data
lv_home

 

# vgscan --mknodes
  Reading all physical volumes.  This may take a while...
  Found volume group "vg_data" using metadata type lvm2

# pvscan
  PV /dev/sdc1   VG vg_data   lvm2 [3.64 TB / 0    free]
  Total: 1 [3.64 TB] / in use: 1 [3.64 TB] / in no VG: 0 [0   ]

# vgchange -ay
  1 logical volume(s) in volume group "vg_data" now active
  device-mapper: ioctl: error adding target to table

# dmesg | tail
device-mapper: table: device 8:33 too small for target
device-mapper: table: 253:0: linear: dm-linear: Device lookup failed
device-mapper: ioctl: error adding target to table
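
(The "too small for target" message makes me wonder whether the partition is now
smaller than what the LV expects. If I am reading the numbers right, the two sizes
could be compared like this, taking the extent count from vgdisplay below and using
4 MiB extents, i.e. 8192 512-byte sectors each:)

echo $((953668 * 8192))      # sectors the linear target wants to map
blockdev --getsz /dev/sdc1   # sectors the kernel currently sees on the partition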

 

 

# vgdisplay -v
  --- Volume group ---
  VG Name               vg_data
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3.64 TB
  PE Size               4.00 MB
  Total PE              953668
  Alloc PE / Size       953668 / 3.64 TB
  Free  PE / Size       0 / 0
  VG UUID               b2w9mR-hvSc-Rm0k-3yHL-iEgc-6nMq-uq69E1

  --- Logical volume ---
  LV Name                /dev/vg_data/lv_home
  VG Name                vg_data
  LV UUID                13TmTm-YqIo-6xIp-1NHf-AJTu-9ImE-SHwLz6
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                3.64 TB
  Current LE             953668
  Segments               1
  Allocation             inherit
  Read ahead sectors     16384
  - currently set to     256
  Block device           253:0

  --- Physical volumes ---
  PV Name               /dev/sdc1
  PV UUID               8D67bX-xg4s-QRy1-4E8n-XfiR-0C2r-Oi1Blf
  PV Status             allocatable
  Total PE / Free PE    953668 / 0

# lvscan
  ACTIVE            '/dev/vg_data/lv_home' [3.64 TB] inherit

 

  

# partprobe -s
/dev/sda: msdos partitions 1 2 3 4 <5 6 7 8 9 10>
/dev/sdb: msdos partitions 1 2 3 4 <5 6 7 8 9 10>
/dev/sdc: gpt partitions 1

 

 

# dmsetup table
vg_data-lv_home:

# dmsetup ls
vg_data-lv_home            (253, 0)
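
(For reference, if I understand the lvdisplay -m output below correctly, and
assuming the usual 192 KiB data offset, i.e. 384 sectors, at the start of the PV,
the table LVM is trying to load here should be a single linear target along these
lines, with a length of 953668 extents x 8192 sectors:)

vg_data-lv_home: 0 7812448256 linear 8:33 384   # 8:33 = /dev/sdc1, as in the dmesg error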

 

# lvdisplay -m
  --- Logical volume ---
  LV Name                /dev/vg_data/lv_home
  VG Name                vg_data
  LV UUID                13TmTm-YqIo-6xIp-1NHf-AJTu-9ImE-SHwLz6
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                3.64 TB
  Current LE             953668
  Segments               1
  Allocation             inherit
  Read ahead sectors     16384
  - currently set to     256
  Block device           253:0

  --- Segments ---
  Logical extent 0 to 953667:
    Type                  linear
    Physical volume            /dev/sdc1
    Physical extents           0 to 953667

 

Here is a link to the files output by lvmdump:
https://www.dropbox.com/sh/isg4fdmthiyoszh/tyYOfqllya

 

 



* Re: [linux-lvm] Can't mount LVM RAID5 drives
@ 2014-04-10 15:35 Ryan Davis
  2014-04-10 16:40 ` Peter Rajnoha
  0 siblings, 1 reply; 13+ messages in thread
From: Ryan Davis @ 2014-04-10 15:35 UTC (permalink / raw)
  To: Peter Rajnoha; +Cc: LVM general discussion and development

Thank you so much for the help.  I will work through your procedure, but first I want to try to back up some of the data.  I have about 80% backed up from a week or so ago.  How would one go about backing it up without being able to mount it?  Sorry if that is a dumb question.
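
(I am guessing the way to do that is to take a raw image of the partition onto
another disk with enough free space, something along these lines, where the target
path is just an example; please correct me if there is a better way:)

dd if=/dev/sdc1 of=/mnt/backup/sdc1.img bs=1M conv=noerror,sync   # keep going past read errors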

Thank you again for figuring this out.

Ryan

Peter Rajnoha <prajnoha@redhat.com> wrote:

>On 04/09/2014 06:07 PM, Ryan Davis wrote:
>> Thanks for explaining some of the aspects of LVs.  I have used them for
>> years, but it wasn't until they broke that I started reading more into them.
>> 
>> Here is the block device size of /dev/sdc1:
>> 
>> [root@hobbes ~]# blockdev --getsz /dev/sdc1
>> 
>> 7812441596
>> 
>> Here is the output of pvs -o pv_all /dev/sdc1
>> 
>> 
>> Fmt  PV UUID                                DevSize PV        PMdaFree PMdaSize 1st PE  PSize PFree Used  Attr PE     Alloc  PV Tags #PMda #PMdaUse
>> lvm2 8D67bX-xg4s-QRy1-4E8n-XfiR-0C2r-Oi1Blf   3.64T /dev/sdc1   92.50K  188.00K 192.00K 3.64T     0 3.64T a--  953668 953668              1        1
>> 
>
>So we have 953668 extents, each one having 4MiB, that's 7812448256 sectors
>(512-byte sectors). Then we need to add the PE start value which is 192 KiB,
>which means the original device size during creation of this PV was
>7812448256 + 384 = 7812448640 sectors.
>
>The difference from the current device size reported is:
>
>7812441596 - 7812448640 = -7044 sectors
>
>So the disk drive is about 3.44MiB shorter now for some reason.
>That's why the LV does not fit here.
>
>I can't tell you why this happened exactly. But that's what the
>sizes show.
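>
>(As a quick shell sanity check of the same arithmetic: 4 MiB extents are 8192
>512-byte sectors each, and the 192 KiB pe_start is 384 sectors:)
>
>  echo $((953668 * 8192 + 384))       # 7812448640 sectors the PV was created on
>  blockdev --getsz /dev/sdc1          # 7812441596 sectors the device reports now
>  echo $((7812448640 - 7812441596))   # 7044 sectors, roughly 3.44 MiB missing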
>
>What you can do here to fix this is to resize your filesystem/LV/PV accordingly.
>If we know that it's just one extent, we can do the following:
>
>- if it's possible, do a backup of the disk content!!!
>- double check it's really /dev/sdc1 still as during reboots,
>  it can be assigned a different name by kernel 
>
>1. you can check which LV is mapped onto the PV by issuing
>  pvdisplay --maps /dev/sdc1
>
>2. then deactivate one LV found on that PV (if there are more LVs mapped
>   on the PV, choose the LV that is mapped at the end of the disk since
>   it's more probable that the disk is shorter at the end when compared
>   to original size)
>  lvchange -an <the_LV_found_on_the_PV>
>
>3. then reduce the LV size by one extent (1 should be enough since the
>   PV is shorter by 3.44 MiB), *also* resizing the filesystem that's on
>   the LV!!! (this is the "-r"/"--resizefs" option of lvreduce, it's
>   very important!!!)
>   lvreduce -r -l -1 <the_LV_found_on_the_PV>
>
>4. then make the PV size in sync with the actual device size by calling:
>   pvresize /dev/sdc1
>
>5. now activate the LVs you deactivated in step 2.
>   lvchange -ay <the_LVs_found_on_the_PV>   
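>
>(In other words, for this particular setup, assuming lv_home is the only LV on
>/dev/sdc1, the sequence would look something like:)
>
>   pvdisplay --maps /dev/sdc1
>   lvchange -an /dev/vg_data/lv_home
>   lvreduce -r -l -1 /dev/vg_data/lv_home   # -r/--resizefs shrinks the ext4 fs too
>   pvresize /dev/sdc1
>   lvchange -ay /dev/vg_data/lv_home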
>
>Note that this will only work if it's possible to resize the filesystem
>and the LV data are not fully allocated! (in which case you probably
>lost some data already)
>
>Take this as a hint only and be very very careful when doing this
>as you may lose data when this is done incorrectly!
>
>I'm not taking responsibility for any data loss.
>
>If you have any more questions, feel free to ask.
>
>-- 
>Peter

