From: Peter Rajnoha <prajnoha@redhat.com>
To: rrdavis@ucdavis.edu
Cc: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] Can't mount LVM RAID5 drives
Date: Mon, 07 Apr 2014 15:22:49 +0200 [thread overview]
Message-ID: <5342A6A9.6060104@redhat.com> (raw)
In-Reply-To: <016e01cf504d$68e3b0d0$3aab1270$@edu>
On 04/04/2014 11:32 PM, Ryan Davis wrote:
> [root@hobbes ~]# mount -t ext4 /dev/vg_data/lv_home /home
> mount: wrong fs type, bad option, bad superblock on /dev/vg_data/lv_home,
>        missing codepage or other error
>        (could this be the IDE device where you in fact use
>        ide-scsi so that sr0 or sda or so is needed?)
>        In some cases useful info is found in syslog - try
>        dmesg | tail or so
>
> [root@hobbes ~]# dmesg | tail
> EXT4-fs (dm-0): unable to read superblock
>
That's because the device-mapper device representing the LV
doesn't have a proper table loaded (as you already mentioned
later). Such a device is unusable until a table is loaded...
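
This state is also visible in the lvs attribute string quoted below
("-wi-d-"): the fifth character is the LV state, and 'd' means
"mapped device present without tables". A minimal sketch of reading
that flag, using the attribute value from this thread:

```shell
# The 5th character of lv_attr is the LV state field;
# 'd' means "mapped device present without tables".
attr="-wi-d-"                        # value from `lvs` in this thread
state=$(printf '%s' "$attr" | cut -c5)
echo "$state"                        # -> d
```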
> [root@hobbes ~]# mke2fs -n /dev/sdc1
> mke2fs 1.39 (29-May-2006)
> Filesystem label=
> OS type: Linux
> Block size=4096 (log=2)
> Fragment size=4096 (log=2)
> 488292352 inodes, 976555199 blocks
> 48827759 blocks (5.00%) reserved for the super user
> First data block=0
> Maximum filesystem blocks=4294967296
> 29803 block groups
> 32768 blocks per group, 32768 fragments per group
> 16384 inodes per group
> Superblock backups stored on blocks:
>         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
>         2654208, 4096000, 7962624, 11239424, 20480000, 23887872,
>         71663616, 78675968, 102400000, 214990848, 512000000, 550731776,
>         644972544
>
Oh! Don't use the PV directly (/dev/sdc1) - always use the LV
on top of it (/dev/vg_data/lv_home), otherwise you'll destroy
the PV. (Fortunately you used "-n" here, so nothing was actually
written to the PV.)
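
For completeness, the same LV is exposed both as /dev/vg_data/lv_home
and as /dev/mapper/vg_data-lv_home; a small sketch of how that mapper
name is built (VG and LV names joined with '-', any hyphen inside
either name doubled):

```shell
# Build the /dev/mapper node name for a VG/LV pair; hyphens inside
# either name are doubled so the joining '-' stays unambiguous.
vg=vg_data; lv=lv_home
mapper="/dev/mapper/$(printf '%s' "$vg" | sed 's/-/--/g')-$(printf '%s' "$lv" | sed 's/-/--/g')"
echo "$mapper"   # -> /dev/mapper/vg_data-lv_home
```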
>
> Is the superblock issue causing the lvm issues?
>
> Thanks for any input you might have.
>
We need to see why the table load failed for the LV; that is
the exact problem here.
> LVM info:
>
> #vgs
>   VG      #PV #LV #SN Attr   VSize VFree
>   vg_data   1   1   0 wz--n- 3.64T    0
>
> #lvs
>   LV      VG      Attr   LSize Origin Snap%  Move Log Copy%  Convert
>   lv_home vg_data -wi-d- 3.64T
>
> Looks like I have a mapped device present without tables (d) attribute.
>
> #pvs
>   PV        VG      Fmt  Attr PSize PFree
>   /dev/sdc1 vg_data lvm2 a--  3.64T    0
>
> #ls /dev/vg_data
> lv_home
>
> #vgscan --mknodes
>   Reading all physical volumes. This may take a while...
>   Found volume group "vg_data" using metadata type lvm2
>
> #pvscan
>   PV /dev/sdc1   VG vg_data   lvm2 [3.64 TB / 0 free]
>   Total: 1 [3.64 TB] / in use: 1 [3.64 TB] / in no VG: 0 [0 ]
>
> #vgchange -ay
>   1 logical volume(s) in volume group "vg_data" now active
>   device-mapper: ioctl: error adding target to table
>
> #dmesg |tail
> device-mapper: table: device 8:33 too small for target
> device-mapper: table: 253:0: linear: dm-linear: Device lookup failed
> device-mapper: ioctl: error adding target to table
>
The 8:33 is /dev/sdc1, the PV in use.
What's the actual size of /dev/sdc1?
Try "blockdev --getsz /dev/sdc1" and see what it reports
(the size in 512-byte sectors).
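
As a rough cross-check (a sketch only, using numbers quoted above):
the mke2fs -n run sized /dev/sdc1 at 976555199 4 KiB blocks, and
blockdev --getsz reports 512-byte sectors, so that size corresponds
to the sector count below. If the device now reports noticeably fewer
sectors than the LV's table expects, you get exactly the
"too small for target" error:

```shell
# Convert the 4 KiB block count mke2fs reported for /dev/sdc1
# into 512-byte sectors, the unit `blockdev --getsz` uses.
blocks_4k=976555199
sectors=$((blocks_4k * 8))   # 4096 / 512 = 8 sectors per block
echo "$sectors"              # -> 7812441592
```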
--
Peter
Thread overview: 13+ messages
2014-04-04 21:32 [linux-lvm] Can't mount LVM RAID5 drives Ryan Davis
2014-04-07 13:22 ` Peter Rajnoha [this message]
2014-04-09 16:07 ` Ryan Davis
2014-04-10 14:10 ` Peter Rajnoha
2014-04-10 14:14 ` Peter Rajnoha
2014-04-18 18:23 ` Ryan Davis
2014-04-22 11:14 ` Peter Rajnoha
2014-04-22 18:43 ` Ryan Davis
2014-04-23 7:59 ` Zdenek Kabelac
2014-04-23 16:56 ` Ryan Davis
2014-04-24 9:38 ` Zdenek Kabelac