* [linux-lvm] LVM pretends it has more space than it actually has
From: Gijs <info@bsnw.nl> @ 2011-09-18 13:58 UTC
To: linux-lvm

Dear List,

After I ran into some trouble with my raid setup, I managed to get it
online again, but somehow LVM won't activate the physical volume that
was on it. Well, to be exact, it will activate the PV, but not all the
LVs on it. It activates two of the three LVs.

When I type in "lvchange -a y /dev/raid-5/data" to activate the
remaining LV, it returns the following errors:

  device-mapper: resume ioctl failed: Invalid argument
  Unable to resume raid--5-data (253:2)

Checking dmesg, it says the following:

  device-mapper: table: 253:2: md127 too small for target:
  start=5897914368, len=1908400128, dev_size=7806312448

I pretty much tried everything, and as a last resort I typed in the
following:

  pvresize -v --setphysicalvolumesize 3996831973376B /dev/md127

where 3996831973376 is the exact number of bytes available on the raid
array. However, this gave me the following output:

  Using physical volume(s) on command line
  Archiving volume group "raid-5" metadata (seqno 21).
  /dev/md127: Pretending size is 7806312448 not 7806314496 sectors.
  Resizing physical volume /dev/md127 from 952919 to 952918 extents.
  /dev/md127: cannot resize to 952918 extents as later ones are allocated.
  0 physical volume(s) resized / 1 physical volume(s) not resized

Now I wonder why LVM wants to pretend it has more space than it actually
has. When I subtract those sector counts from each other and convert the
difference to bytes, it turns out to be exactly the amount that is
missing (1 MB).

I'm running Fedora 15 from a rescue USB stick at the moment, but the
system itself was formatted/configured under Fedora 14. Could that be
the cause of the problem? If not, what other options do I have to fix
this? (without losing the data on the volume)

Kind regards,

Gijs
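A quick way to see the mismatch directly (a sketch; the device name is the
one from this thread, and the pvs output fields assume a reasonably recent
LVM2):

  blockdev --getsize64 /dev/md127                        # size of the md device in bytes
  pvs --units b -o pv_name,pv_size,dev_size /dev/md127   # PV size vs. device size as LVM records them

The two sector counts in the pvresize output differ by
7806314496 - 7806312448 = 2048 sectors of 512 bytes, i.e. exactly 1 MiB.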
* Re: [linux-lvm] LVM pretends it has more space than it actually has
From: Stuart D. Gathman @ 2011-09-19 0:21 UTC
To: LVM general discussion and development

On Sun, 18 Sep 2011, Gijs wrote:

> Now I wonder why LVM wants to pretend it has more space than it actually
> has. When I subtract those sector counts from each other and convert the
> difference to bytes, it turns out to be exactly the amount that is
> missing (1 MB).

This may not be related, but I had a problem with blockdev --getsize in
Fedora 14 returning the wrong size for an IDE drive on a USB-IDE/SATA
adapter. It returned the correct size for SATA drives on the USB adapter,
and for IDE drives connected to the internal IDE port. So I chalked this
up to weirdness on the (nearly obsolete) USB/IDE interface. But your
problem suddenly made me not so sure.

-- 
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc.  Phone: 703 591-0911  Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.
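For anyone wanting to run the same sanity check Stuart describes (the
device name here is a placeholder):

  blockdev --getsize   /dev/sdX    # size in 512-byte sectors
  blockdev --getsize64 /dev/sdX    # size in bytes; sectors * 512 should match this

If the two disagree, the kernel's idea of the device size is suspect before
LVM even enters the picture.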
* Re: [linux-lvm] LVM pretends it has more space than it actually has
From: Gijs @ 2011-09-19 17:40 UTC
To: LVM general discussion and development

Thanks for the reply, but I don't think the size reported by the blockdev
command is incorrect. At least not on my machine. I had already checked
that earlier and it looked fine.

On 19-9-2011 2:21, Stuart D. Gathman wrote:
> This may not be related, but I had a problem with blockdev --getsize in
> Fedora 14 returning the wrong size for an IDE drive on a USB-IDE/SATA
> adapter. It returned the correct size for SATA drives on the USB adapter,
> and for IDE drives connected to the internal IDE port. So I chalked this
> up to weirdness on the (nearly obsolete) USB/IDE interface. But your
> problem suddenly made me not so sure.
>
> On Sun, 18 Sep 2011, Gijs wrote:
>
>> Now I wonder why LVM wants to pretend it has more space than it
>> actually has. [...]
* Re: [linux-lvm] LVM pretends it has more space than it actually has
From: adultsitesoftware@gmail.com @ 2011-09-19 1:13 UTC
To: LVM general discussion and development

Without grepping the source for "pretending", I'd bet the reason the PV is
1MB smaller than the device it is on is the 1MB used for metadata. You
could recreate the PV with no metadata on it if you think that's what you
did before.

However, are you 100% sure that you didn't previously use the RAID array
as the PV? If you swapped the two layers, mdadm and LVM, that would cause
exactly the symptom you are presenting: the PV being 1MB smaller than the
RAID.

Gijs <info@bsnw.nl> wrote:
> [...]

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
* Re: [linux-lvm] LVM pretends it has more space than it actually has
From: adultsitesoftware@gmail.com @ 2011-09-19 1:23 UTC
To: LVM general discussion and development

The PV is 1MB smaller than the device it's on due to the metadata.

The symptoms you are presenting are exactly what would happen if someone
had previously set up the RAID and LVM in the typical fashion, with LVM
atop RAID, and then you tried to activate them in reverse order.

If you value your data, back up the block devices before proceeding.

Gijs <info@bsnw.nl> wrote:
> [...]

-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.
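One way to check which layer actually sits on the raw disks, along the
lines of this reply (a sketch; the member partition name is a placeholder,
while the md device and VG names come from the thread):

  mdadm --examine /dev/sdX1    # shows an md superblock if the partition is a RAID member
  pvs /dev/md127               # shows whether the assembled array carries an LVM PV label
  blkid -p /dev/md127          # low-level probe of whatever signature sits at the start of the device

If the PV label turns up on the raw partitions rather than on the md
device, the layers were stacked the other way around.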
* Re: [linux-lvm] LVM pretends it has more space than it actually has
From: Gijs @ 2011-09-19 17:37 UTC
To: LVM general discussion and development

Thank you for your response, but I don't quite understand what you mean by
activating them in reverse order. Activating the PV before the raid system
should be impossible, right? And shouldn't activating them in the right
order solve the problem? Or am I missing something here?

On 19-9-2011 3:23, adultsitesoftware@gmail.com wrote:
> The PV is 1MB smaller than the device it's on due to the metadata.
>
> The symptoms you are presenting are exactly what would happen if someone
> had previously set up the RAID and LVM in the typical fashion, with LVM
> atop RAID, and then you tried to activate them in reverse order.
>
> If you value your data, back up the block devices before proceeding.
>
> Gijs <info@bsnw.nl> wrote:
>> [...]
* Re: [linux-lvm] LVM pretends it has more space than it actually has
From: Ray Morris @ 2011-09-19 18:48 UTC
To: linux-lvm

The symptom you are presenting, the "missing" 1MB, is similar to what you
would see if someone did this in the past:

  mdadm --create /dev/md1 /dev/sd[ab]1
  pvcreate /dev/md1

and then later someone did this:

  mdadm --assemble /dev/md1 /dev/volgroup/firstLV

The VG, or the reported size of the PV, is 1MB smaller than the device the
PV is on, due to the metadata. If the RAID array, or rather any raid array,
were created on /dev/sda1, for example, but then later activated using
/dev/volgroup/LV, mdadm would report that the device was 1MB too small,
which is exactly the message you are getting.

-- 
Ray Morris
support@bettercgi.com

On Mon, 19 Sep 2011 19:37:49 +0200 Gijs <info@bsnw.nl> wrote:

> Thank you for your response, but I don't quite understand what you mean
> by activating them in reverse order. Activating the PV before the raid
> system should be impossible, right? And shouldn't activating them in the
> right order solve the problem? Or am I missing something here?
>
> On 19-9-2011 3:23, adultsitesoftware@gmail.com wrote:
> > The PV is 1MB smaller than the device it's on due to the metadata.
> >
> > The symptoms you are presenting are exactly what would happen if someone
> > had previously set up the RAID and LVM in the typical fashion, with LVM
> > atop RAID, and then you tried to activate them in reverse order.
> >
> > If you value your data, back up the block devices before proceeding.
> >
> > Gijs <info@bsnw.nl> wrote:
> >> [...]
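A way to test whether that scenario applies here (a sketch; the LV path is
the one from this thread):

  mdadm --examine /dev/raid-5/data   # prints an md superblock if the LV was ever made a RAID member,
                                     # or reports that no md superblock was detected otherwise

That distinguishes "an array was built on top of an LV" from "the array
simply came back smaller than the PV metadata expects".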
* Re: [linux-lvm] LVM pretends it has more space than it actually has
From: Gijs @ 2011-09-19 19:48 UTC
To: LVM general discussion and development

Ah, like that. OK, I can't imagine I typed such a command, but I guess I
can never really know. But let's say I did: would it be possible to
somehow retrieve the data from that volume? By increasing the underlying
raid-5, maybe? Or is it forever lost?

On 19-9-2011 20:48, Ray Morris wrote:
> The symptom you are presenting, the "missing" 1MB, is similar to what you
> would see if someone did this in the past:
>
>   mdadm --create /dev/md1 /dev/sd[ab]1
>   pvcreate /dev/md1
>
> and then later someone did this:
>
>   mdadm --assemble /dev/md1 /dev/volgroup/firstLV
>
> The VG, or the reported size of the PV, is 1MB smaller than the device
> the PV is on, due to the metadata. If the RAID array, or rather any raid
> array, were created on /dev/sda1, for example, but then later activated
> using /dev/volgroup/LV, mdadm would report that the device was 1MB too
> small, which is exactly the message you are getting.
* Re: [linux-lvm] LVM pretends it has more space than it actually has
From: Ray Morris @ 2011-09-19 20:41 UTC
To: linux-lvm

First, if at all possible, make a copy of the underlying block device
using dd or dd_rescue. Very often the most severe damage is done during
the attempt at recovery.

Then let's find the oldest backup copies of the LVM metadata to see if we
can verify how things were set up when they were working. This will find
metadata over 50 days old:

  find /etc/lvm/archive -mtime +50

Mainly what we're looking for is whether any mdadm RAID devices were used
as PVs at some point.

Next, try

  mdadm --assemble --readonly --assume-clean /dev/sdFOO

to see if you can assemble an array using the lower-level device (which is
also marked as a PV right now). If it assembles, do:

  pvdisplay -m /dev/md0

to see if it's a PV, and check to see if it has a filesystem.

Based on the messages you got, it looks like /dev/md0 at one point was the
PV, rather than being assembled from LVs.

-- 
Ray Morris
support@bettercgi.com

On Mon, 19 Sep 2011 21:48:15 +0200 Gijs <info@bsnw.nl> wrote:

> Ah, like that. OK, I can't imagine I typed such a command, but I guess I
> can never really know. But let's say I did: would it be possible to
> somehow retrieve the data from that volume? By increasing the underlying
> raid-5, maybe? Or is it forever lost?
>
> On 19-9-2011 20:48, Ray Morris wrote:
> > The symptom you are presenting, the "missing" 1MB, is similar to what
> > you would see if someone did this in the past:
> > [...]
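A minimal sketch of that backup step (the destination path is an
assumption, and it needs at least as much free space as the array itself):

  dd if=/dev/md127 of=/mnt/backup/md127.img bs=4M conv=noerror,sync

or, with GNU ddrescue, which keeps a log so an interrupted copy can resume:

  ddrescue /dev/md127 /mnt/backup/md127.img /mnt/backup/md127.log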
* Re: [linux-lvm] LVM pretends it has more space than it actually has
From: Gijs @ 2011-09-21 18:32 UTC
To: LVM general discussion and development

Unfortunately I can't find all the old LVM configs that the system used. I
was in the process of moving my root filesystem to the raid-5 array. Since
I needed the root to be unmounted for that, I used an FC15 USB-bootable
rescue system to do the copying of the root to the raid-5 array. And
that's when things went wrong. Since the rescue system is pretty much run
from memory, I don't have the LVM configs that were created while I was
using the rescue system. I do have older configs that were created when I
was creating the raid-5 array on the system itself, but those don't show
anything wrong from what I can see. (And I guess that's correct, since
nothing was wrong at that time.)

I tried assembling/recreating an array on the PV device, but that just
gave me the error "mdadm: no raid-devices specified." So I can't really
find an array on the LVM devices either.

Some info I got from the PV:

  [root@poseidon ~]# pvdisplay -m /dev/md127
    --- Physical volume ---
    PV Name               /dev/md127
    VG Name               raid-5
    PV Size               3.64 TiB / not usable 0
    Allocatable           yes
    PE Size               4.00 MiB
    Total PE              952919
    Free PE               5252
    Allocated PE          947667
    PV UUID               ZmJtA4-cZBL-kuXT-53Ie-7o1C-7oro-uw5GB6

    --- Physical Segments ---
    Physical extent 0 to 714687:
      Logical volume      /dev/raid-5/data
      Logical extents     0 to 714687
    Physical extent 714688 to 714933:
      FREE
    Physical extent 714934 to 714953:
      Logical volume      /dev/raid-5/data
      Logical extents     947647 to 947666
    Physical extent 714954 to 719959:
      FREE
    Physical extent 719960 to 952918:
      Logical volume      /dev/raid-5/data
      Logical extents     714688 to 947646

The empty spaces in between are from LVs that were created before. And the
third segment is from when I tried to resize the data LV to see if that
made any difference. It obviously didn't, since it was the PV that was
actually too small, not the LV, which I figured out later.

From what you say, it indeed sounds like I messed up some command that
caused an array to be created on an LV, but I can't really find any
evidence that I did that. Is there any other explanation for LVM acting
this way? Is it perhaps possible to tell LVM to run off the configuration
stored in /etc/lvm, instead of the metadata embedded on the PV?

There's also something that I don't understand: why is it just the data
LV? I had a swap and a root LV as well, and those activated just fine. Why
would LVM have trouble activating the data LV when it had no trouble
activating the swap and root LVs?

On 19-9-2011 22:41, Ray Morris wrote:
> First, if at all possible, make a copy of the underlying block device
> using dd or dd_rescue. Very often the most severe damage is done during
> the attempt at recovery.
>
> Then let's find the oldest backup copies of the LVM metadata to see if we
> can verify how things were set up when they were working. This will find
> metadata over 50 days old:
>
>   find /etc/lvm/archive -mtime +50
>
> Mainly what we're looking for is whether any mdadm RAID devices were used
> as PVs at some point.
>
> Next, try mdadm --assemble --readonly --assume-clean /dev/sdFOO to see if
> you can assemble an array using the lower-level device (which is also
> marked as a PV right now). If it assembles, do:
>
>   pvdisplay -m /dev/md0
>
> to see if it's a PV, and check to see if it has a filesystem.
>
> Based on the messages you got, it looks like /dev/md0 at one point was
> the PV, rather than being assembled from LVs.
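On the question of using the configuration under /etc/lvm: the stock way
to push an archived copy of the VG metadata back onto the PV is
vgcfgrestore (a sketch; the archive file name below is a made-up example,
and this rewrites on-disk metadata, so only try it against a device that
has already been backed up):

  vgcfgrestore --list raid-5                                       # list the archived metadata versions LVM still has
  vgcfgrestore -f /etc/lvm/archive/raid-5_00021-example.vg raid-5  # restore one specific archived version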
* Re: [linux-lvm] LVM pretends it has more space than it actually has
From: Gijs @ 2011-10-12 20:43 UTC
To: LVM general discussion and development

I managed to get all my data back by deleting the LVM volumes and
recreating them without formatting the drives. I did have to run fsck on
my data volume, but all data was intact as far as I could see.

And I also think I know what went wrong. Pretty much every reboot, my
raid-1 (for /boot) and my raid-5 MD devices switch places with each other.
So sometimes it's /dev/md126, and other times it's /dev/md127. I must have
used the wrong device after a reboot, mistakenly thinking it was the LVM
or boot partition.

On 21-9-2011 20:32, Gijs wrote:
> Unfortunately I can't find all the old LVM configs that the system used.
> [...]
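To keep the md device names from swapping around between reboots, the
arrays can be pinned by UUID in mdadm.conf (a sketch; the UUIDs below are
placeholders, and on Fedora the file lives at /etc/mdadm.conf):

  mdadm --detail --scan   # prints an ARRAY line, including the UUID, for each running array

  # /etc/mdadm.conf
  ARRAY /dev/md0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
  ARRAY /dev/md1 UUID=eeeeeeee:ffffffff:00000000:11111111

With explicit ARRAY entries (and the initramfs regenerated so the
early-boot copy matches), each array is assembled under the same name on
every boot.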