linux-lvm.redhat.com archive mirror
From: Zdenek Kabelac <zkabelac@redhat.com>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] Data percentage too large after thin_dump --repair/thin_restore
Date: Tue, 02 Sep 2014 11:06:40 +0200	[thread overview]
Message-ID: <540588A0.50402@redhat.com> (raw)
In-Reply-To: <CAAN_kG2zBvS1iVhdqO2zvQ2+QW-SpyM2PhHG2j+zhSFZbOmnRg@mail.gmail.com>

On 1 Sep 2014 at 20:13, Timur Alperovich wrote:
> Hi there,
>
> I'm using LVM 2.02.98 and encountered a metadata corruption issue. To recover
> from it, I attempted to perform the following steps:
> 1. thin_check /dev/mapper/pool_tmeta
> 2. thin_dump /dev/mapper/pool_tmeta > /tmp/metadata

Hi

NEVER EVER use the _tmeta device of a running, active thin-pool volume.
It's the same as running 'fsck' on a mounted filesystem.


> 3. dd if=/dev/zero of=/dev/mapper/pool_tmeta
> 4. thin_restore -i /tmp/metadata -o /dev/mapper/pool_tmeta
>
> All of the above steps have succeeded; however, when attempting to list the
> metadata_percent field, I get an error:

I'm surprised you didn't get a kernel OOPS after such brutal destruction
of the live metadata device (i.e. almost equivalent to zeroing your root volume).

> Is this a known issue and is there a workaround? I need to be able to examine
> the metadata_percent field to make sure we don't exhaust the metadata space.
>
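
As a side note, lvs reports that field itself, so metadata usage can be
watched without ever opening the _tmeta device (vg/pool here is an example
name, not your actual pool):

```shell
# Report data and metadata usage of a thin pool (example names)
lvs -o lv_name,data_percent,metadata_percent vg/pool
```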

Normal way -

lvconvert --repair vg/pool


If this is not working, you can 'swap' the metadata out of your thin-pool
using the following sequence:

- make sure the pool is not active.
- build a temporary local LV   (lvcreate -l1 vg -n temp)
- swap this LV with the metadata of the to-be-repaired pool
   (lvconvert --thinpool vg/fixpool --poolmetadata temp)
- activate the 'temp' LV, which now holds the pool's metadata
   (lvchange -ay vg/temp)
- repair the metadata
   (you may need another 'bigger' volume to restore the fixed metadata into)
   (i.e.  thin_restore -i /dev/vg/temp -o /dev/vg/biggertemp)
- thin_check the restored volume
- thin_dump it and verify that the lvm2 metadata and this dump are at the same
   transaction_id state (look at the lvm2 metadata for the thin pool)
- deactivate the related volumes again
- swap the repaired LV back
   (lvconvert --thinpool vg/fixpool --poolmetadata repairedtemp)
- try to activate the repaired thin pool
- remove the unneeded volumes from the vg....
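
The sequence above could look roughly like this; vg, fixpool, temp and
biggertemp are example names, and the sizes depend on your pool:

```shell
lvchange -an vg/fixpool                                     # pool must be inactive
lvcreate -l1 -n temp vg                                     # placeholder LV for the swap
lvchange -an vg/temp
lvconvert -y --thinpool vg/fixpool --poolmetadata vg/temp   # 'temp' now holds the pool's metadata
lvchange -ay vg/temp
lvcreate -L1G -n biggertemp vg                              # bigger LV to restore fixed metadata into
thin_dump /dev/vg/temp > /tmp/metadata.xml                  # dump the broken metadata to XML
thin_restore -i /tmp/metadata.xml -o /dev/vg/biggertemp     # restore it onto the bigger LV
thin_check /dev/vg/biggertemp                               # verify before swapping back
lvchange -an vg/temp vg/biggertemp
lvconvert -y --thinpool vg/fixpool --poolmetadata vg/biggertemp   # swap repaired metadata back in
lvchange -ay vg/fixpool                                     # try to activate the repaired pool
lvremove vg/temp        # temp now holds the old broken metadata; remove once no longer needed
```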


Zdenek

Thread overview: 4+ messages
2014-09-01 18:13 [linux-lvm] Data percentage too large after thin_dump --repair/thin_restore Timur Alperovich
2014-09-02  9:06 ` Zdenek Kabelac [this message]
2014-09-02 14:11   ` Timur Alperovich
2014-09-03  9:05     ` Zdenek Kabelac
