* [linux-lvm] fix corrupted thin pool
From: Vasiliy Tolstov @ 2014-10-24 19:59 UTC
To: LVM, dm-devel
Hello! By mistake I restored a thin volume with vgcfgrestore, and after
that I get errors like this on all volumes in the thin pool:

lvchange -ay vg1/2735
  Thin pool transaction_id=120, while expected: 114.

Is it possible to recover from this? I tried lvconvert --repair and got
tp1_tmeta0, but I don't understand what I need to do next.
--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru
* Re: [linux-lvm] fix corrupted thin pool
From: Zdenek Kabelac @ 2014-10-25 12:43 UTC
To: device-mapper development, LVM
On 24.10.2014 21:59, Vasiliy Tolstov wrote:
> Hello! By mistake I restored a thin volume with vgcfgrestore, and after
> that I get errors like this on all volumes in the thin pool:
> lvchange -ay vg1/2735
>   Thin pool transaction_id=120, while expected: 114.
> Is it possible to recover from this? I tried lvconvert --repair and got
> tp1_tmeta0, but I don't understand what I need to do next.
Hi
I'm not sure how you could do that 'by mistake', since LVM prints a pretty
BIG WARNING that any vgcfgrestore involving thin volumes should only be done
after careful thought, and it even requires an extra --force option.
But anyway - if you have /etc/lvm/archive, you should be able to find the
'right' version of the lvm2 metadata for your kernel metadata, e.g. with a
quick grep, as sketched below.
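(A sketch of locating the matching archived copy - it assumes the usual
/etc/lvm/archive/vg1_*.vg file naming for this VG:)

# grep transaction_id /etc/lvm/archive/vg1_*.vg
  (each archived copy is plain text; pick the file whose pool
  transaction_id matches what the kernel expects - 114 here - and
  restore it with: vgcfgrestore --force -f <file> vg1)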
However, 'normally' you could be off by only one in the sequence number, so
I'm quite curious how you managed to create such a big difference.
If you could, package up /etc/lvm/archive so I can take a closer look at
where lvm2 has holes that allow such operations?
Which version of lvm2 and kernel is in use here?
Have you been manipulating the thin-pool's metadata in any way?
Regards
Zdenek
* Re: [linux-lvm] [dm-devel] fix corrupted thin pool
From: Vasiliy Tolstov @ 2014-10-25 18:41 UTC
To: device-mapper development; +Cc: LVM
2014-10-25 16:43 GMT+04:00 Zdenek Kabelac <zkabelac@redhat.com>:
> I'm not sure how you could do that 'by mistake', since LVM prints a pretty
> BIG WARNING that any vgcfgrestore involving thin volumes should only be
> done after careful thought, and it even requires an extra --force option.
>
> But anyway - if you have /etc/lvm/archive, you should be able to find the
> 'right' version of the lvm2 metadata for your kernel metadata.
>
> However, 'normally' you could be off by only one in the sequence number,
> so I'm quite curious how you managed to create such a big difference.
>
> If you could, package up /etc/lvm/archive so I can take a closer look at
> where lvm2 has holes that allow such operations?
>
> Which version of lvm2 and kernel is in use here?
>
> Have you been manipulating the thin-pool's metadata in any way?
>
> Regards
>
> Zdenek
I can't provide the old archive data =(. Now I only have this error.
Also, in lvm.conf I have issue_discards = 1.
--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru
* Re: [linux-lvm] [dm-devel] fix corrupted thin pool
From: Vasiliy Tolstov @ 2014-10-25 18:42 UTC
To: Vasiliy Tolstov; +Cc: device-mapper development, LVM
2014-10-25 22:41 GMT+04:00 Vasiliy Tolstov <v.tolstov@selfip.ru>:
>> Which version of lvm2 and kernel is in use here?
>>
>> Have you been manipulating the thin-pool's metadata in any way?
I'm using kernel 3.10.55 and lvm2 2.02.106.
--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru
* Re: [linux-lvm] [dm-devel] fix corrupted thin pool
From: Zdenek Kabelac @ 2014-10-25 20:18 UTC
To: device-mapper development; +Cc: LVM
On 25.10.2014 20:41, Vasiliy Tolstov wrote:
> I can't provide the old archive data =(. Now I only have this error.
> Also, in lvm.conf I have issue_discards = 1.
There is an 'internal' metadata archive then:

dd if=/dev/your_pv_volume of=/tmp/1st.megabyte bs=1M count=1

It will capture the first megabyte of your PV, where the metadata of your
Volume Group is embedded.
If you are not skilled enough - tar.gz this file and send it to me.
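(As a concrete sketch for this thread - the PV here is /dev/sdb, per the
metadata later in the thread, and the embedded lvm2 metadata is plain text,
so it can also be eyeballed directly:)

# dd if=/dev/sdb of=/tmp/1st.megabyte bs=1M count=1
# strings /tmp/1st.megabyte | less
  (shows the readable VG descriptions embedded in the PV header)
# tar czf sdb.tar.gz -C /tmp 1st.megabyte
  (package it for sending)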
Zdenek
* Re: [linux-lvm] [dm-devel] fix corrupted thin pool
From: Vasiliy Tolstov @ 2014-10-25 20:53 UTC
To: device-mapper development; +Cc: LVM
2014-10-26 0:18 GMT+04:00 Zdenek Kabelac <zkabelac@redhat.com>:
> There is an 'internal' metadata archive then:
>
> dd if=/dev/your_pv_volume of=/tmp/1st.megabyte bs=1M count=1
>
> It will capture the first megabyte of your PV, where the metadata of your
> Volume Group is embedded.
>
> If you are not skilled enough - tar.gz this file and send it to me.
I did the dd and sent it. When I broke the thin pool, I was trying to
restore volume 2657, but I didn't stop the lvm thin pool first =(.
--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru
[-- Attachment #2: sdb.tar.gz --]
[-- Type: application/x-gzip, Size: 15734 bytes --]
* Re: [linux-lvm] [dm-devel] fix corrupted thin pool
From: Zdenek Kabelac @ 2014-10-25 22:47 UTC
Cc: LVM
On 25.10.2014 22:53, Vasiliy Tolstov wrote:
> I did the dd and sent it. When I broke the thin pool, I was trying to
> restore volume 2657, but I didn't stop the lvm thin pool first =(.
From the metadata, something bad was going on:

Fri Oct 24 17:03:04 2014
transaction_id = 120 - create = "3695"

And suddenly, on Fri Oct 24 18:07:23 2014, the pool is back on an older
transaction_id:

transaction_id = 114

Is that the time of your vgcfgrestore?
I'm attaching the metadata, which you should likely put back to get in sync
with your kernel metadata (assuming you have not modified it in any way).
Zdenek
[-- Attachment #2: recov --]
[-- Type: text/plain, Size: 5569 bytes --]
vg1 {
id = "jP49rT-lpl1-NHqu-5jHZ-gy5z-uv6v-YH9w7d"
seqno = 256
format = "lvm2" # informational
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192
max_lv = 10000
max_pv = 0
metadata_copies = 1
physical_volumes {
pv0 {
id = "hm3gZp-FjPP-AAbl-NEX3-n30T-x1Nw-9eiTw5"
device = "/dev/sdb"
status = ["ALLOCATABLE"]
flags = []
dev_size = 943718400
pe_start = 615425
pe_count = 115124
}
}
logical_volumes {
tp1 {
id = "Xk3HRX-XZSg-Fzge-Ut13-Ocxu-oegX-8gXbM2"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "cn05"
creation_time = 1407945259
segment_count = 1
segment1 {
start_extent = 0
extent_count = 113920
type = "thin-pool"
metadata = "tp1_tmeta"
pool = "tp1_tdata"
transaction_id = 120
chunk_size = 128
discards = "passdown"
zero_new_blocks = 1
message1 {
create = "3695"
}
}
}
2735 {
id = "odXzLP-K0MG-lffg-SvH7-s3Gn-hdlJ-qNDhI6"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "cn05"
creation_time = 1413304951
segment_count = 1
segment1 {
start_extent = 0
extent_count = 5120
type = "thin"
thin_pool = "tp1"
transaction_id = 59
device_id = 15
}
}
2799 {
id = "gze0LC-Kim7-tpP3-LwW8-82LJ-DLJf-5rnfSU"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "cn05"
creation_time = 1413332125
segment_count = 1
segment1 {
start_extent = 0
extent_count = 5120
type = "thin"
thin_pool = "tp1"
transaction_id = 60
device_id = 16
}
}
2749 {
id = "3TokZu-3nWx-mWcv-XQY2-S1lY-VRCZ-pdszYr"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "cn05"
creation_time = 1413484929
segment_count = 1
segment1 {
start_extent = 0
extent_count = 5120
type = "thin"
thin_pool = "tp1"
transaction_id = 67
device_id = 20
}
}
3119 {
id = "jscAdT-PGuM-b80R-pwg4-5DlY-4bMf-e5Ybg0"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "cn05"
creation_time = 1413488295
segment_count = 1
segment1 {
start_extent = 0
extent_count = 5120
type = "thin"
thin_pool = "tp1"
transaction_id = 68
device_id = 21
}
}
2679_751 {
id = "hoSlsB-eZJ0-gbrh-P3aL-5U8T-Erc0-7skUnq"
status = ["READ", "WRITE", "VISIBLE"]
flags = ["ACTIVATION_SKIP"]
creation_host = "cn05"
creation_time = 1413740222
segment_count = 1
segment1 {
start_extent = 0
extent_count = 5120
type = "thin"
thin_pool = "tp1"
transaction_id = 85
device_id = 28
}
}
3435 {
id = "8xgMUd-pBtZ-U0Rs-qytl-XeC2-J8wb-QxLoez"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "cn05"
creation_time = 1413983820
segment_count = 1
segment1 {
start_extent = 0
extent_count = 5120
type = "thin"
thin_pool = "tp1"
transaction_id = 95
device_id = 30
}
}
3471 {
id = "eHumdz-bxzn-7H1N-hGm1-2c7R-MSEu-nwwvUe"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "cn05"
creation_time = 1413991440
segment_count = 1
segment1 {
start_extent = 0
extent_count = 5120
type = "thin"
thin_pool = "tp1"
transaction_id = 97
device_id = 31
}
}
2937_785 {
id = "H0Fqvl-xniV-FCE8-Eh86-Xfwb-bnb6-wzDVzO"
status = ["READ", "WRITE", "VISIBLE"]
flags = ["ACTIVATION_SKIP"]
creation_host = "cn05"
creation_time = 1413992701
segment_count = 1
segment1 {
start_extent = 0
extent_count = 40960
type = "thin"
thin_pool = "tp1"
transaction_id = 98
device_id = 32
}
}
3547 {
id = "EEPlcg-dBrQ-fclB-MJMi-HQBY-MG9B-gPimpS"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "cn05"
creation_time = 1414060010
segment_count = 1
segment1 {
start_extent = 0
extent_count = 40960
type = "thin"
thin_pool = "tp1"
transaction_id = 109
device_id = 33
}
}
3645 {
id = "f8WN0H-dcYX-p5YZ-evnU-dZlQ-7ayZ-tMvE9Q"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "cn05"
creation_time = 1414135683
segment_count = 1
segment1 {
start_extent = 0
extent_count = 5120
type = "thin"
thin_pool = "tp1"
transaction_id = 117
device_id = 34
}
}
3647 {
id = "IrDJ48-H0Nb-QOLa-eTAk-ralE-NM1w-Pfkx6J"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "cn05"
creation_time = 1414135684
segment_count = 1
segment1 {
start_extent = 0
extent_count = 5120
type = "thin"
thin_pool = "tp1"
transaction_id = 118
device_id = 35
}
}
3695 {
id = "0ZHo7z-aIqP-tiij-URcc-HsZp-9fzF-Wc38Ll"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "cn05"
creation_time = 1414155784
segment_count = 1
segment1 {
start_extent = 0
extent_count = 5120
type = "thin"
thin_pool = "tp1"
transaction_id = 119
device_id = 36
}
}
lvol0_pmspare {
id = "tznz20-2upf-S3Mu-RQLX-nu7g-giVI-yU2fMN"
status = ["READ", "WRITE"]
flags = []
creation_host = "cn05"
creation_time = 1407945259
segment_count = 1
segment1 {
start_extent = 0
extent_count = 155
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
]
}
}
tp1_tmeta {
id = "bwjbfk-4Yd3-3nbm-DpVC-jYGc-G5Go-gy0Dt1"
status = ["READ", "WRITE"]
flags = []
creation_host = "cn05"
creation_time = 1407945259
segment_count = 1
segment1 {
start_extent = 0
extent_count = 155
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 114075
]
}
}
tp1_tdata {
id = "g0NIO3-pyRi-aEvU-v0zp-2707-juvr-Ob2jG8"
status = ["READ", "WRITE"]
flags = []
creation_host = "cn05"
creation_time = 1407945259
segment_count = 1
segment1 {
start_extent = 0
extent_count = 113920
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 155
]
}
}
}
}
# Generated by LVM2 version 2.02.106(2) (2014-04-10): Fri Oct 24 17:03:04 2014
contents = "Text Format Volume Group"
version = 1
description = ""
creation_host = "cn05" # Linux cn05 3.10-3-amd64 #1 SMP Debian 3.10.49-1+0~20140721094239.35+wheezy~1.gbp298443 ( x86_64
creation_time = 1414155784 # Fri Oct 24 17:03:04 2014
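(Putting this attached file back would look roughly like the sketch below -
it assumes the file is saved as /tmp/recov and uses the extra --force that
vgcfgrestore demands for VGs with thin pools, as noted earlier:)

# vgchange -an vg1
  (deactivate the whole VG first)
# vgcfgrestore --force -f /tmp/recov vg1
  (writes back the lvm2 metadata above: seqno 256, transaction_id 120)
# lvchange -ay vg1/tp1
  (then try the pool again)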
* Re: [linux-lvm] [dm-devel] fix corrupted thin pool
From: Vasiliy Tolstov @ 2014-10-26 19:46 UTC
To: LVM general discussion and development
2014-10-26 2:47 GMT+04:00 Zdenek Kabelac <zkabelac@redhat.com>:
> From the metadata, something bad was going on:
>
> Fri Oct 24 17:03:04 2014
> transaction_id = 120 - create = "3695"
>
> And suddenly, on Fri Oct 24 18:07:23 2014, the pool is back on an older
> transaction_id:
>
> transaction_id = 114
>
> Is that the time of your vgcfgrestore?
>
> I'm attaching the metadata, which you should likely put back to get in
> sync with your kernel metadata (assuming you have not modified it in any
> way).
Hm, yes - I had missed that a volume was created in the VG at that time,
thanks. As I understand it, the transaction id needs to be changed? Or
something else? But I get this error:

lvchange -ay vg1/2735
  Check of thin pool vg1/tp1 failed (status:1). Manual repair required
  (thin_dump --repair /dev/mapper/vg1-tp1_tmeta)!
--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru
* Re: [linux-lvm] [dm-devel] fix corrupted thin pool
From: Anatoly Pugachev @ 2014-10-27 6:58 UTC
To: LVM general discussion and development
On Sun, Oct 26, 2014 at 1:47 AM, Zdenek Kabelac <zkabelac@redhat.com> wrote:
> From the metadata, something bad was going on:
> [...]
> I'm attaching the metadata, which you should likely put back to get in
> sync with your kernel metadata (assuming you have not modified it in any
> way).
Zdenek,
could you please describe (possibly in detail) what you did with the tar.gz
that was sent to you, so that everyone knows what to do next time?
Thanks a lot!
* Re: [linux-lvm] [dm-devel] fix corrupted thin pool
From: Zdenek Kabelac @ 2014-10-27 9:05 UTC
To: LVM general discussion and development
On 27.10.2014 07:58, Anatoly Pugachev wrote:
> Zdenek,
>
> could you please describe (possibly in detail) what you did with the
> tar.gz that was sent to you, so that everyone knows what to do next time?
>
> Thanks a lot!
Any Google query on lvm2 metadata recovery will turn this up - I've picked
one at random:

http://microdevsys.com/wp/linux-lvm-recovering-a-lost-volume/

In this case, however, the data the user provided was just too short, since
he had created a 300M metadata space - so I asked him by mail to resend 4M.
You will therefore not find exactly the metadata above in the initial
tar.gz file (it contains just older versions) - but if you open the file in
the 'vi' editor, you will see the metadata yourself.
Zdenek
* Re: [linux-lvm] [dm-devel] fix corrupted thin pool
From: Zdenek Kabelac @ 2014-10-27 9:15 UTC
To: LVM general discussion and development
On 26.10.2014 20:46, Vasiliy Tolstov wrote:
> Hm, yes - I had missed that a volume was created in the VG at that time,
> thanks. As I understand it, the transaction id needs to be changed? Or
> something else? But I get this error:
>
> lvchange -ay vg1/2735
>   Check of thin pool vg1/tp1 failed (status:1). Manual repair required
>   (thin_dump --repair /dev/mapper/vg1-tp1_tmeta)!
If you had the latest lvm2 tools, you could have tried:

lvconvert --repair vg/pool

With older tools you need to go through these manual steps:

1. Create a temporary small LV:
# lvcreate -an -Zn -L10 --name temp vg

2. Replace the pool's metadata volume with this temp LV:
# lvconvert --thinpool vg/pool --poolmetadata temp
(say 'y' to swap)

3. Activate & repair the metadata from the 'temp' volume - you will likely
need another volume to store the repaired metadata, so create one:
# lvcreate -L<at_least_as_big_as_temp> --name repaired vg
# lvchange -ay vg/temp
# thin_repair -i /dev/vg/temp -o /dev/vg/repaired
If everything went fine, visually check the 'transaction_id' of the
repaired metadata (thin_dump /dev/vg/repaired).

4. Swap the deactivated repaired volume back into your thin-pool:
# lvchange -an vg/repaired
# lvconvert --thinpool vg/pool --poolmetadata repaired

Then try to activate the pool - if it doesn't work, report the further
problems.
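(Glued together for this thread's names, the whole sequence is sketched
below - it assumes VG vg1, pool tp1, and the 620M metadata size that shows
up in the lvs output later in the thread:)

# lvcreate -an -Zn -L10M --name temp vg1
  (small temporary LV, created inactive and unzeroed)
# lvconvert --thinpool vg1/tp1 --poolmetadata vg1/temp
  (say 'y'; after the swap the damaged 620M metadata sits in vg1/temp)
# lvcreate -L620M --name repaired vg1
# lvchange -ay vg1/temp
# thin_repair -i /dev/vg1/temp -o /dev/vg1/repaired
# thin_dump /dev/vg1/repaired | head -n 3
  (eyeball the transaction="..." attribute of the superblock)
# lvchange -an vg1/repaired
# lvconvert --thinpool vg1/tp1 --poolmetadata vg1/repaired
# lvchange -ay vg1/tp1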
Zdenek
* Re: [linux-lvm] [dm-devel] fix corrupted thin pool
From: Vasiliy Tolstov @ 2014-10-28 13:55 UTC
To: LVM general discussion and development
2014-10-27 12:15 GMT+03:00 Zdenek Kabelac <zdenek.kabelac@gmail.com>:
> If you had the latest lvm2 tools, you could have tried:
>
> lvconvert --repair vg/pool
>
> With older tools you need to go through these manual steps:
> [...]
> 4. Swap the deactivated repaired volume back into your thin-pool:
> # lvchange -an vg/repaired
> # lvconvert --thinpool vg/pool --poolmetadata repaired
>
> Then try to activate the pool - if it doesn't work, report the further
> problems.
I can't activate the volumes =(.
I ran:

lvconvert --repair vg1/tp1
lvchange -ay vg1/tp1_tmeta0
thin_dump --repair /dev/mapper/vg1-tp1_tmeta0

<superblock uuid="" time="27" transaction="120" data_block_size="128"
nr_data_blocks="7290880">
</superblock>

lvchange -an vg1/tp1_tmeta0

and finally:

lvchange -ay vg1/3695
  Check of thin pool vg1/tp1 failed (status:1). Manual repair required
  (thin_dump --repair /dev/mapper/vg1-tp1_tmeta)!
--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru
* Re: [linux-lvm] [dm-devel] fix corrupted thin pool
From: Joe Thornber @ 2014-10-28 14:09 UTC
To: LVM general discussion and development
On Tue, Oct 28, 2014 at 05:55:12PM +0400, Vasiliy Tolstov wrote:
> thin_dump --repair /dev/mapper/vg1-tp1_tmeta0
thin_dump just spits out xml; it doesn't change the device it's reading
from. So the process is either:

thin_dump --repair <dev> > metadata.xml
thin_restore -i metadata.xml -o <dev>

or you can use the thin_repair tool, which does both of these steps.
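(Spelled out against this thread's device, as a sketch - it assumes
vg1/tp1_tmeta0 has been activated first:)

# thin_dump --repair /dev/mapper/vg1-tp1_tmeta0 > /tmp/metadata.xml
  (writes whatever is salvageable as xml; the device itself is untouched)
# thin_restore -i /tmp/metadata.xml -o /dev/mapper/vg1-tp1_tmeta0
  (rebuilds the binary metadata from that xml)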
> <superblock uuid="" time="27" transaction="120" data_block_size="128"
> nr_data_blocks="7290880">
> </superblock>
This is worrying - it looks like you have no volumes in your pool?
* Re: [linux-lvm] [dm-devel] fix corrupted thin pool
From: Vasiliy Tolstov @ 2014-10-28 14:29 UTC
To: LVM general discussion and development
2014-10-28 17:09 GMT+03:00 Joe Thornber <thornber@redhat.com>:
> thin_dump just spits out xml; it doesn't change the device it's reading
> from. So the process is either:
>
> thin_dump --repair <dev> > metadata.xml
> thin_restore -i metadata.xml -o <dev>
>
As I understand it - if thin_dump --repair doesn't show any volumes, I
can't do the restore...
> or you can use the thin_repair tool, which does both of these steps.
>
>> <superblock uuid="" time="27" transaction="120" data_block_size="128"
>> nr_data_blocks="7290880">
>> </superblock>
>
> This is worrying - it looks like you have no volumes in your pool?
No - as I emailed before, I have many volumes:
lvs vg1
  LV         VG   Attr        LSize   Pool  Origin Data%  Move Log Cpy%Sync Convert
  2679_751   vg1  Vwi---tz-k  20.00g  tp1
  2735       vg1  Vwi---tz--  20.00g  tp1
  2749       vg1  Vwi---tz--  20.00g  tp1
  2799       vg1  Vwi---tz--  20.00g  tp1
  2937_785   vg1  Vwi---tz-k 160.00g  tp1
  3119       vg1  Vwi---tz--  20.00g  tp1
  3435       vg1  Vwi---tz--  20.00g  tp1
  3471       vg1  Vwi---tz--  20.00g  tp1
  3547       vg1  Vwi---tz-- 160.00g  tp1
  3645       vg1  Vwi---tz--  20.00g  tp1
  3647       vg1  Vwi---tz--  20.00g  tp1
  3695       vg1  Vwi---tz--  20.00g  tp1
  tp1        vg1  twi---tz-- 445.00g
  tp1_tmeta0 vg1  -wi------- 620.00m
--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru
Thread overview:
2014-10-24 19:59 [linux-lvm] fix corrupted thin pool Vasiliy Tolstov
2014-10-25 12:43 ` Zdenek Kabelac
2014-10-25 18:41 ` [linux-lvm] [dm-devel] " Vasiliy Tolstov
2014-10-25 18:42 ` Vasiliy Tolstov
2014-10-25 20:18 ` Zdenek Kabelac
2014-10-25 20:53 ` Vasiliy Tolstov
2014-10-25 22:47 ` Zdenek Kabelac
2014-10-26 19:46 ` Vasiliy Tolstov
2014-10-27 9:15 ` Zdenek Kabelac
2014-10-28 13:55 ` Vasiliy Tolstov
2014-10-28 14:09 ` Joe Thornber
2014-10-28 14:29 ` Vasiliy Tolstov
2014-10-27 6:58 ` Anatoly Pugachev
2014-10-27 9:05 ` Zdenek Kabelac