From: Zdenek Kabelac <zdenek.kabelac@gmail.com>
To: LVM general discussion and development <linux-lvm@redhat.com>,
Brian Murrell <brian@interlinx.bc.ca>
Subject: Re: [linux-lvm] thin: 235:2 pool target (735616 blocks) too small: expected 809216
Date: Thu, 2 Jun 2016 14:15:17 +0200 [thread overview]
Message-ID: <6504f4cb-8f17-390a-ec9f-0cc6826b23ec@gmail.com> (raw)
In-Reply-To: <1464864575.13773.29.camel@interlinx.bc.ca>
[-- Attachment #1: Type: text/plain, Size: 2254 bytes --]
On 2.6.2016 at 12:49, Brian J. Murrell wrote:
> On Thu, 2016-06-02 at 11:11 +0200, Zdenek Kabelac wrote:
>> Hi
>
> Hi.
>
>> So it seems your machine has crashed (you probably know better) during
>> a thin-pool resize operation.
>
> So, this is actually a family member's machine and he was using it at
> the time, so I don't know firsthand what happened.
>
> He says he was just adding some plugin to some (Windows) program in
> Wine. That doesn't seem like it should come anywhere near resizing, so
> I will ask him again.
>
Yep, that might explain the 'missing' surrounding details...
Basically, any write may push the thin-pool past its free-space threshold
and require new space - what is unclear is the later failure, which
would be really good to know about...
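
For reference, this automatic extension is driven by dmeventd according to
the thin-pool settings in lvm.conf. A minimal sketch of the relevant knobs
in the activation section (the values below are only illustrative, not
taken from this machine):

  # /etc/lvm/lvm.conf (excerpt)
  activation {
      # start auto-extending the thin-pool once it reaches 70% full...
      thin_pool_autoextend_threshold = 70
      # ...and grow it by 20% of its current size each time
      thin_pool_autoextend_percent = 20
  }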
>> Unfortunately fc23 has lvm2 version 2.02.132 - version 2.02.133 has an
>> improvement patch that makes the resize more resilient (but still not
>> as good as I would wish it to be).
>
> Incremental improvements are improvements all the same. :-)
>
>> So what happened: lvm2 resized the _tdata LV and tried to resume it -
>> and it failed along this path - however, since the 'resize' is ATM a
>> single transaction, the lvm2 rollback reverted to the previous size,
>> yet the thin-pool had already managed to remember the 'new', bigger
>> size.
>
> Ahhh.
>
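
As a cross-check, the numbers in the error message line up with this
explanation: with the 64 KiB chunk size and 4 MiB extent size from the
metadata attached below, the 'expected 809216' blocks correspond exactly to
the 12644-extent _tdata recorded in that backup, while the 735616 blocks
the kernel actually found correspond to a rolled-back 11494-extent size.
A quick sketch of the arithmetic:

  # 4 MiB extent / 64 KiB chunk = 64 chunks per extent
  echo $(( 809216 / 64 ))   # 12644 extents - the size the pool metadata expects
  echo $(( 735616 / 64 ))   # 11494 extents - the size lvm2 rolled _tdata back to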
>> So please take a look at your logs (if you have some) to see if there
>> is anything suspicious worth mentioning (though if your machine has
>> frozen, hardly any log will be available).
>
> I will take a look once I can get the system back up and running.
>
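
One way to look for traces of the failed resize from the previous boot
(assuming the systemd journal on that machine is persistent; the grep
pattern is only a suggestion):

  journalctl -b -1 | grep -Ei 'lvm|dmeventd|thin'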
>> To get access to your thin-pool, I'm attaching the restored metadata
>> content from your disk header with the 'bigger' _tdata volume.
>
> There was no attachment.
>
Oops - so once again ;)
>> To restore use:
>>
>> 'vgcfgrestore -f back --force brianr-laptop'
>
> I can do that in Fedora's "rescue" environment? Probably "lvm
> vgcfgrestore -f back --force brianr-laptop" instead.
>
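
In Fedora's rescue environment the LVM subcommands are indeed available
through the 'lvm' wrapper binary, so a sketch along these lines should work
(assuming the attached file has been saved as 'back' in the current
directory):

  lvm vgcfgrestore -f back --force brianr-laptop
  lvm vgchange -ay brianr-laptop   # then try activating the VG again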
>> It would be really interesting to know the reason for the failure -
>> but I can understand that you could hardly obtain it.
>
> I will see if there is anything in the logs and get back to you. It's
> the least I can do in return for the help you have provided. Much
> appreciated.
>
> Cheers,
Zdenek
[-- Attachment #2: back --]
[-- Type: text/plain, Size: 3931 bytes --]
# Generated by LVM2 version 2.02.155(2)-git (2016-05-14): Thu Jun 2 11:00:16 2016
contents = "Text Format Volume Group"
version = 1
description = "vgcfgbackup -f back brianr-laptop"
creation_host = "linux" # Linux linux 4.6.0-1.fc25.x86_64 #1 SMP Mon May 16 14:57:01 UTC 2016 x86_64
creation_time = 1464858016 # Thu Jun 2 11:00:16 2016
brianr-laptop {
	id = "remwwI-SXei-swn5-9r5D-shjG-iqgP-UraDRi"
	seqno = 29
	format = "lvm2" # informational
	status = ["RESIZEABLE", "READ", "WRITE"]
	flags = []
	extent_size = 8192 # 4 Megabytes
	max_lv = 0
	max_pv = 0
	metadata_copies = 0

	physical_volumes {

		pv0 {
			id = "t0OI9M-lExB-ZraT-4p4o-r20E-obtw-OtrtRL"
			device = "/dev/loop0" # Hint only
			status = ["ALLOCATABLE"]
			flags = []
			dev_size = 494223360 # 235,664 Gigabytes
			pe_start = 2048
			pe_count = 60330 # 235,664 Gigabytes
		}
	}

	logical_volumes {

		pool00 {
			id = "wPFVij-HJpy-QYtM-9ekQ-65DX-MRPj-p55mpW"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_time = 1430510215 # 2015-05-01 21:56:55 +0200
			creation_host = "localhost"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 12644 # 49,3906 Gigabytes

				type = "thin-pool"
				metadata = "pool00_tmeta"
				pool = "pool00_tdata"
				transaction_id = 3
				chunk_size = 128 # 64 Kilobytes
				discards = "passdown"
				zero_new_blocks = 1
			}
		}

		home {
			id = "MyMBIk-Vb2A-H9dx-6jfA-TcpM-WcVk-imiust"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_time = 1430510215 # 2015-05-01 21:56:55 +0200
			creation_host = "localhost"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 5120 # 20 Gigabytes

				type = "thin"
				thin_pool = "pool00"
				transaction_id = 0
				device_id = 1
			}
		}

		root {
			id = "z3iQ0s-O2dD-qLv2-QXhR-RiK9-e7dq-BWRX6r"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_time = 1430510222 # 2015-05-01 21:57:02 +0200
			creation_host = "localhost"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 3840 # 15 Gigabytes

				type = "thin"
				thin_pool = "pool00"
				transaction_id = 1
				device_id = 2
			}
		}

		swap {
			id = "jo1Nr6-IJdx-azkx-pmEw-qV7R-yr2M-ACWu2s"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_time = 1430510228 # 2015-05-01 21:57:08 +0200
			creation_host = "localhost"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 1472 # 5,75 Gigabytes

				type = "thin"
				thin_pool = "pool00"
				transaction_id = 2
				device_id = 3
			}
		}

		lvol0_pmspare {
			id = "NRyvBs-VFJJ-yRJv-e62a-N6FC-cjRj-it4Y4O"
			status = ["READ", "WRITE"]
			flags = []
			creation_time = 1430510215 # 2015-05-01 21:56:55 +0200
			creation_host = "localhost"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 7 # 28 Megabytes

				type = "striped"
				stripe_count = 1 # linear

				stripes = [
					"pv0", 0
				]
			}
		}

		pool00_tmeta {
			id = "MFnJK1-fkVP-ERt6-TKTN-5DPr-SdNI-2Si8LM"
			status = ["READ", "WRITE"]
			flags = []
			creation_time = 1430510215 # 2015-05-01 21:56:55 +0200
			creation_host = "localhost"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 7 # 28 Megabytes

				type = "striped"
				stripe_count = 1 # linear

				stripes = [
					"pv0", 6600
				]
			}
		}

		pool00_tdata {
			id = "NqzQU0-GS2v-5rA6-vy7d-vRt2-lJWL-76kBnj"
			status = ["READ", "WRITE"]
			flags = []
			creation_time = 1430510215 # 2015-05-01 21:56:55 +0200
			creation_host = "localhost"
			segment_count = 2

			segment1 {
				start_extent = 0
				extent_count = 6593 # 25,7539 Gigabytes

				type = "striped"
				stripe_count = 1 # linear

				stripes = [
					"pv0", 7
				]
			}
			segment2 {
				start_extent = 6593
				extent_count = 6051 # 23,6367 Gigabytes

				type = "striped"
				stripe_count = 1 # linear

				stripes = [
					"pv0", 6607
				]
			}
		}
	}
}
Thread overview: 9+ messages
2016-05-31 10:57 [linux-lvm] thin: 235:2 pool target (735616 blocks) too small: expected 809216 Brian J. Murrell
2016-05-31 11:22 ` Zdenek Kabelac
2016-06-01 22:52 ` Brian J. Murrell
2016-06-02 9:11 ` Zdenek Kabelac
2016-06-02 10:49 ` Brian J. Murrell
2016-06-02 12:15 ` Zdenek Kabelac [this message]
2016-06-02 19:27 ` Brian J. Murrell
2016-06-02 19:32 ` Zdenek Kabelac
2016-06-02 21:18 ` Brian J. Murrell