linux-lvm.redhat.com archive mirror
* [linux-lvm] thin: 235:2 pool target (735616 blocks) too small: expected 809216
@ 2016-05-31 10:57 Brian J. Murrell
  2016-05-31 11:22 ` Zdenek Kabelac
  0 siblings, 1 reply; 9+ messages in thread
From: Brian J. Murrell @ 2016-05-31 10:57 UTC (permalink / raw)
  To: linux-lvm


I have a Fedora 23 system running (presumably, since I can't boot it to
be 100% sure) LVM 2.02.132.  It has ceased to boot and reports:

device-mapper: resume ioctl on (253:2) failed: Invalid argument
Unable to resume laptop-pool00-tpool (253:2)
thin: Data device (dm-1) discard unsupported: Disabling discard passdown.
thin: 235:2 pool target (735616 blocks) too small: expected 809216
table: 253:2: thin-pool preresume failed, error = -22

Any ideas what the problem could be?  I can't imagine why all of a
sudden the pool target would be smaller than it should be.

What further information can I provide to help debug (and very
hopefully repair) this system?

Cheers,
b.



* Re: [linux-lvm] thin: 235:2 pool target (735616 blocks) too small: expected 809216
  2016-05-31 10:57 [linux-lvm] thin: 235:2 pool target (735616 blocks) too small: expected 809216 Brian J. Murrell
@ 2016-05-31 11:22 ` Zdenek Kabelac
  2016-06-01 22:52   ` Brian J. Murrell
  0 siblings, 1 reply; 9+ messages in thread
From: Zdenek Kabelac @ 2016-05-31 11:22 UTC (permalink / raw)
  To: LVM general discussion and development, Brian Murrell

On 31.5.2016 at 12:57, Brian J. Murrell wrote:
> I have a Fedora 23 system running (presumably, since I can't boot it to
> be 100% sure) LVM 2.02.132.  It has ceased to boot and reports:
>
> device-mapper: resume ioctl on (253:2) failed: Invalid argument
> Unable to resume laptop-pool00-tpool (253:2)
> thin: Data device (dm-1) discard unsupported: Disabling discard passdown.
> thin: 235:2 pool target (735616 blocks) too small: expected 809216
> table: 253:2: thin-pool preresume failed, error = -22
>
> Any ideas what the problem could be?  I can't imagine why all of a
> sudden the pool target would be smaller than it should be.
>
> What further information can I provide to help debug (and very
> hopefully repair) this system?

Hi

Well - could you post somewhere for download the 1st MB of your
PV device?

dd if=/dev/sdX of=/tmp/upload_me bs=1M count=1
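
That first MB holds the lvm2 metadata as plain text right after the PV
label, so - purely as a sanity check, and assuming the image is saved as
lvm-pv.img - you can eyeball the most recent metadata copy yourself:

  # the VG metadata area is stored as human-readable text
  strings lvm-pv.img | less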

Have you already tried to restore it in some way?

Regards


Zdenek


* Re: [linux-lvm] thin: 235:2 pool target (735616 blocks) too small: expected 809216
  2016-05-31 11:22 ` Zdenek Kabelac
@ 2016-06-01 22:52   ` Brian J. Murrell
  2016-06-02  9:11     ` Zdenek Kabelac
  0 siblings, 1 reply; 9+ messages in thread
From: Brian J. Murrell @ 2016-06-01 22:52 UTC (permalink / raw)
  To: linux-lvm


On Tue, 2016-05-31 at 13:22 +0200, Zdenek Kabelac wrote:
> Hi

Sorry for the delay.  It can be a bugger to get an environment on a
laptop that won't boot USB devices to a state where you can access the
PV and store something from it.

> Well - could you post somewhere for download the 1st MB of your
> PV device?
> 
> dd if=/dev/sdX of=/tmp/upload_me bs=1M count=1

You should be able to get that from:

http://www.interlinx.bc.ca/~brian/lvm-pv.img

> Have you already tried to restore it in some way?

I have not tried anything yet for fear of making things worse ("angels
tread" and all that), so this is a blank slate from the point of it
first failing.

Cheers,
b.



* Re: [linux-lvm] thin: 235:2 pool target (735616 blocks) too small: expected 809216
  2016-06-01 22:52   ` Brian J. Murrell
@ 2016-06-02  9:11     ` Zdenek Kabelac
  2016-06-02 10:49       ` Brian J. Murrell
  0 siblings, 1 reply; 9+ messages in thread
From: Zdenek Kabelac @ 2016-06-02  9:11 UTC (permalink / raw)
  To: LVM general discussion and development, Brian Murrell

On 2.6.2016 at 00:52, Brian J. Murrell wrote:
> On Tue, 2016-05-31 at 13:22 +0200, Zdenek Kabelac wrote:
>> Hi
>
> Sorry for the delay.  It can be a bugger to get an environment on a
> laptop that won't boot USB devices to a state where you can access the
> PV and store something from it.
>
>> Well - could you post somewhere for download the 1st MB of your
>> PV device?
>>
>> dd if=/dev/sdX of=/tmp/upload_me bs=1M count=1
>
> You should be able to get that from:
>
> http://www.interlinx.bc.ca/~brian/lvm-pv.img
>
>> Have you already tried to restore it in some way?
>
> I have not tried anything yet for fear of making things worse ("angels
> tread" and all that), so this is a blank slate from the point of it
> first failing.

Hi

So it seems your machine has crashed (you probably know better) during
a thin-pool resize operation.

Unfortunately fc23 has lvm2 version 2.02.132 - version 2.02.133 has an
improvement patch that makes the resize more resistant (but still not as
good as I wish it to be).

So what happened: lvm2 resized the _tdata LV and tried to resume it - and
that failed along the way. However, since the 'resize' is ATM a single
transaction, the lvm2 rollback reverted to the previous size - yet the
thin-pool had already managed to remember the 'new' bigger size.
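
For the record, the two block counts from the error line up exactly with
this (taking the pool's 64 KiB chunk size and the VG's 4 MiB extents
from your metadata):

  735616 chunks * 64 KiB = 45976 MiB   (the reverted, too-small _tdata)
  809216 chunks * 64 KiB = 50576 MiB   (= 12644 extents * 4 MiB, the
                                        size the pool remembered)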

So please take a look at your logs (if you have some) for anything
suspicious worth mentioning (though if your machine has frozen, hardly
any log will be available).

To get access to your thin-pool, I'm attaching metadata content restored
from your disk header, with the 'bigger' _tdata volume.

To restore use:

'vgcfgrestore -f back  --force brianr-laptop'
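
If you want to be extra careful, a rough sequence for the rescue shell
(the --list step is optional - it just shows which metadata
backups/archives lvm2 already knows about for the VG):

  vgcfgrestore --list brianr-laptop    # inspect known metadata backups
  vgcfgrestore -f back --force brianr-laptop
  vgchange -ay brianr-laptop           # re-activate the LVs
  lvs -a brianr-laptop                 # check pool00 and its _tdata size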

It would be really interesting to know the reason for the failure - but I
understand it may be hard to obtain.

Regards


Zdenek


* Re: [linux-lvm] thin: 235:2 pool target (735616 blocks) too small: expected 809216
  2016-06-02  9:11     ` Zdenek Kabelac
@ 2016-06-02 10:49       ` Brian J. Murrell
  2016-06-02 12:15         ` Zdenek Kabelac
  0 siblings, 1 reply; 9+ messages in thread
From: Brian J. Murrell @ 2016-06-02 10:49 UTC (permalink / raw)
  To: linux-lvm


On Thu, 2016-06-02 at 11:11 +0200, Zdenek Kabelac wrote:
> Hi

Hi.

> So it seems your machine has crashed (you probably know better)
> during a thin-pool resize operation.

So, this is actually a family member's machine and he was using it at
the time, so I don't know firsthand what happened.

He says he was just adding some plugin to some (Windows) program in
Wine.  Doesn't seem like that should come anywhere near resizing, so I
will ask him again.

> Unfortunately fc23 has lvm2 version 2.02.132 - version 2.02.133 has
> an improvement patch that makes the resize more resistant (but still
> not as good as I wish it to be).

Incremental improvements are improvements all the same.  :-)

> So what happened: lvm2 resized the _tdata LV and tried to resume it -
> and that failed along the way. However, since the 'resize' is ATM a
> single transaction, the lvm2 rollback reverted to the previous size -
> yet the thin-pool had already managed to remember the 'new' bigger size.

Ahhh.

> So please take a look at your logs (if you have some) for anything
> suspicious worth mentioning (though if your machine has frozen,
> hardly any log will be available).

I will take a look once I can get the system back up and running.

> To get access to your thin-pool, I'm attaching metadata content
> restored from your disk header, with the 'bigger' _tdata volume.

There was no attachment.

> To restore use:
> 
> 'vgcfgrestore -f back  --force brianr-laptop'

I can do that in Fedora's "rescue" environment?  Probably "lvm
vgcfgrestore -f back  --force brianr-laptop" instead.

> It would be really interesting to know the reason for the failure -
> but I understand it may be hard to obtain.

I will see if there is anything in the logs and get back to you.  It's
the least I can do in return for the help you have provided.  Much
appreciated.

Cheers,
b.



* Re: [linux-lvm] thin: 235:2 pool target (735616 blocks) too small: expected 809216
  2016-06-02 10:49       ` Brian J. Murrell
@ 2016-06-02 12:15         ` Zdenek Kabelac
  2016-06-02 19:27           ` Brian J. Murrell
  0 siblings, 1 reply; 9+ messages in thread
From: Zdenek Kabelac @ 2016-06-02 12:15 UTC (permalink / raw)
  To: LVM general discussion and development, Brian Murrell


On 2.6.2016 at 12:49, Brian J. Murrell wrote:
> On Thu, 2016-06-02 at 11:11 +0200, Zdenek Kabelac wrote:
>> Hi
>
> Hi.
>
>> So it seems your machine has crashed (you probably know better)
>> during a thin-pool resize operation.
>
> So, this is actually a family member's machine and he was using it at
> the time, so I don't know firsthand what happened.
>
> He says he was just adding some plugin to some (Windows) program in
> Wine.  Doesn't seem like that should come anywhere near resizing, so I
> will ask him again.
>

Yep, that might explain the 'missing' surrounding details...
Basically, any write might push the thin-pool over its threshold
and require new space - what's unclear is the later failure, which
would be really good to know....


>> Unfortunately fc23 has lvm2 version 2.02.132 - version 2.02.133 has
>> an improvement patch that makes the resize more resistant (but still
>> not as good as I wish it to be).
>
> Incremental improvements are improvements all the same.  :-)
>
>> So what happened: lvm2 resized the _tdata LV and tried to resume it -
>> and that failed along the way. However, since the 'resize' is ATM a
>> single transaction, the lvm2 rollback reverted to the previous size -
>> yet the thin-pool had already managed to remember the 'new' bigger size.
>
> Ahhh.
>
>> So please take a look at your logs (if you have some) for anything
>> suspicious worth mentioning (though if your machine has frozen,
>> hardly any log will be available).
>
> I will take a look once I can get the system back up and running.
>
>> To get access to your thin-pool, I'm attaching metadata content
>> restored from your disk header, with the 'bigger' _tdata volume.
>
> There was no attachment.
>

Oops - so once again ;)



>> To restore use:
>>
>> 'vgcfgrestore -f back  --force brianr-laptop'
>
> I can do that in Fedora's "rescue" environment?  Probably "lvm
> vgcfgrestore -f back  --force brianr-laptop" instead.
>
>> It would be really interesting to know the reason for the failure -
>> but I understand it may be hard to obtain.
>
> I will see if there is anything in the logs and get back to you.  It's
> the least I can do in return for the help you have provided.  Much
> appreciated.
>
> Cheers,

Zdenek



[-- Attachment #2: back --]
[-- Type: text/plain, Size: 3931 bytes --]

# Generated by LVM2 version 2.02.155(2)-git (2016-05-14): Thu Jun  2 11:00:16 2016

contents = "Text Format Volume Group"
version = 1

description = "vgcfgbackup -f back brianr-laptop"

creation_host = "linux"	# Linux linux 4.6.0-1.fc25.x86_64 #1 SMP Mon May 16 14:57:01 UTC 2016 x86_64
creation_time = 1464858016	# Thu Jun  2 11:00:16 2016

brianr-laptop {
	id = "remwwI-SXei-swn5-9r5D-shjG-iqgP-UraDRi"
	seqno = 29
	format = "lvm2"			# informational
	status = ["RESIZEABLE", "READ", "WRITE"]
	flags = []
	extent_size = 8192		# 4 Megabytes
	max_lv = 0
	max_pv = 0
	metadata_copies = 0

	physical_volumes {

		pv0 {
			id = "t0OI9M-lExB-ZraT-4p4o-r20E-obtw-OtrtRL"
			device = "/dev/loop0"	# Hint only

			status = ["ALLOCATABLE"]
			flags = []
			dev_size = 494223360	# 235,664 Gigabytes
			pe_start = 2048
			pe_count = 60330	# 235,664 Gigabytes
		}
	}

	logical_volumes {

		pool00 {
			id = "wPFVij-HJpy-QYtM-9ekQ-65DX-MRPj-p55mpW"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_time = 1430510215	# 2015-05-01 21:56:55 +0200
			creation_host = "localhost"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 12644	# 49,3906 Gigabytes

				type = "thin-pool"
				metadata = "pool00_tmeta"
				pool = "pool00_tdata"
				transaction_id = 3
				chunk_size = 128	# 64 Kilobytes
				discards = "passdown"
				zero_new_blocks = 1
			}
		}

		home {
			id = "MyMBIk-Vb2A-H9dx-6jfA-TcpM-WcVk-imiust"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_time = 1430510215	# 2015-05-01 21:56:55 +0200
			creation_host = "localhost"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 5120	# 20 Gigabytes

				type = "thin"
				thin_pool = "pool00"
				transaction_id = 0
				device_id = 1
			}
		}

		root {
			id = "z3iQ0s-O2dD-qLv2-QXhR-RiK9-e7dq-BWRX6r"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_time = 1430510222	# 2015-05-01 21:57:02 +0200
			creation_host = "localhost"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 3840	# 15 Gigabytes

				type = "thin"
				thin_pool = "pool00"
				transaction_id = 1
				device_id = 2
			}
		}

		swap {
			id = "jo1Nr6-IJdx-azkx-pmEw-qV7R-yr2M-ACWu2s"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_time = 1430510228	# 2015-05-01 21:57:08 +0200
			creation_host = "localhost"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 1472	# 5,75 Gigabytes

				type = "thin"
				thin_pool = "pool00"
				transaction_id = 2
				device_id = 3
			}
		}

		lvol0_pmspare {
			id = "NRyvBs-VFJJ-yRJv-e62a-N6FC-cjRj-it4Y4O"
			status = ["READ", "WRITE"]
			flags = []
			creation_time = 1430510215	# 2015-05-01 21:56:55 +0200
			creation_host = "localhost"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 7	# 28 Megabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 0
				]
			}
		}

		pool00_tmeta {
			id = "MFnJK1-fkVP-ERt6-TKTN-5DPr-SdNI-2Si8LM"
			status = ["READ", "WRITE"]
			flags = []
			creation_time = 1430510215	# 2015-05-01 21:56:55 +0200
			creation_host = "localhost"
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 7	# 28 Megabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 6600
				]
			}
		}

		pool00_tdata {
			id = "NqzQU0-GS2v-5rA6-vy7d-vRt2-lJWL-76kBnj"
			status = ["READ", "WRITE"]
			flags = []
			creation_time = 1430510215	# 2015-05-01 21:56:55 +0200
			creation_host = "localhost"
			segment_count = 2

			segment1 {
				start_extent = 0
				extent_count = 6593	# 25,7539 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 7
				]
			}
			segment2 {
				start_extent = 6593
				extent_count = 6051	# 23,6367 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 6607
				]
			}
		}
	}

}


* Re: [linux-lvm] thin: 235:2 pool target (735616 blocks) too small: expected 809216
  2016-06-02 12:15         ` Zdenek Kabelac
@ 2016-06-02 19:27           ` Brian J. Murrell
  2016-06-02 19:32             ` Zdenek Kabelac
  0 siblings, 1 reply; 9+ messages in thread
From: Brian J. Murrell @ 2016-06-02 19:27 UTC (permalink / raw)
  To: Zdenek Kabelac, LVM general discussion and development


On Thu, 2016-06-02 at 14:15 +0200, Zdenek Kabelac wrote:
> Yep, that might explain the 'missing' surrounding details...
> Basically, any write might push the thin-pool over its
> threshold
> and require new space

Ahhh.  I was wondering if the resize in question could be some kind of
implicit resizing done by the "hidden" thin-pool maintenance.  I'm
fairly sure he knows zilch about LVM and resizing.  :-)

>  - what's unclear is the later failure, which
> would be really good to know....

Is there anything in particular I can look for in the logs that would
indicate when this implicit resizing was happening, so I can look for
other activity that might have caused a problem?

> Oops - so once again ;)

Cheers.  :-)

b.



* Re: [linux-lvm] thin: 235:2 pool target (735616 blocks) too small: expected 809216
  2016-06-02 19:27           ` Brian J. Murrell
@ 2016-06-02 19:32             ` Zdenek Kabelac
  2016-06-02 21:18               ` Brian J. Murrell
  0 siblings, 1 reply; 9+ messages in thread
From: Zdenek Kabelac @ 2016-06-02 19:32 UTC (permalink / raw)
  To: Brian J. Murrell, LVM general discussion and development

On 2.6.2016 at 21:27, Brian J. Murrell wrote:
> On Thu, 2016-06-02 at 14:15 +0200, Zdenek Kabelac wrote:
>> Yep, that might explain the 'missing' surrounding details...
>> Basically, any write might push the thin-pool over its
>> threshold
>> and require new space
>
> Ahhh.  I was wondering if the resize in question could be some kind of
> implicit resizing done by the "hidden" thin-pool maintenance.  I'm
> fairly sure he knows zilch about LVM and resizing.  :-)
>

When lvm.conf has thin_pool_autoextend_threshold defined as <100%, the pool
is automatically monitored and resized when the threshold is crossed.
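
For illustration only, the relevant lvm.conf fragment looks roughly like
this (the values are example numbers, not taken from your config):

  activation {
      # autoextend the thin-pool once it is 70% full...
      thin_pool_autoextend_threshold = 70
      # ...growing it by 20% of its current size each time
      thin_pool_autoextend_percent = 20
  }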


>>  - what's unclear is the later failure, which
>> would be really good to know....
>
> Is there anything in particular I can look for in the logs that would
> indicate when this implicit resizing was happening, so I can look for
> other activity that might have caused a problem?

If you spot any 'device-mapper' errors on May 25 in the kernel log....
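
On Fedora the journal is usually the quickest place to look - something
along these lines should do (dmeventd is the daemon that performs the
autoextend, so its messages are interesting too):

  # scan the days around the failure for dm/thin-pool activity
  journalctl --since 2016-05-24 --until 2016-05-26 | grep -iE 'device-mapper|dmeventd|lvextend'
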
>> Oops - so once again ;)
>
> Cheers.  :-)
>
> b.
>

Zdenek


* Re: [linux-lvm] thin: 235:2 pool target (735616 blocks) too small: expected 809216
  2016-06-02 19:32             ` Zdenek Kabelac
@ 2016-06-02 21:18               ` Brian J. Murrell
  0 siblings, 0 replies; 9+ messages in thread
From: Brian J. Murrell @ 2016-06-02 21:18 UTC (permalink / raw)
  To: linux-lvm


On Thu, 2016-06-02 at 21:32 +0200, Zdenek Kabelac wrote:
> 
> When lvm.conf has thin_pool_autoextend_threshold defined as <100%,
> the pool is
> automatically monitored and resized when the threshold is crossed.

Indeed.

> If you spot any 'device-mapper' errors on May 25 in the kernel log....

Nothing.  Not a single device-mapper message since May 19 and that was
only:

May 19 17:00:23 laptop kernel: device-mapper: thin: Data device (dm-1) discard unsupported: Disabling discard passdown.

during a normal boot.

For what it's worth, I have the machine up and running, so your help and
advice were spot on.  Thanks again so much.

Cheers,
b.


