From: Zdenek Kabelac <zdenek.kabelac@gmail.com>
To: matthew patton <pattonme@yahoo.com>,
	"Brian J. Murrell" <brian@interlinx.bc.ca>,
	"linux-lvm@lists.linux.dev" <linux-lvm@lists.linux.dev>
Subject: Re: pvmove thin volume doesn't move
Date: Wed, 19 Nov 2025 21:38:52 +0100
Message-ID: <f1fdc32b-0c9d-41a4-a3bb-98169351e9be@gmail.com>
In-Reply-To: <2068740993.5350374.1763581304153@mail.yahoo.com>

On 19. 11. 25 at 20:41, matthew patton wrote:
>> If the tool would have known 'which areas' are  mapped  (which knows thin-pool
>> target internally) then it would need to copy only those blocks.
> 
> no doubt. but if lvmthin is basically a private implementation that LVM (aka thick) doesn't actually know anything about and is just being used as a pass-thru to thin API, I'm not sure we want to expose thin internals to the caller. I obviously haven't read the code implementing either thick or thin, but if thick does a thin_read(one 4MB extent) then thin should just return a buffer with 4mb populated with all the data converted into a thick+linear representation that Thick is expecting. Then the traditional workflow can resume with the extent written out to its destination PV. In other words you're hydrating a thin representation into a thick representation. Could you take that buffer and thin_write(dest thin pool)? I don't see why not.
> 


Clearly a tiny thin pool can provide space for a virtual volume of massive 
size - i.e. you can have a 10TiB LV using just a couple of MiB of real 
physical storage in a VG.
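A minimal sketch of that overprovisioning (the VG/LV names and sizes here are purely illustrative; this needs root and an existing VG named 'vg'):

```shell
# Illustrative only: a ~512MiB pool backing a 10TiB virtual LV
lvcreate --type thin-pool -L 512M -n pool vg
lvcreate --type thin -V 10T --thinpool vg/pool -n bigthin vg
# 'lvs' reports the 10T virtual size while the pool's Data% stays near zero
lvs -o lv_name,lv_size,data_percent vg
```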

So if I were an admin 'unaware' of how this all works, I'd naively expect 
that when I 'pvmove' such a thinLV, where the original LV takes just those 
'couple' of MiB, the copied volume would also take approximately the 'same' 
space - you'd be copying a couple of MiB instead of having an operation 
running for days.

We can 'kind of script' these things offline nowadays - but making this work 
'online' would require a lot of new kernel code...
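One such offline approach could be sketched like this (a sketch only - the LV and pool names are made up, the source LV must be idle, and dd still has to read the full virtual size even though it writes sparsely):

```shell
# Create a same-sized thin LV on the destination pool, then copy with
# conv=sparse so all-zero blocks are skipped and the copy stays thin
lvcreate --type thin -V 10T --thinpool vg/newpool -n thinlv_copy vg
dd if=/dev/vg/thinlv of=/dev/vg/thinlv_copy bs=4M conv=sparse status=progress
```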

So while a naive/dumb pvmove may have some minor 'use' - it has many 'weak' 
points - plain 'dd' can likely copy the data faster - and as said, I'm not 
aware of users ever having requested such an operation until today.

Typically admins move the whole storage to the faster hardware - so the 
thin pool with its data & metadata is moved to the new drive - which is 
fully supported online.
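That whole-device move is the ordinary, supported workflow - roughly (device names illustrative):

```shell
# Online: move all allocated extents - including the pool's hidden
# _tdata/_tmeta sub-LVs - off the old PV onto the new one
vgextend vg /dev/nvme0n1
pvmove /dev/sdb /dev/nvme0n1
vgreduce vg /dev/sdb
```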

> Or you could just punt and say pvmove() of a thin is necessarily a hydrate operation and can only be written out onto a non-thin PV, tough luck. Use offline `dd` if you want more.
> 
> I haven't been particularly impressed by LVM caching (it has its uses, don't get me wrong) but I find layering Open-CAS to be more intuitive and it gives me a degree of freedom.
> 

It's worth noting that the dm-cache target is fully customizable - anyone 
can come up with 'policies' that fit their needs. Somehow this doesn't 
happen, and users mostly stick with the default 'smq' policy - which is 
usually 'good enough' - but it's a hot-spot cache. There is also 
'writecache', which targets heavy write loads...
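For example, attaching a writecache to an existing LV might look roughly like this (per lvmcache(7); names are illustrative):

```shell
# Sketch: NVMe-backed writecache in front of an existing LV
lvcreate -n fast -L 100G vg /dev/nvme0p1
lvconvert --type writecache --cachevol fast vg/lv
```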

If someone likes Open-CAS more :) obviously we can't change their mind. 
We've tried to make 'caching' simple for lvm2 users, e.g.:
    'lvcreate --type cache -L100G  vg/lv /dev/nvme0p1'
(assuming the vg was already vgextended with /dev/nvme0p1)
but surely there are many ways to skin a cat...

Regards

Zdenek



Thread overview: 19+ messages
2025-11-16 13:27 pvmove thin volume doesn't move Brian J. Murrell
2025-11-17 15:01 ` Zdenek Kabelac
2025-11-17 23:30   ` Brian J. Murrell
2025-11-17 23:37     ` Zdenek Kabelac
2025-11-18 23:14       ` Brian J. Murrell
2025-11-19  9:16         ` Zdenek Kabelac
2025-11-19 14:07           ` Matthew Patton
2025-11-19 15:46             ` Brian J. Murrell
2025-11-19 16:34               ` Zdenek Kabelac
2025-11-19 16:06             ` Zdenek Kabelac
2025-11-19 16:36               ` Brian J. Murrell
2025-11-19 16:59                 ` Zdenek Kabelac
2025-11-19 17:22               ` matthew patton
2025-11-19 17:31                 ` Zdenek Kabelac
2025-11-19 17:38                   ` Brian J. Murrell
2025-11-19 18:11                     ` Zdenek Kabelac
2025-11-19 19:41                       ` matthew patton
2025-11-19 20:38                         ` Zdenek Kabelac [this message]
2025-11-19 16:10 ` David Teigland
