* Converting thin stripe volume to linear
From: Patrick Hemmer @ 2025-03-13 0:38 UTC
To: linux-lvm
I've got an LVM thin stripe volume across 2 drives that I'm trying to migrate to a new larger single drive (getting rid of the old drives). Following various information on the subject, it seems the procedure here is to first convert from stripe to mirror, and then from mirror to linear. While attempting this, I seem to have hit an issue in the second part of that process, and am not having much luck resolving it.
So to start, the drives at play are sda (a new larger drive), sdb (one of the older drives being removed), & sdc (the other drive being removed). The VG name is "ssd".
This is what the initial layout looked like:
# lvs -o+lv_layout,stripes -a
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Layout    #Str
  [lvol0_pmspare] ssd ewi-a----- 236.00m                                                     linear       1
  thin            ssd twi-aotz-- 930.59g             92.40  98.76                            thin,pool    1
  [thin_tdata]    ssd Twi-ao---- 930.59g                                                     striped      2
  [thin_tmeta]    ssd ewi-ao---- 236.00m                                                     linear       1
(plus some other LVs using the thin pool volume, which I've omitted)
I initiated the mirror with:
# lvconvert -m 1 ssd/thin_tdata
Replaced LV type raid1 with possible type raid5_n.
Repeat this command to convert to raid1 after an interim conversion has finished.
Are you sure you want to convert striped LV ssd/thin_tdata to raid5_n type? [y/n]: y
Logical volume ssd/thin_tdata successfully converted.
And this is where I'm stuck. If I follow the instructions there and repeat the command, I get a nasty warning:
# lvconvert -m 1 ssd/thin_tdata
Using default stripesize 64.00 KiB.
Converting raid5_n LV ssd/thin_tdata to 2 stripes first.
WARNING: Removing stripes from active and open logical volume ssd/thin_tdata will shrink it from 930.59 GiB to <465.30 GiB!
THIS MAY DESTROY (PARTS OF) YOUR DATA!
Interrupt the conversion and run "lvresize -y -l476464 ssd/thin_tdata" to keep the current size if not done already!
If that leaves the logical volume larger than 476464 extents due to stripe rounding,
you may want to grow the content afterwards (filesystem etc.)
WARNING: to remove freed stripes after the conversion has finished, you have to run "lvconvert --stripes 1 ssd/thin_tdata"
Can't remove stripes without --force option.
Reshape request failed on LV ssd/thin_tdata.
If I go with other information I've found online, and skip to `-m 0` here instead of repeating the `-m 1` command, I get:
# lvconvert -m 0 ssd/thin_tdata
Using default stripesize 64.00 KiB.
No change in RAID LV ssd/thin_tdata layout, freeing reshape space.
LV ssd/thin_tdata does not have reshape space allocated.
Reshape request failed on LV ssd/thin_tdata.
This is what the layout currently looks like:
# lvs -o+lv_layout,stripes,devices -a
  LV                    VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Layout     #Str Devices
  [lvol0_pmspare]       ssd ewi-a----- 236.00m                                                      linear        1 /dev/sda(0)
  thin                  ssd twi-aotz-- 930.59g              92.40  98.76                            thin,pool     1 thin_tdata(0)
  [thin_tdata]          ssd rwi-aor--- 930.59g                                    100.00            raid,raid5    3 thin_tdata_rimage_0(0),thin_tdata_rimage_1(0),thin_tdata_rimage_2(0)
  [thin_tdata_rimage_0] ssd iwi-aor--- <465.30g                                                     linear        1 /dev/sda(118)
  [thin_tdata_rimage_1] ssd iwi-aor--- <465.30g                                                     linear        1 /dev/sdb(0)
  [thin_tdata_rimage_2] ssd iwi-aor--- <465.30g                                                     linear        1 /dev/sdc(1)
  [thin_tdata_rmeta_0]  ssd ewi-aor---    4.00m                                                     linear        1 /dev/sda(119234)
  [thin_tdata_rmeta_1]  ssd ewi-aor---    4.00m                                                     linear        1 /dev/sdb(119116)
  [thin_tdata_rmeta_2]  ssd ewi-aor---    4.00m                                                     linear        1 /dev/sdc(0)
  [thin_tmeta]          ssd ewi-ao---- 236.00m                                                      linear        1 /dev/sda(59)
This is with lvm2-2.03.23-1 on Fedora 40.
Any idea how I get past this point? I could just build a completely new logical volume and manually copy the data, but there are around 40 logical volumes on this thin pool, many of which are snapshots, so it would be much easier to just convert it if possible.
--
Patrick
* Re: Converting thin stripe volume to linear
From: Zdenek Kabelac @ 2025-03-13 18:54 UTC
To: Patrick Hemmer, linux-lvm
On 13. 03. 25 at 1:38, Patrick Hemmer wrote:
> I've got an LVM thin stripe volume across 2 drives that I'm trying to migrate to a new larger single drive (getting rid of the old drives). Following various information on the subject, it seems the procedure here is to first convert from stripe to mirror, and then from mirror to linear. While attempting this, I seem to have hit an issue in the second part of that process, and am not having much luck resolving it.
Hi
Likely you can convert your nearly full thin-pool with a single thin volume to
a linear LV by just taking a 'dd' copy of if=/dev/thin of=/dev/linear - using
the direct I/O option (and this can actually be faster than raid mirroring).
If you are converting _tdata - you are basically using raid1 for _tdata - but
that is no help for converting anything to linear - _tdata holds 'chunks', and
how these chunks are 'mapped' to make your thinLV look like a 'linear block
device' is fully controlled by the thin-pool target and the contents of _tmeta.
While lvm2 could eventually add support for doing a raid of a thinLV - so you
could have your duplicate made while your device is online and in use -
currently there is no such variant supported - so you need to activate the
thinLV and, without any use/mounting, take a 'dd' copy.
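As a minimal sketch (the LV names here are placeholders, not taken from this
setup), such a direct-io copy could look like:
  # dd if=/dev/ssd/some_thin_lv of=/dev/vg_new/some_linear_lv bs=1M \
       iflag=direct oflag=direct status=progress
The large bs keeps throughput up, since direct io bypasses the page cache.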
>
> Any idea how I get past this point? I could just build a completely new logical volume and manually copy the data, but there are around 40 logical volumes on this thin pool, many of which are snapshots, so it would be much easier to just convert it if possible.
Well, once you go to thin-pool, there is no other route back to linear
other than a 'manual copy' (and no one has even requested such a feature).
Regards
Zdenek
* Re: Converting thin stripe volume to linear
From: Patrick Hemmer @ 2025-03-13 23:14 UTC
To: Zdenek Kabelac, linux-lvm
On Thu, Mar 13, 2025, at 14:54, Zdenek Kabelac wrote:
> On 13. 03. 25 at 1:38, Patrick Hemmer wrote:
>> I've got an LVM thin stripe volume across 2 drives that I'm trying to migrate to a new larger single drive (getting rid of the old drives). Following various information on the subject, it seems the procedure here is to first convert from stripe to mirror, and then from mirror to linear. While attempting this, I seem to have hit an issue in the second part of that process, and am not having much luck resolving it.
>
> Hi
>
> Likely you can convert your nearly full thin-pool with a single thin volume to
> a linear LV by just taking a 'dd' copy of if=/dev/thin of=/dev/linear - using
> the direct I/O option (and this can actually be faster than raid mirroring).
I assume I need to copy both the tdata and tmeta volumes to their new linear counterparts. I did this, but now I assume I need to do something to get LVM to rescan the new linear thin volume to pick up the logical volumes that are now on it. And also stop LVM from picking them up off the old thin volume. I deactivated the old thin volume, but LVM is still recognizing all the logical volumes inside it. I could completely delete the old thin volume, but I'd prefer to get the new volume online before doing that.
> If you are converting _tdata - you are basically using raid1 for _tdata - but
> that is no help for converting anything to linear - _tdata holds 'chunks', and
> how these chunks are 'mapped' to make your thinLV look like a 'linear block
> device' is fully controlled by the thin-pool target and the contents of _tmeta.
>
> While lvm2 could eventually add support for doing a raid of a thinLV - so you
> could have your duplicate made while your device is online and in use -
> currently there is no such variant supported - so you need to activate the
> thinLV and, without any use/mounting, take a 'dd' copy.
>
>>
>> Any idea how I get past this point? I could just build a completely new logical volume and manually copy the data, but there are around 40 logical volumes on this thin pool, many of which are snapshots, so it would be much easier to just convert it if possible.
>
> Well, once you go to thin-pool, there is no other route back to linear
> other than a 'manual copy' (and no one has even requested such a feature).
>
> Regards
>
> Zdenek
--
Patrick
* Re: Converting thin stripe volume to linear
From: Zdenek Kabelac @ 2025-03-14 0:05 UTC
To: Patrick Hemmer, linux-lvm
On 14. 03. 25 at 0:14, Patrick Hemmer wrote:
> On Thu, Mar 13, 2025, at 14:54, Zdenek Kabelac wrote:
>> On 13. 03. 25 at 1:38, Patrick Hemmer wrote:
>>> I've got an LVM thin stripe volume across 2 drives that I'm trying to migrate to a new larger single drive (getting rid of the old drives). Following various information on the subject, it seems the procedure here is to first convert from stripe to mirror, and then from mirror to linear. While attempting this, I seem to have hit an issue in the second part of that process, and am not having much luck resolving it.
>>
>> Hi
>>
>> Likely you can convert your nearly full thin-pool with a single thin volume to
>> a linear LV by just taking a 'dd' copy of if=/dev/thin of=/dev/linear - using
>> the direct I/O option (and this can actually be faster than raid mirroring).
>
> I assume I need to copy both the tdata and tmeta volumes to their new linear counterparts. I did this, but now I assume I need to do something to get LVM to rescan the new linear thin volume to pick up the logical volumes that are now on it. And also stop LVM from picking them up off the old thin volume. I deactivated the old thin volume, but LVM is still recognizing all the logical volumes inside it. I could completely delete the old thin volume, but I'd prefer to get the new volume online before doing that.
>
Hi
A thin volume uses a thin pool, which in turn uses data & metadata volumes.
Thus the thin-pool can remain active even when the thin LV is already deactivated,
depending on the use case - and you can obviously also deactivate your thin-pool.
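A minimal sketch using the names from this thread - take down the whole VG,
or just the pool once its thin LVs are inactive:
  # vgchange -an ssd
  # lvchange -an ssd/thin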
In your case you need to 'forget' about copying thin_tdata or thin_tmeta, or
even the thin-pool ssd/thin itself.
The 'lvs -a' you've shown unfortunately lists *ONLY* the thin-pool (ssd/thin)
but not a single thin LV (with the letter 'V' in its attributes and 'ssd/thin'
as its Pool volume).
Once you know which thinLV you want to copy - simply use 'dd' to copy data
from the thinLV to your new storage:
  dd if=/dev/ssd/your_thin_lv of=/dev/new_block_dev bs=1M \
     iflag=direct oflag=direct status=progress
Your new blockdev/LV must be at least the size of your thin LV!
(The thin-pool has 930.59g, but a thinLV could possibly be much bigger - so be
sure you know what you are copying and where!)
I'd highly recommend checking at least 'man lvmthin' before you try to
follow any googled advice.
Regards
Zdenek
* Re: Converting thin stripe volume to linear
From: Patrick Hemmer @ 2025-03-14 0:17 UTC
To: Zdenek Kabelac, linux-lvm
On Thu, Mar 13, 2025, at 20:05, Zdenek Kabelac wrote:
> On 14. 03. 25 at 0:14, Patrick Hemmer wrote:
>> On Thu, Mar 13, 2025, at 14:54, Zdenek Kabelac wrote:
>>> On 13. 03. 25 at 1:38, Patrick Hemmer wrote:
>>>> I've got an LVM thin stripe volume across 2 drives that I'm trying to migrate to a new larger single drive (getting rid of the old drives). Following various information on the subject, it seems the procedure here is to first convert from stripe to mirror, and then from mirror to linear. While attempting this, I seem to have hit an issue in the second part of that process, and am not having much luck resolving it.
>>>
>>> Hi
>>>
>>> Likely you can convert your nearly full thin-pool with a single thin volume to
>>> a linear LV by just taking a 'dd' copy of if=/dev/thin of=/dev/linear - using
>>> the direct I/O option (and this can actually be faster than raid mirroring).
>>
>> I assume I need to copy both the tdata and tmeta volumes to their new linear counterparts. I did this, but now I assume I need to do something to get LVM to rescan the new linear thin volume to pick up the logical volumes that are now on it. And also stop LVM from picking them up off the old thin volume. I deactivated the old thin volume, but LVM is still recognizing all the logical volumes inside it. I could completely delete the old thin volume, but I'd prefer to get the new volume online before doing that.
>>
>
> Hi
>
> A thin volume uses a thin pool, which in turn uses data & metadata volumes.
>
> Thus the thin-pool can remain active even when the thin LV is already deactivated,
> depending on the use case - and you can obviously also deactivate your thin-pool.
>
> In your case you need to 'forget' about copying thin_tdata or thin_tmeta, or
> even the thin-pool ssd/thin itself.
> The 'lvs -a' you've shown unfortunately lists *ONLY* the thin-pool (ssd/thin)
> but not a single thin LV (with the letter 'V' in its attributes and 'ssd/thin'
> as its Pool volume).
>
Yes, I mentioned a couple of times in the first email that there are many volumes which sit on top of the thin volume, but which I omitted. And that I did not want to copy them manually, because there are a lot of them, and many of them are snapshots. So to recreate the snapshots, I'd have to copy the oldest snapshot to the new thin volume, snapshot it, copy the next oldest, and so on down the line. That would be a very time-consuming and painful process. Hence it's not under consideration.
Here's the complete output as it stands right now:
# lvs -o+lv_layout,stripes,devices -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Layout #Str Devices
crypt nvme -wi-a----- 270.00g linear 1 /dev/nvme0n1p2(90571)
crypt-old nvme -wi-a----- 200.00g striped 2 /dev/nvme0n1p2(36096),/dev/nvme1n1p2(36096)
crypt-old nvme -wi-a----- 200.00g striped 2 /dev/nvme0n1p2(69632),/dev/nvme1n1p2(69376)
crypt-old nvme -wi-a----- 200.00g striped 2 /dev/nvme0n1p2(72832),/dev/nvme1n1p2(72576)
crypt-old nvme -wi-a----- 200.00g striped 2 /dev/nvme0n1p2(77312),/dev/nvme1n1p2(77056)
docker nvme -wi-ao---- 15.00g striped 2 /dev/nvme0n1p2(55808),/dev/nvme1n1p2(55296)
home-phemmer-luks nvme -wi-ao---- 350.00g striped 2 /dev/nvme0n1p2(4096),/dev/nvme1n1p2(4096)
home-phemmer-luks nvme -wi-ao---- 350.00g striped 2 /dev/nvme0n1p2(57728),/dev/nvme1n1p2(57472)
home-phemmer-luks nvme -wi-ao---- 350.00g striped 2 /dev/nvme0n1p2(62592),/dev/nvme1n1p2(62336)
home-phemmer-luks nvme -wi-ao---- 350.00g striped 2 /dev/nvme0n1p2(68352),/dev/nvme1n1p2(68096)
home-phemmer-luks nvme -wi-ao---- 350.00g striped 2 /dev/nvme0n1p2(74752),/dev/nvme1n1p2(74496)
home-phemmer-luks nvme -wi-ao---- 350.00g striped 2 /dev/nvme0n1p2(159691),/dev/nvme1n1p2(84096)
home-phemmer-luks nvme -wi-ao---- 350.00g striped 2 /dev/nvme0n1p2(166091),/dev/nvme1n1p2(90496)
root nvme -wi-ao---- 70.00g striped 2 /dev/nvme0n1p2(0),/dev/nvme1n1p2(0)
root nvme -wi-ao---- 70.00g striped 2 /dev/nvme0n1p2(62208),/dev/nvme1n1p2(61952)
root nvme -wi-ao---- 70.00g striped 2 /dev/nvme0n1p2(72192),/dev/nvme1n1p2(71936)
root nvme -wi-ao---- 70.00g striped 2 /dev/nvme0n1p2(74112),/dev/nvme1n1p2(73856)
root nvme -wi-ao---- 70.00g striped 2 /dev/nvme0n1p2(81152),/dev/nvme1n1p2(80896)
stmp nvme -wi-ao---- 110.00g striped 2 /dev/nvme0n1p2(54528),/dev/nvme1n1p2(54016)
stmp nvme -wi-ao---- 110.00g striped 2 /dev/nvme0n1p2(64512),/dev/nvme1n1p2(64256)
stmp nvme -wi-ao---- 110.00g striped 2 /dev/nvme0n1p2(160971),/dev/nvme1n1p2(85376)
stmp nvme -wi-ao---- 110.00g striped 2 /dev/nvme0n1p2(167371),/dev/nvme1n1p2(91776)
var-log nvme -wi-ao---- 2.00g striped 2 /dev/nvme0n1p2(54016),/dev/nvme1n1p2(57216)
lvbkup-nvme-crypt ssd Vwi---tz-- 270.00g thin thin,sparse 0
lvbkup-nvme-crypt-20250226 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-crypt-20250227 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-crypt-20250228 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-crypt-20250301 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-crypt-20250302 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-crypt-20250303 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-crypt-20250304 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-crypt-20250305 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-crypt-20250306 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-crypt-20250307 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-crypt-20250308 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-crypt-20250309 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-crypt-20250310 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-crypt-20250311 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-crypt-20250312 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-crypt-20250313 ssd Vwi---tz-k 270.00g thin lvbkup-nvme-crypt thin,sparse 0
lvbkup-nvme-home-phemmer-luks ssd Vwi---tz-- 350.00g thin thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250226 ssd Vwi---tz-k 330.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250227 ssd Vwi---tz-k 330.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250228 ssd Vwi---tz-k 330.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250301 ssd Vwi---tz-k 330.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250302 ssd Vwi---tz-k 330.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250303 ssd Vwi---tz-k 330.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250304 ssd Vwi---tz-k 330.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250305 ssd Vwi---tz-k 330.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250306 ssd Vwi---tz-k 340.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250307 ssd Vwi---tz-k 340.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250308 ssd Vwi---tz-k 340.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250309 ssd Vwi---tz-k 340.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250310 ssd Vwi---tz-k 340.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250311 ssd Vwi---tz-k 340.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250312 ssd Vwi---tz-k 340.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-home-phemmer-luks-20250313 ssd Vwi---tz-k 340.00g thin lvbkup-nvme-home-phemmer-luks thin,sparse 0
lvbkup-nvme-root ssd Vwi---tz-- 70.00g thin thin,sparse 0
lvbkup-nvme-root-20250227 ssd Vwi---tz-k 70.00g thin lvbkup-nvme-root thin,sparse 0
lvbkup-nvme-root-20250228 ssd Vwi---tz-k 70.00g thin lvbkup-nvme-root thin,sparse 0
lvbkup-nvme-root-20250301 ssd Vwi---tz-k 70.00g thin lvbkup-nvme-root thin,sparse 0
lvbkup-nvme-root-20250302 ssd Vwi---tz-k 70.00g thin lvbkup-nvme-root thin,sparse 0
lvbkup-nvme-root-20250303 ssd Vwi---tz-k 70.00g thin lvbkup-nvme-root thin,sparse 0
lvbkup-nvme-root-20250304 ssd Vwi---tz-k 70.00g thin lvbkup-nvme-root thin,sparse 0
lvbkup-nvme-root-20250305 ssd Vwi---tz-k 70.00g thin lvbkup-nvme-root thin,sparse 0
lvbkup-nvme-root-20250306 ssd Vwi---tz-k 70.00g thin lvbkup-nvme-root thin,sparse 0
lvbkup-nvme-root-20250307 ssd Vwi---tz-k 70.00g thin lvbkup-nvme-root thin,sparse 0
lvbkup-nvme-root-20250308 ssd Vwi---tz-k 70.00g thin lvbkup-nvme-root thin,sparse 0
lvbkup-nvme-root-20250309 ssd Vwi---tz-k 70.00g thin lvbkup-nvme-root thin,sparse 0
lvbkup-nvme-root-20250310 ssd Vwi---tz-k 70.00g thin lvbkup-nvme-root thin,sparse 0
lvbkup-nvme-root-20250311 ssd Vwi---tz-k 70.00g thin lvbkup-nvme-root thin,sparse 0
lvbkup-nvme-root-20250312 ssd Vwi---tz-k 70.00g thin lvbkup-nvme-root thin,sparse 0
lvbkup-nvme-root-20250313 ssd Vwi---tz-k 70.00g thin lvbkup-nvme-root thin,sparse 0
[lvol0_pmspare] ssd ewi-a----- 236.00m linear 1 /dev/sda(0)
thin ssd twi---tz-- 930.59g thin,pool 1 thin_tdata(0)
thin2 ssd twi-a-tzF- 930.59g thin,pool 1 thin2_tdata(0)
[thin2_tdata] ssd Twi-ao---- 930.59g linear 1 /dev/sda(119235)
[thin2_tmeta] ssd ewi-ao---- 236.00m linear 1 /dev/sdb(119117)
[thin_tdata] ssd rwi---r--- 930.59g raid,raid5 3 thin_tdata_rimage_0(0),thin_tdata_rimage_1(0),thin_tdata_rimage_2(0)
[thin_tdata_rimage_0] ssd Iwi---r--- <465.30g linear 1 /dev/sda(118)
[thin_tdata_rimage_1] ssd Iwi---r--- <465.30g linear 1 /dev/sdb(0)
[thin_tdata_rimage_2] ssd Iwi---r--- <465.30g linear 1 /dev/sdc(1)
[thin_tdata_rmeta_0] ssd ewi---r--- 4.00m linear 1 /dev/sda(119234)
[thin_tdata_rmeta_1] ssd ewi---r--- 4.00m linear 1 /dev/sdb(119116)
[thin_tdata_rmeta_2] ssd ewi---r--- 4.00m linear 1 /dev/sdc(0)
[thin_tmeta] ssd ewi------- 236.00m linear 1 /dev/sda(59)
> Once you know which thinLV you want to copy - simply use 'dd' to copy data
> from the thinLV to your new storage:
>
>   dd if=/dev/ssd/your_thin_lv of=/dev/new_block_dev bs=1M \
>      iflag=direct oflag=direct status=progress
>
> Your new blockdev/LV must be at least the size of your thin LV!
> (The thin-pool has 930.59g, but a thinLV could possibly be much bigger - so be
> sure you know what you are copying and where!)
>
> I'd highly recommend checking at least 'man lvmthin' before you try to
> follow any googled advice.
>
> Regards
>
> Zdenek
--
Patrick
* Re: Converting thin stripe volume to linear
From: Zdenek Kabelac @ 2025-03-14 0:27 UTC
To: Patrick Hemmer, linux-lvm
On 14. 03. 25 at 1:17, Patrick Hemmer wrote:
>
> On Thu, Mar 13, 2025, at 20:05, Zdenek Kabelac wrote:
>> On 14. 03. 25 at 0:14, Patrick Hemmer wrote:
>>> On Thu, Mar 13, 2025, at 14:54, Zdenek Kabelac wrote:
>>>> On 13. 03. 25 at 1:38, Patrick Hemmer wrote:
>>>>> I've got an LVM thin stripe volume across 2 drives that I'm trying to migrate to a new larger single drive (getting rid of the old drives). Following various information on the subject, it seems the procedure here is to first convert from stripe to mirror, and then from mirror to linear. While attempting this, I seem to have hit an issue in the second part of that process, and am not having much luck resolving it.
>>>>
>>>> Hi
>>>>
>>>> Likely you can convert your nearly full thin-pool with a single thin volume to
>>>> a linear LV by just taking a 'dd' copy of if=/dev/thin of=/dev/linear - using
>>>> the direct I/O option (and this can actually be faster than raid mirroring).
>>>
>>> I assume I need to copy both the tdata and tmeta volumes to their new linear counterparts. I did this, but now I assume I need to do something to get LVM to rescan the new linear thin volume to pick up the logical volumes that are now on it. And also stop LVM from picking them up off the old thin volume. I deactivated the old thin volume, but LVM is still recognizing all the logical volumes inside it. I could completely delete the old thin volume, but I'd prefer to get the new volume online before doing that.
>>>
>>
>> Hi
>>
>> A thin volume uses a thin pool, which in turn uses data & metadata volumes.
>>
>> Thus the thin-pool can remain active even when the thin LV is already deactivated,
>> depending on the use case - and you can obviously also deactivate your thin-pool.
>>
>> In your case you need to 'forget' about copying thin_tdata or thin_tmeta, or
>> even the thin-pool ssd/thin itself.
>> The 'lvs -a' you've shown unfortunately lists *ONLY* the thin-pool (ssd/thin)
>> but not a single thin LV (with the letter 'V' in its attributes and 'ssd/thin'
>> as its Pool volume).
>>
> Yes, I mentioned a couple of times in the first email that there are many volumes which sit on top of the thin volume, but which I omitted. And that I did not want to copy them manually, because there are a lot of them, and many of them are snapshots. So to recreate the snapshots, I'd have to copy the oldest snapshot to the new thin volume, snapshot it, copy the next oldest, and so on down the line. That would be a very time-consuming and painful process. Hence it's not under consideration.
Then you are using somewhat confusing terminology.
If you want to convert thin to linear - it means you no longer want it to be
a thin volume.
For this you need to 'dd' copy the thins you want to convert - and obviously if
you had a 300G thinLV + a 300G snapshot thinLV - you need 600G to store them as
linear volumes.
If you want something different you will need to describe things differently.
But if you do not want to use a thin-pool with your new storage - then use a
'dd' copy.
If you however want to just 'migrate' the thin-pool to a new PV - then you can
simply 'vgextend' the new drive into your existing VG.
Then pvmove /dev/old /dev/new
lvm2 will figure out how to do the mirroring.
Once this is all finished, you simply 'vgreduce' your old PVs from this VG.
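As a sketch with placeholder device names (/dev/new being the added drive,
/dev/old1 and /dev/old2 the drives being retired):
  # vgextend ssd /dev/new
  # pvmove /dev/old1 /dev/new
  # pvmove /dev/old2 /dev/new
  # vgreduce ssd /dev/old1 /dev/old2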
Regards
Zdenek
* Re: Converting thin stripe volume to linear
From: Zdenek Kabelac @ 2025-03-14 11:25 UTC
To: Patrick Hemmer, linux-lvm
On 14. 03. 25 at 2:30, Patrick Hemmer wrote:
> On Thu, Mar 13, 2025, at 20:27, Zdenek Kabelac wrote:
>> On 14. 03. 25 at 1:17, Patrick Hemmer wrote:
> I'm describing exactly what I want. To convert a thin striped volume to linear.
Hi
Unfortunately you were using misleading terms, but we are almost getting to the
right point - you were talking about a volume named 'thin_tdata', which was a
striped LV and is now a raid5 LV - but it's not a 'thin' volume.
You want to preserve the thin-pool functionality; you just want to relocate
_tdata to the new drive and drop the 'striped' functionality out of it.
ATM your thin_tdata volume is already raid5_n, and conversion to the 'raid1'
type is not supported, as lvm2 rejects this conversion for a stacked LV.
So for the quickest results I'd recommend these steps:
(assuming you are brave enough to make them)
- Extend your 'ssd' VG with your new drive
- Create a new LV (in the same VG) on the new storage with a size matching the
_tdata volume size:
# lvs -a --units b
# lvcreate -L <same_size_in_bytes>B --name new_data_lv ssd /dev/newpv
- Do a 'component' activation of _tdata - for this, the whole thin-pool and all
its thin volumes must be inactive:
# lvchange -ay ssd/thin_tdata
This will activate the volume in read-only mode.
- Then do a 'dd' copy of this _tdata volume to the new destination:
# dd if=/dev/ssd/thin_tdata of=/dev/ssd/new_data_lv bs=1M \
     iflag=direct oflag=direct status=progress
Now the 'tricky' part:
- Take a 'vgcfgbackup -f backup ssd' of the current VG.
- After all data have been copied, make sure the LVs are INACTIVE:
'lvchange -an ssd/....' or simply 'vgchange -an ssd'
- Edit the 'ascii' metadata format in your text editor - i.e. 'vim backup':
look for the section with the LV name "thin_tdata"
and rename this LV to some other unused name, i.e. "thin_Xdata".
Then look for the LV section of the newly created LV and rename
this LV to the name "thin_tdata".
Save the lvm2 metadata and exit the editor (see the illustrative excerpt
after these steps).
- Now restore lvm2 metadata with 'vgcfgrestore --force -f backup ssd'
- Check with 'lvs -ao+seg_pe_ranges' that all the desired changes are in effect.
- You should now be able to activate the pool and use the data from its new location.
- lvremove the now-unused thin_Xdata volume - eventually you should be able to
vgreduce the unused PVs from the 'ssd' VG.
- You may apply a similar workflow to _tmeta - although there, pvmove should work.
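For illustration only, the rename in the backup file would look roughly like
this (a sketch - real LV sections also carry id, status and segment fields,
which must be left untouched):
  ssd {
      logical_volumes {
          thin_Xdata {   # was "thin_tdata" - the old raid5 data LV, renamed aside
              ...
          }
          thin_tdata {   # was "new_data_lv" - the new linear LV takes over the name
              ...
          }
      }
  }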
Regards
Zdenek
PS: Let me know if you find some step unclear - it's better to ask than to
cause data loss.
PPS: Yeah, we will need to figure out how to enable this missing workflow
within lvm2 - so the user does not need to do the above steps in such a manual way.
* Re: Converting thin stripe volume to linear
From: Patrick Hemmer @ 2025-03-14 11:30 UTC
To: linux-lvm
(note, this is a re-send due to accidentally removing the mailing list from the recipient list)
On Thu, Mar 13, 2025, at 20:27, Zdenek Kabelac wrote:
> On 14. 03. 25 at 1:17, Patrick Hemmer wrote:
>>
>> On Thu, Mar 13, 2025, at 20:05, Zdenek Kabelac wrote:
>>> On 14. 03. 25 at 0:14, Patrick Hemmer wrote:
>>>> On Thu, Mar 13, 2025, at 14:54, Zdenek Kabelac wrote:
>>>>> On 13. 03. 25 at 1:38, Patrick Hemmer wrote:
>>>>>> I've got an LVM thin stripe volume across 2 drives that I'm trying to migrate to a new larger single drive (getting rid of the old drives). Following various information on the subject, it seems the procedure here is to first convert from stripe to mirror, and then from mirror to linear. While attempting this, I seem to have hit an issue in the second part of that process, and am not having much luck resolving it.
>>>>>
>>>>> Hi
>>>>>
>>>>> Likely you can convert your nearly full thin-pool with a single thin volume to
>>>>> a linear LV by just taking a 'dd' copy of if=/dev/thin of=/dev/linear - using
>>>>> the direct I/O option (and this can actually be faster than raid mirroring).
>>>>
>>>> I assume I need to copy both the tdata and tmeta volumes to their new linear counterparts. I did this, but now I assume I need to do something to get LVM to rescan the new linear thin volume to pick up the logical volumes that are now on it. And also stop LVM from picking them up off the old thin volume. I deactivated the old thin volume, but LVM is still recognizing all the logical volumes inside it. I could completely delete the old thin volume, but I'd prefer to get the new volume online before doing that.
>>>>
>>>
>>> Hi
>>>
>>> A thin volume uses a thin pool, which in turn uses data & metadata volumes.
>>>
>>> Thus the thin-pool can remain active even when the thin LV is already deactivated,
>>> depending on the use case - and you can obviously also deactivate your thin-pool.
>>>
>>> In your case you need to 'forget' about copying thin_tdata or thin_tmeta, or
>>> even the thin-pool ssd/thin itself.
>>> The 'lvs -a' you've shown unfortunately lists *ONLY* the thin-pool (ssd/thin)
>>> but not a single thin LV (with the letter 'V' in its attributes and 'ssd/thin'
>>> as its Pool volume).
>>>
>> Yes, I mentioned a couple of times in the first email that there are many volumes which sit on top of the thin volume, but which I omitted. And that I did not want to copy them manually, because there are a lot of them, and many of them are snapshots. So to recreate the snapshots, I'd have to copy the oldest snapshot to the new thin volume, snapshot it, copy the next oldest, and so on down the line. That would be a very time-consuming and painful process. Hence it's not under consideration.
>
>
> Then you are using somewhat confusing terminology.
>
> If you want to convert thin to linear - it means you no longer want it to be
> a thin volume.
No, that's not how it works. "thin" and "linear" aren't exclusive, and aren't even related. "thin" vs "thick" is the method of allocation and has nothing to do with disk layout, or the distribution of extents on the devices, which is what "linear" vs "striped" refers to.
This can be demonstrated in the following output:
# lvs -o+lv_layout,stripes -a
  LV           VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Layout    #Str
  thin         ssd twi-aotz-- 930.59g             92.40  98.76                            thin,pool    1
  [thin_tdata] ssd Twi-ao---- 930.59g                                                     striped      2
  [thin_tmeta] ssd ewi-ao---- 236.00m                                                     linear       1
^ Note how the thin_tdata is striped.
Now here's another thin volume I just created:
# lvs -o+lv_layout,stripes -a
  LV           VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Layout    #Str
  test         ssd twi-a-tz--  1.00g             0.00   10.94                            thin,pool    1
  [test_tdata] ssd Twi-ao----  1.00g                                                     linear       1
  [test_tmeta] ssd ewi-ao----  4.00m                                                     linear       1
^ Note how test_tdata is linear.
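The same orthogonality shows up at creation time. A hypothetical pair of
commands (names are placeholders; the striped variant assumes two PVs with
free space in the VG):
  # lvcreate --type thin-pool --stripes 2 -L 1g -n pool_striped ssd
  # lvcreate --type thin-pool -L 1g -n pool_linear ssd
Both create thin pools (thin allocation), but the first lays its _tdata out
striped across two PVs, while the second leaves it linear (layout).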
> For this you need to 'dd' copy the thins you want to convert - and obviously if
> you had a 300G thinLV + a 300G snapshot thinLV - you need 600G to store them as
> linear volumes.
>
> If you want something different you will need to describe things differently.
I'm describing exactly what I want. To convert a thin striped volume to linear.
> But if you do not want to use a thin-pool with your new storage - then use a
> 'dd' copy.
>
> If you however want to just 'migrate' the thin-pool to a new PV - then you can
> simply 'vgextend' the new drive into your existing VG.
> Then pvmove /dev/old /dev/new
No, this does not work because of the striping. You cannot move extents from two different physical drives onto a single physical drive without first disabling the striping. To disable the striping, you have to convert to linear. If you try to migrate the extents without disabling striping, LVM refuses to do so because it violates the policy (the logical volume is no longer "striped"):
# pvmove /dev/sdb /dev/sda
Insufficient suitable allocatable extents for logical volume : 119116 more required
Unable to allocate mirror extents for ssd/pvmove0.
Failed to convert pvmove LV to mirrored.
And no, it's not complaining because sda doesn't have enough space. It's complaining because of suitable *allocatable* extents. It can't allocate them on that drive because doing so violates the striping policy.
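A quick way to confirm that raw free space isn't the issue (a hypothetical
check - these are standard pvs fields, output will vary):
  # pvs -o pv_name,pv_size,pv_free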
Red Hat has documentation on this subject, which is the documentation I was following: https://access.redhat.com/solutions/368783
> lvm2 will figure out how to do the mirroring.
> Once this is all finished, you simply 'vgreduce' your old PVs from this VG.
>
> Regards
>
> Zdenek
--
Patrick