From: "Patrick Hemmer" <phemmer+lvm@stormcloud9.net>
To: linux-lvm@lists.linux.dev
Subject: Converting thin stripe volume to linear
Date: Wed, 12 Mar 2025 20:38:27 -0400
Message-ID: <0eb680b5-ad45-45e6-bc17-de052aa583a1@app.fastmail.com>

I've got an LVM thin stripe volume across 2 drives that I'm trying to migrate to a new, larger single drive (getting rid of the old drives). Following various information on the subject, it seems the procedure is to first convert from striped to mirrored, and then from mirrored to linear. While attempting this, I've hit an issue in the second part of that process, and am not having much luck resolving it.
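
In other words, the plan I pieced together (a sketch only; the final `-m 0` step and its PV arguments are my assumption of how the old legs get dropped, not something I've run yet) was roughly:

# lvconvert -m 1 ssd/thin_tdata                     # striped -> raid5_n (interim step)
# lvconvert -m 1 ssd/thin_tdata                     # repeat to finish the conversion to raid1
# lvconvert -m 0 ssd/thin_tdata /dev/sdb /dev/sdc   # drop the legs on the old drives, leaving linear on sda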

So to start, the drives at play are sda (a new larger drive), sdb (one of the older drives being removed), & sdc (the other drive being removed). The VG name is "ssd".
This is what the initial layout looked like:

# lvs -o+lv_layout,stripes -a
  LV                                     VG   Attr       LSize   Pool Origin                        Data%  Meta%  Move Log Cpy%Sync Convert Layout      #Str
  [lvol0_pmspare]                        ssd  ewi-a----- 236.00m                                                                            linear         1
  thin                                   ssd  twi-aotz-- 930.59g                                    92.40  98.76                            thin,pool      1
  [thin_tdata]                           ssd  Twi-ao---- 930.59g                                                                            striped        2
  [thin_tmeta]                           ssd  ewi-ao---- 236.00m                                                                            linear         1
(I've omitted some other LVs that are using the thin pool volume.)

I initiated the mirror with:
# lvconvert -m 1 ssd/thin_tdata
  Replaced LV type raid1 with possible type raid5_n.
  Repeat this command to convert to raid1 after an interim conversion has finished.
Are you sure you want to convert striped LV ssd/thin_tdata to raid5_n type? [y/n]: y
  Logical volume ssd/thin_tdata successfully converted.

And this is where I'm stuck. If I follow the instructions there and repeat the command, I get a nasty warning:
# lvconvert -m 1 ssd/thin_tdata

  Using default stripesize 64.00 KiB.
  Converting raid5_n LV ssd/thin_tdata to 2 stripes first.
  WARNING: Removing stripes from active and open logical volume ssd/thin_tdata will shrink it from 930.59 GiB to <465.30 GiB!
  THIS MAY DESTROY (PARTS OF) YOUR DATA!
  Interrupt the conversion and run "lvresize -y -l476464 ssd/thin_tdata" to keep the current size if not done already!
  If that leaves the logical volume larger than 476464 extents due to stripe rounding,
  you may want to grow the content afterwards (filesystem etc.)
  WARNING: to remove freed stripes after the conversion has finished, you have to run "lvconvert --stripes 1 ssd/thin_tdata"
  Can't remove stripes without --force option.
  Reshape request failed on LV ssd/thin_tdata.
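
(Presumably the intent is that I run the suggested lvresize first and then retry the conversion, i.e. something like the following, but I haven't dared yet given the data-destruction warning:)

# lvresize -y -l476464 ssd/thin_tdata
# lvconvert -m 1 ssd/thin_tdata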

If I go with other information I've found online and skip to `-m 0` here instead of repeating the `-m 1` command, I get:
# lvconvert -m 0 ssd/thin_tdata
  Using default stripesize 64.00 KiB.
  No change in RAID LV ssd/thin_tdata layout, freeing reshape space.
  LV ssd/thin_tdata does not have reshape space allocated.
  Reshape request failed on LV ssd/thin_tdata.
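
(The only other lever I can see is the "Can't remove stripes without --force option" message from the previous attempt, which suggests something like the line below, but the "THIS MAY DESTROY (PARTS OF) YOUR DATA!" warning makes me reluctant to try it blind:)

# lvconvert --force -m 1 ssd/thin_tdata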

This is what the layout currently looks like:
# lvs -o+lv_layout,stripes,devices -a 
  LV                                     VG   Attr       LSize    Pool Origin                        Data%  Meta%  Move Log Cpy%Sync Convert Layout      #Str Devices                                                             
  [lvol0_pmspare]                        ssd  ewi-a-----  236.00m                                                                            linear         1 /dev/sda(0)                                                         
  thin                                   ssd  twi-aotz--  930.59g                                    92.40  98.76                            thin,pool      1 thin_tdata(0)                                                       
  [thin_tdata]                           ssd  rwi-aor---  930.59g                                                           100.00           raid,raid5     3 thin_tdata_rimage_0(0),thin_tdata_rimage_1(0),thin_tdata_rimage_2(0)
  [thin_tdata_rimage_0]                  ssd  iwi-aor--- <465.30g                                                                            linear         1 /dev/sda(118)                                                       
  [thin_tdata_rimage_1]                  ssd  iwi-aor--- <465.30g                                                                            linear         1 /dev/sdb(0)                                                         
  [thin_tdata_rimage_2]                  ssd  iwi-aor--- <465.30g                                                                            linear         1 /dev/sdc(1)                                                         
  [thin_tdata_rmeta_0]                   ssd  ewi-aor---    4.00m                                                                            linear         1 /dev/sda(119234)                                                    
  [thin_tdata_rmeta_1]                   ssd  ewi-aor---    4.00m                                                                            linear         1 /dev/sdb(119116)                                                    
  [thin_tdata_rmeta_2]                   ssd  ewi-aor---    4.00m                                                                            linear         1 /dev/sdc(0)                                                         
  [thin_tmeta]                           ssd  ewi-ao----  236.00m                                                                            linear         1 /dev/sda(59)                                                        

This is with lvm2-2.03.23-1 on Fedora 40.

Any idea how I can get past this point? I could just build a completely new logical volume and manually copy the data over, but there are around 40 logical volumes on this thin pool, many of which are snapshots, so it would be much easier to just convert it in place if possible.
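
(For reference, the fallback I'm trying to avoid would be something like the hypothetical sketch below, repeated per volume, with all names and sizes made up; it also loses the snapshot sharing, which is why I'd rather convert in place:)

# lvcreate --type thin-pool -L 900g -n thin2 ssd /dev/sda
# lvcreate -V <size> -T ssd/thin2 -n <name>.new
# dd if=/dev/ssd/<name> of=/dev/ssd/<name>.new bs=1M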

--
Patrick


Thread overview: 8+ messages
2025-03-13  0:38 Patrick Hemmer [this message]
2025-03-13 18:54 ` Converting thin stripe volume to linear Zdenek Kabelac
2025-03-13 23:14   ` Patrick Hemmer
2025-03-14  0:05     ` Zdenek Kabelac
2025-03-14  0:17       ` Patrick Hemmer
2025-03-14  0:27         ` Zdenek Kabelac
     [not found]           ` <a5e7c616-70d5-4108-a963-b298ce317163@app.fastmail.com>
2025-03-14 11:25             ` Zdenek Kabelac
2025-03-14 11:30           ` Patrick Hemmer
