From: "Brian J. Murrell" <brian@interlinx.bc.ca>
To: linux-lvm@lists.linux.dev
Subject: Single lvmcache device for multiple LVs
Date: Thu, 12 Dec 2024 09:16:41 -0500
Message-ID: <d97e2760fd994c7b7dac5b15b688b9f963d631d6.camel@interlinx.bc.ca>

I have multiple LVs in a VG:

  LV               VG       Attr       LSize    Pool     Origin  Data%  Meta%  Move Log Cpy%Sync Convert
  cyrus_spool      centos   Vwi-aotz--   29.80g pool00           95.84                                  
  cyrus_spool.old  centos   Vwi-a-tz--   22.18g pool00           98.91                                  
  data             centos   Vwi-aotz--   25.00g pool00           77.88                                  
  debuginfo        centos   -wi-ao----    4.73g                                                         
  home             centos   Vwi-aotz--   10.00g pool00           86.77                                  
  home.old         centos   Vwi-a-tz--   10.00g pool00           99.92                                  
  http_cache       centos   Vwi-aotz--   50.00g pool00           72.10                                  
  mp3              centos   Vwi-aotz--  150.00g pool00           84.37                                  
  nextcloud_data   centos   Vwi-aotz--  250.00g pool00           71.25                                  
  photos           centos   -wi-ao----   64.49g                                                         
  pkg-cache        centos   Vwi-aotz--   47.68g pool00           81.41                                  
  pool00           centos   twi-aotz--    1.26t                  75.81  35.35                           
  root             centos   Vwi-aotz--   28.81g pool00           16.68                                  
  snaptest         centos   Vwi---tz-k    1.46g pool00   root                                           
  source           centos   Vwi-aotz--   92.00g pool00           92.31                                  
  swap             centos   Vwi-aotz--    8.00g pool00           100.00                                 
  swap2            centos   -wi-a-----   16.00g                                                         
  test             centos   -wi-a-----  100.00m                                                         
  usr              centos   Vwi-aotz--    9.25g pool00           77.97                                  
  var              centos   Vwi-aotz--  265.45g pool00           92.25                                  
  windows10        centos   -wi-a-----    6.00g

that I want to cache with a faster 120GB SSD.  Not all of those LVs
are actively used, and some are only rarely used, but I would think
any decent caching algorithm would work out on its own which LVs are
hottest and where their hotspots are, so having to cherry-pick which
LVs to cache and which not to shouldn't even be an issue.

But as I understand it, I need a separate caching LV (for cachevol) or
two (for cachepool) for each LV that I want to cache.
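(If I am reading lvmcache(7) correctly, that means repeating something
like the following for every LV, with /dev/sdX standing in for the SSD
PV and the size being my guess:

  # carve a slice of the SSD out as a cachevol for this one LV
  lvcreate -n photos_cache -L 10G centos /dev/sdX
  # attach it; only centos/photos can ever use that 10G
  lvconvert --type cache --cachevol photos_cache centos/photos

and so on for each of the other LVs, each with its own slice.)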

This seems rather sub-optimal: I have to guess what portion of the
120GB SSD to dedicate to each LV that I want to cache, and even guess
which LVs would be effective to cache at all.

Is there no better way to do this than 1:1 cache{pool|vol} per LV?

I suppose I could take a WAG at the initial sizing, evaluate the stats
after a while, decide (manually) whether some re-balancing of the
portions would be beneficial, and then keep doing that periodically,
but that seems rather tedious and manual.  Are there any solutions to
automate this, as a workaround to the 1:1 LV requirement?
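(The stats I would be evaluating are presumably the dm-cache counters
that lvs can report, something like:

  lvs -o name,cache_used_blocks,cache_read_hits,cache_read_misses centos

assuming those fields tell the whole story.)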

Alternatively, if it must be 1:1, would attaching the cache to the
thin-pool device, pool00, accomplish what I am trying to achieve?  Is
that even possible?
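(Something along the lines of:

  # one cachevol sized for (nearly) the whole SSD
  lvcreate -n pool00_cache -L 110G centos /dev/sdX
  # attach it to the thin pool itself rather than to each thin LV
  lvconvert --type cache --cachevol pool00_cache centos/pool00

if lvconvert will even accept a thin pool there.)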

Is lvmcache maybe just not the right solution in this case?

Cheers,
b.

