* Best use of a small SSD with LVM
From: Brian J. Murrell @ 2025-08-06 22:35 UTC
To: linux-lvm
I'm wondering what are the best practices for taking advantage of the
speed of SSDs in combination with spinning rust drives in a volume
group.
I have 111GB of free SSD space in the same volume group as all of my
system's filesystems (/, /usr, /var, /home, various other /var/cache/*
filesystems, other data, etc.).
Currently the 111GB PV on the SSD is completely unused. But I wonder
what the best use of it is.
Should I just move entire LVs (i.e. something like /usr or /var or
something else) onto it, or should I use it with LVM caching? IOW is
LVM caching effective?
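By "move" I mean pvmove'ing individual LVs onto the SSD PV, something
like the following (the LV and device names are just placeholders):

    # relocate only the "var" LV from the spinning PV onto the SSD PV
    pvmove -n var /dev/sda2 /dev/sdb1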
LVM caching seems like a lot of guesswork regarding how to split the
SSD up most effectively so that you have the right size cache for the
various volumes that you want to cache. I.e. for every filesystem I
have in my VG, I need to decide what percentage of the SSD I want to
use for it and then how big I want the cache LV vs the cache metadata
LV, etc. Is there any way to monitor performance of the cache LVs to
decide whether they are being used effectively or whether space should
be moved from one cache LV to another?
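For concreteness, the sort of setup I am imagining, with placeholder
names (vg, home, /dev/sdb1 for the SSD PV) and assuming a recent LVM
that supports --cachevol:

    # carve a 20G cache LV out of the SSD PV and attach it to vg/home
    lvcreate -n home_cache -L 20G vg /dev/sdb1
    lvconvert --type cache --cachevol home_cache --cachemode writethrough vg/home

    # hit/miss counters per cached LV
    lvs -o+cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses,cache_dirty_blocks vg

Presumably a persistently low read-hit ratio would mean the cache is
too small (or just not useful) for that LV.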
Can cache LVs be resized to optimize their size? Perhaps I created a
cache LV too big and it's not being used efficiently, and I then want
to reduce its size and use the freed-up space to make a different
cache LV bigger.
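What I am hoping is possible is something along these lines (untested
on my part, placeholder names again), since I gather a cache cannot be
resized while it is attached:

    lvconvert --splitcache vg/home      # detach, keeping home_cache as a plain LV
    lvresize -L 30G vg/home_cache       # grow (or shrink) the detached cache LV
    lvconvert --type cache --cachevol home_cache vg/home   # re-attach

(lvconvert --uncache would instead throw the cache LV away entirely.)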
Assuming one wants a writethrough cache because one does not have a
battery/UPS-backed system and is therefore vulnerable to power outages
(writethrough is the correct choice for that situation, correct?), is
there any point in caching write-mostly filesystems such as
/var/[log/] with a writethrough cache?
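For reference, I understand the cache mode can be chosen at attach
time or changed later with lvchange (placeholder LV name):

    lvchange --cachemode writethrough vg/home   # every write still hits the HDD
    lvchange --cachemode writeback vg/home      # faster writes, dirty blocks at risk on power loss

which is part of why I wonder whether a writethrough cache buys
anything at all for a write-mostly LV.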
Any other insights anyone wants to add?
Cheers,
b.
* Re: Best use of a small SSD with LVM
From: matthew patton @ 2025-08-07 2:22 UTC
To: linux-lvm@lists.linux.dev, Brian J. Murrell
> IOW is LVM caching effective?
No, it is not, unless your workload pounds on the same files over and
over, which for OS files is basically nil.
You want Open-CAS Linux. https://github.com/Open-CAS/open-cas-linux
* Re: Best use of a small SSD with LVM
From: Brian J. Murrell @ 2025-08-07 5:59 UTC
To: linux-lvm@lists.linux.dev
On Thu, 2025-08-07 at 02:22 +0000, matthew patton wrote:
> > IOW is LVM caching effective?
>
> No, it is not, unless your workload pounds on the same files over
> and over, which for OS files is basically nil.
> You want Open-CAS Linux. https://github.com/Open-CAS/open-cas-linux
Ah yes. It seems I have been down this rabbit-hole before:
https://github.com/Open-CAS/open-cas-linux/discussions/1605
What I don't like about Open-CAS is the complete lack of integration
with the distribution, and of it being updated automatically with
every distro kernel package update. I don't want every distro kernel
update to turn into half a day of building out-of-distro software, and
having to remember to do that with every kernel update.
The more effort installing updates takes, the more you put them off
until you know you have the time for all of the ancillary work, and
that's just bad for security.
And even if you do somehow automate all of that (maybe with DKMS or
some such), then you need a whole host of @devel packages on every
machine, bla bla bla. Just very messy.
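If one did go the DKMS route, the glue would be a small dkms.conf
along these lines. The DKMS keys are standard, but the Open-CAS paths,
version, make invocation and module name are my guesses, not something
the project ships:

    PACKAGE_NAME="open-cas-linux"
    PACKAGE_VERSION="24.09"                # placeholder version
    # assumed source layout and make target -- adjust to the real tree
    MAKE[0]="make -C modules KERNEL_DIR=/lib/modules/${kernelver}/build"
    CLEAN="make -C modules clean"
    BUILT_MODULE_NAME[0]="cas_cache"       # assumed main kernel module
    BUILT_MODULE_LOCATION[0]="modules/cas_cache"
    DEST_MODULE_LOCATION[0]="/extra"
    AUTOINSTALL="yes"                      # rebuild on every new kernel

But that still needs kernel-devel and a compiler on every machine,
which is exactly the mess I mean.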
Cheers,
b.
* Re: Best use of a small SSD with LVM
From: John Stoffel @ 2025-08-07 15:21 UTC
To: matthew patton; +Cc: linux-lvm@lists.linux.dev, Brian J. Murrell
>>>>> "matthew" == matthew patton <pattonme@yahoo.com> writes:
>> IOW is LVM caching effective?
> No, it is not, unless your workload pounds on the same files over
> and over, which for OS files is basically nil. You want Open-CAS
> Linux. https://github.com/Open-CAS/open-cas-linux
This looks interesting, but why isn't it upstreamed into the linux
kernel? What's stopping it being added if it's so good?
* Re: Best use of a small SSD with LVM
From: Brian J. Murrell @ 2025-08-07 17:45 UTC
To: linux-lvm@lists.linux.dev
On Thu, 2025-08-07 at 11:21 -0400, John Stoffel wrote:
> This looks interesting, but why isn't it upstreamed into the linux
> kernel? What's stopping it being added if it's so good?
See my other reply in this thread about that:
https://lore.kernel.org/linux-lvm/eb048b63ab9ecc6aba533a932bf6ff2ed87701f8.camel@interlinx.bc.ca/
Specifically the link to the exact same question I asked of the
Open-CAS developers:
https://github.com/Open-CAS/open-cas-linux/discussions/1605
They list a number of reasons.
Ultimately it still means more work to update kernels from the distro,
and more work typically delays updating until you have the time for
the extra processing needed to update the Open-CAS kernel driver along
with the kernel itself. :-(
Although, I suppose most directly that's a lack of support for Open-CAS
in a distribution rather than in the upstream kernel.
Ultimately it needs automating in some way: when your distro publishes
a new kernel package, a rebuild of Open-CAS('s kernel module RPM) for
that new kernel is kicked off and the resulting RPM is published in a
repo your servers are subscribed to, so that they automatically get
the new kernel module when they get the new kernel.
Quite doable. Fortunately the Open-CAS project includes a make target
to build an RPM, but it builds against the currently installed kernel,
not the one you are about to install in your update. It also requires
lots of development tools, etc. to be installed on the target system.
Alternatively, I have a patch (which I need to submit to Open-CAS),
worked on for a few hours last night, that lets you specify an
alternate kernel version (i.e. the one in the updates repo that you
want to update to) and builds the Open-CAS RPMs in a mock chroot for
that kernel. Simply have a process scoop up the result of that build
and put it into a repo, and you have automated Open-CAS updates ready
whenever you want to update the kernel.
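Roughly, the build looks like this; the mock config, kernel version
and the KVERSION make variable (the bit my patch adds) are examples,
not exact names:

    KVER=5.14.0-xxx.el9.x86_64   # the kernel you are about to update to
    CFG=rocky-9-x86_64           # mock config matching the target distro
    mock -r "$CFG" --init
    mock -r "$CFG" --install "kernel-devel-$KVER" rpm-build make gcc elfutils-libelf-devel
    mock -r "$CFG" --copyin open-cas-linux-*.tar.gz /builddir/
    mock -r "$CFG" --chroot "cd /builddir && tar xf open-cas-linux-*.tar.gz && \
        cd open-cas-linux-* && make rpm KVERSION=$KVER"
    # then --copyout the resulting RPMs into the repo your hosts use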
The one missing piece is triggering all of that when a new kernel
update appears in the distro's upstream repos. Fedora has some neat
tooling (anitya, fedbus, etc.) for that kind of triggering, but sadly
other distros do not.
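For distros without that, a dumb cron-driven poll would probably do;
something like the following, where the repo path and build script are
made-up placeholders:

    #!/bin/sh
    # newest kernel offered by the enabled repos
    latest=$(dnf -q repoquery --latest-limit 1 --qf '%{version}-%{release}.%{arch}' kernel)
    # rebuild only if we have not already published a kmod RPM for it
    if ! ls /srv/repo/kmod-open-cas-*"$latest"*.rpm >/dev/null 2>&1; then
        /usr/local/bin/build-open-cas-kmod.sh "$latest"   # the mock build above
        createrepo_c --update /srv/repo
    fi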
But this is starting to get far afield from LVM, so maybe it's better
to just end this discussion there.
Cheers,
b.
* Re: Best use of a small SSD with LVM
From: John Stoffel @ 2025-08-07 15:16 UTC
To: Brian J. Murrell; +Cc: linux-lvm
>>>>> "Brian" == Brian J Murrell <brian@interlinx.bc.ca> writes:
> I'm wondering what are the best practices for taking advantage of the
> speed of SSDs in combination with spinning rust drives in a volume
> group.
> I have 111GB of free SSD space in the same volume group as all of my
> system's filesystems (/, /usr, /var, /home, various other /var/cache/*
> filesystems, other data, etc.).
> Currently the 111GB PV on the SSD is completely unused. But I wonder
> what the best use of it is.
> Should I just move entire LVs (i.e. something like /usr or /var or
> something else) onto it, or should I use it with LVM caching? IOW is
> LVM caching effective?
> LVM caching seems like a lot of guesswork regarding how to split the
> SSD up most effectively so that you have the right size cache for
> the various volumes that you want to cache. I.e. for every
> filesystem I have in my VG, I need to decide what percentage of the
> SSD I want to use for it and then how big I want the cache LV vs the
> cache metadata LV, etc. Is there any way to monitor performance of
> the cache LVs to decide if they are being used effectively or if
> space should be moved from one cache LV to another?
I tried LVcache for quite a while back when I had more HDDs with small
SSDs, and I didn't really find it made any visible difference to me.
I had my home directory and some scratch areas using it, but never
really got any wall-time improvements.
Some of my tests were just reading email and storing it on disk,
starting/stopping applications, and doing kernel compiles (git pull,
make oldconfig, make bzImage, etc.), and it just didn't seem to do
much.
> Can cache LVs be resized to optimize the size of them? Perhaps I
> created a cache LV too big and it's not being used efficiently and
> then want to reduce its size and use the freed up space to make a
> different cache LV bigger.
> Assuming one wants a writethrough cache because one does not have
> battery/UPS backed systems (such that they are vulnerable to power
> outages) (writethrough is the correct choice for that situation,
> correct?) is there any point to caching write-mostly filesystems such
> as /var/[log/] with a writethrough cache?
> Any other insights anyone wants to add?
Honestly, I wouldn't bother; I just never found it to provide a
meaningful performance improvement. But that's just me.