linux-lvm.redhat.com archive mirror
From: Martin Wilck <martin.wilck@suse.com>
To: Heming Zhao <heming.zhao@suse.com>,
	"zdenek.kabelac@gmail.com" <zdenek.kabelac@gmail.com>
Cc: "teigland@redhat.com" <teigland@redhat.com>,
	"linux-lvm@redhat.com" <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] lvmpolld causes high cpu load issue
Date: Wed, 17 Aug 2022 12:39:08 +0000
Message-ID: <b36cd2879a9ccaf25d0bb0389291e331adcc68c6.camel@suse.com>
In-Reply-To: <20220817104732.jhu3ug6ahep3rnpq@c73>

On Wed, 2022-08-17 at 18:47 +0800, Heming Zhao wrote:
> On Wed, Aug 17, 2022 at 11:46:16AM +0200, Zdenek Kabelac wrote:
> >
> > ATM I'm not even sure whether you are complaining about the CPU
> > usage of lvmpolld or just about the huge udev rule processing
> > overhead.
>
> The load is generated by multipath. lvmpolld performs the
> IN_CLOSE_WRITE action, which is the trigger.

Let's be clear here: every close-after-write operation triggers udev's
"watch" mechanism for block devices, which causes the udev rules to be
executed for the device. That is not a cheap operation. In the case at
hand, the customer was observing a lot of "multipath -U" commands, so
apparently a significant part of the udev rule processing time was
spent in "multipath -U". Running "multipath -U" is important, because
the rule could have been triggered by a change in the number of
available path devices, and later commands run from udev rules might
hang indefinitely if the multipath device had no usable paths any more.
"multipath -U" is already quite well optimized, but it needs to do some
I/O to complete its work, so it takes a few milliseconds to run.
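
To make the mechanism concrete, here is a minimal C sketch that
subscribes to the same inotify event udev's watch is built on. The
device path is only a placeholder; on a real system, every event
observed here would correspond to one full udev rule run:

/* watchdemo.c - observe the close-after-write events that trip
 * udev's "watch" mechanism.
 * Compile: cc -o watchdemo watchdemo.c
 * Run:     ./watchdemo /dev/sdX   (placeholder device path) */
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/sdb"; /* placeholder */
	char buf[4096];

	int fd = inotify_init1(IN_CLOEXEC);
	if (fd < 0) {
		perror("inotify_init1");
		return 1;
	}
	/* udev's watch reacts to exactly this event: the close of a
	 * file descriptor that was open for writing */
	if (inotify_add_watch(fd, dev, IN_CLOSE_WRITE) < 0) {
		perror("inotify_add_watch");
		return 1;
	}
	for (;;) {
		if (read(fd, buf, sizeof(buf)) < 0) {
			perror("read");
			return 1;
		}
		/* each wakeup means at least one close-after-write,
		 * i.e. at least one pass through the udev rules */
		printf("IN_CLOSE_WRITE on %s\n", dev);
	}
}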

IOW, it would be misleading to point at multipath. Close-after-write
operations on block devices should be avoided where possible. As you
probably know, the purpose of udev's "watch" mechanism is to be able
to detect changes on layered devices, e.g. newly created LVs or the
like. "pvmove" is special, because by definition it will usually not
cause any changes in higher layers. Therefore it might make sense to
disable the udev watch on the affected PVs while pvmove is running, and
trigger a single change event (re-enabling the watch) after the pvmove
has finished. If that is impossible, lvmpolld and the other lvm tools
involved in the pvmove operation should avoid calling close() on the
PVs, IOW keep the fds open until the operation is finished.
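
For the first idea, udev rules already have a per-device switch for
this, if I'm not mistaken: OPTIONS+="nowatch" disables the watch and
OPTIONS+="watch" re-enables it (see udev(7)); the awkward part would
be flipping it at runtime. The second idea amounts to the pattern
sketched below. This is a hedged sketch only: the device path, the
function name, and the step loop are made up, it is not lvm code. The
point is just that the PV is opened once, all writes go through that
one fd, and close() is called once at the end, so the watch fires once
instead of once per step:

/* fdopen_once.c - keep the fd open across all steps of an operation
 * so that only one IN_CLOSE_WRITE event is generated, at the end. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* hypothetical: write one chunk of a move-like operation; real code
 * would compute offsets from metadata, not from the step count */
static int do_one_step(int fd, int step)
{
	char buf[512] = { 0 };

	if (pwrite(fd, buf, sizeof(buf), (off_t)step * sizeof(buf)) < 0) {
		perror("pwrite");
		return -1;
	}
	return 0;
}

int main(void)
{
	/* placeholder PV; writing to it is destructive, demo only */
	const char *pv = "/dev/sdb";
	int step;

	/* The anti-pattern would be open/write/close per step: one
	 * IN_CLOSE_WRITE, i.e. one full udev rule run, per step.
	 * Instead: open once, do all steps, close once. */
	int fd = open(pv, O_RDWR | O_CLOEXEC);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	for (step = 0; step < 100; step++)
		if (do_one_step(fd, step) < 0)
			break;

	close(fd); /* the single close-after-write; the watch fires once */
	return 0;
}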

Regards
Martin

Thread overview: 23+ messages
2022-08-16  9:28 [linux-lvm] lvmpolld causes IO performance issue Heming Zhao
2022-08-16  9:38 ` Zdenek Kabelac
2022-08-16 10:08   ` [linux-lvm] lvmpolld causes high cpu load issue Heming Zhao
2022-08-16 10:26     ` Zdenek Kabelac
2022-08-17  2:03       ` Heming Zhao
2022-08-17  8:06         ` Zdenek Kabelac
2022-08-17  8:43           ` Heming Zhao
2022-08-17  9:46             ` Zdenek Kabelac
2022-08-17 10:47               ` Heming Zhao
2022-08-17 11:13                 ` Zdenek Kabelac
2022-08-17 12:39                 ` Martin Wilck [this message]
2022-08-17 12:54                   ` Zdenek Kabelac
2022-08-17 13:41                     ` Martin Wilck
2022-08-17 15:11                       ` David Teigland
2022-08-18  8:06                         ` Martin Wilck
2022-08-17 15:26                       ` Zdenek Kabelac
2022-08-17 15:58                         ` Demi Marie Obenour
2022-08-18  7:37                           ` Martin Wilck
2022-08-17 17:35                         ` Gionatan Danti
2022-08-17 18:54                           ` Zdenek Kabelac
2022-08-17 18:54                             ` Zdenek Kabelac
2022-08-17 19:13                             ` Gionatan Danti
2022-08-18 21:13                   ` Martin Wilck
