From: Zdenek Kabelac <zdenek.kabelac@gmail.com>
To: Martin Wilck <martin.wilck@suse.com>, Heming Zhao <heming.zhao@suse.com>
Cc: "teigland@redhat.com" <teigland@redhat.com>,
	"linux-lvm@redhat.com" <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] lvmpolld causes high cpu load issue
Date: Wed, 17 Aug 2022 17:26:08 +0200	[thread overview]
Message-ID: <6be41a4e-9764-03ba-7231-911c733ffecd@gmail.com> (raw)
In-Reply-To: <4e0551e18a28ff602fae6e419dc746145e5962d3.camel@suse.com>

On 17. 08. 22 at 15:41, Martin Wilck wrote:
> On Wed, 2022-08-17 at 14:54 +0200, Zdenek Kabelac wrote:
>> On 17. 08. 22 at 14:39, Martin Wilck wrote:
>>
>>
>> Let's make clear we are very well aware of all the constraints
>> associated with the udev rule logic (and we tried quite hard to
>> minimize the impact - however, the udevd developers somewhat
>> 'misunderstood' how badly the existing watch rule logic would impact
>> the system's performance - and the story unfortunately kind of
>> 'continues' with systemd & D-Bus services...
> 
> I dimly remember you dislike udev ;-)

Well, it's not 'a dislike' on my side - the architecture is simply lacking in 
many areas...

Dave, on the other hand, dislikes udev & systemd altogether :)...


> 
> I like the general idea of the udev watch. It is the magic that causes
> newly created partitions to magically appear in the system, which is

The tragedy of the design comes from the plain fact that there are only 'very 
occasional' consumers of all this 'collected' data - yet gathering all the 
info and keeping it 'up-to-date' gets very, very expensive and can basically 
'neutralize' a lot of your CPU once there are too many resources to watch and 
keep updated...
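
To illustrate what the watch is doing (a rough sketch only - udevadm and a
scratch block device you can safely open for write are assumed here):

  # watch the events udevd synthesizes
  udevadm monitor --udev --property &

  # any open-for-write followed by close on a watched node fires the
  # inotify watch, so udevd emits a "change" uevent and re-runs the whole
  # rule set (blkid probing etc.) for that device:
  dd if=/dev/zero of=/dev/sdX bs=512 count=0 conv=notrunc

Every single such 'change' means evaluating all the rules again - that's
exactly the cost described above.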


> very convenient for users and wouldn't work otherwise. I can see that
> it might be inappropriate for LVM PVs. We can discuss changing the
> rules such that the watch is disabled for LVM devices (both PV and LV).

It's really not fixable as is - because of the complete lack of 'error' handling 
for devices in the udev DB (i.e. duplicate devices..., various frozen devices...).

There is an ongoing 'SID' project that might push the logic somewhat further, 
but the existing 'device' support logic as it stands today is an unfortunate 
'trace' of how the design should not have been made - and since all the 
'original' programmers left the project a long time ago, it's non-trivial to 
push things forward.
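
That said, if you want to experiment with switching the watch off for LVM
devices as suggested, a local rule roughly like this should do it (untested
sketch - the file name is made up and the ordering against the distro's
dm/block rules would need checking):

  # /etc/udev/rules.d/90-lvm-nowatch.rules  (hypothetical local override)
  # drop the inotify watch on PVs and on LVM-backed dm devices
  SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="LVM2_member", OPTIONS:="nowatch"
  SUBSYSTEM=="block", KERNEL=="dm-*", ENV{DM_UUID}=="LVM-*", OPTIONS:="nowatch"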

> I don't claim to overlook all possible side effects, but it might be
> worth a try. It would mean that newly created LVs, LV size changes etc.
> would not be visible in the system immediately. I suppose you could
> work around that in the LVM tools by triggering change events after
> operations like lvcreate.

We just hope SID will make some progress (although probably a small one at 
the beginning).
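
And yes, if the watch were disabled, the tools (or a script) could still ask
for the usual 'change' processing explicitly after operations like lvcreate -
roughly like this (the LV path is only an example, and a reasonably recent
udevadm is assumed):

  # force re-processing of the freshly created LV
  udevadm trigger --action=change /dev/vg0/lvol0

  # wait for the udev event queue to drain
  udevadm settle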


>> However, let's focus on 'pvmove' as it is a potentially very lengthy
>> operation - so it's not feasible to keep the VG locked/blocked across an
>> operation which might take even days with slower storage and big moved
>> sizes (a write access/lock disables all readers...)
> 
> So these close-after-write operations are caused by locking/unlocking
> the PVs?
> 
> Note: We were observing that watch events were triggered every 30s, for
> every PV, simultaneously. (@Heming, correct me if I'm wrong here).

That's why we would like to see the 'metadata', and also to check whether the 
issue still appears with the latest version of lvm2.
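
If the periodic opens really turn out to come from the pvmove polling cycle,
one knob that already exists is the poll interval - just a mitigation idea,
not a fix (the values below are arbitrary examples):

  # lvm.conf - how often pvmove/lvconvert progress is polled and updated
  activation {
      polling_interval = 60    # default is 15 seconds IIRC
  }

  # or per invocation:
  pvmove -i 60 /dev/sdb1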


Zdenek

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
