From: Chen Ridong <chenridong@huaweicloud.com>
To: "Michal Koutný" <mkoutny@suse.com>
Cc: tj@kernel.org, lizefan.x@bytedance.com, hannes@cmpxchg.org,
longman@redhat.com, chenridong@huawei.com,
cgroups@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 2/3] workqueue: doc: Add a note saturating the system_wq is not permitted
Date: Fri, 27 Sep 2024 16:08:26 +0800 [thread overview]
Message-ID: <6a2f4e01-c9f5-4fb5-953e-2999e00a4b37@huaweicloud.com> (raw)
In-Reply-To: <ipabgusdd5zhnp5724ycc5t4vbraeblhh3ascyzmbkrxvwpqec@pdy3wk5hokru>
On 2024/9/26 20:49, Michal Koutný wrote:
> On Mon, Sep 23, 2024 at 11:43:51AM GMT, Chen Ridong <chenridong@huaweicloud.com> wrote:
>> + Note: If something is expected to generate a large number of concurrent
>> + works, it should utilize its own dedicated workqueue rather than
>> + system wq. Because this may saturate system_wq and potentially lead
>> + to deadlock.
>
> How does "large number of concurrent" translate practically?
>
> The example with released cgroup_bpf from
> cgroup_destroy_locked
> cgroup_bpf_offline
> which is serialized under cgroup_mutex as argued previously. So this
> generates a single entry at a time and it wouldn't hint towards the
> creation of cgroup_bpf_destroy_wq.
>
> I reckon the argument could be something like the processing rate vs
> production rate of entry items should be such that number of active
> items is bound. But I'm not sure it's practical since users may not know
> the comparison result and they would end up always creating a dedicated
> workqueue.
>
>
> Michal
Thank you, Michal.

I think it is difficult for users to measure that comparison (processing
rate vs. production rate) in practice. If something generates work items
at a high frequency, it is better to use a dedicated workqueue.

How about:

  Note: If something may generate work items frequently, it may saturate
  system_wq and potentially lead to deadlock. It should use its own
  dedicated workqueue rather than the system wq.
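
To make the intent concrete, here is a minimal sketch of the pattern the
note is suggesting. The "foo" subsystem and all names below are
hypothetical, not taken from the actual patch; it only illustrates
queueing frequent teardown work on a dedicated wq instead of system_wq:

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

/* Hypothetical object whose teardown is deferred to a workqueue. */
struct foo {
	struct work_struct destroy_work;
	/* ... payload ... */
};

static struct workqueue_struct *foo_destroy_wq;

static void foo_destroy_workfn(struct work_struct *work)
{
	struct foo *f = container_of(work, struct foo, destroy_work);

	/* release resources held by @f */
	kfree(f);
}

/*
 * Queue on the dedicated wq, not on system_wq, so a burst of foo
 * teardowns cannot saturate system_wq for unrelated users.
 */
static void foo_schedule_destroy(struct foo *f)
{
	INIT_WORK(&f->destroy_work, foo_destroy_workfn);
	queue_work(foo_destroy_wq, &f->destroy_work);
}

static int __init foo_init(void)
{
	foo_destroy_wq = alloc_workqueue("foo_destroy", 0, 0);
	if (!foo_destroy_wq)
		return -ENOMEM;
	return 0;
}

static void __exit foo_exit(void)
{
	destroy_workqueue(foo_destroy_wq);
}

module_init(foo_init);
module_exit(foo_exit);
MODULE_LICENSE("GPL");

Since the work items run on foo_destroy_wq, a burst of destructions does
not eat into system_wq's max_active budget.
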
Best regards,
Ridong