From: Juri Lelli <juri.lelli@redhat.com>
To: Waiman Long <longman@redhat.com>
Cc: Tejun Heo <tj@kernel.org>, Lai Jiangshan <jiangshanlai@gmail.com>,
linux-kernel@vger.kernel.org, Cestmir Kalina <ckalina@redhat.com>,
Alex Gladkov <agladkov@redhat.com>
Subject: Re: [RFC PATCH 0/3] workqueue: Enable unbound cpumask update on ordered workqueues
Date: Fri, 2 Feb 2024 15:55:15 +0100 [thread overview]
Message-ID: <Zb0CU2OrTCv457Wo@localhost.localdomain> (raw)
In-Reply-To: <ff2c0ce1-4d40-4661-8d74-c1d81ff505ec@redhat.com>
On 01/02/24 09:28, Waiman Long wrote:
> On 2/1/24 05:18, Juri Lelli wrote:
> > On 31/01/24 10:31, Waiman Long wrote:
...
> > My patch only uses the wq->unbound_attrs->cpumask to change the
> > associated rescuer cpumask, but I don't think your series modifies the
> > former?
>
> I don't think so. The calling sequence of apply_wqattrs_prepare() and
> apply_wqattrs_commit() will copy unbound_cpumask into ctx->attrs which is
> copied into unbound_attrs. So unbound_attrs->cpumask should reflect the new
> global unbound cpumask. This code is there all along.
Indeed. I believe this is what my 3/4 [1] was trying to cure, though. I
still think that with the current code new_attrs->cpumask first gets
correctly initialized taking unbound_cpumask into account
apply_wqattrs_prepare ->
  copy_workqueue_attrs(new_attrs, attrs);
  wqattrs_actualize_cpumask(new_attrs, unbound_cpumask);
but is then overwritten further below using cpu_possible_mask
apply_wqattrs_prepare ->
  copy_workqueue_attrs(new_attrs, attrs);
  cpumask_and(new_attrs->cpumask, new_attrs->cpumask, cpu_possible_mask);
an operation whose purpose, I must admit, I still fail to grasp. :)
In the end we commit that last (overwritten) cpumask
apply_wqattrs_commit ->
  copy_workqueue_attrs(ctx->wq->unbound_attrs, ctx->attrs);
Now, my patch was wrong, as you pointed out, since it wasn't taking the
ordering guarantee into consideration. I thought maybe your changes
(plus an additional change to the above?) might fix the problem
correctly.
Best,
Juri
1 - https://lore.kernel.org/lkml/20240116161929.232885-4-juri.lelli@redhat.com/