From: Waiman Long <llong@redhat.com>
To: Frederic Weisbecker <frederic@kernel.org>,
Waiman Long <llong@redhat.com>
Cc: LKML <linux-kernel@vger.kernel.org>,
Ingo Molnar <mingo@redhat.com>,
Marco Crivellari <marco.crivellari@suse.com>,
Michal Hocko <mhocko@suse.com>,
Peter Zijlstra <peterz@infradead.org>, Tejun Heo <tj@kernel.org>,
Thomas Gleixner <tglx@linutronix.de>,
Vlastimil Babka <vbabka@suse.cz>
Subject: Re: [PATCH 02/27] sched/isolation: Introduce housekeeping per-cpu rwsem
Date: Thu, 26 Jun 2025 19:58:04 -0400
Message-ID: <e9ef9cb1-f202-4591-99f0-4451ca945f0b@redhat.com>
In-Reply-To: <aFwFUk2rWrikLbyA@localhost.localdomain>
On 6/25/25 10:18 AM, Frederic Weisbecker wrote:
> On Mon, Jun 23, 2025 at 01:34:58PM -0400, Waiman Long wrote:
>> On 6/20/25 11:22 AM, Frederic Weisbecker wrote:
>>> The HK_TYPE_DOMAIN isolation cpumask, and further the
>>> HK_TYPE_KERNEL_NOISE cpumask will be made modifiable at runtime in the
>>> future.
>>>
>>> The affected subsystems will need to synchronize against those cpumask
>>> changes so that:
>>>
>>> * The reader gets a coherent snapshot
>>> * The housekeeping subsystem can safely propagate a cpumask update to
>>> the subsystems after it has been published.
>>>
>>> Protect read sides that can sleep with a per-cpu rwsem. Updates are
>>> expected to be very rare given that CPU isolation is a niche use case and
>>> the related cpuset setup happens only in preparation work. On the other
>>> hand, read sides can occur in more frequent paths.
>>>
>>> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
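For reference, the protection introduced here looks roughly like the
minimal sketch below. This is only an illustration of the percpu-rwsem
pattern under discussion; the lock name housekeeping_pcpu_lock and the
helpers around it are placeholders, not necessarily what the patch
itself uses:

#include <linux/cpumask.h>
#include <linux/percpu-rwsem.h>

DEFINE_STATIC_PERCPU_RWSEM(housekeeping_pcpu_lock);

/* Writer: publish an updated housekeeping cpumask with all readers
 * excluded for the duration of the update. */
static void housekeeping_update_mask(const struct cpumask *new_mask)
{
	percpu_down_write(&housekeeping_pcpu_lock);
	/* ... update the HK_TYPE_DOMAIN cpumask here ... */
	percpu_up_write(&housekeeping_pcpu_lock);
}

/* Sleepable reader: take a coherent snapshot of the cpumask and act on
 * it while holding the read lock. */
static void housekeeping_read_mask(void)
{
	percpu_down_read(&housekeeping_pcpu_lock);
	/* ... read and use the HK_TYPE_DOMAIN cpumask here ... */
	percpu_up_read(&housekeeping_pcpu_lock);
}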
>> Thanks for the patch series; it certainly has some good ideas. However, I
>> am a bit concerned about the overhead of using a percpu-rwsem for
>> synchronization, especially when the readers have to wait for completion
>> on the writer side. From my point of view, during the transition period when
>> new isolated CPUs are being added or old ones removed, a reader will
>> either get the old cpumask or the new one depending on the exact timing.
>> The effect on CPU selection may persist for a while after the end of the
>> critical section.
> It depends.
>
> 1) If the read side queues a work and waits for it
>    (case of work_on_cpu()), we can protect the whole under the same
>    sleeping lock and there is no persistence beyond it.
>
> 2) But if the read side just queues some work or defines some cpumask
>    for a future queue, then there is persistence and some action must be
>    taken by housekeeping after the update to propagate the new cpumask
>    (flush pending works, etc.).
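Concretely, case 1) could look roughly like the sketch below; the lock
name and the work function are placeholders rather than code from the
series, and case 2) only appears as the trailing comment since the
flush is housekeeping's job after the update:

	long ret;
	int cpu;

	/*
	 * Case 1): pick a housekeeping CPU, run the work there and wait
	 * for it, all under the same read lock, so nothing tied to the
	 * old cpumask persists past the critical section.
	 */
	percpu_down_read(&housekeeping_pcpu_lock);
	cpu = cpumask_any_and(housekeeping_cpumask(HK_TYPE_DOMAIN),
			      cpu_online_mask);
	ret = work_on_cpu(cpu, my_work_fn, my_arg);	/* placeholders */
	percpu_up_read(&housekeeping_pcpu_lock);

	/*
	 * Case 2): work queued for later execution still targets a CPU
	 * chosen from the old cpumask, so the updater must flush the
	 * relevant workqueues after publishing the new mask.
	 */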
I don't mind taking actions to make sure that the cpumask is properly
propagated after changing the housekeeping cpumasks. I just don't want
to introduce too much latency on the reader, which could be a
latency-sensitive task running on an isolated CPU.

I would say it should be OK to have a grace period (reusing the RCU
term) after changing the housekeeping cpumasks, during which tasks
running on the CPUs affected by the change may or may not experience
its full effect. However, the overhead on tasks running on CPUs
unrelated to the cpumask change should be minimized as soon as possible.
>> Can we just rely on RCU to make sure that it either gets the new one or the
>> old one, but nothing in between, without the additional overhead?
> This is the case as well and it is covered by 2) above.
> The sleeping parts handled in 1) would require more thought.
>
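For the RCU variant (which, if I read the series right, is where the
later conversion of the housekeeping cpumasks to RCU pointers is
heading), the publication could look roughly like this sketch;
hk_domain_mask, hk_update_lock and the helpers are made-up names for
illustration only:

#include <linux/cpumask.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

static struct cpumask __rcu *hk_domain_mask;
static DEFINE_MUTEX(hk_update_lock);

/* Reader: sees either the old mask or the new one, never a partial
 * update, and never waits on the writer. */
static int hk_pick_cpu(void)
{
	int cpu;

	rcu_read_lock();
	cpu = cpumask_any(rcu_dereference(hk_domain_mask));
	rcu_read_unlock();

	return cpu;
}

/* Updater: publish the new mask, wait a grace period so no reader can
 * still be using the old one, then free it and propagate the change
 * (flush pending works, etc.). */
static void hk_update_mask(struct cpumask *new_mask)
{
	struct cpumask *old;

	mutex_lock(&hk_update_lock);
	old = rcu_dereference_protected(hk_domain_mask,
					lockdep_is_held(&hk_update_lock));
	rcu_assign_pointer(hk_domain_mask, new_mask);
	mutex_unlock(&hk_update_lock);

	synchronize_rcu();
	kfree(old);
}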
>> My current thinking is to make use of CPU hotplug to enable better CPU
>> isolation. IOW, I would shut down the affected CPUs, change the housekeeping
>> masks and then bring them back online again. That means the writer side will
>> take a while to complete.
> You mean that an isolated partition should only be set on offline CPUs? That's
> the plan for nohz_full but it may be too late for domain isolation.
Actually, I was talking mainly about nohz_full, but we should handle
changes in the HK_TYPE_DOMAIN cpumask the same way.
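Roughly, the hotplug-based flow I have in mind would be something like
the sketch below. housekeeping_update() is a hypothetical updater and
the delta mask is whatever set of CPUs changes isolation state; the
point is just that the mask is switched while the affected CPUs are
offline, so the writer side is slow but readers elsewhere never block:

#include <linux/cpu.h>
#include <linux/cpumask.h>

static int isolation_update_via_hotplug(const struct cpumask *delta,
					const struct cpumask *new_mask)
{
	int cpu, ret;

	/* Take the affected CPUs down so nothing runs on them during
	 * the cpumask switch. */
	for_each_cpu(cpu, delta) {
		ret = remove_cpu(cpu);
		if (ret)
			return ret;
	}

	housekeeping_update(HK_TYPE_DOMAIN, new_mask);	/* hypothetical */

	/* Bring them back online with the new housekeeping state. */
	for_each_cpu(cpu, delta)
		add_cpu(cpu);

	return 0;
}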
Cheers,
Longman