From: "Luck, Tony" <tony.luck@intel.com>
To: Reinette Chatre <reinette.chatre@intel.com>
Cc: Fenghua Yu <fenghuay@nvidia.com>,
	Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>,
	Peter Newman <peternewman@google.com>,
	James Morse <james.morse@arm.com>,
	Babu Moger <babu.moger@amd.com>,
	"Drew Fustini" <dfustini@baylibre.com>,
	Dave Martin <Dave.Martin@arm.com>, Chen Yu <yu.c.chen@intel.com>,
	Borislav Petkov <bp@alien8.de>, <x86@kernel.org>,
	<linux-kernel@vger.kernel.org>, <patches@lists.linux.dev>
Subject: Re: [PATCH 4/4] fs/resctrl: Fix issues with worker threads when CPUs are taken offline
Date: Wed, 13 May 2026 13:10:01 -0700	[thread overview]
Message-ID: <agTamVZ23ilhQw5R@agluck-desk3> (raw)
In-Reply-To: <1216ef85-9cc5-4037-9c51-6915bc6f4bdd@intel.com>

On Mon, May 11, 2026 at 04:06:04PM -0700, Reinette Chatre wrote:
> Hi Tony,
> 
> On 5/8/26 11:21 AM, Tony Luck wrote:
> > diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c
> > index 9fd901c78dc6..02434d11e024 100644
> > --- a/fs/resctrl/monitor.c
> > +++ b/fs/resctrl/monitor.c
> > @@ -791,12 +791,38 @@ static void mbm_update(struct rdt_resource *r, struct rdt_l3_mon_domain *d,
> >   */
> >  void cqm_handle_limbo(struct work_struct *work)
> >  {
> > +	struct rdt_resource *r = resctrl_arch_get_resource(RDT_RESOURCE_L3);
> >  	unsigned long delay = msecs_to_jiffies(CQM_LIMBOCHECK_INTERVAL);
> >  	struct rdt_l3_mon_domain *d;
> >  
> >  	cpus_read_lock();
> >  	mutex_lock(&rdtgroup_mutex);
> >  
> > +	/*
> > +	 * Worker was blocked waiting for the CPU it was running on to go
> > +	 * offline. Handle two scenarios:
> > +	 * - Worker was running on the last CPU of a domain. The domain and
> > +	 *   thus the work_struct has been freed so do not attempt to obtain
> > +	 *   domain via container_of(). All remaining domains have limbo
> > +	 *   handlers so the loop will not find any domains needing a
> > +	 *   limbo handler. Just exit.
> > +	 * - Worker was running on CPU that just went offline with other
> > +	 *   CPUs in domain still running and available to take over the
> > +	 *   worker. Offline handler could not schedule a new worker on
> > +	 *   another CPU in the domain but signaled that this needs to be
> > +	 *   done by setting mbm_work_cpu to nr_cpu_ids. Find the domain
> > +	 *   that needs a worker and schedule it after the normal CQM
> > +	 *   interval.
> > +	 */
> > +	if (!is_percpu_thread()) {
> > +		list_for_each_entry(d, &r->mon_domains, hdr.list) {
> > +			if (d->cqm_work_cpu == nr_cpu_ids)
> > +				cqm_setup_limbo_handler(d, CQM_LIMBOCHECK_INTERVAL,
> > +							RESCTRL_PICK_ANY_CPU);
> > +		}
> > +		goto out_unlock;
> > +	}
> > +
> >  	d = container_of(work, struct rdt_l3_mon_domain, cqm_limbo.work);
> >  
> 
> The issue reported by sashiko [1] is not clear to me. The claim is that if the above
> worker is running on the last CPU of a domain and is blocked at cpus_read_lock() when
> that CPU is rapidly offlined and then onlined, then by the time the worker runs it
> will find is_percpu_thread() to be true but the domain structure will have been freed.
> I am not familiar with the CPU hotplug locking but from what I can tell, in this
> scenario, the cpus_write_lock() in _cpu_up() will block since there is a pending reader
> and the worker will be able to run before the CPU online work is done. The scenario presented
> thus seems to be defeated by percpu-rwsem semantics. What do you think of the scenario
> presented in [1]?

I'm also not familiar with the fine details of the CPU offline/online
flow ... but this claim by sashiko seems fishy:

	4. Before the worker thread resumes, a CPU online operation occurs for
	   the same CPU. The workqueue subsystem rebinds the worker thread to
	   the CPU (restoring nr_cpus_allowed == 1).

That sounds like a thing that could be done, but did code actually get
written for this obscure corner case?

I asked Gemini about races between readers and writers for
cpu_hotplug_lock. It says that queued readers will run before queued
writers because cpus_write_lock() is considered an infrequent,
heavyweight operation, so priority is given to readers (who can
execute in parallel). This matches your analysis above.
> 
> Reinette
> 
> 
> [1] https://sashiko.dev/#/patchset/20260508182143.14592-1-tony.luck%40intel.com?part=4

-Tony


Thread overview: 18+ messages
2026-05-08 18:21 [PATCH 0/4] fs/resctrl: Fix three long-standing issues Tony Luck
2026-05-08 18:21 ` [PATCH 1/4] fs/resctrl: Move functions to avoid forward references in subsequent fixes Tony Luck
2026-05-08 18:21 ` [PATCH 2/4] fs/resctrl: Free mon_data structures on rdt_get_tree() failure Tony Luck
2026-05-08 21:36   ` Luck, Tony
2026-05-09 12:43     ` Chen, Yu C
2026-05-11  3:15       ` Luck, Tony
2026-05-12  1:51         ` Chen, Yu C
2026-05-08 18:21 ` [PATCH 3/4] fs/resctrl: Fix deadlock for errors during mount Tony Luck
2026-05-10 13:52   ` Chen, Yu C
2026-05-11 22:53   ` Reinette Chatre
2026-05-12  7:28     ` Chen, Yu C
2026-05-12 14:34       ` Reinette Chatre
2026-05-13  3:24         ` Chen, Yu C
2026-05-13 19:51           ` Luck, Tony
2026-05-13 22:19             ` Reinette Chatre
2026-05-08 18:21 ` [PATCH 4/4] fs/resctrl: Fix issues with worker threads when CPUs are taken offline Tony Luck
2026-05-11 23:06   ` Reinette Chatre
2026-05-13 20:10     ` Luck, Tony [this message]
