From: Reinette Chatre <reinette.chatre@intel.com>
To: Tony Luck <tony.luck@intel.com>, Fenghua Yu <fenghuay@nvidia.com>,
	"Maciej Wieczor-Retman" <maciej.wieczor-retman@intel.com>,
	Peter Newman <peternewman@google.com>,
	James Morse <james.morse@arm.com>,
	Babu Moger <babu.moger@amd.com>,
	Drew Fustini <dfustini@baylibre.com>,
	Dave Martin <Dave.Martin@arm.com>, Chen Yu <yu.c.chen@intel.com>
Cc: Borislav Petkov <bp@alien8.de>, <x86@kernel.org>,
	<linux-kernel@vger.kernel.org>, <patches@lists.linux.dev>
Subject: Re: [PATCH 4/4] fs/resctrl: Fix issues with worker threads when CPUs are taken offline
Date: Mon, 11 May 2026 16:06:04 -0700
Message-ID: <1216ef85-9cc5-4037-9c51-6915bc6f4bdd@intel.com>
In-Reply-To: <20260508182143.14592-5-tony.luck@intel.com>

Hi Tony,

On 5/8/26 11:21 AM, Tony Luck wrote:
> diff --git a/fs/resctrl/monitor.c b/fs/resctrl/monitor.c
> index 9fd901c78dc6..02434d11e024 100644
> --- a/fs/resctrl/monitor.c
> +++ b/fs/resctrl/monitor.c
> @@ -791,12 +791,38 @@ static void mbm_update(struct rdt_resource *r, struct rdt_l3_mon_domain *d,
>   */
>  void cqm_handle_limbo(struct work_struct *work)
>  {
> +	struct rdt_resource *r = resctrl_arch_get_resource(RDT_RESOURCE_L3);
>  	unsigned long delay = msecs_to_jiffies(CQM_LIMBOCHECK_INTERVAL);
>  	struct rdt_l3_mon_domain *d;
>  
>  	cpus_read_lock();
>  	mutex_lock(&rdtgroup_mutex);
>  
> +	/*
> +	 * Worker was blocked waiting for the CPU it was running on to go
> +	 * offline. Handle two scenarios:
> +	 * - Worker was running on the last CPU of a domain. The domain and
> +	 *   thus the work_struct has been freed so do not attempt to obtain
> +	 *   domain via container_of(). All remaining domains have limbo
> +	 *   handlers so the loop will not find any domains needing a
> +	 *   limbo handler. Just exit.
> +	 * - Worker was running on CPU that just went offline with other
> +	 *   CPUs in domain still running and available to take over the
> +	 *   worker. Offline handler could not schedule a new worker on
> +	 *   another CPU in the domain but signaled that this needs to be
> +	 *   done by setting cqm_work_cpu to nr_cpu_ids. Find the domain
> +	 *   that needs a worker and schedule it after the normal CQM
> +	 *   interval.
> +	 */
> +	if (!is_percpu_thread()) {
> +		list_for_each_entry(d, &r->mon_domains, hdr.list) {
> +			if (d->cqm_work_cpu == nr_cpu_ids)
> +				cqm_setup_limbo_handler(d, CQM_LIMBOCHECK_INTERVAL,
> +							RESCTRL_PICK_ANY_CPU);
> +		}
> +		goto out_unlock;
> +	}
> +
>  	d = container_of(work, struct rdt_l3_mon_domain, cqm_limbo.work);
>  

The issue reported by sashiko [1] is not clear to me. The claim is that if the above worker
is running on the last CPU of a domain and is blocked at cpus_read_lock() while the CPU it is
running on is rapidly offlined and then onlined, then when the worker can finally run it will
find is_percpu_thread() to be true even though the domain structure has been freed.

I am not familiar with the CPU hotplug locking, but from what I can tell the cpus_write_lock()
in _cpu_up() will block in this scenario because of the pending reader, so the worker will be
able to run (and find the domain still valid) before the CPU online work completes. The
scenario presented thus seems to be defeated by percpu-rwsem semantics. What do you think of
the scenario presented in [1]?
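
To double-check my reading of the locking I put together the minimal userspace sketch below.
The pthread rwlock only stands in for the CPU hotplug percpu-rwsem (the fairness rules of the
two are not identical, so this only demonstrates the basic property that the write side has to
wait for a reader that already holds the lock; the pending-reader-before-new-writer ordering I
refer to above is a property of percpu-rwsem itself). All names in the sketch are made up for
illustration and none of it is resctrl code.

/*
 * Userspace analogue: "limbo_worker" plays the role of cqm_handle_limbo()
 * taking cpus_read_lock(), "cpu_online" plays the role of _cpu_up() taking
 * cpus_write_lock() before it is allowed to free/rebuild the domain.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t hotplug_lock = PTHREAD_RWLOCK_INITIALIZER;
static int domain_valid = 1;	/* stands in for the rdt_l3_mon_domain */

static void *limbo_worker(void *arg)
{
	pthread_rwlock_rdlock(&hotplug_lock);	/* cpus_read_lock() */
	printf("worker sees domain_valid=%d\n", domain_valid);
	pthread_rwlock_unlock(&hotplug_lock);	/* cpus_read_unlock() */
	return NULL;
}

static void *cpu_online(void *arg)
{
	pthread_rwlock_wrlock(&hotplug_lock);	/* cpus_write_lock(): must wait for the reader */
	domain_valid = 0;			/* domain teardown/rebuild cannot race with the reader */
	pthread_rwlock_unlock(&hotplug_lock);
	return NULL;
}

int main(void)
{
	pthread_t worker, online;

	pthread_create(&worker, NULL, limbo_worker, NULL);
	usleep(10000);			/* let the reader take the lock first */
	pthread_create(&online, NULL, cpu_online, NULL);
	pthread_join(worker, NULL);
	pthread_join(online, NULL);
	return 0;
}

Built with "gcc -pthread", this always prints domain_valid=1: the writer cannot invalidate the
domain while the reader is inside its critical section.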

Reinette


[1] https://sashiko.dev/#/patchset/20260508182143.14592-1-tony.luck%40intel.com?part=4


