* Re: [RESEND PATCH v4 2/2] acpi/hmat: Fix lockdep warning for hmem_register_resource()
2025-11-05 23:51 ` [RESEND PATCH v4 2/2] acpi/hmat: Fix lockdep warning for hmem_register_resource() Dave Jiang
@ 2025-11-08 1:06 ` dan.j.williams
2025-11-12 20:41 ` Dave Jiang
2025-11-11 15:56 ` Jonathan Cameron
2025-11-12 21:46 ` Dave Jiang
2 siblings, 1 reply; 8+ messages in thread
From: dan.j.williams @ 2025-11-08 1:06 UTC (permalink / raw)
To: Dave Jiang, nvdimm, linux-cxl, linux-acpi
Cc: dan.j.williams, vishal.l.verma, ira.weiny, rafael
Dave Jiang wrote:
> The following lockdep splat was observed while kernel auto-online a CXL
> memory region:
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 6.17.0djtest+ #53 Tainted: G W
> ------------------------------------------------------
> systemd-udevd/3334 is trying to acquire lock:
> ffffffff90346188 (hmem_resource_lock){+.+.}-{4:4}, at: hmem_register_resource+0x31/0x50
>
> but task is already holding lock:
> ffffffff90338890 ((node_chain).rwsem){++++}-{4:4}, at: blocking_notifier_call_chain+0x2e/0x70
>
> which lock already depends on the new lock.
> [..]
> Chain exists of:
> hmem_resource_lock --> mem_hotplug_lock --> (node_chain).rwsem
>
> Possible unsafe locking scenario:
>
> CPU0 CPU1
> ---- ----
> rlock((node_chain).rwsem);
> lock(mem_hotplug_lock);
> lock((node_chain).rwsem);
> lock(hmem_resource_lock);
>
> The lock ordering can cause potential deadlock. There are instances
> where hmem_resource_lock is taken after (node_chain).rwsem, and vice
> versa.
>
> Split out the target update section of hmat_register_target() so that
> hmat_callback() only envokes that section instead of attempt to register
s/envokes/invokes/
> hmem devices that it does not need to.
>
> Fixes: cf8741ac57ed ("ACPI: NUMA: HMAT: Register "soft reserved" memory as an "hmem" device")
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
> ---
> v4:
> - Fix fixes tag. (Jonathan)
> - Refactor hmat_hotplug_target(). (Jonathan)
> ---
> drivers/acpi/numa/hmat.c | 47 ++++++++++++++++++++++------------------
> 1 file changed, 26 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
> index 1dc73d20d989..d10cbe93c3a7 100644
> --- a/drivers/acpi/numa/hmat.c
> +++ b/drivers/acpi/numa/hmat.c
> @@ -874,10 +874,33 @@ static void hmat_register_target_devices(struct memory_target *target)
> }
> }
>
> -static void hmat_register_target(struct memory_target *target)
> +static void hmat_hotplug_target(struct memory_target *target)
Ah, this makes sense, but is quite a bit of churn with the new
indentation and new function name which is a slightly odd fit since
initial setup is not "hotplug". Is the following equivalent / easier to
follow?
Either way,
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
index 5a36d57289b4..a1be8cf70dc4 100644
--- a/drivers/acpi/numa/hmat.c
+++ b/drivers/acpi/numa/hmat.c
@@ -878,22 +878,16 @@ static void hmat_register_target(struct memory_target *target)
{
int nid = pxm_to_node(target->memory_pxm);
- /*
- * Devices may belong to either an offline or online
- * node, so unconditionally add them.
- */
- hmat_register_target_devices(target);
-
/*
* Register generic port perf numbers. The nid may not be
* initialized and is still NUMA_NO_NODE.
*/
- mutex_lock(&target_lock);
+ guard(mutex)(&target_lock);
if (*(u16 *)target->gen_port_device_handle) {
hmat_update_generic_target(target);
target->registered = true;
+ return;
}
- mutex_unlock(&target_lock);
/*
* Skip offline nodes. This can happen when memory
@@ -905,7 +899,6 @@ static void hmat_register_target(struct memory_target *target)
if (nid == NUMA_NO_NODE || !node_online(nid))
return;
- mutex_lock(&target_lock);
if (!target->registered) {
hmat_register_target_initiators(target);
hmat_register_target_cache(target);
@@ -913,15 +906,16 @@ static void hmat_register_target(struct memory_target *target)
hmat_register_target_perf(target, ACCESS_COORDINATE_CPU);
target->registered = true;
}
- mutex_unlock(&target_lock);
}
static void hmat_register_targets(void)
{
struct memory_target *target;
- list_for_each_entry(target, &targets, node)
+ list_for_each_entry(target, &targets, node) {
+ hmat_register_target_devices(target);
hmat_register_target(target);
+ }
}
static int hmat_callback(struct notifier_block *self,
^ permalink raw reply related [flat|nested] 8+ messages in thread

* Re: [RESEND PATCH v4 2/2] acpi/hmat: Fix lockdep warning for hmem_register_resource()
2025-11-08 1:06 ` dan.j.williams
@ 2025-11-12 20:41 ` Dave Jiang
0 siblings, 0 replies; 8+ messages in thread
From: Dave Jiang @ 2025-11-12 20:41 UTC (permalink / raw)
To: dan.j.williams, nvdimm, linux-cxl, linux-acpi
Cc: vishal.l.verma, ira.weiny, rafael
On 11/7/25 6:06 PM, dan.j.williams@intel.com wrote:
> Dave Jiang wrote:
>> The following lockdep splat was observed while kernel auto-online a CXL
>> memory region:
>>
>> ======================================================
>> WARNING: possible circular locking dependency detected
>> 6.17.0djtest+ #53 Tainted: G W
>> ------------------------------------------------------
>> systemd-udevd/3334 is trying to acquire lock:
>> ffffffff90346188 (hmem_resource_lock){+.+.}-{4:4}, at: hmem_register_resource+0x31/0x50
>>
>> but task is already holding lock:
>> ffffffff90338890 ((node_chain).rwsem){++++}-{4:4}, at: blocking_notifier_call_chain+0x2e/0x70
>>
>> which lock already depends on the new lock.
>> [..]
>> Chain exists of:
>> hmem_resource_lock --> mem_hotplug_lock --> (node_chain).rwsem
>>
>> Possible unsafe locking scenario:
>>
>> CPU0 CPU1
>> ---- ----
>> rlock((node_chain).rwsem);
>> lock(mem_hotplug_lock);
>> lock((node_chain).rwsem);
>> lock(hmem_resource_lock);
>>
>> The lock ordering can cause potential deadlock. There are instances
>> where hmem_resource_lock is taken after (node_chain).rwsem, and vice
>> versa.
>>
>> Split out the target update section of hmat_register_target() so that
>> hmat_callback() only envokes that section instead of attempt to register
>
> s/envokes/invokes/
>
>> hmem devices that it does not need to.
>>
>> Fixes: cf8741ac57ed ("ACPI: NUMA: HMAT: Register "soft reserved" memory as an "hmem" device")
>> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
>> ---
>> v4:
>> - Fix fixes tag. (Jonathan)
>> - Refactor hmat_hotplug_target(). (Jonathan)
>> ---
>> drivers/acpi/numa/hmat.c | 47 ++++++++++++++++++++++------------------
>> 1 file changed, 26 insertions(+), 21 deletions(-)
>>
>> diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
>> index 1dc73d20d989..d10cbe93c3a7 100644
>> --- a/drivers/acpi/numa/hmat.c
>> +++ b/drivers/acpi/numa/hmat.c
>> @@ -874,10 +874,33 @@ static void hmat_register_target_devices(struct memory_target *target)
>> }
>> }
>>
>> -static void hmat_register_target(struct memory_target *target)
>> +static void hmat_hotplug_target(struct memory_target *target)
>
> Ah, this makes sense, but is quite a bit of churn with the new
> indentation and new function name which is a slightly odd fit since
> initial setup is not "hotplug". Is the following equivalent / easier to
> follow?
Yes and no. We also need to remove the generic target block from the hotplug path. So I'll just keep the original version.
DJ
>
> Either way,
>
> Reviewed-by: Dan Williams <dan.j.williams@intel.com>
>
> diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
> index 5a36d57289b4..a1be8cf70dc4 100644
> --- a/drivers/acpi/numa/hmat.c
> +++ b/drivers/acpi/numa/hmat.c
> @@ -878,22 +878,16 @@ static void hmat_register_target(struct memory_target *target)
> {
> int nid = pxm_to_node(target->memory_pxm);
>
> - /*
> - * Devices may belong to either an offline or online
> - * node, so unconditionally add them.
> - */
> - hmat_register_target_devices(target);
> -
> /*
> * Register generic port perf numbers. The nid may not be
> * initialized and is still NUMA_NO_NODE.
> */
> - mutex_lock(&target_lock);
> + guard(mutex)(&target_lock);
> if (*(u16 *)target->gen_port_device_handle) {
> hmat_update_generic_target(target);
> target->registered = true;
> + return;
> }
> - mutex_unlock(&target_lock);
>
> /*
> * Skip offline nodes. This can happen when memory
> @@ -905,7 +899,6 @@ static void hmat_register_target(struct memory_target *target)
> if (nid == NUMA_NO_NODE || !node_online(nid))
> return;
>
> - mutex_lock(&target_lock);
> if (!target->registered) {
> hmat_register_target_initiators(target);
> hmat_register_target_cache(target);
> @@ -913,15 +906,16 @@ static void hmat_register_target(struct memory_target *target)
> hmat_register_target_perf(target, ACCESS_COORDINATE_CPU);
> target->registered = true;
> }
> - mutex_unlock(&target_lock);
> }
>
> static void hmat_register_targets(void)
> {
> struct memory_target *target;
>
> - list_for_each_entry(target, &targets, node)
> + list_for_each_entry(target, &targets, node) {
> + hmat_register_target_devices(target);
> hmat_register_target(target);
> + }
> }
>
> static int hmat_callback(struct notifier_block *self,
* Re: [RESEND PATCH v4 2/2] acpi/hmat: Fix lockdep warning for hmem_register_resource()
2025-11-05 23:51 ` [RESEND PATCH v4 2/2] acpi/hmat: Fix lockdep warning for hmem_register_resource() Dave Jiang
2025-11-08 1:06 ` dan.j.williams
@ 2025-11-11 15:56 ` Jonathan Cameron
2025-11-12 21:46 ` Dave Jiang
2 siblings, 0 replies; 8+ messages in thread
From: Jonathan Cameron @ 2025-11-11 15:56 UTC (permalink / raw)
To: Dave Jiang
Cc: nvdimm, linux-cxl, linux-acpi, dan.j.williams, vishal.l.verma,
ira.weiny, rafael
On Wed, 5 Nov 2025 16:51:15 -0700
Dave Jiang <dave.jiang@intel.com> wrote:
> The following lockdep splat was observed while kernel auto-online a CXL
> memory region:
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 6.17.0djtest+ #53 Tainted: G W
> ------------------------------------------------------
> systemd-udevd/3334 is trying to acquire lock:
> ffffffff90346188 (hmem_resource_lock){+.+.}-{4:4}, at: hmem_register_resource+0x31/0x50
>
> but task is already holding lock:
> ffffffff90338890 ((node_chain).rwsem){++++}-{4:4}, at: blocking_notifier_call_chain+0x2e/0x70
>
> which lock already depends on the new lock.
> [..]
> Chain exists of:
> hmem_resource_lock --> mem_hotplug_lock --> (node_chain).rwsem
>
> Possible unsafe locking scenario:
>
> CPU0 CPU1
> ---- ----
> rlock((node_chain).rwsem);
> lock(mem_hotplug_lock);
> lock((node_chain).rwsem);
> lock(hmem_resource_lock);
>
> The lock ordering can cause potential deadlock. There are instances
> where hmem_resource_lock is taken after (node_chain).rwsem, and vice
> versa.
>
> Split out the target update section of hmat_register_target() so that
> hmat_callback() only envokes that section instead of attempt to register
> hmem devices that it does not need to.
>
> Fixes: cf8741ac57ed ("ACPI: NUMA: HMAT: Register "soft reserved" memory as an "hmem" device")
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Trivial inline. I'm also fine with the version Dan posted.
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> ---
> v4:
> - Fix fixes tag. (Jonathan)
> - Refactor hmat_hotplug_target(). (Jonathan)
> ---
> drivers/acpi/numa/hmat.c | 47 ++++++++++++++++++++++------------------
> 1 file changed, 26 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
> index 1dc73d20d989..d10cbe93c3a7 100644
> --- a/drivers/acpi/numa/hmat.c
> +++ b/drivers/acpi/numa/hmat.c
> @@ -874,10 +874,33 @@ static void hmat_register_target_devices(struct memory_target *target)
> }
> }
>
> -static void hmat_register_target(struct memory_target *target)
> +static void hmat_hotplug_target(struct memory_target *target)
> {
> int nid = pxm_to_node(target->memory_pxm);
>
> + /*
> + * Skip offline nodes. This can happen when memory
> + * marked EFI_MEMORY_SP, "specific purpose", is applied
> + * to all the memory in a proximity domain leading to
> + * the node being marked offline / unplugged, or if
> + * memory-only "hotplug" node is offline.
If this version goes forwards, rewrap closer to 80 chars. I guess it ended
up short in some previous refactor.
> + */
> + if (nid == NUMA_NO_NODE || !node_online(nid))
> + return;
> +
> + guard(mutex)(&target_lock);
> + if (target->registered)
> + return;
> +
> + hmat_register_target_initiators(target);
> + hmat_register_target_cache(target);
> + hmat_register_target_perf(target, ACCESS_COORDINATE_LOCAL);
> + hmat_register_target_perf(target, ACCESS_COORDINATE_CPU);
> + target->registered = true;
> +}
> +
> +static void hmat_register_target(struct memory_target *target)
> +{
> /*
> * Devices may belong to either an offline or online
> * node, so unconditionally add them.
> @@ -896,25 +919,7 @@ static void hmat_register_target(struct memory_target *target)
> }
> }
>
> - /*
> - * Skip offline nodes. This can happen when memory
> - * marked EFI_MEMORY_SP, "specific purpose", is applied
> - * to all the memory in a proximity domain leading to
> - * the node being marked offline / unplugged, or if
> - * memory-only "hotplug" node is offline.
> - */
> - if (nid == NUMA_NO_NODE || !node_online(nid))
> - return;
> -
> - mutex_lock(&target_lock);
> - if (!target->registered) {
> - hmat_register_target_initiators(target);
> - hmat_register_target_cache(target);
> - hmat_register_target_perf(target, ACCESS_COORDINATE_LOCAL);
> - hmat_register_target_perf(target, ACCESS_COORDINATE_CPU);
> - target->registered = true;
> - }
> - mutex_unlock(&target_lock);
> + hmat_hotplug_target(target);
> }
>
> static void hmat_register_targets(void)
> @@ -940,7 +945,7 @@ static int hmat_callback(struct notifier_block *self,
> if (!target)
> return NOTIFY_OK;
>
> - hmat_register_target(target);
> + hmat_hotplug_target(target);
> return NOTIFY_OK;
> }
>
* Re: [RESEND PATCH v4 2/2] acpi/hmat: Fix lockdep warning for hmem_register_resource()
2025-11-05 23:51 ` [RESEND PATCH v4 2/2] acpi/hmat: Fix lockdep warning for hmem_register_resource() Dave Jiang
2025-11-08 1:06 ` dan.j.williams
2025-11-11 15:56 ` Jonathan Cameron
@ 2025-11-12 21:46 ` Dave Jiang
2 siblings, 0 replies; 8+ messages in thread
From: Dave Jiang @ 2025-11-12 21:46 UTC (permalink / raw)
To: nvdimm, linux-cxl, linux-acpi
Cc: dan.j.williams, vishal.l.verma, ira.weiny, rafael
On 11/5/25 4:51 PM, Dave Jiang wrote:
> The following lockdep splat was observed while kernel auto-online a CXL
> memory region:
>
> ======================================================
> WARNING: possible circular locking dependency detected
> 6.17.0djtest+ #53 Tainted: G W
> ------------------------------------------------------
> systemd-udevd/3334 is trying to acquire lock:
> ffffffff90346188 (hmem_resource_lock){+.+.}-{4:4}, at: hmem_register_resource+0x31/0x50
>
> but task is already holding lock:
> ffffffff90338890 ((node_chain).rwsem){++++}-{4:4}, at: blocking_notifier_call_chain+0x2e/0x70
>
> which lock already depends on the new lock.
> [..]
> Chain exists of:
> hmem_resource_lock --> mem_hotplug_lock --> (node_chain).rwsem
>
> Possible unsafe locking scenario:
>
> CPU0 CPU1
> ---- ----
> rlock((node_chain).rwsem);
> lock(mem_hotplug_lock);
> lock((node_chain).rwsem);
> lock(hmem_resource_lock);
>
> The lock ordering can cause potential deadlock. There are instances
> where hmem_resource_lock is taken after (node_chain).rwsem, and vice
> versa.
>
> Split out the target update section of hmat_register_target() so that
> hmat_callback() only envokes that section instead of attempt to register
> hmem devices that it does not need to.
>
> Fixes: cf8741ac57ed ("ACPI: NUMA: HMAT: Register "soft reserved" memory as an "hmem" device")
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Applied to cxl/fixes
71cd75e2b101a31d09f031e132a6ad04c911e164
> ---
> v4:
> - Fix fixes tag. (Jonathan)
> - Refactor hmat_hotplug_target(). (Jonathan)
> ---
> drivers/acpi/numa/hmat.c | 47 ++++++++++++++++++++++------------------
> 1 file changed, 26 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
> index 1dc73d20d989..d10cbe93c3a7 100644
> --- a/drivers/acpi/numa/hmat.c
> +++ b/drivers/acpi/numa/hmat.c
> @@ -874,10 +874,33 @@ static void hmat_register_target_devices(struct memory_target *target)
> }
> }
>
> -static void hmat_register_target(struct memory_target *target)
> +static void hmat_hotplug_target(struct memory_target *target)
> {
> int nid = pxm_to_node(target->memory_pxm);
>
> + /*
> + * Skip offline nodes. This can happen when memory
> + * marked EFI_MEMORY_SP, "specific purpose", is applied
> + * to all the memory in a proximity domain leading to
> + * the node being marked offline / unplugged, or if
> + * memory-only "hotplug" node is offline.
> + */
> + if (nid == NUMA_NO_NODE || !node_online(nid))
> + return;
> +
> + guard(mutex)(&target_lock);
> + if (target->registered)
> + return;
> +
> + hmat_register_target_initiators(target);
> + hmat_register_target_cache(target);
> + hmat_register_target_perf(target, ACCESS_COORDINATE_LOCAL);
> + hmat_register_target_perf(target, ACCESS_COORDINATE_CPU);
> + target->registered = true;
> +}
> +
> +static void hmat_register_target(struct memory_target *target)
> +{
> /*
> * Devices may belong to either an offline or online
> * node, so unconditionally add them.
> @@ -896,25 +919,7 @@ static void hmat_register_target(struct memory_target *target)
> }
> }
>
> - /*
> - * Skip offline nodes. This can happen when memory
> - * marked EFI_MEMORY_SP, "specific purpose", is applied
> - * to all the memory in a proximity domain leading to
> - * the node being marked offline / unplugged, or if
> - * memory-only "hotplug" node is offline.
> - */
> - if (nid == NUMA_NO_NODE || !node_online(nid))
> - return;
> -
> - mutex_lock(&target_lock);
> - if (!target->registered) {
> - hmat_register_target_initiators(target);
> - hmat_register_target_cache(target);
> - hmat_register_target_perf(target, ACCESS_COORDINATE_LOCAL);
> - hmat_register_target_perf(target, ACCESS_COORDINATE_CPU);
> - target->registered = true;
> - }
> - mutex_unlock(&target_lock);
> + hmat_hotplug_target(target);
> }
>
> static void hmat_register_targets(void)
> @@ -940,7 +945,7 @@ static int hmat_callback(struct notifier_block *self,
> if (!target)
> return NOTIFY_OK;
>
> - hmat_register_target(target);
> + hmat_hotplug_target(target);
> return NOTIFY_OK;
> }
>