From: David Hildenbrand <david@redhat.com>
To: Oscar Salvador <osalvador@suse.de>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	Harry Yoo <harry.yoo@oracle.com>, Rakie Kim <rakie.kim@sk.com>,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 2/3] mm,memory_hotplug: Implement numa node notifier
Date: Wed, 4 Jun 2025 14:03:23 +0200	[thread overview]
Message-ID: <ddcdd8b9-566c-4f6c-b1f7-861e93a80fbb@redhat.com>
In-Reply-To: <20250603110850.192912-3-osalvador@suse.de>

On 03.06.25 13:08, Oscar Salvador wrote:
> There are at least six consumers of hotplug_memory_notifier whose only real
> interest is whether any numa node changed its state, e.g.: going from being
> memory-aware to becoming memoryless and vice versa.
> 
> Implement a dedicated notifier for numa node state changes, and convert the
> consumers that only care about those changes to use it.
> 
> Signed-off-by: Oscar Salvador <osalvador@suse.de>
> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> ---
>   drivers/acpi/numa/hmat.c  |   6 +-
>   drivers/base/node.c       |  21 +++++
>   drivers/cxl/core/region.c |  14 ++--
>   drivers/cxl/cxl.h         |   4 +-
>   include/linux/memory.h    |  38 ++++++++-
>   kernel/cgroup/cpuset.c    |   2 +-
>   mm/memory-tiers.c         |   8 +-
>   mm/memory_hotplug.c       | 161 +++++++++++++++++---------------------
>   mm/mempolicy.c            |   8 +-
>   mm/slub.c                 |  13 ++-
>   10 files changed, 155 insertions(+), 120 deletions(-)
> 
> diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
> index 9d9052258e92..9ac82a767daf 100644
> --- a/drivers/acpi/numa/hmat.c
> +++ b/drivers/acpi/numa/hmat.c
> @@ -962,10 +962,10 @@ static int hmat_callback(struct notifier_block *self,
>   			 unsigned long action, void *arg)
>   {
>   	struct memory_target *target;
> -	struct memory_notify *mnb = arg;
> +	struct node_notify *mnb = arg;
>   	int pxm, nid = mnb->status_change_nid;
>   
> -	if (nid == NUMA_NO_NODE || action != MEM_ONLINE)
> +	if (nid == NUMA_NO_NODE || action != NODE_BECAME_MEM_AWARE)
>   		return NOTIFY_OK;
>   
>   	pxm = node_to_pxm(nid);
> @@ -1118,7 +1118,7 @@ static __init int hmat_init(void)
>   	hmat_register_targets();
>   
>   	/* Keep the table and structures if the notifier may use them */
> -	if (hotplug_memory_notifier(hmat_callback, HMAT_CALLBACK_PRI))
> +	if (hotplug_node_notifier(hmat_callback, HMAT_CALLBACK_PRI))
>   		goto out_put;
>   
>   	if (!hmat_set_default_dram_perf())
> diff --git a/drivers/base/node.c b/drivers/base/node.c


[...]


> diff --git a/include/linux/memory.h b/include/linux/memory.h
> index 5ec4e6d209b9..8c5c88eaffb3 100644
> --- a/include/linux/memory.h
> +++ b/include/linux/memory.h
> @@ -99,6 +99,14 @@ int set_memory_block_size_order(unsigned int order);
>   #define	MEM_PREPARE_ONLINE	(1<<6)
>   #define	MEM_FINISH_OFFLINE	(1<<7)
>   
> +/* These states are used for numa node notifiers */
> +#define NODE_BECOMING_MEM_AWARE		(1<<0)
> +#define NODE_BECAME_MEM_AWARE		(1<<1)
> +#define NODE_BECOMING_MEMORYLESS	(1<<2)
> +#define NODE_BECAME_MEMORYLESS		(1<<3)
> +#define NODE_CANCEL_MEM_AWARE		(1<<4)
> +#define NODE_CANCEL_MEMORYLESS		(1<<5)

Very nitpicky: MEM vs. MEMORY inconsistency. Also, I am not sure about the 
"MEMORYLESS" vs. "MEMORY AWARE" terminology (the opposite of "aware" is not 
"less"), nor about "BECOMING" vs. "CANCEL" ...

There must be something better ... but what is it? :)

NODE_ADDING_FIRST_MEMORY
NODE_ADDED_FIRST_MEMORY
NODE_CANCEL_ADDING_FIRST_MEMORY

NODE_REMOVING_LAST_MEMORY
NODE_REMOVED_LAST_MEMORY
NODE_CANCEL_REMOVING_LAST_MEMORY

Maybe something like that? I still don't quite like the "CANCEL" stuff.

NODE_ADDING_FIRST_MEMORY
NODE_ADDED_FIRST_MEMORY
NODE_NOT_ADDED_FIRST_MEMORY

NODE_REMOVING_LAST_MEMORY
NODE_REMOVED_LAST_MEMORY
NODE_NOT_REMOVED_LAST_MEMORY

Hm ...
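
Just to visualize that second variant, the defines could end up looking
something like this (only a sketch, names are my suggestion and not what
the patch currently defines):

/* Node notifier events: a node gains its first / loses its last memory. */
#define NODE_ADDING_FIRST_MEMORY	(1<<0)
#define NODE_ADDED_FIRST_MEMORY		(1<<1)
#define NODE_NOT_ADDED_FIRST_MEMORY	(1<<2)
#define NODE_REMOVING_LAST_MEMORY	(1<<3)
#define NODE_REMOVED_LAST_MEMORY	(1<<4)
#define NODE_NOT_REMOVED_LAST_MEMORY	(1<<5)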

> +
>   struct memory_notify {
>   	/*
>   	 * The altmap_start_pfn and altmap_nr_pages fields are designated for
> @@ -109,7 +117,10 @@ struct memory_notify {
>   	unsigned long altmap_nr_pages;
>   	unsigned long start_pfn;
>   	unsigned long nr_pages;
> -	int status_change_nid_normal;
> +	int status_change_nid;
> +};

Could/should that be a separate patch after patch #1 removed the last user?

Also, I think the sequence should be (this patch is getting hard to 
review for me due to the size):

#1 existing patch 1
#2 remove status_change_nid_normal
#3 introduce node notifier
#4-#X: convert individual users to node notifier
#X+1: change status_change_nid to always just indicate the nid, renaming
       it on the way (incl. current patch #3)


> +
> +struct node_notify {
>   	int status_change_nid;

This should be called "nid" right from the start.
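
I.e., just as a sketch of what I have in mind (field name only, everything
else as in your patch):

struct node_notify {
	/* Node whose memory state (memory <-> memoryless) is changing. */
	int nid;
};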

>   
> @@ -157,15 +168,34 @@ static inline unsigned long memory_block_advised_max_size(void)
>   {
>   	return 0;
>   }
> +

[...]

>   	 * {on,off}lining is constrained to full memory sections (or more
> @@ -1194,11 +1172,22 @@ int online_pages(unsigned long pfn, unsigned long nr_pages,
>   	/* associate pfn range with the zone */
>   	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_ISOLATE);
>   
> -	arg.start_pfn = pfn;
> -	arg.nr_pages = nr_pages;
> -	node_states_check_changes_online(nr_pages, zone, &arg);
> +	node_arg.status_change_nid = NUMA_NO_NODE;
> +	if (!node_state(nid, N_MEMORY)) {
> +		/* Node is becoming memory aware. Notify consumers */
> +		cancel_node_notifier_on_err = true;
> +		node_arg.status_change_nid = nid;
> +		ret = node_notify(NODE_BECOMING_MEM_AWARE, &node_arg);
> +		ret = notifier_to_errno(ret);
> +		if (ret)
> +			goto failed_addition;
> +	}

I assume that without NUMA this code would never trigger? I mean, the whole 
notifier doesn't make sense without CONFIG_NUMA :)
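
E.g., the whole thing could be compiled out with stubs for !CONFIG_NUMA,
something along these lines (rough sketch only, assuming <linux/notifier.h>
is included; I did not check how the series actually wires it up):

#ifdef CONFIG_NUMA
extern int node_notify(unsigned long val, void *v);
extern int hotplug_node_notifier(notifier_fn_t fn, int pri);
#else
static inline int node_notify(unsigned long val, void *v)
{
	return NOTIFY_DONE;
}
static inline int hotplug_node_notifier(notifier_fn_t fn, int pri)
{
	return 0;
}
#endif

That way the call sites in memory_hotplug.c would still compile but be
effective no-ops on !CONFIG_NUMA builds.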

-- 
Cheers,

David / dhildenb


Thread overview: 12+ messages
2025-06-03 11:08 [PATCH v4 0/3] Implement numa node notifier Oscar Salvador
2025-06-03 11:08 ` [PATCH v4 1/3] mm,slub: Do not special case N_NORMAL nodes for slab_nodes Oscar Salvador
2025-06-04  9:28   ` Harry Yoo
2025-06-04 11:33   ` David Hildenbrand
2025-06-04 12:16   ` Yunsheng Lin
2025-06-03 11:08 ` [PATCH v4 2/3] mm,memory_hotplug: Implement numa node notifier Oscar Salvador
2025-06-04 12:03   ` David Hildenbrand [this message]
2025-06-04 12:38     ` Oscar Salvador
2025-06-04 12:47       ` David Hildenbrand
2025-06-05  5:18         ` Oscar Salvador
2025-06-05  8:12           ` David Hildenbrand
2025-06-03 11:08 ` [PATCH v4 3/3] mm,memory_hotplug: Rename status_change_nid parameter in memory_notify Oscar Salvador
