* [PATCH v2 0/7] implement "memmap on memory" feature on s390
From: Sumanth Korikkar @ 2023-11-23 9:23 UTC
To: linux-mm, Andrew Morton, David Hildenbrand
Cc: Oscar Salvador, Michal Hocko, Aneesh Kumar K.V, Anshuman Khandual,
Gerald Schaefer, Sumanth Korikkar, Alexander Gordeev,
Heiko Carstens, Vasily Gorbik, linux-s390, LKML
Hi All,
This patch series implements the "memmap on memory" feature on s390.

Patch 1 introduces the new mhp_flag MHP_OFFLINE_INACCESSIBLE to mark
memory as inaccessible until the memory hotplug online phase begins.

Patch 2 avoids page_init_poison() on the memmap during the mhp addition
phase, when the mhp_flag MHP_OFFLINE_INACCESSIBLE is passed over from
add_memory().

Patch 3 introduces the MEM_PREPARE_ONLINE/MEM_FINISH_OFFLINE memory
notifiers to prepare the transition of memory to and from a physically
accessible state. This enhancement is crucial for implementing the
"memmap on memory" feature for s390 in a subsequent patch.

Patch 4 allocates vmemmap pages from the self-contained memory range
for s390. It allocates the memory map (struct page array) from the
hotplugged memory range itself, rather than from system memory, by
passing the altmap to the vmemmap functions.

Patch 5 removes unhandled memory notifier types on s390.

Patch 6 implements the MEM_PREPARE_ONLINE/MEM_FINISH_OFFLINE memory
notifiers on s390. The MEM_PREPARE_ONLINE notifier makes the memory
block physically accessible via the sclp assign command, ensuring the
self-contained memory map is accessible and hence enabling "memmap on
memory" on s390. The MEM_FINISH_OFFLINE notifier shifts the memory
block back to an inaccessible state via the sclp unassign command.

Patch 7 finally enables MHP_MEMMAP_ON_MEMORY on s390.
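
At the API level, the net effect of the series can be sketched as
follows (a hedged illustration, not the literal patch; the real call
site lives in drivers/s390/char/sclp_cmd.c and is additionally gated
on EDAT1 support):

	/* Sketch: add a standby memory block whose memmap lives inside
	 * the block itself and must not be touched while it is offline.
	 * Addresses are examples only.
	 */
	u64 start = 0x200000000ULL;		/* example standby block */
	u64 block_size = memory_block_size_bytes();
	int rc;

	rc = add_memory(0, start, block_size,
			MHP_MEMMAP_ON_MEMORY | MHP_OFFLINE_INACCESSIBLE);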
These patches are rebased on top of three fixes:
mm: use vmem_altmap code without CONFIG_ZONE_DEVICE
mm/memory_hotplug: fix error handling in add_memory_resource()
mm/memory_hotplug: add missing mem_hotplug_lock
v2:
* Fixes are integrated and hence removed from this patch series.
Suggestions from David:
* Add new flag MHP_OFFLINE_INACCESSIBLE to avoid accessing memory
during memory hotplug addition phase.
* Avoid page_init_poison() on memmap during mhp addition phase, when
MHP_OFFLINE_INACCESSIBLE mhp_flag is passed in add_memory().
* Do not skip add_pages() in arch_add_memory(). Likewise, remove the
corresponding hacks in arch_remove_memory().
* Use MHP_PREPARE_ONLINE/MHP_FINISH_OFFLINE naming convention for
new memory notifiers.
* Rearrange removal of unused s390 memory notifier.
* Necessary commit message changes.
Thank you
Sumanth Korikkar (7):
mm/memory_hotplug: introduce mhp_flag MHP_OFFLINE_INACCESSIBLE
mm/memory_hotplug: avoid poisoning memmap during mhp addition phase
mm/memory_hotplug: introduce MEM_PREPARE_ONLINE/MEM_FINISH_OFFLINE
notifiers
s390/mm: allocate vmemmap pages from self-contained memory range
s390/sclp: remove unhandled memory notifier type
s390/mm: implement MEM_PREPARE_ONLINE/MEM_FINISH_OFFLINE notifiers
s390: enable MHP_MEMMAP_ON_MEMORY
arch/s390/Kconfig | 1 +
arch/s390/mm/init.c | 3 --
arch/s390/mm/vmem.c | 62 +++++++++++++++++++---------------
drivers/base/memory.c | 23 +++++++++++--
drivers/s390/char/sclp_cmd.c | 31 ++++++++++++-----
include/linux/memory.h | 3 ++
include/linux/memory_hotplug.h | 12 ++++++-
include/linux/memremap.h | 1 +
mm/memory_hotplug.c | 30 ++++++++++++++--
mm/sparse.c | 3 +-
10 files changed, 124 insertions(+), 45 deletions(-)
--
2.39.2
* [PATCH v2 3/7] mm/memory_hotplug: introduce MEM_PREPARE_ONLINE/MEM_FINISH_OFFLINE notifiers
From: Sumanth Korikkar @ 2023-11-23 9:23 UTC
To: linux-mm, Andrew Morton, David Hildenbrand
Cc: Oscar Salvador, Michal Hocko, Aneesh Kumar K.V, Anshuman Khandual,
Gerald Schaefer, Sumanth Korikkar, Alexander Gordeev,
Heiko Carstens, Vasily Gorbik, linux-s390, LKML
Introduce MEM_PREPARE_ONLINE/MEM_FINISH_OFFLINE memory notifiers to
prepare the transition of memory to and from a physically accessible
state. This enhancement is crucial for implementing the "memmap on
memory" feature for s390 in a subsequent patch.
Platforms such as x86 support physical memory hotplug via ACPI. When
physical memory is hotplugged, the ACPI event leads to memory addition
with the following callchain:

acpi_memory_device_add()
  -> acpi_memory_enable_device()
    -> __add_memory()

After this, the hotplugged memory is physically accessible and altmap
support is prepared before the "memmap on memory" initialization in
memory_block_online() is called.

On s390, memory hotplug works differently. The available hotplug memory
has to be defined upfront in the hypervisor, but it is made physically
accessible only when the user sets it online via sysfs, currently in
the MEM_GOING_ONLINE notifier. This is too late: the "memmap on memory"
initialization is performed before the MEM_GOING_ONLINE notifier is
called.

During the memory hotplug addition phase, altmap support is prepared
(but not yet accessed), while during the memory onlining phase s390
requires the memory to be physically accessible before the "memmap on
memory" initialization can proceed.

The new MEM_PREPARE_ONLINE notifier works around this problem by
providing a hook to prepare the memory and make it physically
accessible before onlining. Similarly, the MEM_FINISH_OFFLINE notifier
allows making the memory physically inaccessible at the end of
memory_block_offline().
All architectures ignore unknown memory notifiers, so this patch should
not introduce any functional changes.
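
For illustration, a minimal sketch of how an architecture's memory
notifier might consume the two new events. It mirrors what a later
patch in this series does on s390; sclp_mem_change_state() is the s390
helper that (un)assigns storage via SCLP, and the function name and
error handling here are illustrative:

static int example_mem_notifier(struct notifier_block *nb,
				unsigned long action, void *data)
{
	struct memory_notify *arg = data;
	unsigned long start = PFN_PHYS(arg->start_pfn);
	unsigned long size = arg->nr_pages << PAGE_SHIFT;
	int rc = 0;

	switch (action) {
	case MEM_PREPARE_ONLINE:
		/* Make the range physically accessible; only after this
		 * may the (self-contained) memmap be initialized.
		 */
		rc = sclp_mem_change_state(start, size, 1);
		break;
	case MEM_FINISH_OFFLINE:
		/* The block is fully offline; make it inaccessible again. */
		sclp_mem_change_state(start, size, 0);
		break;
	default:
		/* Unknown notifier types are ignored, as noted above. */
		break;
	}
	return rc ? NOTIFY_BAD : NOTIFY_OK;
}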
Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
---
drivers/base/memory.c | 18 +++++++++++++++++-
include/linux/memory.h | 2 ++
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index cbff43b2ef44..a06a0b869992 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -188,6 +188,7 @@ static int memory_block_online(struct memory_block *mem)
unsigned long start_pfn = section_nr_to_pfn(mem->start_section_nr);
unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
unsigned long nr_vmemmap_pages = 0;
+ struct memory_notify arg;
struct zone *zone;
int ret;
@@ -197,6 +198,14 @@ static int memory_block_online(struct memory_block *mem)
zone = zone_for_pfn_range(mem->online_type, mem->nid, mem->group,
start_pfn, nr_pages);
+ arg.start_pfn = start_pfn;
+ arg.nr_pages = nr_pages;
+ mem_hotplug_begin();
+ ret = memory_notify(MEM_PREPARE_ONLINE, &arg);
+ ret = notifier_to_errno(ret);
+ if (ret)
+ goto out_notifier;
+
/*
* Although vmemmap pages have a different lifecycle than the pages
* they describe (they remain until the memory is unplugged), doing
@@ -207,7 +216,6 @@ static int memory_block_online(struct memory_block *mem)
if (mem->altmap)
nr_vmemmap_pages = mem->altmap->free;
- mem_hotplug_begin();
if (nr_vmemmap_pages) {
ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages,
zone, mem->inaccessible);
@@ -232,7 +240,11 @@ static int memory_block_online(struct memory_block *mem)
nr_vmemmap_pages);
mem->zone = zone;
+ mem_hotplug_done();
+ return ret;
out:
+ memory_notify(MEM_FINISH_OFFLINE, &arg);
+out_notifier:
mem_hotplug_done();
return ret;
}
@@ -245,6 +257,7 @@ static int memory_block_offline(struct memory_block *mem)
unsigned long start_pfn = section_nr_to_pfn(mem->start_section_nr);
unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
unsigned long nr_vmemmap_pages = 0;
+ struct memory_notify arg;
int ret;
if (!mem->zone)
@@ -276,6 +289,9 @@ static int memory_block_offline(struct memory_block *mem)
mhp_deinit_memmap_on_memory(start_pfn, nr_vmemmap_pages);
mem->zone = NULL;
+ arg.start_pfn = start_pfn;
+ arg.nr_pages = nr_pages;
+ memory_notify(MEM_FINISH_OFFLINE, &arg);
out:
mem_hotplug_done();
return ret;
diff --git a/include/linux/memory.h b/include/linux/memory.h
index 655714d4e65a..76e5ab68dab7 100644
--- a/include/linux/memory.h
+++ b/include/linux/memory.h
@@ -97,6 +97,8 @@ int set_memory_block_size_order(unsigned int order);
#define MEM_GOING_ONLINE (1<<3)
#define MEM_CANCEL_ONLINE (1<<4)
#define MEM_CANCEL_OFFLINE (1<<5)
+#define MEM_PREPARE_ONLINE (1<<6)
+#define MEM_FINISH_OFFLINE (1<<7)
struct memory_notify {
unsigned long start_pfn;
--
2.39.2
* [PATCH v2 4/7] s390/mm: allocate vmemmap pages from self-contained memory range
From: Sumanth Korikkar @ 2023-11-23 9:23 UTC
To: linux-mm, Andrew Morton, David Hildenbrand
Cc: Oscar Salvador, Michal Hocko, Aneesh Kumar K.V, Anshuman Khandual,
Gerald Schaefer, Sumanth Korikkar, Alexander Gordeev,
Heiko Carstens, Vasily Gorbik, linux-s390, LKML
Allocate the memory map (struct page array) from the hotplugged memory
range itself, rather than from system memory. This addresses the issue
where standby memory, when configured to be much larger than online
memory, could lead to IPL failure, because the memory map is allocated
from online memory: a memory block size of 1GB needs 16MB of memory map
allocation, so with standby memory configured much larger than online
memory these allocations can exhaust online memory.

To address this issue, introduce "memmap on memory" using the
vmem_altmap structure on s390. Architectures that want to implement it
should pass the altmap to vmemmap_populate() and its associated
callchain. This mechanism is described in commit 4b94ffdc4163 ("x86,
mm: introduce vmem_altmap to augment vmemmap_populate()").
Provide "memmap on memory" support for s390 by passing the altmap in
vmemmap_populate() and its callchain. The allocation path is described
as follows:
* When altmap is NULL in vmemmap_populate(), memory map allocation
occurs using the existing vmemmap_alloc_block_buf().
* When altmap is not NULL in vmemmap_populate(), memory map allocation
still uses vmemmap_alloc_block_buf(), but this function internally
calls altmap_alloc_block_buf().
For deallocation, the process is outlined as follows:
* When altmap is NULL in vmemmap_free(), memory map deallocation happens
through free_pages().
* When altmap is not NULL in vmemmap_free(), memory map deallocation
occurs via vmem_altmap_free().
While memory map allocation is primarily served from the self-contained
memory range, a small amount of system memory may still be required for
the vmemmap page tables. To mitigate this impact, the feature is
limited to machines with EDAT1 support.
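
To make the bookkeeping concrete, a hedged sketch of the altmap that
drives both paths above (field names are from include/linux/memremap.h;
the numbers are illustrative for a 1 GB block with 4 KB pages and a
64-byte struct page):

	/* 1 GB block = 262144 pages; 262144 * 64 bytes of struct page
	 * = 16 MB of memmap = 4096 pages reserved at the block start.
	 */
	u64 start = 0x200000000ULL;	/* physical base of the new block */
	struct vmem_altmap altmap = {
		.base_pfn = PHYS_PFN(start),
		.free     = 4096,	/* pfns reserved for the memmap */
	};

	/*
	 * vmemmap_populate(..., &altmap) -> vmemmap_alloc_block_buf()
	 * -> altmap_alloc_block_buf(): memmap pages come from
	 * [base_pfn, base_pfn + free) instead of system memory.
	 * vmemmap_free(..., &altmap) -> vmem_altmap_free() returns them.
	 */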
Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
---
arch/s390/mm/init.c | 3 ---
arch/s390/mm/vmem.c | 62 +++++++++++++++++++++++++--------------------
2 files changed, 35 insertions(+), 30 deletions(-)
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 43e612bc2bcd..8d9a60ccb777 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -281,9 +281,6 @@ int arch_add_memory(int nid, u64 start, u64 size,
unsigned long size_pages = PFN_DOWN(size);
int rc;
- if (WARN_ON_ONCE(params->altmap))
- return -EINVAL;
-
if (WARN_ON_ONCE(params->pgprot.pgprot != PAGE_KERNEL.pgprot))
return -EINVAL;
diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index 186a020857cf..eb100479f7be 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -33,8 +33,12 @@ static void __ref *vmem_alloc_pages(unsigned int order)
return memblock_alloc(size, size);
}
-static void vmem_free_pages(unsigned long addr, int order)
+static void vmem_free_pages(unsigned long addr, int order, struct vmem_altmap *altmap)
{
+ if (altmap) {
+ vmem_altmap_free(altmap, 1 << order);
+ return;
+ }
/* We don't expect boot memory to be removed ever. */
if (!slab_is_available() ||
WARN_ON_ONCE(PageReserved(virt_to_page((void *)addr))))
@@ -156,7 +160,8 @@ static bool vmemmap_unuse_sub_pmd(unsigned long start, unsigned long end)
/* __ref: we'll only call vmemmap_alloc_block() via vmemmap_populate() */
static int __ref modify_pte_table(pmd_t *pmd, unsigned long addr,
- unsigned long end, bool add, bool direct)
+ unsigned long end, bool add, bool direct,
+ struct vmem_altmap *altmap)
{
unsigned long prot, pages = 0;
int ret = -ENOMEM;
@@ -172,11 +177,11 @@ static int __ref modify_pte_table(pmd_t *pmd, unsigned long addr,
if (pte_none(*pte))
continue;
if (!direct)
- vmem_free_pages((unsigned long) pfn_to_virt(pte_pfn(*pte)), 0);
+ vmem_free_pages((unsigned long)pfn_to_virt(pte_pfn(*pte)), get_order(PAGE_SIZE), altmap);
pte_clear(&init_mm, addr, pte);
} else if (pte_none(*pte)) {
if (!direct) {
- void *new_page = vmemmap_alloc_block(PAGE_SIZE, NUMA_NO_NODE);
+ void *new_page = vmemmap_alloc_block_buf(PAGE_SIZE, NUMA_NO_NODE, altmap);
if (!new_page)
goto out;
@@ -213,7 +218,8 @@ static void try_free_pte_table(pmd_t *pmd, unsigned long start)
/* __ref: we'll only call vmemmap_alloc_block() via vmemmap_populate() */
static int __ref modify_pmd_table(pud_t *pud, unsigned long addr,
- unsigned long end, bool add, bool direct)
+ unsigned long end, bool add, bool direct,
+ struct vmem_altmap *altmap)
{
unsigned long next, prot, pages = 0;
int ret = -ENOMEM;
@@ -234,11 +240,11 @@ static int __ref modify_pmd_table(pud_t *pud, unsigned long addr,
if (IS_ALIGNED(addr, PMD_SIZE) &&
IS_ALIGNED(next, PMD_SIZE)) {
if (!direct)
- vmem_free_pages(pmd_deref(*pmd), get_order(PMD_SIZE));
+ vmem_free_pages(pmd_deref(*pmd), get_order(PMD_SIZE), altmap);
pmd_clear(pmd);
pages++;
} else if (!direct && vmemmap_unuse_sub_pmd(addr, next)) {
- vmem_free_pages(pmd_deref(*pmd), get_order(PMD_SIZE));
+ vmem_free_pages(pmd_deref(*pmd), get_order(PMD_SIZE), altmap);
pmd_clear(pmd);
}
continue;
@@ -261,7 +267,7 @@ static int __ref modify_pmd_table(pud_t *pud, unsigned long addr,
* page tables since vmemmap_populate gets
* called for each section separately.
*/
- new_page = vmemmap_alloc_block(PMD_SIZE, NUMA_NO_NODE);
+ new_page = vmemmap_alloc_block_buf(PMD_SIZE, NUMA_NO_NODE, altmap);
if (new_page) {
set_pmd(pmd, __pmd(__pa(new_page) | prot));
if (!IS_ALIGNED(addr, PMD_SIZE) ||
@@ -280,7 +286,7 @@ static int __ref modify_pmd_table(pud_t *pud, unsigned long addr,
vmemmap_use_sub_pmd(addr, next);
continue;
}
- ret = modify_pte_table(pmd, addr, next, add, direct);
+ ret = modify_pte_table(pmd, addr, next, add, direct, altmap);
if (ret)
goto out;
if (!add)
@@ -302,12 +308,12 @@ static void try_free_pmd_table(pud_t *pud, unsigned long start)
for (i = 0; i < PTRS_PER_PMD; i++, pmd++)
if (!pmd_none(*pmd))
return;
- vmem_free_pages(pud_deref(*pud), CRST_ALLOC_ORDER);
+ vmem_free_pages(pud_deref(*pud), CRST_ALLOC_ORDER, NULL);
pud_clear(pud);
}
static int modify_pud_table(p4d_t *p4d, unsigned long addr, unsigned long end,
- bool add, bool direct)
+ bool add, bool direct, struct vmem_altmap *altmap)
{
unsigned long next, prot, pages = 0;
int ret = -ENOMEM;
@@ -347,7 +353,7 @@ static int modify_pud_table(p4d_t *p4d, unsigned long addr, unsigned long end,
} else if (pud_large(*pud)) {
continue;
}
- ret = modify_pmd_table(pud, addr, next, add, direct);
+ ret = modify_pmd_table(pud, addr, next, add, direct, altmap);
if (ret)
goto out;
if (!add)
@@ -370,12 +376,12 @@ static void try_free_pud_table(p4d_t *p4d, unsigned long start)
if (!pud_none(*pud))
return;
}
- vmem_free_pages(p4d_deref(*p4d), CRST_ALLOC_ORDER);
+ vmem_free_pages(p4d_deref(*p4d), CRST_ALLOC_ORDER, NULL);
p4d_clear(p4d);
}
static int modify_p4d_table(pgd_t *pgd, unsigned long addr, unsigned long end,
- bool add, bool direct)
+ bool add, bool direct, struct vmem_altmap *altmap)
{
unsigned long next;
int ret = -ENOMEM;
@@ -394,7 +400,7 @@ static int modify_p4d_table(pgd_t *pgd, unsigned long addr, unsigned long end,
goto out;
p4d_populate(&init_mm, p4d, pud);
}
- ret = modify_pud_table(p4d, addr, next, add, direct);
+ ret = modify_pud_table(p4d, addr, next, add, direct, altmap);
if (ret)
goto out;
if (!add)
@@ -415,12 +421,12 @@ static void try_free_p4d_table(pgd_t *pgd, unsigned long start)
if (!p4d_none(*p4d))
return;
}
- vmem_free_pages(pgd_deref(*pgd), CRST_ALLOC_ORDER);
+ vmem_free_pages(pgd_deref(*pgd), CRST_ALLOC_ORDER, NULL);
pgd_clear(pgd);
}
static int modify_pagetable(unsigned long start, unsigned long end, bool add,
- bool direct)
+ bool direct, struct vmem_altmap *altmap)
{
unsigned long addr, next;
int ret = -ENOMEM;
@@ -445,7 +451,7 @@ static int modify_pagetable(unsigned long start, unsigned long end, bool add,
goto out;
pgd_populate(&init_mm, pgd, p4d);
}
- ret = modify_p4d_table(pgd, addr, next, add, direct);
+ ret = modify_p4d_table(pgd, addr, next, add, direct, altmap);
if (ret)
goto out;
if (!add)
@@ -458,14 +464,16 @@ static int modify_pagetable(unsigned long start, unsigned long end, bool add,
return ret;
}
-static int add_pagetable(unsigned long start, unsigned long end, bool direct)
+static int add_pagetable(unsigned long start, unsigned long end, bool direct,
+ struct vmem_altmap *altmap)
{
- return modify_pagetable(start, end, true, direct);
+ return modify_pagetable(start, end, true, direct, altmap);
}
-static int remove_pagetable(unsigned long start, unsigned long end, bool direct)
+static int remove_pagetable(unsigned long start, unsigned long end, bool direct,
+ struct vmem_altmap *altmap)
{
- return modify_pagetable(start, end, false, direct);
+ return modify_pagetable(start, end, false, direct, altmap);
}
/*
@@ -474,7 +482,7 @@ static int remove_pagetable(unsigned long start, unsigned long end, bool direct)
static int vmem_add_range(unsigned long start, unsigned long size)
{
start = (unsigned long)__va(start);
- return add_pagetable(start, start + size, true);
+ return add_pagetable(start, start + size, true, NULL);
}
/*
@@ -483,7 +491,7 @@ static int vmem_add_range(unsigned long start, unsigned long size)
static void vmem_remove_range(unsigned long start, unsigned long size)
{
start = (unsigned long)__va(start);
- remove_pagetable(start, start + size, true);
+ remove_pagetable(start, start + size, true, NULL);
}
/*
@@ -496,9 +504,9 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
mutex_lock(&vmem_mutex);
/* We don't care about the node, just use NUMA_NO_NODE on allocations */
- ret = add_pagetable(start, end, false);
+ ret = add_pagetable(start, end, false, altmap);
if (ret)
- remove_pagetable(start, end, false);
+ remove_pagetable(start, end, false, altmap);
mutex_unlock(&vmem_mutex);
return ret;
}
@@ -509,7 +517,7 @@ void vmemmap_free(unsigned long start, unsigned long end,
struct vmem_altmap *altmap)
{
mutex_lock(&vmem_mutex);
- remove_pagetable(start, end, false);
+ remove_pagetable(start, end, false, altmap);
mutex_unlock(&vmem_mutex);
}
--
2.39.2
* [PATCH v2 5/7] s390/sclp: remove unhandled memory notifier type
From: Sumanth Korikkar @ 2023-11-23 9:23 UTC
To: linux-mm, Andrew Morton, David Hildenbrand
Cc: Oscar Salvador, Michal Hocko, Aneesh Kumar K.V, Anshuman Khandual,
Gerald Schaefer, Sumanth Korikkar, Alexander Gordeev,
Heiko Carstens, Vasily Gorbik, linux-s390, LKML
Remove the memory notifier types that are not handled by s390.
Unhandled memory notifier types are covered by the default case.
Suggested-by: Alexander Gordeev <agordeev@linux.ibm.com>
Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
---
drivers/s390/char/sclp_cmd.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/s390/char/sclp_cmd.c b/drivers/s390/char/sclp_cmd.c
index 11c428f4c7cf..355e63e44e95 100644
--- a/drivers/s390/char/sclp_cmd.c
+++ b/drivers/s390/char/sclp_cmd.c
@@ -340,9 +340,6 @@ static int sclp_mem_notifier(struct notifier_block *nb,
if (contains_standby_increment(start, start + size))
rc = -EPERM;
break;
- case MEM_ONLINE:
- case MEM_CANCEL_OFFLINE:
- break;
case MEM_GOING_ONLINE:
rc = sclp_mem_change_state(start, size, 1);
break;
--
2.39.2
* [PATCH v2 7/7] s390: enable MHP_MEMMAP_ON_MEMORY
From: Sumanth Korikkar @ 2023-11-23 9:23 UTC
To: linux-mm, Andrew Morton, David Hildenbrand
Cc: Oscar Salvador, Michal Hocko, Aneesh Kumar K.V, Anshuman Khandual,
Gerald Schaefer, Sumanth Korikkar, Alexander Gordeev,
Heiko Carstens, Vasily Gorbik, linux-s390, LKML
Enable MHP_MEMMAP_ON_MEMORY to support "memmap on memory". The
memory_hotplug.memmap_on_memory=true kernel parameter must be set on
the kernel command line to enable the feature.
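
For illustration, enabling and exercising the feature uses the generic
memory hotplug interfaces (the memory block number is a placeholder):

	# kernel command line:
	memory_hotplug.memmap_on_memory=true

	# online a standby memory block via sysfs; on s390 the new
	# MEM_PREPARE_ONLINE notifier makes the block physically
	# accessible at this point:
	echo online > /sys/devices/system/memory/memory<N>/state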
Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
---
arch/s390/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 3bec98d20283..4b9b0f947ddb 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -113,6 +113,7 @@ config S390
select ARCH_INLINE_WRITE_UNLOCK_BH
select ARCH_INLINE_WRITE_UNLOCK_IRQ
select ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE
+ select ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE
select ARCH_STACKWALK
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_DEBUG_PAGEALLOC
--
2.39.2
* Re: [PATCH v2 1/7] mm/memory_hotplug: introduce mhp_flag MHP_OFFLINE_INACCESSIBLE
From: David Hildenbrand @ 2023-11-24 18:04 UTC
To: Sumanth Korikkar, linux-mm, Andrew Morton
Cc: Oscar Salvador, Michal Hocko, Aneesh Kumar K.V, Anshuman Khandual,
Gerald Schaefer, Alexander Gordeev, Heiko Carstens, Vasily Gorbik,
linux-s390, LKML
On 23.11.23 10:23, Sumanth Korikkar wrote:
> Introduce MHP_OFFLINE_INACCESSIBLE mhp_flag to mark the hotplugged
> memory block as inaccessible during the memory hotplug addition phase.
> With support for "memmap on memory", the altmap is prepared at this
> stage. Architectures like s390 expect that the memmap is not accessed
> until the memory is physically accessible, which happens only once it
> enters the memory hotplug onlining phase via the memory notifier.
> Introduce the flag to inform the memory hotplug
> infrastructure that the memory remains inaccessible until the memory
> hotplug onlining phase begins.
>
> Implementation considerations:
> The mhp inaccessible flag is initially set in the altmap. This is
> useful in arch_add_memory(). When the memory block device is added,
> the mhp inaccessible information is passed on to the memory_block.
> The flag is used in a subsequent patch to avoid accessing the memmap
> during the memory hotplug addition phase.
>
> Signed-off-by: Sumanth Korikkar <sumanthk@linux.ibm.com>
> ---
> drivers/base/memory.c | 2 ++
> include/linux/memory.h | 1 +
> include/linux/memory_hotplug.h | 10 ++++++++++
> include/linux/memremap.h | 1 +
> mm/memory_hotplug.c | 3 ++-
> 5 files changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/base/memory.c b/drivers/base/memory.c
> index 8a13babd826c..51915d5c3f88 100644
> --- a/drivers/base/memory.c
> +++ b/drivers/base/memory.c
> @@ -774,6 +774,8 @@ static int add_memory_block(unsigned long block_id, unsigned long state,
> mem->state = state;
> mem->nid = NUMA_NO_NODE;
> mem->altmap = altmap;
> + if (altmap)
> + mem->inaccessible = altmap->inaccessible;
> INIT_LIST_HEAD(&mem->group_next);
>
> #ifndef CONFIG_NUMA
> diff --git a/include/linux/memory.h b/include/linux/memory.h
> index f53cfdaaaa41..655714d4e65a 100644
> --- a/include/linux/memory.h
> +++ b/include/linux/memory.h
> @@ -67,6 +67,7 @@ struct memory_group {
> struct memory_block {
> unsigned long start_section_nr;
> unsigned long state; /* serialized by the dev->lock */
> + bool inaccessible; /* during memory addition phase */
Is that really required? After all, the altmap is stored in the memory
block and accessible there.
> int online_type; /* for passing data to online routine */
> int nid; /* NID for this memory block */
> /*
> diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
> index 7d2076583494..8988cd5ad55d 100644
> --- a/include/linux/memory_hotplug.h
> +++ b/include/linux/memory_hotplug.h
> @@ -106,6 +106,16 @@ typedef int __bitwise mhp_t;
> * implies the node id (nid).
> */
> #define MHP_NID_IS_MGID ((__force mhp_t)BIT(2))
> +/*
> + * Mark the hotplugged memory block as inaccessible during the memory hotplug
> + * addition phase. With support for "memmap on memory," the altmap is prepared
> + * at this stage. Architectures like s390 anticipate that memmap should not be
> + * accessed until memory is physically accessible and is accessible only when
> + * it enters the memory hotplug onlining phase using the memory notifier.
> + * Utilize this flag to inform the memory hotplug infrastructure that the
> + * memory remains inaccessible until the memory hotplug onlining phase begins.
> + */
> +#define MHP_OFFLINE_INACCESSIBLE ((__force mhp_t)BIT(3))
I'd suggest squashing all 3 patches. Then we can properly document here:
/*
* The hotplugged memory is completely inaccessible while the memory is
* offline. The memory provider will handle MEM_PREPARE_ONLINE /
* MEM_FINISH_OFFLINE notifications and make the memory accessible.
*
* This flag is only relevant when used along with MHP_MEMMAP_ON_MEMORY,
* because the altmap cannot be written (e.g., poisoned) when adding
* memory -- before it is set online.
*
* This allows for adding memory with an altmap that is not currently
* made available by a hypervisor. When onlining that memory, the
* hypervisor can be instructed to make that memory available, and
* the onlining phase will not require any memory allocations, which is
* helpful in low-memory situations.
*/
Cheers,
David / dhildenb