* [RFC v3 1/3] fadump: Refactor and prepare fadump_cma_init for late init
@ 2024-10-11 15:00 Ritesh Harjani (IBM)
2024-10-11 15:00 ` [RFC v3 2/3] fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem Ritesh Harjani (IBM)
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: Ritesh Harjani (IBM) @ 2024-10-11 15:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Madhavan Srinivasan, Aneesh Kumar K . V, Donet Tom, LKML,
Sachin P Bappalige, Ritesh Harjani (IBM)
We anyway don't use any return value from fadump_cma_init(), since
fadump_reserve_mem(), from where fadump_cma_init() gets called today,
already has the required checks.
This patch makes the function's return type void. Let's also handle the
extra cases, i.e. return early if fadump_supported is false or dump_active
is set, so that in later patches we can call fadump_cma_init() separately
from setup_arch().
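
The eventual call site this prepares for is added later in the series; a
simplified sketch of it (see the last patch for the actual change):

	/* in setup_arch(), after initmem_init() */
	initmem_init();

	fadump_cma_init();
	kvm_cma_reserve();
	gigantic_hugetlb_cma_reserve();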
Acked-by: Hari Bathini <hbathini@linux.ibm.com>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
v2 -> v3: Separated the series into 2 as discussed in v2.
[v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.list@gmail.com/
arch/powerpc/kernel/fadump.c | 23 +++++++++--------------
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index a612e7513a4f..162327d66982 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -78,27 +78,23 @@ static struct cma *fadump_cma;
* But for some reason even if it fails we still have the memory reservation
* with us and we can still continue doing fadump.
*/
-static int __init fadump_cma_init(void)
+static void __init fadump_cma_init(void)
{
unsigned long long base, size;
int rc;
- if (!fw_dump.fadump_enabled)
- return 0;
-
+ if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
+ fw_dump.dump_active)
+ return;
/*
* Do not use CMA if user has provided fadump=nocma kernel parameter.
- * Return 1 to continue with fadump old behaviour.
*/
- if (fw_dump.nocma)
- return 1;
+ if (fw_dump.nocma || !fw_dump.boot_memory_size)
+ return;
base = fw_dump.reserve_dump_area_start;
size = fw_dump.boot_memory_size;
- if (!size)
- return 0;
-
rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma);
if (rc) {
pr_err("Failed to init cma area for firmware-assisted dump,%d\n", rc);
@@ -108,7 +104,7 @@ static int __init fadump_cma_init(void)
* blocked from production system usage. Hence return 1,
* so that we can continue with fadump.
*/
- return 1;
+ return;
}
/*
@@ -125,10 +121,9 @@ static int __init fadump_cma_init(void)
cma_get_size(fadump_cma),
(unsigned long)cma_get_base(fadump_cma) >> 20,
fw_dump.reserve_dump_area_size);
- return 1;
}
#else
-static int __init fadump_cma_init(void) { return 1; }
+static void __init fadump_cma_init(void) { }
#endif /* CONFIG_CMA */
/*
@@ -638,7 +633,7 @@ int __init fadump_reserve_mem(void)
pr_info("Reserved %lldMB of memory at %#016llx (System RAM: %lldMB)\n",
(size >> 20), base, (memblock_phys_mem_size() >> 20));
- ret = fadump_cma_init();
+ fadump_cma_init();
}
return ret;
--
2.46.0
* [RFC v3 2/3] fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem
2024-10-11 15:00 [RFC v3 1/3] fadump: Refactor and prepare fadump_cma_init for late init Ritesh Harjani (IBM)
@ 2024-10-11 15:00 ` Ritesh Harjani (IBM)
2024-10-15 11:34 ` Madhavan Srinivasan
2024-10-11 15:00 ` [RFC v3 3/3] fadump: Move fadump_cma_init to setup_arch() after initmem_init() Ritesh Harjani (IBM)
2024-10-14 10:24 ` [RFC v3 1/3] fadump: Refactor and prepare fadump_cma_init for late init Madhavan Srinivasan
2 siblings, 1 reply; 10+ messages in thread
From: Ritesh Harjani (IBM) @ 2024-10-11 15:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Madhavan Srinivasan, Aneesh Kumar K . V, Donet Tom, LKML,
Sachin P Bappalige, Ritesh Harjani (IBM)
This patch refactors all CMA-related initialization and alignment code
into fadump_cma_init(), which gets called at the end. This also means
that we keep [reserve_dump_area_start, boot_memory_size] page aligned
during fadump_reserve_mem(). Then later, in fadump_cma_init(), we extract
the aligned chunk and provide it to CMA. This inherently also fixes an
issue in the current code where reserve_dump_area_start is not aligned
when the physical memory has holes and the suitable chunk starts at an
unaligned boundary.

After this we should be able to call fadump_cma_init() independently
later in setup_arch(), where pageblock_order is non-zero.
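
For illustration, assuming CMA_MIN_ALIGNMENT_BYTES is 16MB and an unaligned
reserve_dump_area_start (both values made up for this example), the trimming
works out roughly as:

	base = 0x2100000;			/* 33MB, not 16MB aligned */
	size = 0x40000000;			/* 1GB boot_memory_size */
	end  = base + size;			/* 0x42100000 */
	base = ALIGN(base, CMA_MIN_ALIGNMENT_BYTES);	 /* 0x3000000 */
	end  = ALIGN_DOWN(end, CMA_MIN_ALIGNMENT_BYTES); /* 0x42000000 */
	size = end - base;			/* 0x3f000000 handed to CMA */

The unaligned edges stay reserved for fadump, they are just not given to CMA.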
Suggested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Acked-by: Hari Bathini <hbathini@linux.ibm.com>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
arch/powerpc/kernel/fadump.c | 34 ++++++++++++++++++++++------------
1 file changed, 22 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index 162327d66982..ffaec625b7a8 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -80,7 +80,7 @@ static struct cma *fadump_cma;
*/
static void __init fadump_cma_init(void)
{
- unsigned long long base, size;
+ unsigned long long base, size, end;
int rc;
if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
@@ -92,8 +92,24 @@ static void __init fadump_cma_init(void)
if (fw_dump.nocma || !fw_dump.boot_memory_size)
return;
+ /*
+ * [base, end) should be reserved during early init in
+ * fadump_reserve_mem(). No need to check this here as
+ * cma_init_reserved_mem() already checks for overlap.
+ * Here we give the aligned chunk of this reserved memory to CMA.
+ */
base = fw_dump.reserve_dump_area_start;
size = fw_dump.boot_memory_size;
+ end = base + size;
+
+ base = ALIGN(base, CMA_MIN_ALIGNMENT_BYTES);
+ end = ALIGN_DOWN(end, CMA_MIN_ALIGNMENT_BYTES);
+ size = end - base;
+
+ if (end <= base) {
+ pr_warn("%s: Too little memory to give to CMA\n", __func__);
+ return;
+ }
rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma);
if (rc) {
@@ -116,11 +132,12 @@ static void __init fadump_cma_init(void)
/*
* So we now have successfully initialized cma area for fadump.
*/
- pr_info("Initialized 0x%lx bytes cma area at %ldMB from 0x%lx "
+ pr_info("Initialized [0x%llx, %luMB] cma area from [0x%lx, %luMB] "
"bytes of memory reserved for firmware-assisted dump\n",
- cma_get_size(fadump_cma),
- (unsigned long)cma_get_base(fadump_cma) >> 20,
- fw_dump.reserve_dump_area_size);
+ cma_get_base(fadump_cma), cma_get_size(fadump_cma) >> 20,
+ fw_dump.reserve_dump_area_start,
+ fw_dump.boot_memory_size >> 20);
+ return;
}
#else
static void __init fadump_cma_init(void) { }
@@ -553,13 +570,6 @@ int __init fadump_reserve_mem(void)
if (!fw_dump.dump_active) {
fw_dump.boot_memory_size =
PAGE_ALIGN(fadump_calculate_reserve_size());
-#ifdef CONFIG_CMA
- if (!fw_dump.nocma) {
- fw_dump.boot_memory_size =
- ALIGN(fw_dump.boot_memory_size,
- CMA_MIN_ALIGNMENT_BYTES);
- }
-#endif
bootmem_min = fw_dump.ops->fadump_get_bootmem_min();
if (fw_dump.boot_memory_size < bootmem_min) {
--
2.46.0
* [RFC v3 3/3] fadump: Move fadump_cma_init to setup_arch() after initmem_init()
2024-10-11 15:00 [RFC v3 1/3] fadump: Refactor and prepare fadump_cma_init for late init Ritesh Harjani (IBM)
2024-10-11 15:00 ` [RFC v3 2/3] fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem Ritesh Harjani (IBM)
@ 2024-10-11 15:00 ` Ritesh Harjani (IBM)
2024-10-15 14:06 ` Madhavan Srinivasan
2024-10-14 10:24 ` [RFC v3 1/3] fadump: Refactor and prepare fadump_cma_init for late init Madhavan Srinivasan
2 siblings, 1 reply; 10+ messages in thread
From: Ritesh Harjani (IBM) @ 2024-10-11 15:00 UTC (permalink / raw)
To: linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Madhavan Srinivasan, Aneesh Kumar K . V, Donet Tom, LKML,
Sachin P Bappalige, Ritesh Harjani (IBM)
During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE,
since pageblock_order is still zero and only gets initialized
later during initmem_init(), e.g.
setup_arch() -> initmem_init() -> sparse_init() -> set_pageblock_order()

One call path where this causes issues is:
early_setup() -> early_init_devtree() -> fadump_reserve_mem() -> fadump_cma_init()

This causes the CMA memory alignment check to be bypassed in
cma_init_reserved_mem(). Later, cma_activate_area() can hit
a VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory
area was not pageblock_order aligned.

Fix it by moving fadump_cma_init() to after initmem_init(),
where other such CMA reservations also get called.
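
For reference, CMA_MIN_ALIGNMENT_BYTES is derived from pageblock_order
roughly as below (paraphrased, not the exact header contents), which is why
it degenerates to PAGE_SIZE before set_pageblock_order() has run:

	#define CMA_MIN_ALIGNMENT_PAGES pageblock_nr_pages /* 1UL << pageblock_order */
	#define CMA_MIN_ALIGNMENT_BYTES (PAGE_SIZE * CMA_MIN_ALIGNMENT_PAGES)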
<stack trace>
==============
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10010
flags: 0x13ffff800000000(node=1|zone=0|lastcpupid=0x7ffff) CMA
raw: 013ffff800000000 5deadbeef0000100 5deadbeef0000122 0000000000000000
raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: VM_BUG_ON_PAGE(pfn & ((1 << order) - 1))
------------[ cut here ]------------
kernel BUG at mm/page_alloc.c:778!
Call Trace:
__free_one_page+0x57c/0x7b0 (unreliable)
free_pcppages_bulk+0x1a8/0x2c8
free_unref_page_commit+0x3d4/0x4e4
free_unref_page+0x458/0x6d0
init_cma_reserved_pageblock+0x114/0x198
cma_init_reserved_areas+0x270/0x3e0
do_one_initcall+0x80/0x2f8
kernel_init_freeable+0x33c/0x530
kernel_init+0x34/0x26c
ret_from_kernel_user_thread+0x14/0x1c
Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
Suggested-by: David Hildenbrand <david@redhat.com>
Reported-by: Sachin P Bappalige <sachinpb@linux.ibm.com>
Acked-by: Hari Bathini <hbathini@linux.ibm.com>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
arch/powerpc/include/asm/fadump.h | 7 +++++++
arch/powerpc/kernel/fadump.c | 6 +-----
arch/powerpc/kernel/setup-common.c | 6 ++++--
3 files changed, 12 insertions(+), 7 deletions(-)
diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h
index ef40c9b6972a..3638f04447f5 100644
--- a/arch/powerpc/include/asm/fadump.h
+++ b/arch/powerpc/include/asm/fadump.h
@@ -34,4 +34,11 @@ extern int early_init_dt_scan_fw_dump(unsigned long node, const char *uname,
int depth, void *data);
extern int fadump_reserve_mem(void);
#endif
+
+#if defined(CONFIG_FA_DUMP) && defined(CONFIG_CMA)
+void fadump_cma_init(void);
+#else
+static inline void fadump_cma_init(void) { }
+#endif
+
#endif /* _ASM_POWERPC_FADUMP_H */
diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
index ffaec625b7a8..c42f89862893 100644
--- a/arch/powerpc/kernel/fadump.c
+++ b/arch/powerpc/kernel/fadump.c
@@ -78,7 +78,7 @@ static struct cma *fadump_cma;
* But for some reason even if it fails we still have the memory reservation
* with us and we can still continue doing fadump.
*/
-static void __init fadump_cma_init(void)
+void __init fadump_cma_init(void)
{
unsigned long long base, size, end;
int rc;
@@ -139,8 +139,6 @@ static void __init fadump_cma_init(void)
fw_dump.boot_memory_size >> 20);
return;
}
-#else
-static void __init fadump_cma_init(void) { }
#endif /* CONFIG_CMA */
/*
@@ -642,8 +640,6 @@ int __init fadump_reserve_mem(void)
pr_info("Reserved %lldMB of memory at %#016llx (System RAM: %lldMB)\n",
(size >> 20), base, (memblock_phys_mem_size() >> 20));
-
- fadump_cma_init();
}
return ret;
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 4bd2f87616ba..9f1e6f2e299e 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -996,9 +996,11 @@ void __init setup_arch(char **cmdline_p)
initmem_init();
/*
- * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
- * be called after initmem_init(), so that pageblock_order is initialised.
+ * Reserve large chunks of memory for use by CMA for fadump, KVM and
+ * hugetlb. These must be called after initmem_init(), so that
+ * pageblock_order is initialised.
*/
+ fadump_cma_init();
kvm_cma_reserve();
gigantic_hugetlb_cma_reserve();
--
2.46.0
* Re: [RFC v3 1/3] fadump: Refactor and prepare fadump_cma_init for late init
2024-10-11 15:00 [RFC v3 1/3] fadump: Refactor and prepare fadump_cma_init for late init Ritesh Harjani (IBM)
2024-10-11 15:00 ` [RFC v3 2/3] fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem Ritesh Harjani (IBM)
2024-10-11 15:00 ` [RFC v3 3/3] fadump: Move fadump_cma_init to setup_arch() after initmem_init() Ritesh Harjani (IBM)
@ 2024-10-14 10:24 ` Madhavan Srinivasan
2024-10-14 11:24 ` Ritesh Harjani
2 siblings, 1 reply; 10+ messages in thread
From: Madhavan Srinivasan @ 2024-10-14 10:24 UTC (permalink / raw)
To: Ritesh Harjani (IBM), linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Aneesh Kumar K . V, Donet Tom, LKML, Sachin P Bappalige
On 10/11/24 8:30 PM, Ritesh Harjani (IBM) wrote:
> We anyway don't use any return value from fadump_cma_init(), since
> fadump_reserve_mem(), from where fadump_cma_init() gets called today,
> already has the required checks.
> This patch makes the function's return type void. Let's also handle the
> extra cases, i.e. return early if fadump_supported is false or dump_active
> is set, so that in later patches we can call fadump_cma_init() separately
> from setup_arch().
Usually patches to this file are posted with title format of
powerpc/fadump:<>
>
> Acked-by: Hari Bathini <hbathini@linux.ibm.com>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> ---
> v2 -> v3: Separated the series into 2 as discussed in v2.
> [v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.list@gmail.com/
>
> arch/powerpc/kernel/fadump.c | 23 +++++++++--------------
> 1 file changed, 9 insertions(+), 14 deletions(-)
>
> diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
> index a612e7513a4f..162327d66982 100644
> --- a/arch/powerpc/kernel/fadump.c
> +++ b/arch/powerpc/kernel/fadump.c
> @@ -78,27 +78,23 @@ static struct cma *fadump_cma;
> * But for some reason even if it fails we still have the memory reservation
> * with us and we can still continue doing fadump.
> */
> -static int __init fadump_cma_init(void)
> +static void __init fadump_cma_init(void)
> {
> unsigned long long base, size;
> int rc;
>
> - if (!fw_dump.fadump_enabled)
> - return 0;
> -
> + if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
> + fw_dump.dump_active)
> + return;
Are these checks even needed here? fadump_reserve_mem() checks for all
of these already; also, I don't see any other caller for fadump_cma_init().
> /*
> * Do not use CMA if user has provided fadump=nocma kernel parameter.
> - * Return 1 to continue with fadump old behaviour.
> */
> - if (fw_dump.nocma)
> - return 1;
> + if (fw_dump.nocma || !fw_dump.boot_memory_size)
> + return;
>
> base = fw_dump.reserve_dump_area_start;
> size = fw_dump.boot_memory_size;
>
> - if (!size)
> - return 0;
So this is the only place where we return 0, which in turn will make
"ret" in fadump_reserve_mem() zero, forcing a call to reserve_crashkernel()
in early_init_devtree().
We are removing it because we know "size" here will never be zero?
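(For context, the caller in early_init_devtree() effectively does something
like the below, if I read it right; a rough sketch, not the exact code:

	if (fadump_reserve_mem() == 0)
		reserve_crashkernel();

so whether this path returns 0 decides whether reserve_crashkernel() runs.)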
> -
> rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma);
> if (rc) {
> pr_err("Failed to init cma area for firmware-assisted dump,%d\n", rc);
> @@ -108,7 +104,7 @@ static int __init fadump_cma_init(void)
> * blocked from production system usage. Hence return 1,
> * so that we can continue with fadump.
> */
> - return 1;
> + return;
> }
>
> /*
> @@ -125,10 +121,9 @@ static int __init fadump_cma_init(void)
> cma_get_size(fadump_cma),
> (unsigned long)cma_get_base(fadump_cma) >> 20,
> fw_dump.reserve_dump_area_size);
> - return 1;
> }
> #else
> -static int __init fadump_cma_init(void) { return 1; }
> +static void __init fadump_cma_init(void) { }
> #endif /* CONFIG_CMA */
>
> /*
> @@ -638,7 +633,7 @@ int __init fadump_reserve_mem(void)
> pr_info("Reserved %lldMB of memory at %#016llx (System RAM: %lldMB)\n",
> (size >> 20), base, (memblock_phys_mem_size() >> 20));
>
> - ret = fadump_cma_init();
> + fadump_cma_init();
> }
>
> return ret;
> --
> 2.46.0
>
* Re: [RFC v3 1/3] fadump: Refactor and prepare fadump_cma_init for late init
2024-10-14 10:24 ` [RFC v3 1/3] fadump: Refactor and prepare fadump_cma_init for late init Madhavan Srinivasan
@ 2024-10-14 11:24 ` Ritesh Harjani
2024-10-14 12:21 ` Madhavan Srinivasan
2024-10-18 14:54 ` Madhavan Srinivasan
0 siblings, 2 replies; 10+ messages in thread
From: Ritesh Harjani @ 2024-10-14 11:24 UTC (permalink / raw)
To: Madhavan Srinivasan, linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Aneesh Kumar K . V, Donet Tom, LKML, Sachin P Bappalige
Madhavan Srinivasan <maddy@linux.ibm.com> writes:
> On 10/11/24 8:30 PM, Ritesh Harjani (IBM) wrote:
>> We anyway don't use any return value from fadump_cma_init(), since
>> fadump_reserve_mem(), from where fadump_cma_init() gets called today,
>> already has the required checks.
>> This patch makes the function's return type void. Let's also handle the
>> extra cases, i.e. return early if fadump_supported is false or dump_active
>> is set, so that in later patches we can call fadump_cma_init() separately
>> from setup_arch().
>
> Usually patches to this file are posted with title format of
>
> powerpc/fadump:<>
yes. I guess it is good to do it that way (I might have missed it)
Although commit history of oldest few patches to fadump shows..
ebaeb5ae2437 fadump: Convert firmware-assisted cpu state dump data into elf notes.
2df173d9e85d fadump: Initialize elfcore header and add PT_LOAD program headers.
3ccc00a7e04f fadump: Register for firmware assisted dump.
eb39c8803d0e fadump: Reserve the memory for firmware assisted dump.
>
>
>>
>> Acked-by: Hari Bathini <hbathini@linux.ibm.com>
>> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
>> ---
>> v2 -> v3: Separated the series into 2 as discussed in v2.
>> [v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.list@gmail.com/
>>
>> arch/powerpc/kernel/fadump.c | 23 +++++++++--------------
>> 1 file changed, 9 insertions(+), 14 deletions(-)
>>
>> diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
>> index a612e7513a4f..162327d66982 100644
>> --- a/arch/powerpc/kernel/fadump.c
>> +++ b/arch/powerpc/kernel/fadump.c
>> @@ -78,27 +78,23 @@ static struct cma *fadump_cma;
>> * But for some reason even if it fails we still have the memory reservation
>> * with us and we can still continue doing fadump.
>> */
>> -static int __init fadump_cma_init(void)
>> +static void __init fadump_cma_init(void)
>> {
>> unsigned long long base, size;
>> int rc;
>>
>> - if (!fw_dump.fadump_enabled)
>> - return 0;
>> -
>> + if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
>> + fw_dump.dump_active)
>> + return;
>
> Are these checks even needed here? fadump_reserve_mem() checks for all
> of these already; also, I don't see any other caller for fadump_cma_init().
>
>
In the next patch we will move fadump_cma_init() call from within
fadump_reserve_mem() to setup_arch(). Hence we need these extra checks
in fadump_cma_init() as well. I mentioned the same in the commit msg of
this patch too.
>> /*
>> * Do not use CMA if user has provided fadump=nocma kernel parameter.
>> - * Return 1 to continue with fadump old behaviour.
>> */
>> - if (fw_dump.nocma)
>> - return 1;
>> + if (fw_dump.nocma || !fw_dump.boot_memory_size)
>> + return;
>>
>> base = fw_dump.reserve_dump_area_start;
>> size = fw_dump.boot_memory_size;
>>
>> - if (!size)
>> - return 0;
>
> So this is the only place where we return 0, which in turn will make
> "ret" in fadump_reserve_mem() zero, forcing a call to reserve_crashkernel()
> in early_init_devtree().
>
> We are removing it because we know "size" here will never be zero?
>
>
yes. Because we already check if boot_memory_size is less than
bootmem_min in fadump_reserve_mem(). If it is less, then we fail and
disable fadump (fadump_enabled = 0).
So then there is no need to check for !boot_memory_size in here.
fadump_reseve_mem( ) {
<...>
if (!fw_dump.dump_active) {
fw_dump.boot_memory_size =
PAGE_ALIGN(fadump_calculate_reserve_size());
bootmem_min = fw_dump.ops->fadump_get_bootmem_min();
if (fw_dump.boot_memory_size < bootmem_min) {
pr_err("Can't enable fadump with boot memory size (0x%lx) less than 0x%llx\n",
fw_dump.boot_memory_size, bootmem_min);
goto error_out;
}
<...>
}
<...>
error_out:
fw_dump.fadump_enabled = 0;
fw_dump.reserve_dump_area_size = 0;
return 0;
}
Thanks for the review!
-ritesh
* Re: [RFC v3 1/3] fadump: Refactor and prepare fadump_cma_init for late init
2024-10-14 11:24 ` Ritesh Harjani
@ 2024-10-14 12:21 ` Madhavan Srinivasan
2024-10-18 14:54 ` Madhavan Srinivasan
1 sibling, 0 replies; 10+ messages in thread
From: Madhavan Srinivasan @ 2024-10-14 12:21 UTC (permalink / raw)
To: Ritesh Harjani (IBM), linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Aneesh Kumar K . V, Donet Tom, LKML, Sachin P Bappalige
On 10/14/24 4:54 PM, Ritesh Harjani (IBM) wrote:
> Madhavan Srinivasan <maddy@linux.ibm.com> writes:
>
>> On 10/11/24 8:30 PM, Ritesh Harjani (IBM) wrote:
>>> We anyway don't use any return value from fadump_cma_init(), since
>>> fadump_reserve_mem(), from where fadump_cma_init() gets called today,
>>> already has the required checks.
>>> This patch makes the function's return type void. Let's also handle the
>>> extra cases, i.e. return early if fadump_supported is false or dump_active
>>> is set, so that in later patches we can call fadump_cma_init() separately
>>> from setup_arch().
>>
>> Usually patches to this file are posted with title format of
>>
>> powerpc/fadump:<>
>
> yes. I guess it is good to do it that way (I might have missed it)
> Although commit history of oldest few patches to fadump shows..
>
> ebaeb5ae2437 fadump: Convert firmware-assisted cpu state dump data into elf notes.
> 2df173d9e85d fadump: Initialize elfcore header and add PT_LOAD program headers.
> 3ccc00a7e04f fadump: Register for firmware assisted dump.
> eb39c8803d0e fadump: Reserve the memory for firmware assisted dump.
>
>>
>>
>>>
>>> Acked-by: Hari Bathini <hbathini@linux.ibm.com>
>>> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
>>> ---
>>> v2 -> v3: Separated the series into 2 as discussed in v2.
>>> [v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.list@gmail.com/
>>>
>>> arch/powerpc/kernel/fadump.c | 23 +++++++++--------------
>>> 1 file changed, 9 insertions(+), 14 deletions(-)
>>>
>>> diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
>>> index a612e7513a4f..162327d66982 100644
>>> --- a/arch/powerpc/kernel/fadump.c
>>> +++ b/arch/powerpc/kernel/fadump.c
>>> @@ -78,27 +78,23 @@ static struct cma *fadump_cma;
>>> * But for some reason even if it fails we still have the memory reservation
>>> * with us and we can still continue doing fadump.
>>> */
>>> -static int __init fadump_cma_init(void)
>>> +static void __init fadump_cma_init(void)
>>> {
>>> unsigned long long base, size;
>>> int rc;
>>>
>>> - if (!fw_dump.fadump_enabled)
>>> - return 0;
>>> -
>>> + if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
>>> + fw_dump.dump_active)
>>> + return;
>>
>> Are these checks even needed here? fadump_reserve_mem() checks for all
>> of these already; also, I don't see any other caller for fadump_cma_init().
>>
>>
>
> In the next patch we will move fadump_cma_init() call from within
> fadump_reserve_mem() to setup_arch(). Hence we need these extra checks
> in fadump_cma_init() as well. I mentioned the same in the commit msg of
> this patch too.
>
Yes, I just saw that when looking at patch 3, sorry for the noise.
>>> /*
>>> * Do not use CMA if user has provided fadump=nocma kernel parameter.
>>> - * Return 1 to continue with fadump old behaviour.
>>> */
>>> - if (fw_dump.nocma)
>>> - return 1;
>>> + if (fw_dump.nocma || !fw_dump.boot_memory_size)
>>> + return;
>>>
>>> base = fw_dump.reserve_dump_area_start;
>>> size = fw_dump.boot_memory_size;
>>>
>>> - if (!size)
>>> - return 0;
>>
>> So this is the only place where we return 0, which in turn will make
>> "ret" in fadump_reserve_mem() zero, forcing a call to reserve_crashkernel()
>> in early_init_devtree().
>>
>> We are removing it because we know "size" here will never be zero?
>>
>>
>
> yes. Because we already check if boot_memory_size is less than
> bootmem_min in fadump_reserve_mem(). If it is less, then we fail and
> disable fadump (fadump_enabled = 0).
>
Thanks for the clarification.
> So then there is no need to check for !boot_memory_size in here.
>
> fadump_reseve_mem( ) {
> <...>
> if (!fw_dump.dump_active) {
> fw_dump.boot_memory_size =
> PAGE_ALIGN(fadump_calculate_reserve_size());
>
> bootmem_min = fw_dump.ops->fadump_get_bootmem_min();
> if (fw_dump.boot_memory_size < bootmem_min) {
> pr_err("Can't enable fadump with boot memory size (0x%lx) less than 0x%llx\n",
> fw_dump.boot_memory_size, bootmem_min);
> goto error_out;
> }
> <...>
> }
> <...>
> error_out:
> fw_dump.fadump_enabled = 0;
> fw_dump.reserve_dump_area_size = 0;
> return 0;
> }
>
>
> Thanks for the review!
> -ritesh
* Re: [RFC v3 2/3] fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem
2024-10-11 15:00 ` [RFC v3 2/3] fadump: Reserve page-aligned boot_memory_size during fadump_reserve_mem Ritesh Harjani (IBM)
@ 2024-10-15 11:34 ` Madhavan Srinivasan
0 siblings, 0 replies; 10+ messages in thread
From: Madhavan Srinivasan @ 2024-10-15 11:34 UTC (permalink / raw)
To: Ritesh Harjani (IBM), linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Aneesh Kumar K . V, Donet Tom, LKML, Sachin P Bappalige
On 10/11/24 8:30 PM, Ritesh Harjani (IBM) wrote:
> This patch refactors all CMA-related initialization and alignment code
> into fadump_cma_init(), which gets called at the end. This also means
> that we keep [reserve_dump_area_start, boot_memory_size] page aligned
> during fadump_reserve_mem(). Then later, in fadump_cma_init(), we extract
> the aligned chunk and provide it to CMA. This inherently also fixes an
> issue in the current code where reserve_dump_area_start is not aligned
> when the physical memory has holes and the suitable chunk starts at an
> unaligned boundary.
>
> After this we should be able to call fadump_cma_init() independently
> later in setup_arch(), where pageblock_order is non-zero.
>
> Suggested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
> Acked-by: Hari Bathini <hbathini@linux.ibm.com>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> ---
> arch/powerpc/kernel/fadump.c | 34 ++++++++++++++++++++++------------
> 1 file changed, 22 insertions(+), 12 deletions(-)
>
> diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
> index 162327d66982..ffaec625b7a8 100644
> --- a/arch/powerpc/kernel/fadump.c
> +++ b/arch/powerpc/kernel/fadump.c
> @@ -80,7 +80,7 @@ static struct cma *fadump_cma;
> */
> static void __init fadump_cma_init(void)
> {
> - unsigned long long base, size;
> + unsigned long long base, size, end;
> int rc;
>
> if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
> @@ -92,8 +92,24 @@ static void __init fadump_cma_init(void)
> if (fw_dump.nocma || !fw_dump.boot_memory_size)
> return;
>
> + /*
> + * [base, end) should be reserved during early init in
> + * fadump_reserve_mem(). No need to check this here as
> + * cma_init_reserved_mem() already checks for overlap.
> + * Here we give the aligned chunk of this reserved memory to CMA.
> + */
> base = fw_dump.reserve_dump_area_start;
> size = fw_dump.boot_memory_size;
> + end = base + size;
> +
> + base = ALIGN(base, CMA_MIN_ALIGNMENT_BYTES);
> + end = ALIGN_DOWN(end, CMA_MIN_ALIGNMENT_BYTES);
> + size = end - base;
> +
> + if (end <= base) {
> + pr_warn("%s: Too little memory to give to CMA\n", __func__);
> + return;
> + }
>
Just a minor nit: the other prints all use "cma", any reason for "CMA" here?
> rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma);
> if (rc) {
> @@ -116,11 +132,12 @@ static void __init fadump_cma_init(void)
> /*
> * So we now have successfully initialized cma area for fadump.
> */
> - pr_info("Initialized 0x%lx bytes cma area at %ldMB from 0x%lx "
> + pr_info("Initialized [0x%llx, %luMB] cma area from [0x%lx, %luMB] "
> "bytes of memory reserved for firmware-assisted dump\n",
> - cma_get_size(fadump_cma),
> - (unsigned long)cma_get_base(fadump_cma) >> 20,
> - fw_dump.reserve_dump_area_size);
> + cma_get_base(fadump_cma), cma_get_size(fadump_cma) >> 20,
> + fw_dump.reserve_dump_area_start,
> + fw_dump.boot_memory_size >> 20);
> + return;
> }
> #else
> static void __init fadump_cma_init(void) { }
> @@ -553,13 +570,6 @@ int __init fadump_reserve_mem(void)
> if (!fw_dump.dump_active) {
> fw_dump.boot_memory_size =
> PAGE_ALIGN(fadump_calculate_reserve_size());
> -#ifdef CONFIG_CMA
> - if (!fw_dump.nocma) {
> - fw_dump.boot_memory_size =
> - ALIGN(fw_dump.boot_memory_size,
> - CMA_MIN_ALIGNMENT_BYTES);
> - }
> -#endif
>
> bootmem_min = fw_dump.ops->fadump_get_bootmem_min();
> if (fw_dump.boot_memory_size < bootmem_min) {
* Re: [RFC v3 3/3] fadump: Move fadump_cma_init to setup_arch() after initmem_init()
2024-10-11 15:00 ` [RFC v3 3/3] fadump: Move fadump_cma_init to setup_arch() after initmem_init() Ritesh Harjani (IBM)
@ 2024-10-15 14:06 ` Madhavan Srinivasan
0 siblings, 0 replies; 10+ messages in thread
From: Madhavan Srinivasan @ 2024-10-15 14:06 UTC (permalink / raw)
To: Ritesh Harjani (IBM), linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Aneesh Kumar K . V, Donet Tom, LKML, Sachin P Bappalige
On 10/11/24 8:30 PM, Ritesh Harjani (IBM) wrote:
> During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE,
> since pageblock_order is still zero and only gets initialized
> later during initmem_init(), e.g.
> setup_arch() -> initmem_init() -> sparse_init() -> set_pageblock_order()
>
> One call path where this causes issues is:
> early_setup() -> early_init_devtree() -> fadump_reserve_mem() -> fadump_cma_init()
>
> This causes the CMA memory alignment check to be bypassed in
> cma_init_reserved_mem(). Later, cma_activate_area() can hit
> a VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory
> area was not pageblock_order aligned.
>
> Fix it by moving fadump_cma_init() to after initmem_init(),
> where other such CMA reservations also get called.
>
> <stack trace>
> ==============
> page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10010
> flags: 0x13ffff800000000(node=1|zone=0|lastcpupid=0x7ffff) CMA
> raw: 013ffff800000000 5deadbeef0000100 5deadbeef0000122 0000000000000000
> raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
> page dumped because: VM_BUG_ON_PAGE(pfn & ((1 << order) - 1))
> ------------[ cut here ]------------
> kernel BUG at mm/page_alloc.c:778!
>
> Call Trace:
> __free_one_page+0x57c/0x7b0 (unreliable)
> free_pcppages_bulk+0x1a8/0x2c8
> free_unref_page_commit+0x3d4/0x4e4
> free_unref_page+0x458/0x6d0
> init_cma_reserved_pageblock+0x114/0x198
> cma_init_reserved_areas+0x270/0x3e0
> do_one_initcall+0x80/0x2f8
> kernel_init_freeable+0x33c/0x530
> kernel_init+0x34/0x26c
> ret_from_kernel_user_thread+0x14/0x1c
>
Changes look fine to me.
Reviewed-by: Madhavan Srinivasan <maddy@linux.ibm.com>
> Fixes: 11ac3e87ce09 ("mm: cma: use pageblock_order as the single alignment")
> Suggested-by: David Hildenbrand <david@redhat.com>
> Reported-by: Sachin P Bappalige <sachinpb@linux.ibm.com>
> Acked-by: Hari Bathini <hbathini@linux.ibm.com>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> ---
> arch/powerpc/include/asm/fadump.h | 7 +++++++
> arch/powerpc/kernel/fadump.c | 6 +-----
> arch/powerpc/kernel/setup-common.c | 6 ++++--
> 3 files changed, 12 insertions(+), 7 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h
> index ef40c9b6972a..3638f04447f5 100644
> --- a/arch/powerpc/include/asm/fadump.h
> +++ b/arch/powerpc/include/asm/fadump.h
> @@ -34,4 +34,11 @@ extern int early_init_dt_scan_fw_dump(unsigned long node, const char *uname,
> int depth, void *data);
> extern int fadump_reserve_mem(void);
> #endif
> +
> +#if defined(CONFIG_FA_DUMP) && defined(CONFIG_CMA)
> +void fadump_cma_init(void);
> +#else
> +static inline void fadump_cma_init(void) { }
> +#endif
> +
> #endif /* _ASM_POWERPC_FADUMP_H */
> diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
> index ffaec625b7a8..c42f89862893 100644
> --- a/arch/powerpc/kernel/fadump.c
> +++ b/arch/powerpc/kernel/fadump.c
> @@ -78,7 +78,7 @@ static struct cma *fadump_cma;
> * But for some reason even if it fails we still have the memory reservation
> * with us and we can still continue doing fadump.
> */
> -static void __init fadump_cma_init(void)
> +void __init fadump_cma_init(void)
> {
> unsigned long long base, size, end;
> int rc;
> @@ -139,8 +139,6 @@ static void __init fadump_cma_init(void)
> fw_dump.boot_memory_size >> 20);
> return;
> }
> -#else
> -static void __init fadump_cma_init(void) { }
> #endif /* CONFIG_CMA */
>
> /*
> @@ -642,8 +640,6 @@ int __init fadump_reserve_mem(void)
>
> pr_info("Reserved %lldMB of memory at %#016llx (System RAM: %lldMB)\n",
> (size >> 20), base, (memblock_phys_mem_size() >> 20));
> -
> - fadump_cma_init();
> }
>
> return ret;
> diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
> index 4bd2f87616ba..9f1e6f2e299e 100644
> --- a/arch/powerpc/kernel/setup-common.c
> +++ b/arch/powerpc/kernel/setup-common.c
> @@ -996,9 +996,11 @@ void __init setup_arch(char **cmdline_p)
> initmem_init();
>
> /*
> - * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
> - * be called after initmem_init(), so that pageblock_order is initialised.
> + * Reserve large chunks of memory for use by CMA for fadump, KVM and
> + * hugetlb. These must be called after initmem_init(), so that
> + * pageblock_order is initialised.
> */
> + fadump_cma_init();
> kvm_cma_reserve();
> gigantic_hugetlb_cma_reserve();
>
* Re: [RFC v3 1/3] fadump: Refactor and prepare fadump_cma_init for late init
2024-10-14 11:24 ` Ritesh Harjani
2024-10-14 12:21 ` Madhavan Srinivasan
@ 2024-10-18 14:54 ` Madhavan Srinivasan
2024-10-18 16:04 ` Ritesh Harjani
1 sibling, 1 reply; 10+ messages in thread
From: Madhavan Srinivasan @ 2024-10-18 14:54 UTC (permalink / raw)
To: Ritesh Harjani (IBM), linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Aneesh Kumar K . V, Donet Tom, LKML, Sachin P Bappalige
On 10/14/24 4:54 PM, Ritesh Harjani (IBM) wrote:
> Madhavan Srinivasan <maddy@linux.ibm.com> writes:
>
>> On 10/11/24 8:30 PM, Ritesh Harjani (IBM) wrote:
>>> We anyway don't use any return value from fadump_cma_init(), since
>>> fadump_reserve_mem(), from where fadump_cma_init() gets called today,
>>> already has the required checks.
>>> This patch makes the function's return type void. Let's also handle the
>>> extra cases, i.e. return early if fadump_supported is false or dump_active
>>> is set, so that in later patches we can call fadump_cma_init() separately
>>> from setup_arch().
>>
>> Usually patches to this file are posted with title format of
>>
>> powerpc/fadump:<>
>
> yes. I guess it is good to do it that way (I might have missed it)
> Although commit history of oldest few patches to fadump shows..
>
> ebaeb5ae2437 fadump: Convert firmware-assisted cpu state dump data into elf notes.
> 2df173d9e85d fadump: Initialize elfcore header and add PT_LOAD program headers.
> 3ccc00a7e04f fadump: Register for firmware assisted dump.
> eb39c8803d0e fadump: Reserve the memory for firmware assisted dump.
>
>>
Patchset looks fine to me.
Reviewed-by: Madhavan Srinivasan <maddy@linux.ibm.com> for the series.
>>
>>>
>>> Acked-by: Hari Bathini <hbathini@linux.ibm.com>
>>> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
>>> ---
>>> v2 -> v3: Separated the series into 2 as discussed in v2.
>>> [v2]: https://lore.kernel.org/linuxppc-dev/cover.1728585512.git.ritesh.list@gmail.com/
>>>
>>> arch/powerpc/kernel/fadump.c | 23 +++++++++--------------
>>> 1 file changed, 9 insertions(+), 14 deletions(-)
>>>
>>> diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
>>> index a612e7513a4f..162327d66982 100644
>>> --- a/arch/powerpc/kernel/fadump.c
>>> +++ b/arch/powerpc/kernel/fadump.c
>>> @@ -78,27 +78,23 @@ static struct cma *fadump_cma;
>>> * But for some reason even if it fails we still have the memory reservation
>>> * with us and we can still continue doing fadump.
>>> */
>>> -static int __init fadump_cma_init(void)
>>> +static void __init fadump_cma_init(void)
>>> {
>>> unsigned long long base, size;
>>> int rc;
>>>
>>> - if (!fw_dump.fadump_enabled)
>>> - return 0;
>>> -
>>> + if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
>>> + fw_dump.dump_active)
>>> + return;
>>
>> Are these checks even needed here? fadump_reserve_mem() checks for all
>> of these already; also, I don't see any other caller for fadump_cma_init().
>>
>>
>
> In the next patch we will move fadump_cma_init() call from within
> fadump_reserve_mem() to setup_arch(). Hence we need these extra checks
> in fadump_cma_init() as well. I mentioned the same in the commit msg of
> this patch too.
>
>>> /*
>>> * Do not use CMA if user has provided fadump=nocma kernel parameter.
>>> - * Return 1 to continue with fadump old behaviour.
>>> */
>>> - if (fw_dump.nocma)
>>> - return 1;
>>> + if (fw_dump.nocma || !fw_dump.boot_memory_size)
>>> + return;
>>>
>>> base = fw_dump.reserve_dump_area_start;
>>> size = fw_dump.boot_memory_size;
>>>
>>> - if (!size)
>>> - return 0;
>>
>> So this is the only place where we return 0, which in turn will make
>> "ret" in fadump_reserve_mem() zero, forcing a call to reserve_crashkernel()
>> in early_init_devtree().
>>
>> We are removing it because we know "size" here will never be zero?
>>
>>
>
> yes. Because we already check if boot_memory_size is less than
> bootmem_min in fadump_reserve_mem(). If it is less, then we fail and
> disable fadump (fadump_enabled = 0).
>
> So then there is no need to check for !boot_memory_size in here.
>
> fadump_reseve_mem( ) {
> <...>
> if (!fw_dump.dump_active) {
> fw_dump.boot_memory_size =
> PAGE_ALIGN(fadump_calculate_reserve_size());
>
> bootmem_min = fw_dump.ops->fadump_get_bootmem_min();
> if (fw_dump.boot_memory_size < bootmem_min) {
> pr_err("Can't enable fadump with boot memory size (0x%lx) less than 0x%llx\n",
> fw_dump.boot_memory_size, bootmem_min);
> goto error_out;
> }
> <...>
> }
> <...>
> error_out:
> fw_dump.fadump_enabled = 0;
> fw_dump.reserve_dump_area_size = 0;
> return 0;
> }
>
>
> Thanks for the review!
> -ritesh
* Re: [RFC v3 1/3] fadump: Refactor and prepare fadump_cma_init for late init
2024-10-18 14:54 ` Madhavan Srinivasan
@ 2024-10-18 16:04 ` Ritesh Harjani
0 siblings, 0 replies; 10+ messages in thread
From: Ritesh Harjani @ 2024-10-18 16:04 UTC (permalink / raw)
To: Madhavan Srinivasan, linuxppc-dev
Cc: linux-mm, Sourabh Jain, Hari Bathini, Zi Yan, David Hildenbrand,
Kirill A . Shutemov, Mahesh J Salgaonkar, Michael Ellerman,
Aneesh Kumar K . V, Donet Tom, LKML, Sachin P Bappalige
Madhavan Srinivasan <maddy@linux.ibm.com> writes:
>
> Patchset looks fine to me.
>
> Reviewed-by: Madhavan Srinivasan <maddy@linux.ibm.com> for the series.
>
Thanks Maddy for the reviews!
I will spin PATCH v4 with these minor suggested changes (no code changes).
-ritesh