* [PATCH] book3s64/radix : Align section vmemmap start address to PAGE_SIZE
@ 2025-02-26 4:34 Donet Tom
2025-03-03 13:02 ` Aneesh Kumar K.V
2025-03-08 3:46 ` Aneesh Kumar K.V
0 siblings, 2 replies; 7+ messages in thread
From: Donet Tom @ 2025-02-26 4:34 UTC (permalink / raw)
To: Madhavan Srinivasan, linuxppc-dev
Cc: Ritesh Harjani, Aneesh Kumar K . V, Donet Tom
A vmemmap altmap is a device-provided region used to provide
backing storage for struct pages. For each namespace, the altmap
should belong to that same namespace. If the namespaces are
created unaligned, there is a chance that the section vmemmap
start address could also be unaligned. If the section vmemmap
start address is unaligned, the altmap page allocated from the
current namespace might also be used by the previous namespace.
During the free operation, since the altmap page is shared between
the two namespaces, the previous namespace detects that the page
does not belong to its altmap and incorrectly assumes it is a
normal page. It then attempts to free it as a normal page, which
leads to a kernel crash.
In this patch, we are aligning the section vmemmap start address
to PAGE_SIZE. After alignment, the start address will not be
part of the current namespace, and a normal page will be allocated
for the vmemmap mapping of the current section. For the remaining
sections, altmaps will be allocated. During the free operation,
the normal page will be correctly freed.
Without this patch
==================
NS1 start NS2 start
_________________________________________________________
| NS1 | NS2 |
---------------------------------------------------------
| Altmap| Altmap | .....|Altmap| Altmap | ...........
| NS1 | NS1 | | NS2 | NS2 |
In the above scenario, NS1 and NS2 are two namespaces. The vmemmap
for NS1 comes from Altmap NS1, which belongs to NS1, and the
vmemmap for NS2 comes from Altmap NS2, which belongs to NS2.
The vmemmap start for NS2 is not aligned, so Altmap NS2 is shared
by both NS1 and NS2. During the free operation for NS1, this shared
altmap page is not part of NS1's altmap, causing NS1 to attempt to
free an invalid page.
With this patch
===============
NS1 start NS2 start
_________________________________________________________
| NS1 | NS2 |
---------------------------------------------------------
| Altmap| Altmap | .....| Normal | Altmap | Altmap |.......
| NS1 | NS1 | | Page | NS2 | NS2 |
If the vmemmap start for NS2 is not aligned, we allocate a normal
page instead. The NS1 and NS2 vmemmaps will then be freed correctly.
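
For illustration, a minimal userspace sketch of the shared-page
scenario above (the 64K page size, 64-byte struct page size, vmemmap
base and pfn values are assumed example numbers, not values read from
the kernel):

/*
 * Illustrative sketch of the "Without this patch" picture: with an
 * unaligned vmemmap start, the page installed at the aligned-down
 * address describes pfns of both the previous and the current
 * namespace.  All constants are assumptions for illustration only.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE        0x10000ULL              /* assumed 64K pages */
#define STRUCT_PAGE_SZ   0x40ULL                 /* assumed 64-byte struct page */
#define VMEMMAP_BASE     0xc00c000000000000ULL   /* assumed vmemmap base */
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

int main(void)
{
	uint64_t ns2_first_pfn  = 0x403a100ULL;      /* assumed NS2 start pfn */
	uint64_t ns2_vmemmap    = VMEMMAP_BASE + ns2_first_pfn * STRUCT_PAGE_SZ;
	/* The PTE is installed at the page-aligned address ... */
	uint64_t pte_page       = ALIGN_DOWN(ns2_vmemmap, PAGE_SIZE);
	/* ... so the backing page describes this range of pfns: */
	uint64_t first_pfn_desc = (pte_page - VMEMMAP_BASE) / STRUCT_PAGE_SZ;
	uint64_t last_pfn_desc  = first_pfn_desc + PAGE_SIZE / STRUCT_PAGE_SZ - 1;

	printf("backing page describes pfns 0x%llx..0x%llx, NS2 starts at 0x%llx\n",
	       (unsigned long long)first_pfn_desc,
	       (unsigned long long)last_pfn_desc,
	       (unsigned long long)ns2_first_pfn);
	if (first_pfn_desc < ns2_first_pfn)
		printf("-> the altmap page is shared with the previous namespace\n");
	return 0;
}
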
Fixes: 368a0590d954 ("powerpc/book3s64/vmemmap: switch radix to use a different vmemmap handling function")
Co-developed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Signed-off-by: Donet Tom <donettom@linux.ibm.com>
---
arch/powerpc/mm/book3s64/radix_pgtable.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 311e2112d782..b22d5f6147d2 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -1120,6 +1120,8 @@ int __meminit radix__vmemmap_populate(unsigned long start, unsigned long end, in
 	pmd_t *pmd;
 	pte_t *pte;
 
+	start = ALIGN_DOWN(start, PAGE_SIZE);
+
 	for (addr = start; addr < end; addr = next) {
 		next = pmd_addr_end(addr, end);
 
--
2.43.5
* Re: [PATCH] book3s64/radix : Align section vmemmap start address to PAGE_SIZE
2025-02-26 4:34 [PATCH] book3s64/radix : Align section vmemmap start address to PAGE_SIZE Donet Tom
@ 2025-03-03 13:02 ` Aneesh Kumar K.V
2025-03-04 5:33 ` Donet Tom
2025-03-08 3:46 ` Aneesh Kumar K.V
1 sibling, 1 reply; 7+ messages in thread
From: Aneesh Kumar K.V @ 2025-03-03 13:02 UTC (permalink / raw)
To: Donet Tom, Madhavan Srinivasan, linuxppc-dev; +Cc: Ritesh Harjani, Donet Tom
Donet Tom <donettom@linux.ibm.com> writes:
> A vmemmap altmap is a device-provided region used to provide
> backing storage for struct pages. For each namespace, the altmap
> should belong to that same namespace. If the namespaces are
> created unaligned, there is a chance that the section vmemmap
> start address could also be unaligned. If the section vmemmap
> start address is unaligned, the altmap page allocated from the
> current namespace might be used by the previous namespace also.
> During the free operation, since the altmap is shared between two
> namespaces, the previous namespace may detect that the page does
> not belong to its altmap and incorrectly assume that the page is a
> normal page. It then attempts to free the normal page, which leads
> to a kernel crash.
>
> In this patch, we are aligning the section vmemmap start address
> to PAGE_SIZE. After alignment, the start address will not be
> part of the current namespace, and a normal page will be allocated
> for the vmemmap mapping of the current section. For the remaining
> sections, altmaps will be allocated. During the free operation,
> the normal page will be correctly freed.
>
> Without this patch
> ==================
> NS1 start NS2 start
> _________________________________________________________
> | NS1 | NS2 |
> ---------------------------------------------------------
> | Altmap| Altmap | .....|Altmap| Altmap | ...........
> | NS1 | NS1 | | NS2 | NS2 |
>
^^^ this should be allocated in ram?
>
> In the above scenario, NS1 and NS2 are two namespaces. The vmemmap
> for NS1 comes from Altmap NS1, which belongs to NS1, and the
> vmemmap for NS2 comes from Altmap NS2, which belongs to NS2.
>
> The vmemmap start for NS2 is not aligned, so Altmap NS2 is shared
> by both NS1 and NS2. During the free operation in NS1, Altmap NS2
> is not part of NS1's altmap, causing it to attempt to free an
> invalid page.
>
> With this patch
> ===============
> NS1 start NS2 start
> _________________________________________________________
> | NS1 | NS2 |
> ---------------------------------------------------------
> | Altmap| Altmap | .....| Normal | Altmap | Altmap |.......
> | NS1 | NS1 | | Page | NS2 | NS2 |
>
> If the vmemmap start for NS2 is not aligned then we are allocating
> a normal page. NS1 and NS2 vmemmap will be freed correctly.
>
> Fixes: 368a0590d954("powerpc/book3s64/vmemmap: switch radix to use a different vmemmap handling function")
> Co-developed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> Signed-off-by: Donet Tom <donettom@linux.ibm.com>
> ---
> arch/powerpc/mm/book3s64/radix_pgtable.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index 311e2112d782..b22d5f6147d2 100644
> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
> @@ -1120,6 +1120,8 @@ int __meminit radix__vmemmap_populate(unsigned long start, unsigned long end, in
> pmd_t *pmd;
> pte_t *pte;
>
> + start = ALIGN_DOWN(start, PAGE_SIZE);
> +
> for (addr = start; addr < end; addr = next) {
> next = pmd_addr_end(addr, end);
>
> --
> 2.43.5
* Re: [PATCH] book3s64/radix : Align section vmemmap start address to PAGE_SIZE
2025-03-03 13:02 ` Aneesh Kumar K.V
@ 2025-03-04 5:33 ` Donet Tom
2025-03-06 4:11 ` Aneesh Kumar K.V
0 siblings, 1 reply; 7+ messages in thread
From: Donet Tom @ 2025-03-04 5:33 UTC (permalink / raw)
To: Aneesh Kumar K.V, Madhavan Srinivasan, linuxppc-dev; +Cc: Ritesh Harjani
On 3/3/25 18:32, Aneesh Kumar K.V wrote:
> Donet Tom <donettom@linux.ibm.com> writes:
>
>> A vmemmap altmap is a device-provided region used to provide
>> backing storage for struct pages. For each namespace, the altmap
>> should belong to that same namespace. If the namespaces are
>> created unaligned, there is a chance that the section vmemmap
>> start address could also be unaligned. If the section vmemmap
>> start address is unaligned, the altmap page allocated from the
>> current namespace might be used by the previous namespace also.
>> During the free operation, since the altmap is shared between two
>> namespaces, the previous namespace may detect that the page does
>> not belong to its altmap and incorrectly assume that the page is a
>> normal page. It then attempts to free the normal page, which leads
>> to a kernel crash.
>>
>> In this patch, we are aligning the section vmemmap start address
>> to PAGE_SIZE. After alignment, the start address will not be
>> part of the current namespace, and a normal page will be allocated
>> for the vmemmap mapping of the current section. For the remaining
>> sections, altmaps will be allocated. During the free operation,
>> the normal page will be correctly freed.
>>
>> Without this patch
>> ==================
>> NS1 start NS2 start
>> _________________________________________________________
>> | NS1 | NS2 |
>> ---------------------------------------------------------
>> | Altmap| Altmap | .....|Altmap| Altmap | ...........
>> | NS1 | NS1 | | NS2 | NS2 |
>>
> ^^^ this should be allocated in ram?
>
Yes, it should be allocated from RAM. However, in the current
implementation, an altmap page gets allocated because the NS2
vmemmap section's start address is unaligned. There is an
altmap_cross_boundary() check: starting from the vmemmap section
start, we identify the namespace start and check whether it falls
within the boundary. Since it does, the check returns false and an
altmap page is allocated. During the PTE update, the vmemmap start
address is aligned down to PAGE_SIZE and the PTE is updated. As a
result, the altmap page is shared between the current and previous
namespaces.

If we align the vmemmap start address first, altmap_cross_boundary()
returns true because the section's start address now belongs to the
previous namespace, so a normal page gets allocated. During the PTE
set operation, since the address is already aligned, the PTE is
updated correctly.
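
To make that concrete, here is a minimal userspace model of the check
described above (not the real altmap_cross_boundary() or struct
vmem_altmap code; it only assumes the check compares the first pfn
described by the backing vmemmap page against the namespace's own
starting pfn, with 64K pages and a 64-byte struct page):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE        0x10000ULL
#define STRUCT_PAGE_SZ   0x40ULL
#define VMEMMAP_BASE     0xc00c000000000000ULL
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

/* Model only: does the page backing [vaddr, vaddr + PAGE_SIZE) describe
 * pfns that fall before the namespace's own starting pfn? */
static bool crosses_boundary(uint64_t vaddr, uint64_t ns_base_pfn)
{
	uint64_t first_pfn_described = (vaddr - VMEMMAP_BASE) / STRUCT_PAGE_SZ;

	return first_pfn_described < ns_base_pfn;
}

int main(void)
{
	uint64_t ns2_phys = 0x403a1000000ULL;   /* namespace1.1 start */
	uint64_t ns2_pfn  = ns2_phys / PAGE_SIZE;
	uint64_t vmemmap  = VMEMMAP_BASE + ns2_pfn * STRUCT_PAGE_SZ;

	/* Without the fix: the unaligned address is checked directly and
	 * does not appear to cross into the previous namespace, so the
	 * altmap is used. */
	printf("unaligned start crosses boundary: %d\n",
	       crosses_boundary(vmemmap, ns2_pfn));

	/* With the fix: the aligned-down address describes pfns that
	 * belong to the previous namespace, so RAM is used instead. */
	printf("aligned start crosses boundary:   %d\n",
	       crosses_boundary(ALIGN_DOWN(vmemmap, PAGE_SIZE), ns2_pfn));
	return 0;
}
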
>> In the above scenario, NS1 and NS2 are two namespaces. The vmemmap
>> for NS1 comes from Altmap NS1, which belongs to NS1, and the
>> vmemmap for NS2 comes from Altmap NS2, which belongs to NS2.
>>
>> The vmemmap start for NS2 is not aligned, so Altmap NS2 is shared
>> by both NS1 and NS2. During the free operation in NS1, Altmap NS2
>> is not part of NS1's altmap, causing it to attempt to free an
>> invalid page.
>>
>> With this patch
>> ===============
>> NS1 start NS2 start
>> _________________________________________________________
>> | NS1 | NS2 |
>> ---------------------------------------------------------
>> | Altmap| Altmap | .....| Normal | Altmap | Altmap |.......
>> | NS1 | NS1 | | Page | NS2 | NS2 |
>>
>> If the vmemmap start for NS2 is not aligned then we are allocating
>> a normal page. NS1 and NS2 vmemmap will be freed correctly.
>>
>> Fixes: 368a0590d954("powerpc/book3s64/vmemmap: switch radix to use a different vmemmap handling function")
>> Co-developed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
>> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
>> Signed-off-by: Donet Tom <donettom@linux.ibm.com>
>> ---
>> arch/powerpc/mm/book3s64/radix_pgtable.c | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
>> index 311e2112d782..b22d5f6147d2 100644
>> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
>> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
>> @@ -1120,6 +1120,8 @@ int __meminit radix__vmemmap_populate(unsigned long start, unsigned long end, in
>> pmd_t *pmd;
>> pte_t *pte;
>>
>> + start = ALIGN_DOWN(start, PAGE_SIZE);
>> +
>> for (addr = start; addr < end; addr = next) {
>> next = pmd_addr_end(addr, end);
>>
>> --
>> 2.43.5
* Re: [PATCH] book3s64/radix : Align section vmemmap start address to PAGE_SIZE
2025-03-04 5:33 ` Donet Tom
@ 2025-03-06 4:11 ` Aneesh Kumar K.V
2025-03-07 6:41 ` Donet Tom
0 siblings, 1 reply; 7+ messages in thread
From: Aneesh Kumar K.V @ 2025-03-06 4:11 UTC (permalink / raw)
To: Donet Tom, Madhavan Srinivasan, linuxppc-dev; +Cc: Ritesh Harjani
Donet Tom <donettom@linux.ibm.com> writes:
> On 3/3/25 18:32, Aneesh Kumar K.V wrote:
>> Donet Tom <donettom@linux.ibm.com> writes:
>>
>>> A vmemmap altmap is a device-provided region used to provide
>>> backing storage for struct pages. For each namespace, the altmap
>>> should belong to that same namespace. If the namespaces are
>>> created unaligned, there is a chance that the section vmemmap
>>> start address could also be unaligned. If the section vmemmap
>>> start address is unaligned, the altmap page allocated from the
>>> current namespace might be used by the previous namespace also.
>>> During the free operation, since the altmap is shared between two
>>> namespaces, the previous namespace may detect that the page does
>>> not belong to its altmap and incorrectly assume that the page is a
>>> normal page. It then attempts to free the normal page, which leads
>>> to a kernel crash.
>>>
>>> In this patch, we are aligning the section vmemmap start address
>>> to PAGE_SIZE. After alignment, the start address will not be
>>> part of the current namespace, and a normal page will be allocated
>>> for the vmemmap mapping of the current section. For the remaining
>>> sections, altmaps will be allocated. During the free operation,
>>> the normal page will be correctly freed.
>>>
>>> Without this patch
>>> ==================
>>> NS1 start NS2 start
>>> _________________________________________________________
>>> | NS1 | NS2 |
>>> ---------------------------------------------------------
>>> | Altmap| Altmap | .....|Altmap| Altmap | ...........
>>> | NS1 | NS1 | | NS2 | NS2 |
>>>
>> ^^^ this should be allocated in ram?
>>
>
> Yes, it should be allocated from RAM. However, in the current
> implementation, an altmap page gets allocated. This is because the
> NS2 vmemmap section's start address is unaligned. There is an
> altmap_cross_boundary() check. Here, from the vmemmap section
> start, we identify the namespace start and check if the namespace start
> is within the boundary. Since it is within the boundary, it returns false,
> causing an altmap page to be allocated. During the PTE update, the
> vmemmap start address is aligned down to PAGE_SIZE, and the PTE is
> updated. As a result, the altmap page is shared between the current
> and previous namespaces.
>
> If we had aligned the vmemmap start address, the
> altmap_cross_boundary() function would return true because the
> vmemmap section's start address belongs to the previous
> namespace. Therefore normal page gets allocated. During the
> PTE set operation, since the address is already aligned, the
> PTE will updated.
>
So the nvdimm driver should ensure that alignment, right? I assume
other things will also require it to be properly aligned?
-aneesh
* Re: [PATCH] book3s64/radix : Align section vmemmap start address to PAGE_SIZE
2025-03-06 4:11 ` Aneesh Kumar K.V
@ 2025-03-07 6:41 ` Donet Tom
0 siblings, 0 replies; 7+ messages in thread
From: Donet Tom @ 2025-03-07 6:41 UTC (permalink / raw)
To: Aneesh Kumar K.V, Madhavan Srinivasan, linuxppc-dev; +Cc: Ritesh Harjani
On 3/6/25 9:41 AM, Aneesh Kumar K.V wrote:
> Donet Tom <donettom@linux.ibm.com> writes:
>
>> On 3/3/25 18:32, Aneesh Kumar K.V wrote:
>>> Donet Tom <donettom@linux.ibm.com> writes:
>>>
>>>> A vmemmap altmap is a device-provided region used to provide
>>>> backing storage for struct pages. For each namespace, the altmap
>>>> should belong to that same namespace. If the namespaces are
>>>> created unaligned, there is a chance that the section vmemmap
>>>> start address could also be unaligned. If the section vmemmap
>>>> start address is unaligned, the altmap page allocated from the
>>>> current namespace might be used by the previous namespace also.
>>>> During the free operation, since the altmap is shared between two
>>>> namespaces, the previous namespace may detect that the page does
>>>> not belong to its altmap and incorrectly assume that the page is a
>>>> normal page. It then attempts to free the normal page, which leads
>>>> to a kernel crash.
>>>>
>>>> In this patch, we are aligning the section vmemmap start address
>>>> to PAGE_SIZE. After alignment, the start address will not be
>>>> part of the current namespace, and a normal page will be allocated
>>>> for the vmemmap mapping of the current section. For the remaining
>>>> sections, altmaps will be allocated. During the free operation,
>>>> the normal page will be correctly freed.
>>>>
>>>> Without this patch
>>>> ==================
>>>> NS1 start NS2 start
>>>> _________________________________________________________
>>>> | NS1 | NS2 |
>>>> ---------------------------------------------------------
>>>> | Altmap| Altmap | .....|Altmap| Altmap | ...........
>>>> | NS1 | NS1 | | NS2 | NS2 |
>>>>
>>> ^^^ this should be allocated in ram?
>>>
>> Yes, it should be allocated from RAM. However, in the current
>> implementation, an altmap page gets allocated. This is because the
>> NS2 vmemmap section's start address is unaligned. There is an
>> altmap_cross_boundary() check. Here, from the vmemmap section
>> start, we identify the namespace start and check if the namespace start
>> is within the boundary. Since it is within the boundary, it returns false,
>> causing an altmap page to be allocated. During the PTE update, the
>> vmemmap start address is aligned down to PAGE_SIZE, and the PTE is
>> updated. As a result, the altmap page is shared between the current
>> and previous namespaces.
>>
>> If we had aligned the vmemmap start address, the
>> altmap_cross_boundary() function would return true because the
>> vmemmap section's start address belongs to the previous
>> namespace. Therefore normal page gets allocated. During the
>> PTE set operation, since the address is already aligned, the
>> PTE will updated.
>>
> So the nvdimm driver should ensure that alignment right? I assume other things
> will also require that to be properly aligned.?
#cat /proc/iomem
00000000-63ffffffff : System RAM
40340000000-403401fffff : namespace1.0
40340200000-403a0ffffff : dax1.0
403a1000000-403a11fffff : namespace1.1
403a1200000-40401ffffff : dax1.1
40402000000-404021fffff : namespace1.2
40402200000-40462ffffff : dax1.2
40463000000-404631fffff : namespace1.3
40463200000-404c3ffffff : dax1.3
#
I have created 4 namespaces with a size of 1552M. As you can see,
the start of namespace1.0 is 1G aligned, while namespace1.1,
namespace1.2, and namespace1.3 are not 1G aligned. If I had created
the namespaces with a size of 1536M (1.5G), then all of them would
have started 1G aligned.

I believe the namespace alignment depends on the size we request;
namespaces do not always need to be 1G aligned.
Now, if we calculate the vmemmap start for namespace1.1:

Phys start    = 0x403a1000000
pfn start     = 0x403a1000000 / PAGE_SIZE = 0x403a100
vmemmap start = 0xc00c000000000000 + (0x403a100 * 0x40)
              = 0xc00c000100e84000

This address is not page aligned, which triggers the issue.
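
Under the same assumptions (64K pages, a 64-byte struct page, and
vmemmap base 0xc00c000000000000, as implied by the arithmetic above),
a small standalone program can recompute the vmemmap start for all
four namespaces listed earlier; only namespace1.0 comes out page
aligned:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE      0x10000ULL             /* assumed 64K pages */
#define STRUCT_PAGE_SZ 0x40ULL                /* assumed 64-byte struct page */
#define VMEMMAP_BASE   0xc00c000000000000ULL  /* assumed vmemmap base */

int main(void)
{
	/* Namespace start addresses from the /proc/iomem output above. */
	const uint64_t ns_start[] = {
		0x40340000000ULL,   /* namespace1.0 */
		0x403a1000000ULL,   /* namespace1.1 */
		0x40402000000ULL,   /* namespace1.2 */
		0x40463000000ULL,   /* namespace1.3 */
	};

	for (int i = 0; i < 4; i++) {
		uint64_t pfn     = ns_start[i] / PAGE_SIZE;
		uint64_t vmemmap = VMEMMAP_BASE + pfn * STRUCT_PAGE_SZ;

		printf("namespace1.%d: vmemmap start 0x%llx (%s)\n", i,
		       (unsigned long long)vmemmap,
		       (vmemmap % PAGE_SIZE) ? "not page aligned" : "page aligned");
	}
	return 0;
}
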
>
> -aneesh
* Re: [PATCH] book3s64/radix : Align section vmemmap start address to PAGE_SIZE
2025-02-26 4:34 [PATCH] book3s64/radix : Align section vmemmap start address to PAGE_SIZE Donet Tom
2025-03-03 13:02 ` Aneesh Kumar K.V
@ 2025-03-08 3:46 ` Aneesh Kumar K.V
2025-03-09 11:02 ` Donet Tom
1 sibling, 1 reply; 7+ messages in thread
From: Aneesh Kumar K.V @ 2025-03-08 3:46 UTC (permalink / raw)
To: Donet Tom, Madhavan Srinivasan, linuxppc-dev; +Cc: Ritesh Harjani, Donet Tom
Donet Tom <donettom@linux.ibm.com> writes:
> A vmemmap altmap is a device-provided region used to provide
> backing storage for struct pages. For each namespace, the altmap
> should belong to that same namespace. If the namespaces are
> created unaligned, there is a chance that the section vmemmap
> start address could also be unaligned. If the section vmemmap
> start address is unaligned, the altmap page allocated from the
> current namespace might be used by the previous namespace also.
> During the free operation, since the altmap is shared between two
> namespaces, the previous namespace may detect that the page does
> not belong to its altmap and incorrectly assume that the page is a
> normal page. It then attempts to free the normal page, which leads
> to a kernel crash.
>
> In this patch, we are aligning the section vmemmap start address
> to PAGE_SIZE. After alignment, the start address will not be
> part of the current namespace, and a normal page will be allocated
> for the vmemmap mapping of the current section. For the remaining
> sections, altmaps will be allocated. During the free operation,
> the normal page will be correctly freed.
>
> Without this patch
> ==================
> NS1 start NS2 start
> _________________________________________________________
> | NS1 | NS2 |
> ---------------------------------------------------------
> | Altmap| Altmap | .....|Altmap| Altmap | ...........
> | NS1 | NS1 | | NS2 | NS2 |
>
> In the above scenario, NS1 and NS2 are two namespaces. The vmemmap
> for NS1 comes from Altmap NS1, which belongs to NS1, and the
> vmemmap for NS2 comes from Altmap NS2, which belongs to NS2.
>
> The vmemmap start for NS2 is not aligned, so Altmap NS2 is shared
> by both NS1 and NS2. During the free operation in NS1, Altmap NS2
> is not part of NS1's altmap, causing it to attempt to free an
> invalid page.
>
> With this patch
> ===============
> NS1 start NS2 start
> _________________________________________________________
> | NS1 | NS2 |
> ---------------------------------------------------------
> | Altmap| Altmap | .....| Normal | Altmap | Altmap |.......
> | NS1 | NS1 | | Page | NS2 | NS2 |
>
> If the vmemmap start for NS2 is not aligned then we are allocating
> a normal page. NS1 and NS2 vmemmap will be freed correctly.
>
> Fixes: 368a0590d954("powerpc/book3s64/vmemmap: switch radix to use a different vmemmap handling function")
> Co-developed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
> Signed-off-by: Donet Tom <donettom@linux.ibm.com>
> ---
> arch/powerpc/mm/book3s64/radix_pgtable.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index 311e2112d782..b22d5f6147d2 100644
> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
> @@ -1120,6 +1120,8 @@ int __meminit radix__vmemmap_populate(unsigned long start, unsigned long end, in
> pmd_t *pmd;
> pte_t *pte;
>
/*
 * Make sure we align the start vmemmap addr so that we calculate the
 * correct start_pfn in the altmap boundary check to decide whether we
 * should use altmap- or RAM-based backing memory allocation. Also, the
 * address needs to be aligned for the set_pte operation.
 *
 * If the start addr is already PMD_SIZE aligned we will try to use a
 * pmd mapping. We don't want to be too aggressive here because that
 * will cause more allocations in RAM. So only if the namespace vmemmap
 * start addr is PMD_SIZE aligned will we use a PMD mapping.
 */

Maybe with some comments as above?
> + start = ALIGN_DOWN(start, PAGE_SIZE);
> +
> for (addr = start; addr < end; addr = next) {
> next = pmd_addr_end(addr, end);
>
> --
> 2.43.5
* Re: [PATCH] book3s64/radix : Align section vmemmap start address to PAGE_SIZE
2025-03-08 3:46 ` Aneesh Kumar K.V
@ 2025-03-09 11:02 ` Donet Tom
0 siblings, 0 replies; 7+ messages in thread
From: Donet Tom @ 2025-03-09 11:02 UTC (permalink / raw)
To: Aneesh Kumar K.V, Madhavan Srinivasan, linuxppc-dev; +Cc: Ritesh Harjani
On 3/8/25 9:16 AM, Aneesh Kumar K.V wrote:
> Donet Tom <donettom@linux.ibm.com> writes:
>
>> A vmemmap altmap is a device-provided region used to provide
>> backing storage for struct pages. For each namespace, the altmap
>> should belong to that same namespace. If the namespaces are
>> created unaligned, there is a chance that the section vmemmap
>> start address could also be unaligned. If the section vmemmap
>> start address is unaligned, the altmap page allocated from the
>> current namespace might be used by the previous namespace also.
>> During the free operation, since the altmap is shared between two
>> namespaces, the previous namespace may detect that the page does
>> not belong to its altmap and incorrectly assume that the page is a
>> normal page. It then attempts to free the normal page, which leads
>> to a kernel crash.
>>
>> In this patch, we are aligning the section vmemmap start address
>> to PAGE_SIZE. After alignment, the start address will not be
>> part of the current namespace, and a normal page will be allocated
>> for the vmemmap mapping of the current section. For the remaining
>> sections, altmaps will be allocated. During the free operation,
>> the normal page will be correctly freed.
>>
>> Without this patch
>> ==================
>> NS1 start NS2 start
>> _________________________________________________________
>> | NS1 | NS2 |
>> ---------------------------------------------------------
>> | Altmap| Altmap | .....|Altmap| Altmap | ...........
>> | NS1 | NS1 | | NS2 | NS2 |
>>
>> In the above scenario, NS1 and NS2 are two namespaces. The vmemmap
>> for NS1 comes from Altmap NS1, which belongs to NS1, and the
>> vmemmap for NS2 comes from Altmap NS2, which belongs to NS2.
>>
>> The vmemmap start for NS2 is not aligned, so Altmap NS2 is shared
>> by both NS1 and NS2. During the free operation in NS1, Altmap NS2
>> is not part of NS1's altmap, causing it to attempt to free an
>> invalid page.
>>
>> With this patch
>> ===============
>> NS1 start NS2 start
>> _________________________________________________________
>> | NS1 | NS2 |
>> ---------------------------------------------------------
>> | Altmap| Altmap | .....| Normal | Altmap | Altmap |.......
>> | NS1 | NS1 | | Page | NS2 | NS2 |
>>
>> If the vmemmap start for NS2 is not aligned then we are allocating
>> a normal page. NS1 and NS2 vmemmap will be freed correctly.
>>
>> Fixes: 368a0590d954("powerpc/book3s64/vmemmap: switch radix to use a different vmemmap handling function")
>> Co-developed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
>> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
>> Signed-off-by: Donet Tom <donettom@linux.ibm.com>
>> ---
>> arch/powerpc/mm/book3s64/radix_pgtable.c | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
>> index 311e2112d782..b22d5f6147d2 100644
>> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
>> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
>> @@ -1120,6 +1120,8 @@ int __meminit radix__vmemmap_populate(unsigned long start, unsigned long end, in
>> pmd_t *pmd;
>> pte_t *pte;
>>
> /*
> * Make sure we align the start vmemmap addr so that we calculate
> the correct start_pfn in altmap boundary check to decided whether
> we should use altmap or RAM based backing memory allocation. Also
> the address need to be aligned for set_pte operation.
>
> If the start addr is already PMD_SIZE aligned we will try to use
> a pmd mapping. We don't want to be too aggressive here beacause
> that will cause more allocations in RAM. So only if the namespace
> vmemmap start addr is PMD_SIZE aligned we will use PMD mapping.
>
> */
>
> May be with some comments as above?
>
Sure, I will update it and send a V3.
>
>> + start = ALIGN_DOWN(start, PAGE_SIZE);
>> +
>> for (addr = start; addr < end; addr = next) {
>> next = pmd_addr_end(addr, end);
>>
>> --
>> 2.43.5