* [PATCH v7 0/7] arm64: Reorganize kernel VA space
@ 2023-12-13 8:40 Ard Biesheuvel
2023-12-13 8:40 ` [PATCH v7 1/7] arm64: mm: Move PCI I/O emulation region above the vmemmap region Ard Biesheuvel
` (7 more replies)
0 siblings, 8 replies; 13+ messages in thread
From: Ard Biesheuvel @ 2023-12-13 8:40 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Ard Biesheuvel, Catalin Marinas, Will Deacon, Marc Zyngier,
Mark Rutland
From: Ard Biesheuvel <ardb@kernel.org>
These seven patches were taken from [0] and tweaked to address the
feedback by Mark Rutland. They reconfigure the upper region of the
kernel VA space so that the vmemmap region can be resized dynamically
on 52-bit builds running on 48-bit-only hardware. This is needed for
LPA2 support.
They can be applied onto the arm64 lpa2-prep branch.
[0] https://lore.kernel.org/all/20231129111555.3594833-43-ardb@google.com
v7:
- add static assert to avoid the fixmap overlapping with the PCI I/O
region
- avoid deriving VMEMMAP_END from VMEMMAP_START+VMEMMAP_SIZE in ptdump.c
- add ack from Mark
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Ard Biesheuvel (7):
arm64: mm: Move PCI I/O emulation region above the vmemmap region
arm64: mm: Move fixmap region above vmemmap region
arm64: ptdump: Allow all region boundaries to be defined at boot time
arm64: ptdump: Discover start of vmemmap region at runtime
arm64: vmemmap: Avoid base2 order of struct page size to dimension
region
arm64: mm: Reclaim unused vmemmap region for vmalloc use
arm64: kaslr: Adjust randomization range dynamically
arch/arm64/include/asm/memory.h | 14 ++---
arch/arm64/include/asm/pgtable.h | 10 ++--
arch/arm64/kernel/image-vars.h | 2 +
arch/arm64/kernel/pi/kaslr_early.c | 11 ++--
arch/arm64/mm/fixmap.c | 3 ++
arch/arm64/mm/ptdump.c | 56 +++++++++-----------
6 files changed, 49 insertions(+), 47 deletions(-)
--
2.43.0.472.g3155946c3a-goog
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH v7 1/7] arm64: mm: Move PCI I/O emulation region above the vmemmap region
2023-12-13 8:40 [PATCH v7 0/7] arm64: Reorganize kernel VA space Ard Biesheuvel
@ 2023-12-13 8:40 ` Ard Biesheuvel
2023-12-13 8:40 ` [PATCH v7 2/7] arm64: mm: Move fixmap region above " Ard Biesheuvel
` (6 subsequent siblings)
7 siblings, 0 replies; 13+ messages in thread
From: Ard Biesheuvel @ 2023-12-13 8:40 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Ard Biesheuvel, Catalin Marinas, Will Deacon, Marc Zyngier,
Mark Rutland
From: Ard Biesheuvel <ardb@kernel.org>
Move the PCI I/O region above the vmemmap region in the kernel's VA
space. This will permit us to reclaim the lower part of the vmemmap
region for vmalloc/vmap allocations when running a 52-bit VA capable
build on a 48-bit VA capable system.
Also, given that PCI_IO_START is derived from VMEMMAP_END, use that
symbolic constant directly in ptdump rather than deriving it from
VMEMMAP_START and VMEMMAP_SIZE, as those definitions will change in
subsequent patches.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/include/asm/memory.h | 4 ++--
arch/arm64/mm/ptdump.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b8d726f951ae..99caeff78e1a 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -49,8 +49,8 @@
#define MODULES_VSIZE (SZ_2G)
#define VMEMMAP_START (-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
#define VMEMMAP_END (VMEMMAP_START + VMEMMAP_SIZE)
-#define PCI_IO_END (VMEMMAP_START - SZ_8M)
-#define PCI_IO_START (PCI_IO_END - PCI_IO_SIZE)
+#define PCI_IO_START (VMEMMAP_END + SZ_8M)
+#define PCI_IO_END (PCI_IO_START + PCI_IO_SIZE)
#define FIXADDR_TOP (VMEMMAP_START - SZ_32M)
#if VA_BITS > 48
diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
index e305b6593c4e..46acb2a24da0 100644
--- a/arch/arm64/mm/ptdump.c
+++ b/arch/arm64/mm/ptdump.c
@@ -47,10 +47,10 @@ static struct addr_marker address_markers[] = {
{ VMALLOC_END, "vmalloc() end" },
{ FIXADDR_TOT_START, "Fixmap start" },
{ FIXADDR_TOP, "Fixmap end" },
+ { VMEMMAP_START, "vmemmap start" },
+ { VMEMMAP_END, "vmemmap end" },
{ PCI_IO_START, "PCI I/O start" },
{ PCI_IO_END, "PCI I/O end" },
- { VMEMMAP_START, "vmemmap start" },
- { VMEMMAP_START + VMEMMAP_SIZE, "vmemmap end" },
{ -1, NULL },
};
--
2.43.0.472.g3155946c3a-goog
* [PATCH v7 2/7] arm64: mm: Move fixmap region above vmemmap region
2023-12-13 8:40 [PATCH v7 0/7] arm64: Reorganize kernel VA space Ard Biesheuvel
2023-12-13 8:40 ` [PATCH v7 1/7] arm64: mm: Move PCI I/O emulation region above the vmemmap region Ard Biesheuvel
@ 2023-12-13 8:40 ` Ard Biesheuvel
2023-12-13 8:40 ` [PATCH v7 3/7] arm64: ptdump: Allow all region boundaries to be defined at boot time Ard Biesheuvel
` (5 subsequent siblings)
7 siblings, 0 replies; 13+ messages in thread
From: Ard Biesheuvel @ 2023-12-13 8:40 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Ard Biesheuvel, Catalin Marinas, Will Deacon, Marc Zyngier,
Mark Rutland
From: Ard Biesheuvel <ardb@kernel.org>
Move the fixmap region above the vmemmap region, so that the start of
the vmemmap delineates the end of the region available for vmalloc and
vmap allocations and the randomized placement of the kernel and modules.
In a subsequent patch, we will take advantage of this to reclaim most of
the vmemmap area when running a 52-bit VA capable build with 52-bit
virtual addressing disabled at runtime.
Note that the existing guard region of 256 MiB covers the fixmap and PCI
I/O regions as well, so we can reduce it to 8 MiB, which is what we use in
other places too.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/include/asm/memory.h | 2 +-
arch/arm64/include/asm/pgtable.h | 2 +-
arch/arm64/mm/fixmap.c | 3 +++
arch/arm64/mm/ptdump.c | 4 ++--
4 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 99caeff78e1a..2745bed8ae5b 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -51,7 +51,7 @@
#define VMEMMAP_END (VMEMMAP_START + VMEMMAP_SIZE)
#define PCI_IO_START (VMEMMAP_END + SZ_8M)
#define PCI_IO_END (PCI_IO_START + PCI_IO_SIZE)
-#define FIXADDR_TOP (VMEMMAP_START - SZ_32M)
+#define FIXADDR_TOP (-UL(SZ_8M))
#if VA_BITS > 48
#define VA_BITS_MIN (48)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index b19a8aee684c..8d30e2787b1f 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -22,7 +22,7 @@
* and fixed mappings
*/
#define VMALLOC_START (MODULES_END)
-#define VMALLOC_END (VMEMMAP_START - SZ_256M)
+#define VMALLOC_END (VMEMMAP_START - SZ_8M)
#define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
diff --git a/arch/arm64/mm/fixmap.c b/arch/arm64/mm/fixmap.c
index c0a3301203bd..6fc17b2e1714 100644
--- a/arch/arm64/mm/fixmap.c
+++ b/arch/arm64/mm/fixmap.c
@@ -16,6 +16,9 @@
#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
+/* ensure that the fixmap region does not grow down into the PCI I/O region */
+static_assert(FIXADDR_TOT_START > PCI_IO_END);
+
#define NR_BM_PTE_TABLES \
SPAN_NR_ENTRIES(FIXADDR_TOT_START, FIXADDR_TOP, PMD_SHIFT)
#define NR_BM_PMD_TABLES \
diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
index 46acb2a24da0..a929b5a321db 100644
--- a/arch/arm64/mm/ptdump.c
+++ b/arch/arm64/mm/ptdump.c
@@ -45,12 +45,12 @@ static struct addr_marker address_markers[] = {
{ MODULES_END, "Modules end" },
{ VMALLOC_START, "vmalloc() area" },
{ VMALLOC_END, "vmalloc() end" },
- { FIXADDR_TOT_START, "Fixmap start" },
- { FIXADDR_TOP, "Fixmap end" },
{ VMEMMAP_START, "vmemmap start" },
{ VMEMMAP_END, "vmemmap end" },
{ PCI_IO_START, "PCI I/O start" },
{ PCI_IO_END, "PCI I/O end" },
+ { FIXADDR_TOT_START, "Fixmap start" },
+ { FIXADDR_TOP, "Fixmap end" },
{ -1, NULL },
};
--
2.43.0.472.g3155946c3a-goog
* [PATCH v7 3/7] arm64: ptdump: Allow all region boundaries to be defined at boot time
2023-12-13 8:40 [PATCH v7 0/7] arm64: Reorganize kernel VA space Ard Biesheuvel
2023-12-13 8:40 ` [PATCH v7 1/7] arm64: mm: Move PCI I/O emulation region above the vmemmap region Ard Biesheuvel
2023-12-13 8:40 ` [PATCH v7 2/7] arm64: mm: Move fixmap region above " Ard Biesheuvel
@ 2023-12-13 8:40 ` Ard Biesheuvel
2023-12-13 8:40 ` [PATCH v7 4/7] arm64: ptdump: Discover start of vmemmap region at runtime Ard Biesheuvel
` (4 subsequent siblings)
7 siblings, 0 replies; 13+ messages in thread
From: Ard Biesheuvel @ 2023-12-13 8:40 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Ard Biesheuvel, Catalin Marinas, Will Deacon, Marc Zyngier,
Mark Rutland
From: Ard Biesheuvel <ardb@kernel.org>
Rework the way the address_markers array is populated, so that we can
generally tolerate values that are not compile time constants, rather
than manually keeping track of the array indexes in question and poking
new values into them. This will be needed for VMALLOC_END, which will
cease to be a compile time constant after a subsequent patch.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/mm/ptdump.c | 54 ++++++++------------
1 file changed, 22 insertions(+), 32 deletions(-)
diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
index a929b5a321db..66ccb8d6997e 100644
--- a/arch/arm64/mm/ptdump.c
+++ b/arch/arm64/mm/ptdump.c
@@ -26,34 +26,6 @@
#include <asm/ptdump.h>
-enum address_markers_idx {
- PAGE_OFFSET_NR = 0,
- PAGE_END_NR,
-#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
- KASAN_START_NR,
-#endif
-};
-
-static struct addr_marker address_markers[] = {
- { PAGE_OFFSET, "Linear Mapping start" },
- { 0 /* PAGE_END */, "Linear Mapping end" },
-#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
- { 0 /* KASAN_SHADOW_START */, "Kasan shadow start" },
- { KASAN_SHADOW_END, "Kasan shadow end" },
-#endif
- { MODULES_VADDR, "Modules start" },
- { MODULES_END, "Modules end" },
- { VMALLOC_START, "vmalloc() area" },
- { VMALLOC_END, "vmalloc() end" },
- { VMEMMAP_START, "vmemmap start" },
- { VMEMMAP_END, "vmemmap end" },
- { PCI_IO_START, "PCI I/O start" },
- { PCI_IO_END, "PCI I/O end" },
- { FIXADDR_TOT_START, "Fixmap start" },
- { FIXADDR_TOP, "Fixmap end" },
- { -1, NULL },
-};
-
#define pt_dump_seq_printf(m, fmt, args...) \
({ \
if (m) \
@@ -339,9 +311,8 @@ static void __init ptdump_initialize(void)
pg_level[i].mask |= pg_level[i].bits[j].mask;
}
-static struct ptdump_info kernel_ptdump_info = {
+static struct ptdump_info kernel_ptdump_info __ro_after_init = {
.mm = &init_mm,
- .markers = address_markers,
.base_addr = PAGE_OFFSET,
};
@@ -375,10 +346,29 @@ void ptdump_check_wx(void)
static int __init ptdump_init(void)
{
- address_markers[PAGE_END_NR].start_address = PAGE_END;
+ struct addr_marker m[] = {
+ { PAGE_OFFSET, "Linear Mapping start" },
+ { PAGE_END, "Linear Mapping end" },
#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
- address_markers[KASAN_START_NR].start_address = KASAN_SHADOW_START;
+ { KASAN_SHADOW_START, "Kasan shadow start" },
+ { KASAN_SHADOW_END, "Kasan shadow end" },
#endif
+ { MODULES_VADDR, "Modules start" },
+ { MODULES_END, "Modules end" },
+ { VMALLOC_START, "vmalloc() area" },
+ { VMALLOC_END, "vmalloc() end" },
+ { VMEMMAP_START, "vmemmap start" },
+ { VMEMMAP_END, "vmemmap end" },
+ { PCI_IO_START, "PCI I/O start" },
+ { PCI_IO_END, "PCI I/O end" },
+ { FIXADDR_TOT_START, "Fixmap start" },
+ { FIXADDR_TOP, "Fixmap end" },
+ { -1, NULL },
+ };
+ static struct addr_marker address_markers[ARRAY_SIZE(m)] __ro_after_init;
+
+ kernel_ptdump_info.markers = memcpy(address_markers, m, sizeof(m));
+
ptdump_initialize();
ptdump_debugfs_register(&kernel_ptdump_info, "kernel_page_tables");
return 0;
--
2.43.0.472.g3155946c3a-goog
* [PATCH v7 4/7] arm64: ptdump: Discover start of vmemmap region at runtime
2023-12-13 8:40 [PATCH v7 0/7] arm64: Reorganize kernel VA space Ard Biesheuvel
` (2 preceding siblings ...)
2023-12-13 8:40 ` [PATCH v7 3/7] arm64: ptdump: Allow all region boundaries to be defined at boot time Ard Biesheuvel
@ 2023-12-13 8:40 ` Ard Biesheuvel
2023-12-13 8:40 ` [PATCH v7 5/7] arm64: vmemmap: Avoid base2 order of struct page size to dimension region Ard Biesheuvel
` (3 subsequent siblings)
7 siblings, 0 replies; 13+ messages in thread
From: Ard Biesheuvel @ 2023-12-13 8:40 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Ard Biesheuvel, Catalin Marinas, Will Deacon, Marc Zyngier,
Mark Rutland
From: Ard Biesheuvel <ardb@kernel.org>
We will soon reclaim the part of the vmemmap region that covers VA space
that is not addressable by the hardware. To avoid confusion, ensure that
the 'vmemmap start' marker points at the start of the region that is
actually being used for the struct page array, rather than the start of
the region we set aside for it at build time.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/mm/ptdump.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c
index 66ccb8d6997e..5f0849528ccf 100644
--- a/arch/arm64/mm/ptdump.c
+++ b/arch/arm64/mm/ptdump.c
@@ -346,6 +346,8 @@ void ptdump_check_wx(void)
static int __init ptdump_init(void)
{
+ u64 page_offset = _PAGE_OFFSET(vabits_actual);
+ u64 vmemmap_start = (u64)virt_to_page((void *)page_offset);
struct addr_marker m[] = {
{ PAGE_OFFSET, "Linear Mapping start" },
{ PAGE_END, "Linear Mapping end" },
@@ -357,7 +359,7 @@ static int __init ptdump_init(void)
{ MODULES_END, "Modules end" },
{ VMALLOC_START, "vmalloc() area" },
{ VMALLOC_END, "vmalloc() end" },
- { VMEMMAP_START, "vmemmap start" },
+ { vmemmap_start, "vmemmap start" },
{ VMEMMAP_END, "vmemmap end" },
{ PCI_IO_START, "PCI I/O start" },
{ PCI_IO_END, "PCI I/O end" },
--
2.43.0.472.g3155946c3a-goog
* [PATCH v7 5/7] arm64: vmemmap: Avoid base2 order of struct page size to dimension region
2023-12-13 8:40 [PATCH v7 0/7] arm64: Reorganize kernel VA space Ard Biesheuvel
` (3 preceding siblings ...)
2023-12-13 8:40 ` [PATCH v7 4/7] arm64: ptdump: Discover start of vmemmap region at runtime Ard Biesheuvel
@ 2023-12-13 8:40 ` Ard Biesheuvel
2023-12-13 13:49 ` Mark Rutland
2023-12-13 8:40 ` [PATCH v7 6/7] arm64: mm: Reclaim unused vmemmap region for vmalloc use Ard Biesheuvel
` (2 subsequent siblings)
7 siblings, 1 reply; 13+ messages in thread
From: Ard Biesheuvel @ 2023-12-13 8:40 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Ard Biesheuvel, Catalin Marinas, Will Deacon, Marc Zyngier,
Mark Rutland
From: Ard Biesheuvel <ardb@kernel.org>
The placement and size of the vmemmap region in the kernel virtual
address space is currently derived from the base2 order of the size of a
struct page. This makes for nicely aligned constants with lots of
leading 0xf and trailing 0x0 digits, but given that the actual struct
pages are indexed as an ordinary array, the resulting region is
severely overdimensioned when the size of a struct page is just over a
power of 2.
This doesn't matter today, but once we enable 52-bit virtual addressing
for 4k pages configurations, the vmemmap region may take up almost half
of the upper VA region with the current struct page upper bound at 64
bytes. And once we enable KMSAN or other features that push the size of
a struct page over 64 bytes, we will run out of VMALLOC space entirely.
So instead, let's derive the region size from the actual size of a
struct page, and place the entire region 1 GB from the top of the VA
space, where it still doesn't share any lower level translation table
entries with the fixmap.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/include/asm/memory.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 2745bed8ae5b..b49575a92afc 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -30,8 +30,8 @@
* keep a constant PAGE_OFFSET and "fallback" to using the higher end
* of the VMEMMAP where 52-bit support is not available in hardware.
*/
-#define VMEMMAP_SHIFT (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT)
-#define VMEMMAP_SIZE ((_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET) >> VMEMMAP_SHIFT)
+#define VMEMMAP_RANGE (_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET)
+#define VMEMMAP_SIZE ((VMEMMAP_RANGE >> PAGE_SHIFT) * sizeof(struct page))
/*
* PAGE_OFFSET - the virtual address of the start of the linear map, at the
@@ -47,8 +47,8 @@
#define MODULES_END (MODULES_VADDR + MODULES_VSIZE)
#define MODULES_VADDR (_PAGE_END(VA_BITS_MIN))
#define MODULES_VSIZE (SZ_2G)
-#define VMEMMAP_START (-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
-#define VMEMMAP_END (VMEMMAP_START + VMEMMAP_SIZE)
+#define VMEMMAP_START (VMEMMAP_END - VMEMMAP_SIZE)
+#define VMEMMAP_END (-UL(SZ_1G))
#define PCI_IO_START (VMEMMAP_END + SZ_8M)
#define PCI_IO_END (PCI_IO_START + PCI_IO_SIZE)
#define FIXADDR_TOP (-UL(SZ_8M))
--
2.43.0.472.g3155946c3a-goog
* [PATCH v7 6/7] arm64: mm: Reclaim unused vmemmap region for vmalloc use
2023-12-13 8:40 [PATCH v7 0/7] arm64: Reorganize kernel VA space Ard Biesheuvel
` (4 preceding siblings ...)
2023-12-13 8:40 ` [PATCH v7 5/7] arm64: vmemmap: Avoid base2 order of struct page size to dimension region Ard Biesheuvel
@ 2023-12-13 8:40 ` Ard Biesheuvel
2023-12-13 8:40 ` [PATCH v7 7/7] arm64: kaslr: Adjust randomization range dynamically Ard Biesheuvel
2024-02-09 17:33 ` [PATCH v7 0/7] arm64: Reorganize kernel VA space Catalin Marinas
7 siblings, 0 replies; 13+ messages in thread
From: Ard Biesheuvel @ 2023-12-13 8:40 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Ard Biesheuvel, Catalin Marinas, Will Deacon, Marc Zyngier,
Mark Rutland
From: Ard Biesheuvel <ardb@kernel.org>
The vmemmap array is statically sized based on the maximum supported
size of the virtual address space, but it is located inside the upper VA
region, which is statically sized based on the *minimum* supported size
of the VA space. This doesn't matter much when using 64k pages, which is
the only configuration that currently supports 52-bit virtual
addressing.
However, upcoming LPA2 support will change this picture somewhat, as in
that case, the vmemmap array will take up more than 25% of the upper VA
region when using 4k pages. Given that most of this space is never used
when running on a system that does not support 52-bit virtual
addressing, let's reclaim the unused vmemmap area in that case.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/include/asm/pgtable.h | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 8d30e2787b1f..5b5bedfa320e 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -18,11 +18,15 @@
* VMALLOC range.
*
* VMALLOC_START: beginning of the kernel vmalloc space
- * VMALLOC_END: extends to the available space below vmemmap, PCI I/O space
- * and fixed mappings
+ * VMALLOC_END: extends to the available space below vmemmap
*/
#define VMALLOC_START (MODULES_END)
+#if VA_BITS == VA_BITS_MIN
#define VMALLOC_END (VMEMMAP_START - SZ_8M)
+#else
+#define VMEMMAP_UNUSED_NPAGES ((_PAGE_OFFSET(vabits_actual) - PAGE_OFFSET) >> PAGE_SHIFT)
+#define VMALLOC_END (VMEMMAP_START + VMEMMAP_UNUSED_NPAGES * sizeof(struct page) - SZ_8M)
+#endif
#define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
--
2.43.0.472.g3155946c3a-goog
* [PATCH v7 7/7] arm64: kaslr: Adjust randomization range dynamically
2023-12-13 8:40 [PATCH v7 0/7] arm64: Reorganize kernel VA space Ard Biesheuvel
` (5 preceding siblings ...)
2023-12-13 8:40 ` [PATCH v7 6/7] arm64: mm: Reclaim unused vmemmap region for vmalloc use Ard Biesheuvel
@ 2023-12-13 8:40 ` Ard Biesheuvel
2024-02-09 17:33 ` [PATCH v7 0/7] arm64: Reorganize kernel VA space Catalin Marinas
7 siblings, 0 replies; 13+ messages in thread
From: Ard Biesheuvel @ 2023-12-13 8:40 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Ard Biesheuvel, Catalin Marinas, Will Deacon, Marc Zyngier,
Mark Rutland
From: Ard Biesheuvel <ardb@kernel.org>
Currently, we base the KASLR randomization range on a rough estimate of
the available space in the upper VA region: the lower 1/4th has the
module region and the upper 1/4th has the fixmap, vmemmap and PCI I/O
ranges, and so we pick a random location in the remaining space in the
middle.
Once we enable support for 5-level paging with 4k pages, this no longer
works: the vmemmap region, being dimensioned to cover a 52-bit linear
region, takes up so much space in the upper VA region (the size of which
is based on a 48-bit VA space for compatibility with non-LVA hardware)
that the region above the vmalloc region takes up more than a quarter of
the available space.
So instead of a heuristic, let's derive the randomization range from the
actual boundaries of the vmalloc region.
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/kernel/image-vars.h | 2 ++
arch/arm64/kernel/pi/kaslr_early.c | 11 ++++++-----
2 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 5e4dc72ab1bd..e931ce078a00 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -36,6 +36,8 @@ PROVIDE(__pi___memcpy = __pi_memcpy);
PROVIDE(__pi___memmove = __pi_memmove);
PROVIDE(__pi___memset = __pi_memset);
+PROVIDE(__pi_vabits_actual = vabits_actual);
+
#ifdef CONFIG_KVM
/*
diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c
index 17bff6e399e4..b9e0bb4bc6a9 100644
--- a/arch/arm64/kernel/pi/kaslr_early.c
+++ b/arch/arm64/kernel/pi/kaslr_early.c
@@ -14,6 +14,7 @@
#include <asm/archrandom.h>
#include <asm/memory.h>
+#include <asm/pgtable.h>
/* taken from lib/string.c */
static char *__strstr(const char *s1, const char *s2)
@@ -87,7 +88,7 @@ static u64 get_kaslr_seed(void *fdt)
asmlinkage u64 kaslr_early_init(void *fdt)
{
- u64 seed;
+ u64 seed, range;
if (is_kaslr_disabled_cmdline(fdt))
return 0;
@@ -102,9 +103,9 @@ asmlinkage u64 kaslr_early_init(void *fdt)
/*
* OK, so we are proceeding with KASLR enabled. Calculate a suitable
* kernel image offset from the seed. Let's place the kernel in the
- * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of
- * the lower and upper quarters to avoid colliding with other
- * allocations.
+ * 'middle' half of the VMALLOC area, and stay clear of the lower and
+ * upper quarters to avoid colliding with other allocations.
*/
- return BIT(VA_BITS_MIN - 3) + (seed & GENMASK(VA_BITS_MIN - 3, 0));
+ range = (VMALLOC_END - KIMAGE_VADDR) / 2;
+ return range / 2 + (((__uint128_t)range * seed) >> 64);
}
--
2.43.0.472.g3155946c3a-goog
* Re: [PATCH v7 5/7] arm64: vmemmap: Avoid base2 order of struct page size to dimension region
2023-12-13 8:40 ` [PATCH v7 5/7] arm64: vmemmap: Avoid base2 order of struct page size to dimension region Ard Biesheuvel
@ 2023-12-13 13:49 ` Mark Rutland
2023-12-13 14:09 ` Ard Biesheuvel
0 siblings, 1 reply; 13+ messages in thread
From: Mark Rutland @ 2023-12-13 13:49 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: linux-arm-kernel, Ard Biesheuvel, Catalin Marinas, Will Deacon,
Marc Zyngier
Hi Ard,
On Wed, Dec 13, 2023 at 09:40:30AM +0100, Ard Biesheuvel wrote:
> From: Ard Biesheuvel <ardb@kernel.org>
>
> The placement and size of the vmemmap region in the kernel virtual
> address space is currently derived from the base2 order of the size of a
> struct page. This makes for nicely aligned constants with lots of
> leading 0xf and trailing 0x0 digits, but given that the actual struct
> pages are indexed as an ordinary array, this resulting region is
> severely overdimensioned when the size of a struct page is just over a
> power of 2.
>
> This doesn't matter today, but once we enable 52-bit virtual addressing
> for 4k pages configurations, the vmemmap region may take up almost half
> of the upper VA region with the current struct page upper bound at 64
> bytes. And once we enable KMSAN or other features that push the size of
> a struct page over 64 bytes, we will run out of VMALLOC space entirely.
>
> So instead, let's derive the region size from the actual size of a
> struct page, and place the entire region 1 GB from the top of the VA
> space, where it still doesn't share any lower level translation table
> entries with the fixmap.
>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> ---
> arch/arm64/include/asm/memory.h | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 2745bed8ae5b..b49575a92afc 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -30,8 +30,8 @@
> * keep a constant PAGE_OFFSET and "fallback" to using the higher end
> * of the VMEMMAP where 52-bit support is not available in hardware.
> */
> -#define VMEMMAP_SHIFT (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT)
> -#define VMEMMAP_SIZE ((_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET) >> VMEMMAP_SHIFT)
> +#define VMEMMAP_RANGE (_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET)
> +#define VMEMMAP_SIZE ((VMEMMAP_RANGE >> PAGE_SHIFT) * sizeof(struct page))
>
> /*
> * PAGE_OFFSET - the virtual address of the start of the linear map, at the
> @@ -47,8 +47,8 @@
> #define MODULES_END (MODULES_VADDR + MODULES_VSIZE)
> #define MODULES_VADDR (_PAGE_END(VA_BITS_MIN))
> #define MODULES_VSIZE (SZ_2G)
> -#define VMEMMAP_START (-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
> -#define VMEMMAP_END (VMEMMAP_START + VMEMMAP_SIZE)
> +#define VMEMMAP_START (VMEMMAP_END - VMEMMAP_SIZE)
> +#define VMEMMAP_END (-UL(SZ_1G))
> #define PCI_IO_START (VMEMMAP_END + SZ_8M)
> #define PCI_IO_END (PCI_IO_START + PCI_IO_SIZE)
> #define FIXADDR_TOP (-UL(SZ_8M))
I realise I've acked this already, but my big concern here is still that it's
hard to see why these don't overlap (though the assert in fixmap.c will save
us). Usually we try to make that clear by construction, and I think we can do
that here with something like:
| #define GUARD_VA_SIZE (UL(SZ_8M))
|
| #define FIXADDR_TOP (-GUARD_VA_SIZE)
| #define FIXADDR_SIZE_MAX SZ_8M
| #define FIXADDR_START_MIN (FIXADDR_TOP - FIXADDR_SIZE_MAX)
|
| #define PCI_IO_END (FIXADDR_START_MIN - GUARD_VA_SIZE)
| #define PCI_IO_START (PCI_IO_END - PCI_IO_SIZE)
|
| #define VMEMMAP_END (ALIGN_DOWN(PCI_IO_START - GUARD_VA_SIZE, SZ_1G))
| #define VMEMMAP_START (VMEMMAP_END - VMEMMAP_SIZE)
... and in fixmap.h have:
/* Ensure the estimate in memory.h was big enough */
static_assert(FIXADDR_SIZE_MAX > FIXADDR_SIZE);
I might be missing some reason why we can't do that; I locally tried the above
atop this series with defconfig+4K and defconfig+64K, and both build and boot
without issue.
Other than that, the series looks good to me.
If you're happy with the above I can go spin that as a patch to apply atop.
Mark.
* Re: [PATCH v7 5/7] arm64: vmemmap: Avoid base2 order of struct page size to dimension region
2023-12-13 13:49 ` Mark Rutland
@ 2023-12-13 14:09 ` Ard Biesheuvel
2023-12-13 14:39 ` Mark Rutland
0 siblings, 1 reply; 13+ messages in thread
From: Ard Biesheuvel @ 2023-12-13 14:09 UTC (permalink / raw)
To: Mark Rutland
Cc: Ard Biesheuvel, linux-arm-kernel, Catalin Marinas, Will Deacon,
Marc Zyngier
On Wed, 13 Dec 2023 at 14:49, Mark Rutland <mark.rutland@arm.com> wrote:
>
> Hi Ard,
>
> On Wed, Dec 13, 2023 at 09:40:30AM +0100, Ard Biesheuvel wrote:
> > From: Ard Biesheuvel <ardb@kernel.org>
> >
> > The placement and size of the vmemmap region in the kernel virtual
> > address space is currently derived from the base2 order of the size of a
> > struct page. This makes for nicely aligned constants with lots of
> > leading 0xf and trailing 0x0 digits, but given that the actual struct
> > pages are indexed as an ordinary array, this resulting region is
> > severely overdimensioned when the size of a struct page is just over a
> > power of 2.
> >
> > This doesn't matter today, but once we enable 52-bit virtual addressing
> > for 4k pages configurations, the vmemmap region may take up almost half
> > of the upper VA region with the current struct page upper bound at 64
> > bytes. And once we enable KMSAN or other features that push the size of
> > a struct page over 64 bytes, we will run out of VMALLOC space entirely.
> >
> > So instead, let's derive the region size from the actual size of a
> > struct page, and place the entire region 1 GB from the top of the VA
> > space, where it still doesn't share any lower level translation table
> > entries with the fixmap.
> >
> > Acked-by: Mark Rutland <mark.rutland@arm.com>
> > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > ---
> > arch/arm64/include/asm/memory.h | 8 ++++----
> > 1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index 2745bed8ae5b..b49575a92afc 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -30,8 +30,8 @@
> > * keep a constant PAGE_OFFSET and "fallback" to using the higher end
> > * of the VMEMMAP where 52-bit support is not available in hardware.
> > */
> > -#define VMEMMAP_SHIFT (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT)
> > -#define VMEMMAP_SIZE ((_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET) >> VMEMMAP_SHIFT)
> > +#define VMEMMAP_RANGE (_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET)
> > +#define VMEMMAP_SIZE ((VMEMMAP_RANGE >> PAGE_SHIFT) * sizeof(struct page))
> >
> > /*
> > * PAGE_OFFSET - the virtual address of the start of the linear map, at the
> > @@ -47,8 +47,8 @@
> > #define MODULES_END (MODULES_VADDR + MODULES_VSIZE)
> > #define MODULES_VADDR (_PAGE_END(VA_BITS_MIN))
> > #define MODULES_VSIZE (SZ_2G)
> > -#define VMEMMAP_START (-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
> > -#define VMEMMAP_END (VMEMMAP_START + VMEMMAP_SIZE)
> > +#define VMEMMAP_START (VMEMMAP_END - VMEMMAP_SIZE)
> > +#define VMEMMAP_END (-UL(SZ_1G))
> > #define PCI_IO_START (VMEMMAP_END + SZ_8M)
> > #define PCI_IO_END (PCI_IO_START + PCI_IO_SIZE)
> > #define FIXADDR_TOP (-UL(SZ_8M))
>
> I realise I've acked this already, but my big concern here is still that it's
> hard to see why these don't overlap (though the assert in fixmap.c will save
> us). Usually we try to make that clear by construction, and I think we can do
> that here with something like:
>
> | #define GUARD_VA_SIZE (UL(SZ_8M))
> |
> | #define FIXADDR_TOP (-GUARD_VA_SIZE)
> | #define FIXADDR_SIZE_MAX SZ_8M
> | #define FIXADDR_START_MIN (FIXADDR_TOP - FIXADDR_SIZE_MAX)
> |
> | #define PCI_IO_END (FIXADDR_START_MIN - GUARD_VA_SIZE)
> | #define PCI_IO_START (PCI_IO_END - PCI_IO_SIZE)
> |
> | #define VMEMMAP_END (ALIGN_DOWN(PCI_IO_START - GUARD_VA_SIZE, SZ_1G))
> | #define VMEMMAP_START (VMEMMAP_END - VMEMMAP_SIZE)
>
> ... and in fixmap.h have:
>
> /* Ensure the estimate in memory.h was big enough */
> static_assert(FIXADDR_SIZE_MAX > FIXADDR_SIZE);
>
> I might be missing some reason why we can't do that; I locally tried the above
> atop this series with defconfig+4K and defconfig+64K, and both build and boot
> without issue.
>
I am really struggling to understand what the issue is that you are
trying to solve here.
What I am proposing is
#define VMEMMAP_END (-UL(SZ_1G))
#define PCI_IO_START (VMEMMAP_END + SZ_8M)
#define PCI_IO_END (PCI_IO_START + PCI_IO_SIZE)
#define FIXADDR_TOP (-UL(SZ_8M))
and in fixmap.c
static_assert(FIXADDR_TOT_START > PCI_IO_END);
(which sadly has to live outside of asm/memory.h for reasons of #include hell).
This leaves the top 1G for PCI I/O plus fixmap, and ensures that the
latter does not collide with the former.
> Other than that, the series looks good to me.
>
Thanks.
> If you're happy with the above I can go spin that as a patch to apply atop.
>
I won't object to that but I can't say I am convinced of the need.
Thanks,
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* Re: [PATCH v7 5/7] arm64: vmemmap: Avoid base2 order of struct page size to dimension region
2023-12-13 14:09 ` Ard Biesheuvel
@ 2023-12-13 14:39 ` Mark Rutland
2024-02-09 17:33 ` Catalin Marinas
0 siblings, 1 reply; 13+ messages in thread
From: Mark Rutland @ 2023-12-13 14:39 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: Ard Biesheuvel, linux-arm-kernel, Catalin Marinas, Will Deacon,
Marc Zyngier
On Wed, Dec 13, 2023 at 03:09:54PM +0100, Ard Biesheuvel wrote:
> On Wed, 13 Dec 2023 at 14:49, Mark Rutland <mark.rutland@arm.com> wrote:
> >
> > Hi Ard,
> >
> > On Wed, Dec 13, 2023 at 09:40:30AM +0100, Ard Biesheuvel wrote:
> > > From: Ard Biesheuvel <ardb@kernel.org>
> > >
> > > The placement and size of the vmemmap region in the kernel virtual
> > > address space is currently derived from the base2 order of the size of a
> > > struct page. This makes for nicely aligned constants with lots of
> > > leading 0xf and trailing 0x0 digits, but given that the actual struct
> > > pages are indexed as an ordinary array, this resulting region is
> > > severely overdimensioned when the size of a struct page is just over a
> > > power of 2.
> > >
> > > This doesn't matter today, but once we enable 52-bit virtual addressing
> > > for 4k pages configurations, the vmemmap region may take up almost half
> > > of the upper VA region with the current struct page upper bound at 64
> > > bytes. And once we enable KMSAN or other features that push the size of
> > > a struct page over 64 bytes, we will run out of VMALLOC space entirely.
> > >
> > > So instead, let's derive the region size from the actual size of a
> > > struct page, and place the entire region 1 GB from the top of the VA
> > > space, where it still doesn't share any lower level translation table
> > > entries with the fixmap.
> > >
> > > Acked-by: Mark Rutland <mark.rutland@arm.com>
> > > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > > ---
> > > arch/arm64/include/asm/memory.h | 8 ++++----
> > > 1 file changed, 4 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > > index 2745bed8ae5b..b49575a92afc 100644
> > > --- a/arch/arm64/include/asm/memory.h
> > > +++ b/arch/arm64/include/asm/memory.h
> > > @@ -30,8 +30,8 @@
> > > * keep a constant PAGE_OFFSET and "fallback" to using the higher end
> > > * of the VMEMMAP where 52-bit support is not available in hardware.
> > > */
> > > -#define VMEMMAP_SHIFT (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT)
> > > -#define VMEMMAP_SIZE ((_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET) >> VMEMMAP_SHIFT)
> > > +#define VMEMMAP_RANGE (_PAGE_END(VA_BITS_MIN) - PAGE_OFFSET)
> > > +#define VMEMMAP_SIZE ((VMEMMAP_RANGE >> PAGE_SHIFT) * sizeof(struct page))
> > >
> > > /*
> > > * PAGE_OFFSET - the virtual address of the start of the linear map, at the
> > > @@ -47,8 +47,8 @@
> > > #define MODULES_END (MODULES_VADDR + MODULES_VSIZE)
> > > #define MODULES_VADDR (_PAGE_END(VA_BITS_MIN))
> > > #define MODULES_VSIZE (SZ_2G)
> > > -#define VMEMMAP_START (-(UL(1) << (VA_BITS - VMEMMAP_SHIFT)))
> > > -#define VMEMMAP_END (VMEMMAP_START + VMEMMAP_SIZE)
> > > +#define VMEMMAP_START (VMEMMAP_END - VMEMMAP_SIZE)
> > > +#define VMEMMAP_END (-UL(SZ_1G))
> > > #define PCI_IO_START (VMEMMAP_END + SZ_8M)
> > > #define PCI_IO_END (PCI_IO_START + PCI_IO_SIZE)
> > > #define FIXADDR_TOP (-UL(SZ_8M))
> >
> > I realise I've acked this already, but my big concern here is still that it's
> > hard to see why these don't overlap (though the assert in fixmap.c will save
> > us). Usually we try to make that clear by construction, and I think we can do
> > that here with something like:
> >
> > | #define GUARD_VA_SIZE (UL(SZ_8M))
> > |
> > | #define FIXADDR_TOP (-GUARD_VA_SIZE)
> > | #define FIXADDR_SIZE_MAX SZ_8M
> > | #define FIXADDR_START_MIN (FIXADDR_TOP - FIXADDR_SIZE_MAX)
> > |
> > | #define PCI_IO_END (FIXADDR_START_MIN - GUARD_VA_SIZE)
> > | #define PCI_IO_START (PCI_IO_END - PCI_IO_SIZE)
> > |
> > | #define VMEMMAP_END (ALIGN_DOWN(PCI_IO_START - GUARD_VA_SIZE, SZ_1G))
> > | #define VMEMMAP_START (VMEMMAP_END - VMEMMAP_SIZE)
> >
> > ... and in fixmap.h have:
> >
> > /* Ensure the estimate in memory.h was big enough */
> > static_assert(FIXADDR_SIZE_MAX > FIXADDR_SIZE);
> >
> > I might be missing some reason why we can't do that; I locally tried the above
> > atop this series with defconfig+4K and defconfig+64K, and both build and boot
> > without issue.
> >
>
> I am really struggling to understand what the issue is that you are
> trying to solve here.
>
> What I am proposing is
>
> #define VMEMMAP_END (-UL(SZ_1G))
> #define PCI_IO_START (VMEMMAP_END + SZ_8M)
> #define PCI_IO_END (PCI_IO_START + PCI_IO_SIZE)
> #define FIXADDR_TOP (-UL(SZ_8M))
>
> and in fixmap.c
>
> static_assert(FIXADDR_TOT_START > PCI_IO_END);
>
> (which sadly has to live outside of asm/memory.h for reasons of #include hell).
>
> This leaves the top 1G for PCI I/O plus fixmap, and ensures that the
> latter does not collide with the former.
Sure, I get that (and that #include hell is why I have FIXADDR_SIZE_MAX above).
What I'm trying to do is make the relationship between those regions clear *in
one place*, so that this is easier to follow, as that was one of the things I
found painful during review. That said, it's not clear cut, and I'll happily
defer to the judgement of Will and Catalin.
For the series as-is:
Acked-by: Mark Rutland <mark.rutland@arm.com>
Will/Catalin, please let me know your preference on the above.
Thanks,
Mark.
* Re: [PATCH v7 5/7] arm64: vmemmap: Avoid base2 order of struct page size to dimension region
2023-12-13 14:39 ` Mark Rutland
@ 2024-02-09 17:33 ` Catalin Marinas
0 siblings, 0 replies; 13+ messages in thread
From: Catalin Marinas @ 2024-02-09 17:33 UTC (permalink / raw)
To: Mark Rutland
Cc: Ard Biesheuvel, Ard Biesheuvel, linux-arm-kernel, Will Deacon,
Marc Zyngier
On Wed, Dec 13, 2023 at 02:39:30PM +0000, Mark Rutland wrote:
> > On Wed, 13 Dec 2023 at 14:49, Mark Rutland <mark.rutland@arm.com> wrote:
> > > | #define GUARD_VA_SIZE (UL(SZ_8M))
> > > |
> > > | #define FIXADDR_TOP (-GUARD_VA_SIZE)
> > > | #define FIXADDR_SIZE_MAX SZ_8M
> > > | #define FIXADDR_START_MIN (FIXADDR_TOP - FIXADDR_SIZE_MAX)
> > > |
> > > | #define PCI_IO_END (FIXADDR_START_MIN - GUARD_VA_SIZE)
> > > | #define PCI_IO_START (PCI_IO_END - PCI_IO_SIZE)
> > > |
> > > | #define VMEMMAP_END (ALIGN_DOWN(PCI_IO_START - GUARD_VA_SIZE, SZ_1G))
> > > | #define VMEMMAP_START (VMEMMAP_END - VMEMMAP_SIZE)
> > >
> > > ... and in fixmap.h have:
> > >
> > > /* Ensure the estimate in memory.h was big enough */
> > > static_assert(FIXADDR_SIZE_MAX > FIXADDR_SIZE);
[...]
> What I'm trying to do is make the relationship between those regions clear *in
> one place*, so that this is easier to follow, as that was one of the things I
> found painful during review. That said, it's not clear cut, and I'll happily
> defer to the judgement of Will and Catalin.
I think Mark's proposal is slightly cleaner. There's no actual layout
change, just making the maximum fixaddr size explicit in memory.h.
I'm queuing this series, so Mark please send a patch on top (I'll kick
off b4 ty soon).
Thanks.
--
Catalin
* Re: [PATCH v7 0/7] arm64: Reorganize kernel VA space
2023-12-13 8:40 [PATCH v7 0/7] arm64: Reorganize kernel VA space Ard Biesheuvel
` (6 preceding siblings ...)
2023-12-13 8:40 ` [PATCH v7 7/7] arm64: kaslr: Adjust randomization range dynamically Ard Biesheuvel
@ 2024-02-09 17:33 ` Catalin Marinas
7 siblings, 0 replies; 13+ messages in thread
From: Catalin Marinas @ 2024-02-09 17:33 UTC (permalink / raw)
To: linux-arm-kernel, Ard Biesheuvel
Cc: Will Deacon, Ard Biesheuvel, Marc Zyngier, Mark Rutland
On Wed, 13 Dec 2023 09:40:25 +0100, Ard Biesheuvel wrote:
> These seven patches were taken from [0] and tweaked to address the
> feedback by Mark Rutland. They reconfigure the upper region of the
> kernel VA space so that the vmemmap region can be resized dynamically
> on 52-bit builds running on 48-bit-only hardware. This is needed for
> LPA2 support.
>
> They can be applied onto the arm64 lpa2-prep branch.
>
> [...]
Applied to arm64 (for-next/reorg-va-space), thanks!
[1/7] arm64: mm: Move PCI I/O emulation region above the vmemmap region
https://git.kernel.org/arm64/c/031e011d8b22
[2/7] arm64: mm: Move fixmap region above vmemmap region
https://git.kernel.org/arm64/c/b730b0f2b1fc
[3/7] arm64: ptdump: Allow all region boundaries to be defined at boot time
https://git.kernel.org/arm64/c/34f879fbe461
[4/7] arm64: ptdump: Discover start of vmemmap region at runtime
https://git.kernel.org/arm64/c/f9cca2444187
[5/7] arm64: vmemmap: Avoid base2 order of struct page size to dimension region
https://git.kernel.org/arm64/c/32697ff38287
[6/7] arm64: mm: Reclaim unused vmemmap region for vmalloc use
https://git.kernel.org/arm64/c/d432b8d57c0c
[7/7] arm64: kaslr: Adjust randomization range dynamically
https://git.kernel.org/arm64/c/3567fa63cb56
--
Catalin