* [PATCH v4 01/18] xen/riscv: detect and initialize G-stage mode
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-18 15:54 ` Jan Beulich
2025-09-17 21:55 ` [PATCH v4 02/18] xen/riscv: introduce VMID allocation and management Oleksii Kurochko
` (16 subsequent siblings)
17 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
Introduce gstage_mode_detect() to probe supported G-stage paging
modes at boot. The function iterates over possible HGATP modes
(Sv32x4 on RV32, Sv39x4/Sv48x4/Sv57x4 on RV64) and selects the
first valid one by programming CSR_HGATP and reading it back.
The selected mode is stored in gstage_mode (marked __ro_after_init)
and reported via printk. If no supported mode is found, Xen panics
since Bare mode is not expected to be used.
Finally, CSR_HGATP is cleared and a local_hfence_gvma_all() is issued
to avoid any potential speculative pollution of the TLB, as required
by the RISC-V spec.
The following build error starts to occur:
./<riscv>/asm/flushtlb.h:37:55: error: 'struct page_info' declared inside
parameter list will not be visible outside of this definition or
declaration [-Werror]
37 | static inline void page_set_tlbflush_timestamp(struct page_info *page)
To resolve it, a forward declaration of struct page_info is added to
<asm/flushtlb.h>.
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- New patch.
---
xen/arch/riscv/Makefile | 1 +
xen/arch/riscv/include/asm/flushtlb.h | 7 ++
xen/arch/riscv/include/asm/p2m.h | 4 +
xen/arch/riscv/include/asm/riscv_encoding.h | 5 ++
xen/arch/riscv/p2m.c | 91 +++++++++++++++++++++
xen/arch/riscv/setup.c | 3 +
6 files changed, 111 insertions(+)
create mode 100644 xen/arch/riscv/p2m.c
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index e2b8aa42c8..264e265699 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -7,6 +7,7 @@ obj-y += intc.o
obj-y += irq.o
obj-y += mm.o
obj-y += pt.o
+obj-y += p2m.o
obj-$(CONFIG_RISCV_64) += riscv64/
obj-y += sbi.o
obj-y += setup.o
diff --git a/xen/arch/riscv/include/asm/flushtlb.h b/xen/arch/riscv/include/asm/flushtlb.h
index 51c8f753c5..e70badae0c 100644
--- a/xen/arch/riscv/include/asm/flushtlb.h
+++ b/xen/arch/riscv/include/asm/flushtlb.h
@@ -7,6 +7,13 @@
#include <asm/sbi.h>
+struct page_info;
+
+static inline void local_hfence_gvma_all(void)
+{
+ asm volatile ( "hfence.gvma zero, zero" ::: "memory" );
+}
+
/* Flush TLB of local processor for address va. */
static inline void flush_tlb_one_local(vaddr_t va)
{
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index e43c559e0c..9d4a5d6a2e 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -6,6 +6,8 @@
#include <asm/page-bits.h>
+extern unsigned long gstage_mode;
+
#define paddr_bits PADDR_BITS
/*
@@ -88,6 +90,8 @@ static inline bool arch_acquire_resource_check(struct domain *d)
return false;
}
+void gstage_mode_detect(void);
+
#endif /* ASM__RISCV__P2M_H */
/*
diff --git a/xen/arch/riscv/include/asm/riscv_encoding.h b/xen/arch/riscv/include/asm/riscv_encoding.h
index 6cc8f4eb45..b15f5ad0b4 100644
--- a/xen/arch/riscv/include/asm/riscv_encoding.h
+++ b/xen/arch/riscv/include/asm/riscv_encoding.h
@@ -131,13 +131,16 @@
#define HGATP_MODE_SV32X4 _UL(1)
#define HGATP_MODE_SV39X4 _UL(8)
#define HGATP_MODE_SV48X4 _UL(9)
+#define HGATP_MODE_SV57X4 _UL(10)
#define HGATP32_MODE_SHIFT 31
+#define HGATP32_MODE_MASK _UL(0x80000000)
#define HGATP32_VMID_SHIFT 22
#define HGATP32_VMID_MASK _UL(0x1FC00000)
#define HGATP32_PPN _UL(0x003FFFFF)
#define HGATP64_MODE_SHIFT 60
+#define HGATP64_MODE_MASK _ULL(0xF000000000000000)
#define HGATP64_VMID_SHIFT 44
#define HGATP64_VMID_MASK _ULL(0x03FFF00000000000)
#define HGATP64_PPN _ULL(0x00000FFFFFFFFFFF)
@@ -170,6 +173,7 @@
#define HGATP_VMID_SHIFT HGATP64_VMID_SHIFT
#define HGATP_VMID_MASK HGATP64_VMID_MASK
#define HGATP_MODE_SHIFT HGATP64_MODE_SHIFT
+#define HGATP_MODE_MASK HGATP64_MODE_MASK
#else
#define MSTATUS_SD MSTATUS32_SD
#define SSTATUS_SD SSTATUS32_SD
@@ -181,6 +185,7 @@
#define HGATP_VMID_SHIFT HGATP32_VMID_SHIFT
#define HGATP_VMID_MASK HGATP32_VMID_MASK
#define HGATP_MODE_SHIFT HGATP32_MODE_SHIFT
+#define HGATP_MODE_MASK HGATP32_MODE_MASK
#endif
#define TOPI_IID_SHIFT 16
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
new file mode 100644
index 0000000000..56113a2f7a
--- /dev/null
+++ b/xen/arch/riscv/p2m.c
@@ -0,0 +1,91 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include <xen/init.h>
+#include <xen/lib.h>
+#include <xen/macros.h>
+#include <xen/sections.h>
+
+#include <asm/csr.h>
+#include <asm/flushtlb.h>
+#include <asm/riscv_encoding.h>
+
+unsigned long __ro_after_init gstage_mode;
+
+void __init gstage_mode_detect(void)
+{
+ unsigned int mode_idx;
+
+ const struct {
+ unsigned long mode;
+ unsigned int paging_levels;
+ const char *name;
+ } modes[] = {
+ /*
+ * Based on the RISC-V spec:
+ * When SXLEN=32, the only other valid setting for MODE is Sv32,
+ * a paged virtual-memory scheme described in Section 10.3.
+ * When SXLEN=64, three paged virtual-memory schemes are defined:
+ * Sv39, Sv48, and Sv57.
+ */
+#ifdef CONFIG_RISCV_32
+ { HGATP_MODE_SV32X4, 2, "Sv32x4" }
+#else
+ { HGATP_MODE_SV39X4, 3, "Sv39x4" },
+ { HGATP_MODE_SV48X4, 4, "Sv48x4" },
+ { HGATP_MODE_SV57X4, 5, "Sv57x4" },
+#endif
+ };
+
+ gstage_mode = HGATP_MODE_OFF;
+
+ for ( mode_idx = 0; mode_idx < ARRAY_SIZE(modes); mode_idx++ )
+ {
+ unsigned long mode = modes[mode_idx].mode;
+
+ csr_write(CSR_HGATP, MASK_INSR(mode, HGATP_MODE_MASK));
+
+ if ( MASK_EXTR(csr_read(CSR_HGATP), HGATP_MODE_MASK) == mode )
+ {
+ gstage_mode = mode;
+ break;
+ }
+ }
+
+ if ( gstage_mode == HGATP_MODE_OFF )
+ panic("Xen expects that G-stage won't be Bare mode\n");
+
+ printk("%s: G-stage mode is %s\n", __func__, modes[mode_idx].name);
+
+ csr_write(CSR_HGATP, 0);
+
+ /*
+ * From RISC-V spec:
+ * Speculative executions of the address-translation algorithm behave as
+ * non-speculative executions of the algorithm do, except that they must
+ * not set the dirty bit for a PTE, they must not trigger an exception,
+ * and they must not create address-translation cache entries if those
+ * entries would have been invalidated by any SFENCE.VMA instruction
+ * executed by the hart since the speculative execution of the algorithm
+ * began.
+ * The quote above explicitly mentions SFENCE.VMA, but I assume the same
+ * holds for HFENCE.GVMA.
+ *
+ * Also, despite the fact that the spec states that when V=0 two-stage
+ * address translation is inactive:
+ * The current virtualization mode, denoted V, indicates whether the hart
+ * is currently executing in a guest. When V=1, the hart is either in
+ * virtual S-mode (VS-mode), or in virtual U-mode (VU-mode) atop a guest
+ * OS running in VS-mode. When V=0, the hart is either in M-mode, in
+ * HS-mode, or in U-mode atop an OS running in HS-mode. The
+ * virtualization mode also indicates whether two-stage address
+ * translation is active (V=1) or inactive (V=0).
+ * Nevertheless, the hgatp register is still considered active:
+ * The hgatp register is considered active for the purposes of
+ * the address-translation algorithm unless the effective privilege mode
+ * is U and hstatus.HU=0.
+ *
+ * Thereby some room for speculation is left even at this stage of boot,
+ * so the local TLB may already be polluted; hence flush all guest TLB entries.
+ */
+ local_hfence_gvma_all();
+}
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 483cdd7e17..87ee96bdb3 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -22,6 +22,7 @@
#include <asm/early_printk.h>
#include <asm/fixmap.h>
#include <asm/intc.h>
+#include <asm/p2m.h>
#include <asm/sbi.h>
#include <asm/setup.h>
#include <asm/traps.h>
@@ -148,6 +149,8 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
console_init_postirq();
+ gstage_mode_detect();
+
printk("All set up\n");
machine_halt();
--
2.51.0
* Re: [PATCH v4 01/18] xen/riscv: detect and initialize G-stage mode
2025-09-17 21:55 ` [PATCH v4 01/18] xen/riscv: detect and initialize G-stage mode Oleksii Kurochko
@ 2025-09-18 15:54 ` Jan Beulich
2025-09-24 11:31 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2025-09-18 15:54 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 17.09.2025 23:55, Oleksii Kurochko wrote:
> --- /dev/null
> +++ b/xen/arch/riscv/p2m.c
> @@ -0,0 +1,91 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
> +#include <xen/init.h>
> +#include <xen/lib.h>
> +#include <xen/macros.h>
> +#include <xen/sections.h>
> +
> +#include <asm/csr.h>
> +#include <asm/flushtlb.h>
> +#include <asm/riscv_encoding.h>
> +
> +unsigned long __ro_after_init gstage_mode;
> +
> +void __init gstage_mode_detect(void)
> +{
> + unsigned int mode_idx;
> +
> + const struct {
static and __initconst.
> + unsigned long mode;
Here and also for the global var: Why "long", when it's at most 4 bits?
> + unsigned int paging_levels;
> + const char *name;
More efficiently char[8]?
> + } modes[] = {
> + /*
> + * Based on the RISC-V spec:
> + * When SXLEN=32, the only other valid setting for MODE is Sv32,
The use of "other" is lacking some context here.
> + * a paged virtual-memory scheme described in Section 10.3.
Section numbers tend to change. Either disambiguate by also specifying
the doc version, or (preferably) give the section title instead.
> + * When SXLEN=64, three paged virtual-memory schemes are defined:
> + * Sv39, Sv48, and Sv57.
> + */
> +#ifdef CONFIG_RISCV_32
> + { HGATP_MODE_SV32X4, 2, "Sv32x4" }
> +#else
> + { HGATP_MODE_SV39X4, 3, "Sv39x4" },
> + { HGATP_MODE_SV48X4, 4, "Sv48x4" },
> + { HGATP_MODE_SV57X4, 5, "Sv57x4" },
> +#endif
> + };
> +
> + gstage_mode = HGATP_MODE_OFF;
> +
> + for ( mode_idx = 0; mode_idx < ARRAY_SIZE(modes); mode_idx++ )
> + {
> + unsigned long mode = modes[mode_idx].mode;
> +
> + csr_write(CSR_HGATP, MASK_INSR(mode, HGATP_MODE_MASK));
> +
> + if ( MASK_EXTR(csr_read(CSR_HGATP), HGATP_MODE_MASK) == mode )
> + {
> + gstage_mode = mode;
> + break;
> + }
> + }
> +
> + if ( gstage_mode == HGATP_MODE_OFF )
> + panic("Xen expects that G-stage won't be Bare mode\n");
> +
> + printk("%s: G-stage mode is %s\n", __func__, modes[mode_idx].name);
I don't think the function name matters here at all.
> --- a/xen/arch/riscv/setup.c
> +++ b/xen/arch/riscv/setup.c
> @@ -22,6 +22,7 @@
> #include <asm/early_printk.h>
> #include <asm/fixmap.h>
> #include <asm/intc.h>
> +#include <asm/p2m.h>
> #include <asm/sbi.h>
> #include <asm/setup.h>
> #include <asm/traps.h>
> @@ -148,6 +149,8 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
>
> console_init_postirq();
>
> + gstage_mode_detect();
I find it odd for something as fine grained as this to be called from top-
level start_xen(). Imo this wants to be a sub-function of whatever does
global paging and/or p2m preparations (or even more generally guest ones).
Jan
* Re: [PATCH v4 01/18] xen/riscv: detect and initialize G-stage mode
2025-09-18 15:54 ` Jan Beulich
@ 2025-09-24 11:31 ` Oleksii Kurochko
2025-09-24 15:00 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-24 11:31 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 9/18/25 5:54 PM, Jan Beulich wrote:
> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>> --- /dev/null
>> +++ b/xen/arch/riscv/p2m.c
>> @@ -0,0 +1,91 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +
>> +#include <xen/init.h>
>> +#include <xen/lib.h>
>> +#include <xen/macros.h>
>> +#include <xen/sections.h>
>> +
>> +#include <asm/csr.h>
>> +#include <asm/flushtlb.h>
>> +#include <asm/riscv_encoding.h>
>> +
>> +unsigned long __ro_after_init gstage_mode;
>> +
>> +void __init gstage_mode_detect(void)
>> +{
>> + unsigned int mode_idx;
>> +
>> + const struct {
> static and __initconst.
>
>> + unsigned long mode;
> Here and also for the global var: Why "long", when it's at most 4 bits?
No specific reason now. In the first version of this function they were used
directly to write a value to the CSR register, which is 'unsigned long'.
Considering that MASK_INSR() and MASK_EXTR() are used, 'char' should be enough
to describe the mode.
>
>> + unsigned int paging_levels;
>> + const char *name;
> More efficiently char[8]?
I wanted to be sure that the name will always have the correct length. But I agree
that char[8] is more efficient and the length can be checked "manually". I will
use char[8] instead of 'char *'.
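For illustration, the table might then end up looking roughly like this (a sketch
only; the exact field types chosen for v5 may differ, and the static/__initconst
placement follows the earlier remark):

/*
 * Illustrative sketch only, not the actual v5 code: static, __initconst,
 * a narrow "mode" field and a fixed-size name buffer.
 */
static const struct {
    unsigned char mode;            /* HGATP MODE value, at most 4 bits */
    unsigned char paging_levels;
    char name[8];                  /* "Sv57x4" plus NUL fits */
} modes[] __initconst = {
#ifdef CONFIG_RISCV_32
    { HGATP_MODE_SV32X4, 2, "Sv32x4" },
#else
    { HGATP_MODE_SV39X4, 3, "Sv39x4" },
    { HGATP_MODE_SV48X4, 4, "Sv48x4" },
    { HGATP_MODE_SV57X4, 5, "Sv57x4" },
#endif
};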
>
>> + } modes[] = {
>> + /*
>> + * Based on the RISC-V spec:
>> + * When SXLEN=32, the only other valid setting for MODE is Sv32,
> The use of "other" is lacking some context here.
I will add the following:
Bare mode is always supported, regardless of SXLEN.
>
>> + * a paged virtual-memory scheme described in Section 10.3.
> Section numbers tend to change. Either to disambiguate by also spcifying
> the doc version, or (preferably) you give the section title instead.
I will take that into account in the future. For now, I think this part of the
comment can just be dropped, as it doesn't matter here what the Sv32 scheme is.
>> --- a/xen/arch/riscv/setup.c
>> +++ b/xen/arch/riscv/setup.c
>> @@ -22,6 +22,7 @@
>> #include <asm/early_printk.h>
>> #include <asm/fixmap.h>
>> #include <asm/intc.h>
>> +#include <asm/p2m.h>
>> #include <asm/sbi.h>
>> #include <asm/setup.h>
>> #include <asm/traps.h>
>> @@ -148,6 +149,8 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
>>
>> console_init_postirq();
>>
>> + gstage_mode_detect();
> I find it odd for something as fine grained as this to be called from top-
> level start_xen(). Imo this wants to be a sub-function of whatever does
> global paging and/or p2m preparations (or even more generally guest ones).
It makes sense. I will move the call to gstage_mode_detect() into p2m_init()
when the latter is introduced.
Probably, I will move the current patch after p2m_init() is introduced to make
gstage_mode_detect() static function.
Thanks.
~ Oleksii
* Re: [PATCH v4 01/18] xen/riscv: detect and initialize G-stage mode
2025-09-24 11:31 ` Oleksii Kurochko
@ 2025-09-24 15:00 ` Oleksii Kurochko
2025-09-25 13:46 ` Jan Beulich
0 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-24 15:00 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 9/24/25 1:31 PM, Oleksii Kurochko wrote:
>>> --- a/xen/arch/riscv/setup.c
>>> +++ b/xen/arch/riscv/setup.c
>>> @@ -22,6 +22,7 @@
>>> #include <asm/early_printk.h>
>>> #include <asm/fixmap.h>
>>> #include <asm/intc.h>
>>> +#include <asm/p2m.h>
>>> #include <asm/sbi.h>
>>> #include <asm/setup.h>
>>> #include <asm/traps.h>
>>> @@ -148,6 +149,8 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
>>>
>>> console_init_postirq();
>>>
>>> + gstage_mode_detect();
>> I find it odd for something as fine grained as this to be called from top-
>> level start_xen(). Imo this wants to be a sub-function of whatever does
>> global paging and/or p2m preparations (or even more generally guest ones).
> It makes sense. I will move the call to gstage_mode_detect() into p2m_init()
> when the latter is introduced.
> Probably, I will move the current patch after p2m_init() is introduced to make
> gstage_mode_detect() static function.
Maybe putting gstage_mode_detect() into p2m_init() is not a good idea, since it
is called during domain creation. I am not sure there is any point in calling
gstage_mode_detect() each time.
It seems that gstage_mode_detect() should be called once during physical CPU
initialization.
A sub-function (riscv_hart_mm_init()? probably "riscv" should be dropped from
the name) could be added in setup.c and then called from start_xen(), but is a
separate sub-function really needed for something that will be called once per
pCPU initialization?
~ Oleksii
* Re: [PATCH v4 01/18] xen/riscv: detect and initialize G-stage mode
2025-09-24 15:00 ` Oleksii Kurochko
@ 2025-09-25 13:46 ` Jan Beulich
2025-09-26 7:30 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2025-09-25 13:46 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 24.09.2025 17:00, Oleksii Kurochko wrote:
>
> On 9/24/25 1:31 PM, Oleksii Kurochko wrote:
>>>> --- a/xen/arch/riscv/setup.c
>>>> +++ b/xen/arch/riscv/setup.c
>>>> @@ -22,6 +22,7 @@
>>>> #include <asm/early_printk.h>
>>>> #include <asm/fixmap.h>
>>>> #include <asm/intc.h>
>>>> +#include <asm/p2m.h>
>>>> #include <asm/sbi.h>
>>>> #include <asm/setup.h>
>>>> #include <asm/traps.h>
>>>> @@ -148,6 +149,8 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
>>>>
>>>> console_init_postirq();
>>>>
>>>> + gstage_mode_detect();
>>> I find it odd for something as fine grained as this to be called from top-
>>> level start_xen(). Imo this wants to be a sub-function of whatever does
>>> global paging and/or p2m preparations (or even more generally guest ones).
>> It makes sense. I will move the call to gstage_mode_detect() into p2m_init()
>> when the latter is introduced.
>> Probably, I will move the current patch after p2m_init() is introduced to make
>> gstage_mode_detect() static function.
>
> Maybe putting gstage_mode_detect() into p2m_init() is not a good idea, since it
> is called during domain creation. I am not sure there is any point in calling
> gstage_mode_detect() each time.
>
> It seems that gstage_mode_detect() should be called once during physical CPU
> initialization.
Indeed.
> A sub-function (riscv_hart_mm_init()? probably, riscv should be dropped from
> the name) could be added in setup.c and then called in start_xen(), but
> is it really needed a separate sub-function for something that will be called
> once per initialization of pCPU?
Counter question: Is this going to remain the only piece of global init that's
needed for P2M machinery? Right in the next patch you already add vmid_init()
as another top-level call.
Jan
* Re: [PATCH v4 01/18] xen/riscv: detect and initialize G-stage mode
2025-09-25 13:46 ` Jan Beulich
@ 2025-09-26 7:30 ` Oleksii Kurochko
0 siblings, 0 replies; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-26 7:30 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 9/25/25 3:46 PM, Jan Beulich wrote:
> On 24.09.2025 17:00, Oleksii Kurochko wrote:
>> On 9/24/25 1:31 PM, Oleksii Kurochko wrote:
>>>>> --- a/xen/arch/riscv/setup.c
>>>>> +++ b/xen/arch/riscv/setup.c
>>>>> @@ -22,6 +22,7 @@
>>>>> #include <asm/early_printk.h>
>>>>> #include <asm/fixmap.h>
>>>>> #include <asm/intc.h>
>>>>> +#include <asm/p2m.h>
>>>>> #include <asm/sbi.h>
>>>>> #include <asm/setup.h>
>>>>> #include <asm/traps.h>
>>>>> @@ -148,6 +149,8 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
>>>>>
>>>>> console_init_postirq();
>>>>>
>>>>> + gstage_mode_detect();
>>>> I find it odd for something as fine grained as this to be called from top-
>>>> level start_xen(). Imo this wants to be a sub-function of whatever does
>>>> global paging and/or p2m preparations (or even more generally guest ones).
>>> It makes sense. I will move the call to gstage_mode_detect() into p2m_init()
>>> when the latter is introduced.
>>> Probably, I will move the current patch after p2m_init() is introduced to make
>>> gstage_mode_detect() static function.
>> Maybe putting gstage_mode_detect() into p2m_init() is not a good idea, since it
>> is called during domain creation. I am not sure there is any point in calling
>> gstage_mode_detect() each time.
>>
>> It seems that gstage_mode_detect() should be called once during physical CPU
>> initialization.
> Indeed.
>
>> A sub-function (riscv_hart_mm_init()? probably, riscv should be dropped from
>> the name) could be added in setup.c and then called in start_xen(), but
>> is it really needed a separate sub-function for something that will be called
>> once per initialization of pCPU?
> Counter question: Is this going to remain the only piece of global init that's
> needed for P2M machinery? Right in the next patch you already add vmid_init()
> as another top-level call.
No, it isn't the only piece; at least gstage_mode_detect() and vmid_init() are
also needed for the P2M machinery.
Okay, then it would be better to introduce a sub-function now and re-use it later
for other pCPUs as well.
~ Oleksii
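For illustration only, the kind of sub-function discussed above could look roughly
like the sketch below; the name and the boot/secondary split are placeholders, not
what the series finally does:

/* Hypothetical grouping of the guest-MM setup done on the boot hart. */
void __init guest_mm_init(void)
{
    gstage_mode_detect();   /* global: probe the supported G-stage mode once */
    vmid_init();            /* per-hart: boot pCPU's VMID bookkeeping */
}

start_xen() would then make a single guest_mm_init() call, while secondary pCPUs
would only need the per-hart part (e.g. a vmid_init() call from their bring-up
path).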
* [PATCH v4 02/18] xen/riscv: introduce VMID allocation and management
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
2025-09-17 21:55 ` [PATCH v4 01/18] xen/riscv: detect and initialize G-stage mode Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-19 21:26 ` Jan Beulich
2025-09-17 21:55 ` [PATCH v4 03/18] xen/riscv: introduce things necessary for p2m initialization Oleksii Kurochko
` (15 subsequent siblings)
17 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
Current implementation is based on x86's way to allocate VMIDs:
VMIDs partition the physical TLB. In the current implementation VMIDs are
introduced to reduce the number of TLB flushes. Each time a guest-physical
address space changes, instead of flushing the TLB, a new VMID is
assigned. This reduces the number of TLB flushes to at most 1/#VMIDs.
The biggest advantage is that hot parts of the hypervisor's code and data
remain in the TLB.
VMIDs are a hart-local resource. As preemption of VMIDs is not possible,
VMIDs are assigned in a round-robin scheme. To minimize the overhead of
VMID invalidation, at the time of a TLB flush, VMIDs are tagged with a
64-bit generation. Only on a generation overflow does the code need to
invalidate all VMID information stored in the vCPUs which are run on the
specific physical processor. When this overflow appears VMID usage is
disabled to retain correctness.
Only minor changes are made compared to the x86 implementation.
These include using RISC-V-specific terminology, adding a check to ensure
the type used for storing the VMID has enough bits to hold VMIDLEN,
and introducing a new function vmidlen_detect() to determine the VMIDLEN
value, and renaming things connected to VMID enable/disable to "VMID use
enable/disable".
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- s/guest's virtual/guest-physical in the comment inside vmid.c
and in commit message.
- Drop x86-related numbers in the comment about "Sketch of the Implementation".
- s/__read_only/__ro_after_init in declaration of opt_vmid_enabled.
- s/hart_vmid_generation/generation.
- Update vmidlen_detect() to work with unsigned int type for vmid_bits
variable.
- Drop old variable in vmidlen_detect(), as there seems to be no reason
to restore the old value of hgatp with no guest running on a hart yet.
- Update the comment above local_hfence_gvma_all() in vmidlen_detect().
- s/max_availalbe_bits/max_available_bits.
- use BITS_PER_BYTE, instead of "<< 3".
- Add BUILD_BUG_ON() instead of a run-time check that the amount of set bits
can be held in vmid_data->max_vmid.
- Apply changes from the patch "x86/HVM: polish hvm_asid_init() a little" here
(changes connected to g_disabled) with the following minor changes:
Update the printk() message to "VMIDs use is...".
Rename g_disabled to g_vmid_used.
- Rename member 'disabled' of vmid_data structure to used.
- Use gstage_mode to properly detect VMIDLEN.
---
Changes in V3:
- Reimplement VMID allocation similar to what x86 has implemented.
---
Changes in V2:
- New patch.
---
xen/arch/riscv/Makefile | 1 +
xen/arch/riscv/include/asm/domain.h | 6 +
xen/arch/riscv/include/asm/vmid.h | 8 ++
xen/arch/riscv/setup.c | 3 +
xen/arch/riscv/vmid.c | 193 ++++++++++++++++++++++++++++
5 files changed, 211 insertions(+)
create mode 100644 xen/arch/riscv/include/asm/vmid.h
create mode 100644 xen/arch/riscv/vmid.c
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 264e265699..e2499210c8 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -17,6 +17,7 @@ obj-y += smpboot.o
obj-y += stubs.o
obj-y += time.o
obj-y += traps.o
+obj-y += vmid.o
obj-y += vm_event.o
$(TARGET): $(TARGET)-syms
diff --git a/xen/arch/riscv/include/asm/domain.h b/xen/arch/riscv/include/asm/domain.h
index c3d965a559..aac1040658 100644
--- a/xen/arch/riscv/include/asm/domain.h
+++ b/xen/arch/riscv/include/asm/domain.h
@@ -5,6 +5,11 @@
#include <xen/xmalloc.h>
#include <public/hvm/params.h>
+struct vcpu_vmid {
+ uint64_t generation;
+ uint16_t vmid;
+};
+
struct hvm_domain
{
uint64_t params[HVM_NR_PARAMS];
@@ -14,6 +19,7 @@ struct arch_vcpu_io {
};
struct arch_vcpu {
+ struct vcpu_vmid vmid;
};
struct arch_domain {
diff --git a/xen/arch/riscv/include/asm/vmid.h b/xen/arch/riscv/include/asm/vmid.h
new file mode 100644
index 0000000000..2f1f7ec9a2
--- /dev/null
+++ b/xen/arch/riscv/include/asm/vmid.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef ASM_RISCV_VMID_H
+#define ASM_RISCV_VMID_H
+
+void vmid_init(void);
+
+#endif /* ASM_RISCV_VMID_H */
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 87ee96bdb3..3c9e6a9ee3 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -26,6 +26,7 @@
#include <asm/sbi.h>
#include <asm/setup.h>
#include <asm/traps.h>
+#include <asm/vmid.h>
/* Xen stack for bringing up the first CPU. */
unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
@@ -151,6 +152,8 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
gstage_mode_detect();
+ vmid_init();
+
printk("All set up\n");
machine_halt();
diff --git a/xen/arch/riscv/vmid.c b/xen/arch/riscv/vmid.c
new file mode 100644
index 0000000000..b94d082c82
--- /dev/null
+++ b/xen/arch/riscv/vmid.c
@@ -0,0 +1,193 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include <xen/domain.h>
+#include <xen/init.h>
+#include <xen/sections.h>
+#include <xen/lib.h>
+#include <xen/param.h>
+#include <xen/percpu.h>
+
+#include <asm/atomic.h>
+#include <asm/csr.h>
+#include <asm/flushtlb.h>
+#include <asm/p2m.h>
+
+/* Xen command-line option to enable VMIDs */
+static bool __ro_after_init opt_vmid_use_enabled = true;
+boolean_param("vmid", opt_vmid_use_enabled);
+
+/*
+ * VMIDs partition the physical TLB. In the current implementation VMIDs are
+ * introduced to reduce the number of TLB flushes. Each time a guest-physical
+ * address space changes, instead of flushing the TLB, a new VMID is
+ * assigned. This reduces the number of TLB flushes to at most 1/#VMIDs.
+ * The biggest advantage is that hot parts of the hypervisor's code and data
+ * remain in the TLB.
+ *
+ * Sketch of the Implementation:
+ *
+ * VMIDs are a hart-local resource. As preemption of VMIDs is not possible,
+ * VMIDs are assigned in a round-robin scheme. To minimize the overhead of
+ * VMID invalidation, at the time of a TLB flush, VMIDs are tagged with a
+ * 64-bit generation. Only on a generation overflow the code needs to
+ * invalidate all VMID information stored at the VCPUs with are run on the
+ * specific physical processor. When this overflow appears VMID usage is
+ * disabled to retain correctness.
+ */
+
+/* Per-Hart VMID management. */
+struct vmid_data {
+ uint64_t generation;
+ uint16_t next_vmid;
+ uint16_t max_vmid;
+ bool used;
+};
+
+static DEFINE_PER_CPU(struct vmid_data, vmid_data);
+
+static unsigned int vmidlen_detect(void)
+{
+ unsigned int vmid_bits;
+
+ /*
+ * According to the RISC-V Privileged Architecture Spec:
+ * When MODE=Bare, guest physical addresses are equal to supervisor
+ * physical addresses, and there is no further memory protection
+ * for a guest virtual machine beyond the physical memory protection
+ * scheme described in Section 3.7.
+ * In this case, the remaining fields in hgatp must be set to zeros.
+ * Thereby it is necessary to set gstage_mode not equal to Bare.
+ */
+ ASSERT(gstage_mode != HGATP_MODE_OFF);
+ csr_write(CSR_HGATP,
+ MASK_INSR(gstage_mode, HGATP_MODE_MASK) | HGATP_VMID_MASK);
+ vmid_bits = MASK_EXTR(csr_read(CSR_HGATP), HGATP_VMID_MASK);
+ vmid_bits = flsl(vmid_bits);
+ csr_write(CSR_HGATP, _AC(0, UL));
+
+ /*
+ * From RISC-V spec:
+ * Speculative executions of the address-translation algorithm behave as
+ * non-speculative executions of the algorithm do, except that they must
+ * not set the dirty bit for a PTE, they must not trigger an exception,
+ * and they must not create address-translation cache entries if those
+ * entries would have been invalidated by any SFENCE.VMA instruction
+ * executed by the hart since the speculative execution of the algorithm
+ * began.
+ *
+ * Also, despite the fact that the spec states that when V=0 two-stage
+ * address translation is inactive:
+ * The current virtualization mode, denoted V, indicates whether the hart
+ * is currently executing in a guest. When V=1, the hart is either in
+ * virtual S-mode (VS-mode), or in virtual U-mode (VU-mode) atop a guest
+ * OS running in VS-mode. When V=0, the hart is either in M-mode, in
+ * HS-mode, or in U-mode atop an OS running in HS-mode. The
+ * virtualization mode also indicates whether two-stage address
+ * translation is active (V=1) or inactive (V=0).
+ * Nevertheless, the hgatp register is still considered active:
+ * The hgatp register is considered active for the purposes of
+ * the address-translation algorithm unless the effective privilege mode
+ * is U and hstatus.HU=0.
+ *
+ * Thereby some room for speculation is left even at this stage of boot,
+ * so the local TLB may already be polluted; hence flush all guest TLB entries.
+ */
+ local_hfence_gvma_all();
+
+ return vmid_bits;
+}
+
+void vmid_init(void)
+{
+ static int8_t g_vmid_used = -1;
+
+ unsigned int vmid_len = vmidlen_detect();
+ struct vmid_data *data = &this_cpu(vmid_data);
+
+ BUILD_BUG_ON((HGATP_VMID_MASK >> HGATP_VMID_SHIFT) >
+ (BIT((sizeof(data->max_vmid) * BITS_PER_BYTE), UL) - 1));
+
+ data->max_vmid = BIT(vmid_len, U) - 1;
+ data->used = !opt_vmid_use_enabled || (vmid_len <= 1);
+
+ if ( g_vmid_used < 0 )
+ {
+ g_vmid_used = data->used;
+ printk("VMIDs use is %sabled\n", data->used ? "dis" : "en");
+ }
+ else if ( g_vmid_used != data->used )
+ printk("CPU%u: VMIDs use is %sabled\n", smp_processor_id(),
+ data->used ? "dis" : "en");
+
+ /* Zero indicates 'invalid generation', so we start the count at one. */
+ data->generation = 1;
+
+ /* Zero indicates 'VMIDs use disabled', so we start the count at one. */
+ data->next_vmid = 1;
+}
+
+void vcpu_vmid_flush_vcpu(struct vcpu *v)
+{
+ write_atomic(&v->arch.vmid.generation, 0);
+}
+
+void vmid_flush_hart(void)
+{
+ struct vmid_data *data = &this_cpu(vmid_data);
+
+ if ( data->used )
+ return;
+
+ if ( likely(++data->generation != 0) )
+ return;
+
+ /*
+ * VMID generations are 64 bit. Overflow of generations never happens.
+ * For safety, we simply disable VMIDs, so correctness is established; it
+ * only runs a bit slower.
+ */
+ printk("%s: VMID generation overrun. Disabling VMIDs.\n", __func__);
+ data->used = 1;
+}
+
+bool vmid_handle_vmenter(struct vcpu_vmid *vmid)
+{
+ struct vmid_data *data = &this_cpu(vmid_data);
+
+ /* Test if VCPU has valid VMID. */
+ if ( read_atomic(&vmid->generation) == data->generation )
+ return 0;
+
+ /* If there are no free VMIDs, need to go to a new generation. */
+ if ( unlikely(data->next_vmid > data->max_vmid) )
+ {
+ vmid_flush_hart();
+ data->next_vmid = 1;
+ if ( data->used )
+ goto disabled;
+ }
+
+ /* Now guaranteed to be a free VMID. */
+ vmid->vmid = data->next_vmid++;
+ write_atomic(&vmid->generation, data->generation);
+
+ /*
+ * When we assign VMID 1, flush all TLB entries as we are starting a new
+ * generation, and all old VMID allocations are now stale.
+ */
+ return (vmid->vmid == 1);
+
+ disabled:
+ vmid->vmid = 0;
+ return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
--
2.51.0
* Re: [PATCH v4 02/18] xen/riscv: introduce VMID allocation and management
2025-09-17 21:55 ` [PATCH v4 02/18] xen/riscv: introduce VMID allocation and manegement Oleksii Kurochko
@ 2025-09-19 21:26 ` Jan Beulich
2025-09-24 14:25 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2025-09-19 21:26 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 17.09.2025 23:55, Oleksii Kurochko wrote:
> @@ -151,6 +152,8 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
>
> gstage_mode_detect();
>
> + vmid_init();
Like for the earlier patch, I'm not convinced this is a function good
to call from the top-level start_xen(). The two functions sitting side
by side actually demonstrates the scalability issue pretty well.
> --- /dev/null
> +++ b/xen/arch/riscv/vmid.c
> @@ -0,0 +1,193 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
> +#include <xen/domain.h>
> +#include <xen/init.h>
> +#include <xen/sections.h>
> +#include <xen/lib.h>
> +#include <xen/param.h>
> +#include <xen/percpu.h>
> +
> +#include <asm/atomic.h>
> +#include <asm/csr.h>
> +#include <asm/flushtlb.h>
> +#include <asm/p2m.h>
> +
> +/* Xen command-line option to enable VMIDs */
> +static bool __ro_after_init opt_vmid_use_enabled = true;
> +boolean_param("vmid", opt_vmid_use_enabled);
Is there a particular reason to not have the variable be simply opt_vmid,
properly in sync with the command line option?
> +void vmid_init(void)
> +{
> + static int8_t g_vmid_used = -1;
> +
> + unsigned int vmid_len = vmidlen_detect();
> + struct vmid_data *data = &this_cpu(vmid_data);
> +
> + BUILD_BUG_ON((HGATP_VMID_MASK >> HGATP_VMID_SHIFT) >
> + (BIT((sizeof(data->max_vmid) * BITS_PER_BYTE), UL) - 1));
> +
> + data->max_vmid = BIT(vmid_len, U) - 1;
> + data->used = !opt_vmid_use_enabled || (vmid_len <= 1);
Since you inverted the sense of variable and field, you also need to invert
the expression here:
data->used = opt_vmid_use_enabled && (vmid_len > 1);
> + if ( g_vmid_used < 0 )
> + {
> + g_vmid_used = data->used;
> + printk("VMIDs use is %sabled\n", data->used ? "dis" : "en");
Same here - "dis" and "en" need to switch places.
> + }
> + else if ( g_vmid_used != data->used )
> + printk("CPU%u: VMIDs use is %sabled\n", smp_processor_id(),
> + data->used ? "dis" : "en");
And again here.
> +void vcpu_vmid_flush_vcpu(struct vcpu *v)
Any reason to have two "vcpu" in the name?
> +{
> + write_atomic(&v->arch.vmid.generation, 0);
> +}
> +
> +void vmid_flush_hart(void)
> +{
> + struct vmid_data *data = &this_cpu(vmid_data);
> +
> + if ( data->used )
> + return;
Again the sense needs reversing.
> + if ( likely(++data->generation != 0) )
> + return;
> +
> + /*
> + * VMID generations are 64 bit. Overflow of generations never happens.
> + * For safety, we simply disable ASIDs, so correctness is established; it
> + * only runs a bit slower.
> + */
> + printk("%s: VMID generation overrun. Disabling VMIDs.\n", __func__);
> + data->used = 1;
And yet again.
> +bool vmid_handle_vmenter(struct vcpu_vmid *vmid)
> +{
> + struct vmid_data *data = &this_cpu(vmid_data);
> +
> + /* Test if VCPU has valid VMID. */
> + if ( read_atomic(&vmid->generation) == data->generation )
> + return 0;
> +
> + /* If there are no free VMIDs, need to go to a new generation. */
> + if ( unlikely(data->next_vmid > data->max_vmid) )
> + {
> + vmid_flush_hart();
> + data->next_vmid = 1;
> + if ( data->used )
> + goto disabled;
And yet another time.
Jan
* Re: [PATCH v4 02/18] xen/riscv: introduce VMID allocation and management
2025-09-19 21:26 ` Jan Beulich
@ 2025-09-24 14:25 ` Oleksii Kurochko
2025-09-25 13:53 ` Jan Beulich
0 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-24 14:25 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 9/19/25 11:26 PM, Jan Beulich wrote:
> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>> @@ -151,6 +152,8 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
>>
>> gstage_mode_detect();
>>
>> + vmid_init();
> Like for the earlier patch, I'm not convinced this is a function good
> to call from the top-level start_xen(). The two functions sitting side
> by side actually demonstrates the scalability issue pretty well.
In the case of vmid_init(), it could be a good place to call it here since
vmid_init() is expected to be called once when a pCPU is booted. For the boot
CPU, all "setup" functions are called in start_xen(), so vmid_init() could
potentially be called there as well.
For other (non-boot) CPUs, vmid_init() could be called somewhere in the
__cpu_up() code or at the CPU’s entry point.
>> --- /dev/null
>> +++ b/xen/arch/riscv/vmid.c
>> @@ -0,0 +1,193 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +
>> +#include <xen/domain.h>
>> +#include <xen/init.h>
>> +#include <xen/sections.h>
>> +#include <xen/lib.h>
>> +#include <xen/param.h>
>> +#include <xen/percpu.h>
>> +
>> +#include <asm/atomic.h>
>> +#include <asm/csr.h>
>> +#include <asm/flushtlb.h>
>> +#include <asm/p2m.h>
>> +
>> +/* Xen command-line option to enable VMIDs */
>> +static bool __ro_after_init opt_vmid_use_enabled = true;
>> +boolean_param("vmid", opt_vmid_use_enabled);
> Is there a particular reason to not have the variable be simply opt_vmid,
> properly in sync with the command line option?
There is no specific reason for that; I just made it in sync with x86.
opt_vmid could be used instead. I will do s/opt_vmid_use_enabled/opt_vmid.
>> +void vcpu_vmid_flush_vcpu(struct vcpu *v)
> An reason to have two "vcpu" in the name?
The first "vcpu" should be really dropped.
Thanks.
~ Oleksii
* Re: [PATCH v4 02/18] xen/riscv: introduce VMID allocation and management
2025-09-24 14:25 ` Oleksii Kurochko
@ 2025-09-25 13:53 ` Jan Beulich
0 siblings, 0 replies; 62+ messages in thread
From: Jan Beulich @ 2025-09-25 13:53 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 24.09.2025 16:25, Oleksii Kurochko wrote:
>
> On 9/19/25 11:26 PM, Jan Beulich wrote:
>> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>>> @@ -151,6 +152,8 @@ void __init noreturn start_xen(unsigned long bootcpu_id,
>>>
>>> gstage_mode_detect();
>>>
>>> + vmid_init();
>> Like for the earlier patch, I'm not convinced this is a function good
>> to call from the top-level start_xen(). The two functions sitting side
>> by side actually demonstrates the scalability issue pretty well.
>
> In the case of vmid_init(), it could be a good place to call it here since
> vmid_init() is expected to be called once when a pCPU is booted. For the boot
> CPU, all "setup" functions are called in start_xen(), so vmid_init() could
> potentially be called there as well.
>
> For other (non-boot) CPUs, vmid_init() could be called somewhere in the
> __cpu_up() code or at the CPU’s entry point.
And then perhaps many more functions. This simply doesn't scale well. See
how we have hvm_enable() for the boot CPU part of this, and then
{svm,vmx}_cpu_up() for the secondary CPUs.
Jan
* [PATCH v4 03/18] xen/riscv: introduce things necessary for p2m initialization
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
2025-09-17 21:55 ` [PATCH v4 01/18] xen/riscv: detect and initialize G-stage mode Oleksii Kurochko
2025-09-17 21:55 ` [PATCH v4 02/18] xen/riscv: introduce VMID allocation and management Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-19 21:43 ` Jan Beulich
2025-09-17 21:55 ` [PATCH v4 04/18] xen/riscv: construct the P2M pages pool for guests Oleksii Kurochko
` (14 subsequent siblings)
17 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
Introduce the following things:
- Update p2m_domain structure, which describes per-p2m-table state, with:
- lock to protect updates to p2m.
- pool with pages used to construct p2m.
- back pointer to domain structure.
- p2m_init() to initialize members introduced in p2m_domain structure.
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- Move an introduction of clean_pte member of p2m_domain structure to the
patch where it is started to be used:
xen/riscv: add root page table allocation
- Add prototype of p2m_init() to asm/p2m.h.
---
Changes in V3:
- s/p2m_type/p2m_types.
- Drop init. of p2m->clean_pte in p2m_init() as CONFIG_HAS_PASSTHROUGH is
going to be selected unconditionally. Plus CONFIG_HAS_PASSTHROUGH isn't
ready to be used for RISC-V.
Add compilation error to not forget to init p2m->clean_pte.
- Move defintion of p2m->domain up in p2m_init().
- Add iommu_use_hap_pt() when p2m->clean_pte is initialized.
- Add the comment above p2m_types member of p2m_domain struct.
- Add need_flush member to p2m_domain structure.
- Move introduction of p2m_write_(un)lock() and p2m_tlb_flush_sync()
to the patch where they are really used:
xen/riscv: implement guest_physmap_add_entry() for mapping GFNs to MFN
- Add p2m member to arch_domain structure.
- Drop p2m_types from struct p2m_domain as P2M type for PTE will be stored
differently.
- Drop default_access as it isn't going to be used for now.
- Move defintion of p2m_is_write_locked() to "implement function to map memory
in guest p2m" where it is really used.
---
Changes in V2:
- Use the earlier introduced sbi_remote_hfence_gvma_vmid() for proper implementation
of p2m_force_tlb_flush_sync() as TLB flushing needs to happen for each pCPU
which potentially has cached a mapping, what is tracked by d->dirty_cpumask.
- Drop unnecessary blanks.
- Fix code style for # of pre-processor directive.
- Drop max_mapped_gfn and lowest_mapped_gfn as they aren't used now.
- [p2m_init()] Set p2m->clean_pte=false if CONFIG_HAS_PASSTHROUGH=n.
- [p2m_init()] Update the comment above p2m->domain = d;
- Drop p2m->need_flush as it seems to be always true for RISC-V and as a
consequence drop p2m_tlb_flush_sync().
- Move to separate patch an introduction of root page table allocation.
---
xen/arch/riscv/include/asm/domain.h | 5 +++++
xen/arch/riscv/include/asm/p2m.h | 33 +++++++++++++++++++++++++++++
xen/arch/riscv/p2m.c | 20 +++++++++++++++++
3 files changed, 58 insertions(+)
diff --git a/xen/arch/riscv/include/asm/domain.h b/xen/arch/riscv/include/asm/domain.h
index aac1040658..e688980efa 100644
--- a/xen/arch/riscv/include/asm/domain.h
+++ b/xen/arch/riscv/include/asm/domain.h
@@ -5,6 +5,8 @@
#include <xen/xmalloc.h>
#include <public/hvm/params.h>
+#include <asm/p2m.h>
+
struct vcpu_vmid {
uint64_t generation;
uint16_t vmid;
@@ -24,6 +26,9 @@ struct arch_vcpu {
struct arch_domain {
struct hvm_domain hvm;
+
+ /* Virtual MMU */
+ struct p2m_domain p2m;
};
#include <xen/sched.h>
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 9d4a5d6a2e..2672dcdecb 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -3,6 +3,9 @@
#define ASM__RISCV__P2M_H
#include <xen/errno.h>
+#include <xen/mm.h>
+#include <xen/rwlock.h>
+#include <xen/types.h>
#include <asm/page-bits.h>
@@ -10,6 +13,34 @@ extern unsigned long gstage_mode;
#define paddr_bits PADDR_BITS
+/* Get host p2m table */
+#define p2m_get_hostp2m(d) (&(d)->arch.p2m)
+
+/* Per-p2m-table state */
+struct p2m_domain {
+ /*
+ * Lock that protects updates to the p2m.
+ */
+ rwlock_t lock;
+
+ /* Pages used to construct the p2m */
+ struct page_list_head pages;
+
+ /* Back pointer to domain */
+ struct domain *domain;
+
+ /*
+ * P2M updates may require TLBs to be flushed (invalidated).
+ *
+ * Flushes may be deferred by setting 'need_flush' and then flushing
+ * when the p2m write lock is released.
+ *
+ * If an immediate flush is required (e.g., if a super page is
+ * shattered), call p2m_tlb_flush_sync().
+ */
+ bool need_flush;
+};
+
/*
* List of possible type for each page in the p2m entry.
* The number of available bit per page in the pte for this purpose is 2 bits.
@@ -92,6 +123,8 @@ static inline bool arch_acquire_resource_check(struct domain *d)
void gstage_mode_detect(void);
+int p2m_init(struct domain *d);
+
#endif /* ASM__RISCV__P2M_H */
/*
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 56113a2f7a..70f9e97ab6 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -3,6 +3,10 @@
#include <xen/init.h>
#include <xen/lib.h>
#include <xen/macros.h>
+#include <xen/mm.h>
+#include <xen/paging.h>
+#include <xen/rwlock.h>
+#include <xen/sched.h>
#include <xen/sections.h>
#include <asm/csr.h>
@@ -89,3 +93,19 @@ void __init gstage_mode_detect(void)
*/
local_hfence_gvma_all();
}
+
+int p2m_init(struct domain *d)
+{
+ struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+ /*
+ * "Trivial" initialisation is now complete. Set the backpointer so the
+ * users of the p2m can get access to the domain structure.
+ */
+ p2m->domain = d;
+
+ rwlock_init(&p2m->lock);
+ INIT_PAGE_LIST_HEAD(&p2m->pages);
+
+ return 0;
+}
--
2.51.0
* Re: [PATCH v4 03/18] xen/riscv: introduce things necessary for p2m initialization
2025-09-17 21:55 ` [PATCH v4 03/18] xen/riscv: introduce things necessary for p2m initialization Oleksii Kurochko
@ 2025-09-19 21:43 ` Jan Beulich
0 siblings, 0 replies; 62+ messages in thread
From: Jan Beulich @ 2025-09-19 21:43 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 17.09.2025 23:55, Oleksii Kurochko wrote:
> Introduce the following things:
> - Update p2m_domain structure, which describe per p2m-table state, with:
> - lock to protect updates to p2m.
> - pool with pages used to construct p2m.
> - back pointer to domain structure.
> - p2m_init() to initalize members introduced in p2m_domain structure.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
* [PATCH v4 04/18] xen/riscv: construct the P2M pages pool for guests
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
` (2 preceding siblings ...)
2025-09-17 21:55 ` [PATCH v4 03/18] xen/riscv: introduce things necessary for p2m initialization Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-17 21:55 ` [PATCH v4 05/18] xen/riscv: add root page table allocation Oleksii Kurochko
` (13 subsequent siblings)
17 siblings, 0 replies; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
Implement p2m_set_allocation() to construct p2m pages pool for guests
based on the required number of pages.
This is implemented by:
- Adding a `struct paging_domain` which contains a freelist, a
counter variable and a spinlock to `struct arch_domain` to
indicate the free p2m pages and the number of p2m total pages in
the p2m pages pool.
- Adding a helper `p2m_set_allocation` to set the p2m pages pool
size. This helper should be called before allocating memory for
a guest and is called from domain_p2m_set_allocation(), the latter
is a part of common dom0less code.
- Adding implementation of paging_freelist_adjust() and
paging_domain_init().
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Changes in V4:
- s/paging_freelist_init/paging_freelist_adjust.
- Add empty line between definiton of paging_freelist_adjust()
and paging_domain_init().
- Update commit message.
- Add Acked-by: Jan Beulich <jbeulich@suse.com>.
---
Changes in v3:
- Drop usage of p2m_ prefix inside struct paging_domain().
- Introduce paging_domain_init() to init paging struct.
---
Changes in v2:
- Drop the comment above inclusion of <xen/event.h> in riscv/p2m.c.
- Use ACCESS_ONCE() for lhs and rhs for the expressions in
p2m_set_allocation().
---
xen/arch/riscv/Makefile | 1 +
xen/arch/riscv/include/asm/Makefile | 1 -
xen/arch/riscv/include/asm/domain.h | 12 ++++++
xen/arch/riscv/include/asm/paging.h | 13 ++++++
xen/arch/riscv/p2m.c | 18 ++++++++
xen/arch/riscv/paging.c | 65 +++++++++++++++++++++++++++++
6 files changed, 109 insertions(+), 1 deletion(-)
create mode 100644 xen/arch/riscv/include/asm/paging.h
create mode 100644 xen/arch/riscv/paging.c
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index e2499210c8..6b912465b9 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -6,6 +6,7 @@ obj-y += imsic.o
obj-y += intc.o
obj-y += irq.o
obj-y += mm.o
+obj-y += paging.o
obj-y += pt.o
obj-y += p2m.o
obj-$(CONFIG_RISCV_64) += riscv64/
diff --git a/xen/arch/riscv/include/asm/Makefile b/xen/arch/riscv/include/asm/Makefile
index bfdf186c68..3824f31c39 100644
--- a/xen/arch/riscv/include/asm/Makefile
+++ b/xen/arch/riscv/include/asm/Makefile
@@ -6,7 +6,6 @@ generic-y += hardirq.h
generic-y += hypercall.h
generic-y += iocap.h
generic-y += irq-dt.h
-generic-y += paging.h
generic-y += percpu.h
generic-y += perfc_defn.h
generic-y += random.h
diff --git a/xen/arch/riscv/include/asm/domain.h b/xen/arch/riscv/include/asm/domain.h
index e688980efa..316e7c6c84 100644
--- a/xen/arch/riscv/include/asm/domain.h
+++ b/xen/arch/riscv/include/asm/domain.h
@@ -2,6 +2,8 @@
#ifndef ASM__RISCV__DOMAIN_H
#define ASM__RISCV__DOMAIN_H
+#include <xen/mm.h>
+#include <xen/spinlock.h>
#include <xen/xmalloc.h>
#include <public/hvm/params.h>
@@ -24,11 +26,21 @@ struct arch_vcpu {
struct vcpu_vmid vmid;
};
+struct paging_domain {
+ spinlock_t lock;
+ /* Free pages from the pre-allocated pool */
+ struct page_list_head freelist;
+ /* Number of pages from the pre-allocated pool */
+ unsigned long total_pages;
+};
+
struct arch_domain {
struct hvm_domain hvm;
/* Virtual MMU */
struct p2m_domain p2m;
+
+ struct paging_domain paging;
};
#include <xen/sched.h>
diff --git a/xen/arch/riscv/include/asm/paging.h b/xen/arch/riscv/include/asm/paging.h
new file mode 100644
index 0000000000..98d8b06d45
--- /dev/null
+++ b/xen/arch/riscv/include/asm/paging.h
@@ -0,0 +1,13 @@
+#ifndef ASM_RISCV_PAGING_H
+#define ASM_RISCV_PAGING_H
+
+#include <asm-generic/paging.h>
+
+struct domain;
+
+int paging_domain_init(struct domain *d);
+
+int paging_freelist_adjust(struct domain *d, unsigned long pages,
+ bool *preempted);
+
+#endif /* ASM_RISCV_PAGING_H */
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 70f9e97ab6..dc0f2b2a23 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -11,6 +11,7 @@
#include <asm/csr.h>
#include <asm/flushtlb.h>
+#include <asm/paging.h>
#include <asm/riscv_encoding.h>
unsigned long __ro_after_init gstage_mode;
@@ -104,8 +105,25 @@ int p2m_init(struct domain *d)
*/
p2m->domain = d;
+ paging_domain_init(d);
+
rwlock_init(&p2m->lock);
INIT_PAGE_LIST_HEAD(&p2m->pages);
return 0;
}
+
+/*
+ * Set the pool of pages to the required number of pages.
+ * Returns 0 for success, non-zero for failure.
+ * Call with d->arch.paging.lock held.
+ */
+int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
+{
+ int rc;
+
+ if ( (rc = paging_freelist_adjust(d, pages, preempted)) )
+ return rc;
+
+ return 0;
+}
diff --git a/xen/arch/riscv/paging.c b/xen/arch/riscv/paging.c
new file mode 100644
index 0000000000..2df8de033b
--- /dev/null
+++ b/xen/arch/riscv/paging.c
@@ -0,0 +1,65 @@
+#include <xen/event.h>
+#include <xen/lib.h>
+#include <xen/mm.h>
+#include <xen/sched.h>
+#include <xen/spinlock.h>
+
+int paging_freelist_adjust(struct domain *d, unsigned long pages,
+ bool *preempted)
+{
+ struct page_info *pg;
+
+ ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+ for ( ; ; )
+ {
+ if ( d->arch.paging.total_pages < pages )
+ {
+ /* Need to allocate more memory from domheap */
+ pg = alloc_domheap_page(d, MEMF_no_owner);
+ if ( pg == NULL )
+ {
+ printk(XENLOG_ERR "Failed to allocate pages.\n");
+ return -ENOMEM;
+ }
+ ACCESS_ONCE(d->arch.paging.total_pages)++;
+ page_list_add_tail(pg, &d->arch.paging.freelist);
+ }
+ else if ( d->arch.paging.total_pages > pages )
+ {
+ /* Need to return memory to domheap */
+ pg = page_list_remove_head(&d->arch.paging.freelist);
+ if ( pg )
+ {
+ ACCESS_ONCE(d->arch.paging.total_pages)--;
+ free_domheap_page(pg);
+ }
+ else
+ {
+ printk(XENLOG_ERR
+ "Failed to free pages, freelist is empty.\n");
+ return -ENOMEM;
+ }
+ }
+ else
+ break;
+
+ /* Check to see if we need to yield and try again */
+ if ( preempted && general_preempt_check() )
+ {
+ *preempted = true;
+ return -ERESTART;
+ }
+ }
+
+ return 0;
+}
+
+/* Domain paging struct initialization. */
+int paging_domain_init(struct domain *d)
+{
+ spin_lock_init(&d->arch.paging.lock);
+ INIT_PAGE_LIST_HEAD(&d->arch.paging.freelist);
+
+ return 0;
+}
--
2.51.0
^ permalink raw reply related [flat|nested] 62+ messages in thread* [PATCH v4 05/18] xen/riscv: add root page table allocation
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
` (3 preceding siblings ...)
2025-09-17 21:55 ` [PATCH v4 04/18] xen/riscv: construct the P2M pages pool for guests Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-19 22:14 ` Jan Beulich
2025-09-17 21:55 ` [PATCH v4 06/18] xen/riscv: introduce pte_{set,get}_mfn() Oleksii Kurochko
` (12 subsequent siblings)
17 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
Introduce support for allocating and initializing the root page table
required for RISC-V stage-2 address translation.
To implement root page table allocation the following is introduced:
- p2m_get_clean_page() and p2m_alloc_root_table(), p2m_allocate_root()
helpers to allocate and zero a 16 KiB root page table, as mandated
by the RISC-V privileged specification for Sv32x4/Sv39x4/Sv48x4/Sv57x4
modes.
- Update p2m_init() to initialize p2m_root_order.
- Add maddr_to_page() and page_to_maddr() macros for easier address
manipulation.
- Introduce paging_ret_pages_to_domheap() to return some pages before
allocating the 16 KiB of pages for the root page table.
- Allocate root p2m table after p2m pool is initialized.
- Add construct_hgatp() to construct the hgatp register value based on
p2m->root, gstage_mode and the VMID.
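For illustration only (not part of this patch; vcpu_load_hgatp() is a made-up
name for a future context-switch path), the pieces fit together roughly as:

    /* Sketch: program the G-stage translation for the current pCPU. */
    static void vcpu_load_hgatp(struct p2m_domain *p2m, uint16_t vmid)
    {
        unsigned long hgatp = construct_hgatp(p2m, vmid);

        csr_write(CSR_HGATP, hgatp);
        /* TLB maintenance is deferred until the VMID gets re-used. */
    }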
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- Drop hgatp_mode from p2m_domain as gstage_mode was introduced and
initialized in an earlier patch. So use gstage_mode instead.
- s/GUEST_ROOT_PAGE_TABLE_SIZE/GSTAGE_ROOT_PAGE_TABLE_SIZE.
- Drop p2m_root_order and re-define P2M_ROOT_ORDER:
#define P2M_ROOT_ORDER (ilog2(GSTAGE_ROOT_PAGE_TABLE_SIZE) - PAGE_SHIFT)
- Update implementation of construct_hgatp(): use introduced gstage_mode
and use MASK_INSRT() to construct ppn value.
- Drop nr_root_pages variable inside p2m_alloc_root_table().
- Update the printk's message inside paging_ret_pages_to_domheap().
- Add an introduction of clean_pte member of p2m_domain structure to this
patch as it starts to be used here.
Rename clean_pte to clean_dcache.
- Drop p2m_allocate_root() function as it is going to be used only in one
place.
- Propagate rc from p2m_alloc_root_table() in p2m_set_allocation().
- Return P2M_ROOT_PAGES to the freelist in case allocation of the root page
table failed.
- Add allocated root table pages to the p2m->pages pool so the usage of pages
can be properly taken into account.
---
Changes in v3:
- Drop inserting of p2m->vmid in hgatp_from_page() as now vmid is allocated
per-CPU, not per-domain, so it will be inserted later somewhere in
context_switch or before returning control to a guest.
- use BIT() to init nr_pages in p2m_allocate_root() instead of open-code
BIT() macros.
- Fix order in clear_and_clean_page().
- s/panic("Specify more xen,domain-p2m-mem-mb\n")/return NULL.
- Use lock around a procedure of returning back pages necessary for p2m
root table.
- Update the comment about allocation of page for root page table.
- Update an argument of hgatp_from_page() to "struct page_info *p2m_root_page"
to be consistent with the function name.
- Use p2m_get_hostp2m(d) instead of open-coding it.
- Update the comment above the call of p2m_alloc_root_table().
- Update the comments in p2m_allocate_root().
- Move part which returns some page to domheap before root page table allocation
to paging.c.
- Pass p2m_domain * instead of struct domain * for p2m_alloc_root_table().
- Introduce construct_hgatp() instead of hgatp_from_page().
- Add vmid and hgatp_mode member of struct p2m_domain.
- Add explanatory comment above clean_dcache_va_range() in
clear_and_clean_page().
- Introduce P2M_ROOT_ORDER and P2M_ROOT_PAGES.
- Drop vmid member from p2m_domain as now we are using per-pCPU
VMID allocation.
- Update the declaration of construct_hgatp() to receive the VMID as it
isn't per-VM anymore.
- Drop hgatp member of p2m_domain struct as with the new VMID allocation
scheme construction of hgatp will be needed more often.
- Drop is_hardware_domain() case in p2m_allocate_root(), just always
allocate root using p2m pool pages.
- Refactor p2m_alloc_root_table() and p2m_alloc_table().
---
Changes in v2:
- This patch was created from "xen/riscv: introduce things necessary for p2m
initialization" with the following changes:
- [clear_and_clean_page()] Add missed call of clean_dcache_va_range().
- Drop p2m_get_clean_page() as it is going to be used only once to allocate
root page table. Open-code it explicitly in p2m_allocate_root(). Also,
it will help avoid duplication of the code connected to order and nr_pages
of p2m root page table.
- Instead of using order 2 for alloc_domheap_pages(), use
get_order_from_bytes(KB(16)).
- Clear and clean a proper amount of allocated pages in p2m_allocate_root().
- Drop _info from the function name hgatp_from_page_info() and its argument
page_info.
- Introduce HGATP_MODE_MASK and use MASK_INSR() instead of shift to calculate
value of hgatp.
- Drop unnecessary parentheses in definition of page_to_maddr().
- Add support of VMID.
- Drop TLB flushing in p2m_alloc_root_table() and do that once when VMID
is re-used. [Look at p2m_alloc_vmid()]
- Allocate p2m root table after p2m pool is fully initialized: first
return pages to the p2m pool, then allocate the p2m root table.
---
xen/arch/riscv/include/asm/mm.h | 4 +
xen/arch/riscv/include/asm/p2m.h | 15 +++
xen/arch/riscv/include/asm/paging.h | 3 +
xen/arch/riscv/include/asm/riscv_encoding.h | 2 +
xen/arch/riscv/p2m.c | 90 +++++++++++++++-
xen/arch/riscv/paging.c | 108 +++++++++++++++-----
6 files changed, 193 insertions(+), 29 deletions(-)
diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 9283616c02..dd8cdc9782 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -167,6 +167,10 @@ extern struct page_info *frametable_virt_start;
#define mfn_to_page(mfn) (frametable_virt_start + mfn_x(mfn))
#define page_to_mfn(pg) _mfn((pg) - frametable_virt_start)
+/* Convert between machine addresses and page-info structures. */
+#define maddr_to_page(ma) mfn_to_page(maddr_to_mfn(ma))
+#define page_to_maddr(pg) mfn_to_maddr(page_to_mfn(pg))
+
static inline void *page_to_virt(const struct page_info *pg)
{
return mfn_to_virt(mfn_x(page_to_mfn(pg)));
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 2672dcdecb..7b263cb354 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -2,6 +2,7 @@
#ifndef ASM__RISCV__P2M_H
#define ASM__RISCV__P2M_H
+#include <xen/bitops.h>
#include <xen/errno.h>
#include <xen/mm.h>
#include <xen/rwlock.h>
@@ -11,6 +12,9 @@
extern unsigned long gstage_mode;
+#define P2M_ROOT_ORDER (ilog2(GSTAGE_ROOT_PAGE_TABLE_SIZE) - PAGE_SHIFT)
+#define P2M_ROOT_PAGES BIT(P2M_ROOT_ORDER, U)
+
#define paddr_bits PADDR_BITS
/* Get host p2m table */
@@ -26,6 +30,9 @@ struct p2m_domain {
/* Pages used to construct the p2m */
struct page_list_head pages;
+ /* The root of the p2m tree. May be concatenated */
+ struct page_info *root;
+
/* Back pointer to domain */
struct domain *domain;
@@ -39,6 +46,12 @@ struct p2m_domain {
* shattered), call p2m_tlb_flush_sync().
*/
bool need_flush;
+
+ /*
+ * Indicate if it is required to clean the cache when writing an entry or
+ * when a page is needed to be fully cleared and cleaned.
+ */
+ bool clean_dcache;
};
/*
@@ -125,6 +138,8 @@ void gstage_mode_detect(void);
int p2m_init(struct domain *d);
+unsigned long construct_hgatp(struct p2m_domain *p2m, uint16_t vmid);
+
#endif /* ASM__RISCV__P2M_H */
/*
diff --git a/xen/arch/riscv/include/asm/paging.h b/xen/arch/riscv/include/asm/paging.h
index 98d8b06d45..befad14f82 100644
--- a/xen/arch/riscv/include/asm/paging.h
+++ b/xen/arch/riscv/include/asm/paging.h
@@ -10,4 +10,7 @@ int paging_domain_init(struct domain *d);
int paging_freelist_adjust(struct domain *d, unsigned long pages,
bool *preempted);
+int paging_ret_pages_to_domheap(struct domain *d, unsigned int nr_pages);
+int paging_ret_pages_to_freelist(struct domain *d, unsigned int nr_pages);
+
#endif /* ASM_RISCV_PAGING_H */
diff --git a/xen/arch/riscv/include/asm/riscv_encoding.h b/xen/arch/riscv/include/asm/riscv_encoding.h
index b15f5ad0b4..8890b903e1 100644
--- a/xen/arch/riscv/include/asm/riscv_encoding.h
+++ b/xen/arch/riscv/include/asm/riscv_encoding.h
@@ -188,6 +188,8 @@
#define HGATP_MODE_MASK HGATP32_MODE_MASK
#endif
+#define GSTAGE_ROOT_PAGE_TABLE_SIZE KB(16)
+
#define TOPI_IID_SHIFT 16
#define TOPI_IID_MASK 0xfff
#define TOPI_IPRIO_MASK 0xff
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index dc0f2b2a23..ad0478f155 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -3,6 +3,7 @@
#include <xen/init.h>
#include <xen/lib.h>
#include <xen/macros.h>
+#include <xen/domain_page.h>
#include <xen/mm.h>
#include <xen/paging.h>
#include <xen/rwlock.h>
@@ -95,6 +96,70 @@ void __init gstage_mode_detect(void)
local_hfence_gvma_all();
}
+static void clear_and_clean_page(struct page_info *page, bool clean_dcache)
+{
+ clear_domain_page(page_to_mfn(page));
+
+ /*
+ * If the IOMMU doesn't support coherent walks and the p2m tables are
+ * shared between the CPU and IOMMU, it is necessary to clean the
+ * d-cache.
+ */
+ if ( clean_dcache )
+ clean_dcache_va_range(page, PAGE_SIZE);
+}
+
+unsigned long construct_hgatp(struct p2m_domain *p2m, uint16_t vmid)
+{
+ return MASK_INSR(mfn_x(page_to_mfn(p2m->root)), HGATP_PPN) |
+ MASK_INSR(gstage_mode, HGATP_MODE_MASK) |
+ MASK_INSR(vmid, HGATP_VMID_MASK);
+}
+
+static int p2m_alloc_root_table(struct p2m_domain *p2m)
+{
+ struct domain *d = p2m->domain;
+ struct page_info *page;
+ int rc;
+
+ /*
+ * Return back P2M_ROOT_PAGES to assure the root table memory is also
+ * accounted against the P2M pool of the domain.
+ */
+ if ( (rc = paging_ret_pages_to_domheap(d, P2M_ROOT_PAGES)) )
+ return rc;
+
+ /*
+ * As mentioned in the Privileged Architecture Spec (version 20240411)
+ * in Section 18.5.1, for the paged virtual-memory schemes (Sv32x4,
+ * Sv39x4, Sv48x4, and Sv57x4), the root page table is 16 KiB and must
+ * be aligned to a 16-KiB boundary.
+ */
+ page = alloc_domheap_pages(d, P2M_ROOT_ORDER, MEMF_no_owner);
+ if ( !page )
+ {
+ /*
+ * If allocation of root table pages fails, the pages acquired above
+ * must be returned to the freelist to maintain proper freelist
+ * balance.
+ */
+ paging_ret_pages_to_freelist(d, P2M_ROOT_PAGES);
+
+ return -ENOMEM;
+ }
+
+ for ( unsigned int i = 0; i < P2M_ROOT_PAGES; i++ )
+ {
+ clear_and_clean_page(page + i, p2m->clean_dcache);
+
+ page_list_add(page + i, &p2m->pages);
+ }
+
+ p2m->root = page;
+
+ return 0;
+}
+
int p2m_init(struct domain *d)
{
struct p2m_domain *p2m = p2m_get_hostp2m(d);
@@ -110,6 +175,19 @@ int p2m_init(struct domain *d)
rwlock_init(&p2m->lock);
INIT_PAGE_LIST_HEAD(&p2m->pages);
+ /*
+ * Currently, the infrastructure required to enable CONFIG_HAS_PASSTHROUGH
+ * is not ready for RISC-V support.
+ *
+ * When CONFIG_HAS_PASSTHROUGH=y, p2m->clean_dcache must be properly
+ * initialized.
+ * At the moment, it defaults to false because the p2m structure is
+ * zero-initialized.
+ */
+#ifdef CONFIG_HAS_PASSTHROUGH
+# error "Add init of p2m->clean_dcache"
+#endif
+
return 0;
}
@@ -120,10 +198,20 @@ int p2m_init(struct domain *d)
*/
int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
{
+ struct p2m_domain *p2m = p2m_get_hostp2m(d);
int rc;
if ( (rc = paging_freelist_adjust(d, pages, preempted)) )
return rc;
- return 0;
+ /*
+ * First, initialize p2m pool. Then allocate the root
+ * table so that the necessary pages can be returned from the p2m pool,
+ * since the root table must be allocated using alloc_domheap_pages(...)
+ * to meet its specific requirements.
+ */
+ if ( !p2m->root )
+ rc = p2m_alloc_root_table(p2m);
+
+ return rc;
}
diff --git a/xen/arch/riscv/paging.c b/xen/arch/riscv/paging.c
index 2df8de033b..ed537fee07 100644
--- a/xen/arch/riscv/paging.c
+++ b/xen/arch/riscv/paging.c
@@ -4,46 +4,67 @@
#include <xen/sched.h>
#include <xen/spinlock.h>
+static int paging_ret_page_to_domheap(struct domain *d)
+{
+ struct page_info *page;
+
+ ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+ /* Return memory to domheap. */
+ page = page_list_remove_head(&d->arch.paging.freelist);
+ if ( page )
+ {
+ ACCESS_ONCE(d->arch.paging.total_pages)--;
+ free_domheap_page(page);
+ }
+ else
+ {
+ printk(XENLOG_ERR
+ "Failed to free P2M pages, P2M freelist is empty.\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static int paging_ret_page_to_freelist(struct domain *d)
+{
+ struct page_info *page;
+
+ ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+ /* Need to allocate more memory from domheap */
+ page = alloc_domheap_page(d, MEMF_no_owner);
+ if ( page == NULL )
+ {
+ printk(XENLOG_ERR "Failed to allocate pages.\n");
+ return -ENOMEM;
+ }
+ ACCESS_ONCE(d->arch.paging.total_pages)++;
+ page_list_add_tail(page, &d->arch.paging.freelist);
+
+ return 0;
+}
+
int paging_freelist_adjust(struct domain *d, unsigned long pages,
bool *preempted)
{
- struct page_info *pg;
-
ASSERT(spin_is_locked(&d->arch.paging.lock));
for ( ; ; )
{
+ int rc = 0;
+
if ( d->arch.paging.total_pages < pages )
- {
- /* Need to allocate more memory from domheap */
- pg = alloc_domheap_page(d, MEMF_no_owner);
- if ( pg == NULL )
- {
- printk(XENLOG_ERR "Failed to allocate pages.\n");
- return -ENOMEM;
- }
- ACCESS_ONCE(d->arch.paging.total_pages)++;
- page_list_add_tail(pg, &d->arch.paging.freelist);
- }
+ rc = paging_ret_page_to_freelist(d);
else if ( d->arch.paging.total_pages > pages )
- {
- /* Need to return memory to domheap */
- pg = page_list_remove_head(&d->arch.paging.freelist);
- if ( pg )
- {
- ACCESS_ONCE(d->arch.paging.total_pages)--;
- free_domheap_page(pg);
- }
- else
- {
- printk(XENLOG_ERR
- "Failed to free pages, freelist is empty.\n");
- return -ENOMEM;
- }
- }
+ rc = paging_ret_page_to_domheap(d);
else
break;
+ if ( rc )
+ return rc;
+
/* Check to see if we need to yield and try again */
if ( preempted && general_preempt_check() )
{
@@ -55,6 +76,37 @@ int paging_freelist_adjust(struct domain *d, unsigned long pages,
return 0;
}
+int paging_ret_pages_to_freelist(struct domain *d, unsigned int nr_pages)
+{
+ ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+ for ( unsigned int i = 0; i < nr_pages; i++ )
+ {
+ int rc = paging_ret_page_to_freelist(d);
+ if ( rc )
+ return rc;
+ }
+
+ return 0;
+}
+
+int paging_ret_pages_to_domheap(struct domain *d, unsigned int nr_pages)
+{
+ ASSERT(spin_is_locked(&d->arch.paging.lock));
+
+ if ( ACCESS_ONCE(d->arch.paging.total_pages) < nr_pages )
+ return -ENOMEM;
+
+ for ( unsigned int i = 0; i < nr_pages; i++ )
+ {
+ int rc = paging_ret_page_to_domheap(d);
+ if ( rc )
+ return rc;
+ }
+
+ return 0;
+}
+
/* Domain paging struct initialization. */
int paging_domain_init(struct domain *d)
{
--
2.51.0
^ permalink raw reply related [flat|nested] 62+ messages in thread
* Re: [PATCH v4 05/18] xen/riscv: add root page table allocation
2025-09-17 21:55 ` [PATCH v4 05/18] xen/riscv: add root page table allocation Oleksii Kurochko
@ 2025-09-19 22:14 ` Jan Beulich
2025-09-24 15:40 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2025-09-19 22:14 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 17.09.2025 23:55, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -3,6 +3,7 @@
> #include <xen/init.h>
> #include <xen/lib.h>
> #include <xen/macros.h>
> +#include <xen/domain_page.h>
> #include <xen/mm.h>
> #include <xen/paging.h>
> #include <xen/rwlock.h>
> @@ -95,6 +96,70 @@ void __init gstage_mode_detect(void)
> local_hfence_gvma_all();
> }
>
> +static void clear_and_clean_page(struct page_info *page, bool clean_dcache)
> +{
> + clear_domain_page(page_to_mfn(page));
> +
> + /*
> + * If the IOMMU doesn't support coherent walks and the p2m tables are
> + * shared between the CPU and IOMMU, it is necessary to clean the
> + * d-cache.
> + */
> + if ( clean_dcache )
> + clean_dcache_va_range(page, PAGE_SIZE);
> +}
> +
> +unsigned long construct_hgatp(struct p2m_domain *p2m, uint16_t vmid)
pointer-to-const?
> +{
> + return MASK_INSR(mfn_x(page_to_mfn(p2m->root)), HGATP_PPN) |
> + MASK_INSR(gstage_mode, HGATP_MODE_MASK) |
> + MASK_INSR(vmid, HGATP_VMID_MASK);
> +}
> +
> +static int p2m_alloc_root_table(struct p2m_domain *p2m)
> +{
> + struct domain *d = p2m->domain;
> + struct page_info *page;
> + int rc;
> +
> + /*
> + * Return back P2M_ROOT_PAGES to assure the root table memory is also
> + * accounted against the P2M pool of the domain.
> + */
> + if ( (rc = paging_ret_pages_to_domheap(d, P2M_ROOT_PAGES)) )
> + return rc;
I read the "ret" in the name as "return" here. However, ...
> + /*
> + * As mentioned in the Privileged Architecture Spec (version 20240411)
> + * in Section 18.5.1, for the paged virtual-memory schemes (Sv32x4,
> + * Sv39x4, Sv48x4, and Sv57x4), the root page table is 16 KiB and must
> + * be aligned to a 16-KiB boundary.
> + */
> + page = alloc_domheap_pages(d, P2M_ROOT_ORDER, MEMF_no_owner);
> + if ( !page )
> + {
> + /*
> + * If allocation of root table pages fails, the pages acquired above
> + * must be returned to the freelist to maintain proper freelist
> + * balance.
> + */
> + paging_ret_pages_to_freelist(d, P2M_ROOT_PAGES);
... "return" doesn't make sense here, so I wonder what the "ret" here means.
> @@ -55,6 +76,37 @@ int paging_freelist_adjust(struct domain *d, unsigned long pages,
> return 0;
> }
>
> +int paging_ret_pages_to_freelist(struct domain *d, unsigned int nr_pages)
> +{
> + ASSERT(spin_is_locked(&d->arch.paging.lock));
> +
> + for ( unsigned int i = 0; i < nr_pages; i++ )
> + {
> + int rc = paging_ret_page_to_freelist(d);
> + if ( rc )
Nit (style): Blank line between declaration(s) and statement(s) please.
> + return rc;
> + }
> +
> + return 0;
> +}
> +
> +int paging_ret_pages_to_domheap(struct domain *d, unsigned int nr_pages)
> +{
> + ASSERT(spin_is_locked(&d->arch.paging.lock));
> +
> + if ( ACCESS_ONCE(d->arch.paging.total_pages) < nr_pages )
> + return false;
> +
> + for ( unsigned int i = 0; i < nr_pages; i++ )
> + {
> + int rc = paging_ret_page_to_domheap(d);
> + if ( rc )
Same here.
Jan
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v4 05/18] xen/riscv: add root page table allocation
2025-09-19 22:14 ` Jan Beulich
@ 2025-09-24 15:40 ` Oleksii Kurochko
2025-09-25 13:56 ` Jan Beulich
0 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-24 15:40 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 9/20/25 12:14 AM, Jan Beulich wrote:
> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>
>> +{
>> + return MASK_INSR(mfn_x(page_to_mfn(p2m->root)), HGATP_PPN) |
>> + MASK_INSR(gstage_mode, HGATP_MODE_MASK) |
>> + MASK_INSR(vmid, HGATP_VMID_MASK);
>> +}
>> +
>> +static int p2m_alloc_root_table(struct p2m_domain *p2m)
>> +{
>> + struct domain *d = p2m->domain;
>> + struct page_info *page;
>> + int rc;
>> +
>> + /*
>> + * Return back P2M_ROOT_PAGES to assure the root table memory is also
>> + * accounted against the P2M pool of the domain.
>> + */
>> + if ( (rc = paging_ret_pages_to_domheap(d, P2M_ROOT_PAGES)) )
>> + return rc;
> I read the "ret" in the name as "return" here. However, ...
>
>> + /*
>> + * As mentioned in the Priviliged Architecture Spec (version 20240411)
>> + * in Section 18.5.1, for the paged virtual-memory schemes (Sv32x4,
>> + * Sv39x4, Sv48x4, and Sv57x4), the root page table is 16 KiB and must
>> + * be aligned to a 16-KiB boundary.
>> + */
>> + page = alloc_domheap_pages(d, P2M_ROOT_ORDER, MEMF_no_owner);
>> + if ( !page )
>> + {
>> + /*
>> + * If allocation of root table pages fails, the pages acquired above
>> + * must be returned to the freelist to maintain proper freelist
>> + * balance.
>> + */
>> + paging_ret_pages_to_freelist(d, P2M_ROOT_PAGES);
> ... "return" doesn't make sense here, so I wonder what the "ret" here means.
In both cases, "ret" was supposed to mean "return", since in both cases we
"return" memory.
I agree that in the case of paging_ret_pages_to_freelist(), the flow is
slightly different: a page is allocated from the domheap and then added back
to the freelist. That looks more like adding than returning. Still, I felt
that "return" could also apply here, as the page is being given back.
For more clarity, do you think it would make sense to rename
paging_ret_pages_to_freelist() to paging_add_page_to_freelist()?
~ Oleksii
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v4 05/18] xen/riscv: add root page table allocation
2025-09-24 15:40 ` Oleksii Kurochko
@ 2025-09-25 13:56 ` Jan Beulich
0 siblings, 0 replies; 62+ messages in thread
From: Jan Beulich @ 2025-09-25 13:56 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 24.09.2025 17:40, Oleksii Kurochko wrote:
>
> On 9/20/25 12:14 AM, Jan Beulich wrote:
>> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>>
>>> +{
>>> + return MASK_INSR(mfn_x(page_to_mfn(p2m->root)), HGATP_PPN) |
>>> + MASK_INSR(gstage_mode, HGATP_MODE_MASK) |
>>> + MASK_INSR(vmid, HGATP_VMID_MASK);
>>> +}
>>> +
>>> +static int p2m_alloc_root_table(struct p2m_domain *p2m)
>>> +{
>>> + struct domain *d = p2m->domain;
>>> + struct page_info *page;
>>> + int rc;
>>> +
>>> + /*
>>> + * Return back P2M_ROOT_PAGES to assure the root table memory is also
>>> + * accounted against the P2M pool of the domain.
>>> + */
>>> + if ( (rc = paging_ret_pages_to_domheap(d, P2M_ROOT_PAGES)) )
>>> + return rc;
>> I read the "ret" in the name as "return" here. However, ...
>>
>>> + /*
>>> + * As mentioned in the Privileged Architecture Spec (version 20240411)
>>> + * in Section 18.5.1, for the paged virtual-memory schemes (Sv32x4,
>>> + * Sv39x4, Sv48x4, and Sv57x4), the root page table is 16 KiB and must
>>> + * be aligned to a 16-KiB boundary.
>>> + */
>>> + page = alloc_domheap_pages(d, P2M_ROOT_ORDER, MEMF_no_owner);
>>> + if ( !page )
>>> + {
>>> + /*
>>> + * If allocation of root table pages fails, the pages acquired above
>>> + * must be returned to the freelist to maintain proper freelist
>>> + * balance.
>>> + */
>>> + paging_ret_pages_to_freelist(d, P2M_ROOT_PAGES);
>> ... "return" doesn't make sense here, so I wonder what the "ret" here means.
>
> In both cases, "ret" was supposed to mean "return", since in both cases we
> "return" memory.
> I agree that in the case of paging_ret_pages_to_freelist(), the flow is
> slightly different: a page is allocated from the domheap and then added back
> to the freelist. That looks more like adding than returning. Still, I felt
> that "return" could also apply here, as the page is being given back.
>
> For more clarity, do you think it would make sense to rename
> paging_ret_pages_to_freelist() to paging_add_page_to_freelist()?
In place of "add" I'd perhaps use "refill" and then really refill_from_domheap
to properly indicate the opposite of ret_to_domheap. (For both of these: If
already you want to use such long-ish names.)
Jan
^ permalink raw reply [flat|nested] 62+ messages in thread
* [PATCH v4 06/18] xen/riscv: introduce pte_{set,get}_mfn()
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
` (4 preceding siblings ...)
2025-09-17 21:55 ` [PATCH v4 05/18] xen/riscv: add root page table allocation Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-17 21:55 ` [PATCH v4 07/18] xen/riscv: add new p2m types and helper macros for type classification Oleksii Kurochko
` (11 subsequent siblings)
17 siblings, 0 replies; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
Introduce helpers pte_{set,get}_mfn() to simplify setting and getting
of mfn.
Also, introduce PTE_PPN_MASK and add BUILD_BUG_ON() to be sure that
PTE_PPN_MASK remains the same for all MMU modes except Sv32.
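As a usage illustration only (not part of the patch; 'entry' and 'new_mfn'
are assumed to exist in the caller), updating the PPN field of a PTE would
look roughly like:

    pte_t pte = *entry;              /* read the current PTE */

    pte_set_mfn(&pte, new_mfn);      /* only the PPN field changes */
    ASSERT(mfn_eq(pte_get_mfn(pte), new_mfn));
    write_pte(entry, pte);           /* publish the updated entry */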
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Changes in V4:
- Nothing changed. Only Rebase.
---
Changes in V3:
- Add Acked-by: Jan Beulich <jbeulich@suse.com>.
---
Changes in V2:
- Patch "[PATCH v1 4/6] xen/riscv: define pt_t and pt_walk_t structures" was
renamed to xen/riscv: introduce pte_{set,get}_mfn() as after dropping of
bitfields for the PTE structure, this patch introduces only pte_{set,get}_mfn().
- As pt_t and pt_walk_t were dropped, update implementation of
pte_{set,get}_mfn() to use bit operations and shifts instead of bitfields.
- Introduce PTE_PPN_MASK to be able to use MASK_INSR for setting/getting PPN.
- Add BUILD_BUG_ON(RV_STAGE1_MODE > SATP_MODE_SV57) to be sure that when a
new MMU mode is added, someone checks that the PPN is still bits 53:10.
---
xen/arch/riscv/include/asm/page.h | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
index ddcc4da0a3..66cb192316 100644
--- a/xen/arch/riscv/include/asm/page.h
+++ b/xen/arch/riscv/include/asm/page.h
@@ -112,6 +112,30 @@ typedef struct {
#endif
} pte_t;
+#if RV_STAGE1_MODE != SATP_MODE_SV32
+#define PTE_PPN_MASK _UL(0x3FFFFFFFFFFC00)
+#else
+#define PTE_PPN_MASK _U(0xFFFFFC00)
+#endif
+
+static inline void pte_set_mfn(pte_t *p, mfn_t mfn)
+{
+ /*
+ * At the moment the spec provides Sv32 - Sv57.
+ * If one day a new MMU mode is added, it will be necessary
+ * to check that the PPN mask still covers bits 53:10.
+ */
+ BUILD_BUG_ON(RV_STAGE1_MODE > SATP_MODE_SV57);
+
+ p->pte &= ~PTE_PPN_MASK;
+ p->pte |= MASK_INSR(mfn_x(mfn), PTE_PPN_MASK);
+}
+
+static inline mfn_t pte_get_mfn(pte_t p)
+{
+ return _mfn(MASK_EXTR(p.pte, PTE_PPN_MASK));
+}
+
static inline bool pte_is_valid(pte_t p)
{
return p.pte & PTE_VALID;
--
2.51.0
^ permalink raw reply related [flat|nested] 62+ messages in thread
* [PATCH v4 07/18] xen/riscv: add new p2m types and helper macros for type classification
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
` (5 preceding siblings ...)
2025-09-17 21:55 ` [PATCH v4 06/18] xen/riscv: introduce pte_{set,get}_mfn() Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-19 22:18 ` Jan Beulich
2025-09-17 21:55 ` [PATCH v4 08/18] xen/dom0less: abstract Arm-specific p2m type name for device MMIO mappings Oleksii Kurochko
` (10 subsequent siblings)
17 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
- Extend p2m_type_t with additional types: p2m_mmio_direct_io,
p2m_ext_storage.
- Add macros to classify memory types: P2M_RAM_TYPES.
- Introduce helper predicates: p2m_is_ram(), p2m_is_any_ram().
- Introduce arch_dt_passthrough_p2m_type() to tell handle_passthrough_prop()
from common code how to map device memory.
- Introduce p2m_first_external to allow relational comparisons against p2m
types which are stored outside the P2M's PTE bits.
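For illustration only (hypothetical helper, not part of the patch), the
sentinel enables relational checks such as:

    /* True if the type can't be encoded in the PTE and needs external storage. */
    static bool p2m_type_is_external(p2m_type_t t)
    {
        return t >= p2m_first_external;
    }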
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- Drop underscode in p2m_to_mask()'s argument and for other similar helpers.
- Introduce arch_dt_passthrough_p2m_type() instead of p2m_mmio_direct.
- Drop, for the moment, grant table related stuff as it isn't going to be used in the near future.
---
Changes in V3:
- Drop p2m_ram_ro.
- Rename p2m_mmio_direct_dev to p2m_mmio_direct_io to make it more RISC-V specific.
- s/p2m_mmio_direct_dev/p2m_mmio_direct_io.
---
Changes in V2:
- Drop stuff connected to foreign mapping as it isn't necessary for RISC-V
right now.
---
xen/arch/riscv/include/asm/p2m.h | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 7b263cb354..8a6f5f3092 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -64,8 +64,29 @@ struct p2m_domain {
typedef enum {
p2m_invalid = 0, /* Nothing mapped here */
p2m_ram_rw, /* Normal read/write domain RAM */
+ p2m_mmio_direct_io, /* Read/write mapping of genuine Device MMIO area,
+ PTE_PBMT_IO will be used for such mappings */
+ p2m_ext_storage, /* Following types will be stored outside PTE bits: */
+
+ /* Sentinel — not a real type, just a marker for comparison */
+ p2m_first_external = p2m_ext_storage,
} p2m_type_t;
+static inline p2m_type_t arch_dt_passthrough_p2m_type(void)
+{
+ return p2m_mmio_direct_io;
+}
+
+/* We use bitmaps and mask to handle groups of types */
+#define p2m_to_mask(t) BIT(t, UL)
+
+/* RAM types, which map to real machine frames */
+#define P2M_RAM_TYPES (p2m_to_mask(p2m_ram_rw))
+
+/* Useful predicates */
+#define p2m_is_ram(t_) (p2m_to_mask(t_) & P2M_RAM_TYPES)
+#define p2m_is_any_ram(t_) (p2m_to_mask(t_) & P2M_RAM_TYPES)
+
#include <xen/p2m-common.h>
static inline int get_page_and_type(struct page_info *page,
--
2.51.0
^ permalink raw reply related [flat|nested] 62+ messages in thread
* Re: [PATCH v4 07/18] xen/riscv: add new p2m types and helper macros for type classification
2025-09-17 21:55 ` [PATCH v4 07/18] xen/riscv: add new p2m types and helper macros for type classification Oleksii Kurochko
@ 2025-09-19 22:18 ` Jan Beulich
0 siblings, 0 replies; 62+ messages in thread
From: Jan Beulich @ 2025-09-19 22:18 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 17.09.2025 23:55, Oleksii Kurochko wrote:
> - Extended p2m_type_t with additional types: p2m_mmio_direct,
> p2m_ext_storage.
> - Added macros to classify memory types: P2M_RAM_TYPES.
> - Introduced helper predicates: p2m_is_ram(), p2m_is_any_ram().
> - Introduce arch_dt_passthrough() to tell handle_passthrough_prop()
> from common code how to map device memory.
> - Introduce p2m_first_external for detection for relational operations
> with p2m type which is stored outside P2M's PTE bits.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes in V4:
> - Drop underscode in p2m_to_mask()'s argument and for other similar helpers.
Except that ...
> --- a/xen/arch/riscv/include/asm/p2m.h
> +++ b/xen/arch/riscv/include/asm/p2m.h
> @@ -64,8 +64,29 @@ struct p2m_domain {
> typedef enum {
> p2m_invalid = 0, /* Nothing mapped here */
> p2m_ram_rw, /* Normal read/write domain RAM */
> + p2m_mmio_direct_io, /* Read/write mapping of genuine Device MMIO area,
> + PTE_PBMT_IO will be used for such mappings */
> + p2m_ext_storage, /* Following types will be stored outside PTE bits: */
> +
> + /* Sentinel — not a real type, just a marker for comparison */
> + p2m_first_external = p2m_ext_storage,
> } p2m_type_t;
>
> +static inline p2m_type_t arch_dt_passthrough_p2m_type(void)
> +{
> + return p2m_mmio_direct_io;
> +}
> +
> +/* We use bitmaps and mask to handle groups of types */
> +#define p2m_to_mask(t) BIT(t, UL)
> +
> +/* RAM types, which map to real machine frames */
> +#define P2M_RAM_TYPES (p2m_to_mask(p2m_ram_rw))
> +
> +/* Useful predicates */
> +#define p2m_is_ram(t_) (p2m_to_mask(t_) & P2M_RAM_TYPES)
> +#define p2m_is_any_ram(t_) (p2m_to_mask(t_) & P2M_RAM_TYPES)
... they're still present here. With these also dropped:
Acked-by: Jan Beulich <jbeulich@suse.com>
Jan
^ permalink raw reply [flat|nested] 62+ messages in thread
* [PATCH v4 08/18] xen/dom0less: abstract Arm-specific p2m type name for device MMIO mappings
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
` (6 preceding siblings ...)
2025-09-17 21:55 ` [PATCH v4 07/18] xen/riscv: add new p2m types and helper macros for type classification Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-17 21:55 ` [PATCH v4 09/18] xen/riscv: implement function to map memory in guest p2m Oleksii Kurochko
` (9 subsequent siblings)
17 siblings, 0 replies; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Stefano Stabellini, Julien Grall,
Bertrand Marquis, Michal Orzel, Volodymyr Babchuk, Jan Beulich
Introduce arch_dt_passthrough_p2m_type() and use it instead of
`p2m_mmio_direct_dev` to avoid leaking Arm-specific naming into
common Xen code, such as dom0less passthrough property handling.
This helps reduce platform-specific terminology in shared logic and
improves clarity for future non-Arm ports (e.g. RISC-V or PowerPC).
No functional changes — the definition is preserved via a static inline
function for Arm.
Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- Introduce arch_dt_passthrough_p2m_type() instead of re-defining of
p2m_mmio_direct.
---
Changes in V3:
- New patch.
---
xen/arch/arm/include/asm/p2m.h | 5 +++++
xen/common/device-tree/dom0less-build.c | 2 +-
2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index ef98bc5f4d..010ce8c9eb 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -137,6 +137,11 @@ typedef enum {
p2m_max_real_type, /* Types after this won't be store in the p2m */
} p2m_type_t;
+static inline p2m_type_t arch_dt_passthrough_p2m_type(void)
+{
+ return p2m_mmio_direct_dev;
+}
+
/* We use bitmaps and mask to handle groups of types */
#define p2m_to_mask(_t) (1UL << (_t))
diff --git a/xen/common/device-tree/dom0less-build.c b/xen/common/device-tree/dom0less-build.c
index 9fd004c42a..8214a6639f 100644
--- a/xen/common/device-tree/dom0less-build.c
+++ b/xen/common/device-tree/dom0less-build.c
@@ -185,7 +185,7 @@ static int __init handle_passthrough_prop(struct kernel_info *kinfo,
gaddr_to_gfn(gstart),
PFN_DOWN(size),
maddr_to_mfn(mstart),
- p2m_mmio_direct_dev);
+ arch_dt_passthrough_p2m_type());
if ( res < 0 )
{
printk(XENLOG_ERR
--
2.51.0
^ permalink raw reply related [flat|nested] 62+ messages in thread
* [PATCH v4 09/18] xen/riscv: implement function to map memory in guest p2m
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
` (7 preceding siblings ...)
2025-09-17 21:55 ` [PATCH v4 08/18] xen/dom0less: abstract Arm-specific p2m type name for device MMIO mappings Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-19 23:12 ` Jan Beulich
2025-09-17 21:55 ` [PATCH v4 10/18] xen/riscv: implement p2m_set_range() Oleksii Kurochko
` (8 subsequent siblings)
17 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
Implement map_regions_p2mt() to map a region in the guest p2m with
a specific p2m type. The memory attributes will be derived from the
p2m type. This function is used in dom0less common
code.
To implement it, introduce:
- p2m_write_(un)lock() to ensure safe concurrent updates to the P2M.
As part of this change, introduce p2m_tlb_flush_sync() and
p2m_force_tlb_flush_sync().
- A stub for p2m_set_range() to map a range of GFNs to MFNs.
- p2m_insert_mapping().
- p2m_is_write_locked().
Drop guest_physmap_add_entry() and call map_regions_p2mt() directly
from guest_physmap_add_page(), making guest_physmap_add_entry()
unnecessary.
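For illustration only (addresses made up, not part of the patch), a caller
mapping a 2 MiB direct-MMIO region would use it as:

    int rc = map_regions_p2mt(d, gaddr_to_gfn(GB(1)), PFN_DOWN(MB(2)),
                              maddr_to_mfn(0x10000000), p2m_mmio_direct_io);

    if ( rc )
        printk(XENLOG_ERR "%pd: failed to map MMIO region: %d\n", d, rc);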
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- Update the comment above declaration of map_regions_p2mt():
s/guest p2m/guest's hostp2m.
- Add const for p2m_force_tlb_flush_sync()'s local variable `d`.
- Stray 'w' in the comment inside p2m_write_unlock().
- Drop p2m_insert_mapping() and leave only map_regions_p2mt(), as the
latter just re-used p2m_insert_mapping().
- Rename p2m_force_tlb_flush_sync() to p2m_tlb_flush().
- Update prototype of p2m_is_write_locked() to return bool instead of
int.
---
Changes in v3:
- Introduce p2m_write_lock() and p2m_is_write_locked().
- Introduce p2m_force_tlb_flush_sync() and p2m_flush_tlb() to flush TLBs
after p2m table update.
- Change an argument of p2m_insert_mapping() from struct domain *d to
p2m_domain *p2m.
- Drop guest_physmap_add_entry() and use map_regions_p2mt() to define
guest_physmap_add_page().
- Add declaration of map_regions_p2mt() to asm/p2m.h.
- Rewrite commit message and subject.
- Drop p2m_access_t related stuff.
- Add definition of p2m_is_write_locked().
---
Changes in v2:
- These changes were part of "xen/riscv: implement p2m mapping functionality".
No additional significant changes were made.
---
xen/arch/riscv/include/asm/p2m.h | 31 ++++++++++++-----
xen/arch/riscv/p2m.c | 60 ++++++++++++++++++++++++++++++++
2 files changed, 82 insertions(+), 9 deletions(-)
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 8a6f5f3092..c98cf547f1 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -122,21 +122,22 @@ static inline int guest_physmap_mark_populate_on_demand(struct domain *d,
return -EOPNOTSUPP;
}
-static inline int guest_physmap_add_entry(struct domain *d,
- gfn_t gfn, mfn_t mfn,
- unsigned long page_order,
- p2m_type_t t)
-{
- BUG_ON("unimplemented");
- return -EINVAL;
-}
+/*
+ * Map a region in the guest's hostp2m p2m with a specific p2m type.
+ * The memory attributes will be derived from the p2m type.
+ */
+int map_regions_p2mt(struct domain *d,
+ gfn_t gfn,
+ unsigned long nr,
+ mfn_t mfn,
+ p2m_type_t p2mt);
/* Untyped version for RAM only, for compatibility */
static inline int __must_check
guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn,
unsigned int page_order)
{
- return guest_physmap_add_entry(d, gfn, mfn, page_order, p2m_ram_rw);
+ return map_regions_p2mt(d, gfn, BIT(page_order, UL), mfn, p2m_ram_rw);
}
static inline mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
@@ -159,6 +160,18 @@ void gstage_mode_detect(void);
int p2m_init(struct domain *d);
+static inline void p2m_write_lock(struct p2m_domain *p2m)
+{
+ write_lock(&p2m->lock);
+}
+
+void p2m_write_unlock(struct p2m_domain *p2m);
+
+static inline bool p2m_is_write_locked(struct p2m_domain *p2m)
+{
+ return rw_is_write_locked(&p2m->lock);
+}
+
unsigned long construct_hgatp(struct p2m_domain *p2m, uint16_t vmid);
#endif /* ASM__RISCV__P2M_H */
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index ad0478f155..d8b611961c 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -96,6 +96,41 @@ void __init gstage_mode_detect(void)
local_hfence_gvma_all();
}
+/*
+ * Force a synchronous P2M TLB flush.
+ *
+ * Must be called with the p2m lock held.
+ */
+static void p2m_tlb_flush(struct p2m_domain *p2m)
+{
+ const struct domain *d = p2m->domain;
+
+ ASSERT(p2m_is_write_locked(p2m));
+
+ sbi_remote_hfence_gvma(d->dirty_cpumask, 0, 0);
+
+ p2m->need_flush = false;
+}
+
+void p2m_tlb_flush_sync(struct p2m_domain *p2m)
+{
+ if ( p2m->need_flush )
+ p2m_tlb_flush(p2m);
+}
+
+/* Unlock the flush and do a P2M TLB flush if necessary */
+void p2m_write_unlock(struct p2m_domain *p2m)
+{
+ /*
+ * The final flush is done with the P2M write lock taken to avoid
+ * someone else modifying the P2M before the TLB invalidation has
+ * completed.
+ */
+ p2m_tlb_flush_sync(p2m);
+
+ write_unlock(&p2m->lock);
+}
+
static void clear_and_clean_page(struct page_info *page, bool clean_dcache)
{
clear_domain_page(page_to_mfn(page));
@@ -215,3 +250,28 @@ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
return rc;
}
+
+static int p2m_set_range(struct p2m_domain *p2m,
+ gfn_t sgfn,
+ unsigned long nr,
+ mfn_t smfn,
+ p2m_type_t t)
+{
+ return -EOPNOTSUPP;
+}
+
+int map_regions_p2mt(struct domain *d,
+ gfn_t gfn,
+ unsigned long nr,
+ mfn_t mfn,
+ p2m_type_t p2mt)
+{
+ struct p2m_domain *p2m = p2m_get_hostp2m(d);
+ int rc;
+
+ p2m_write_lock(p2m);
+ rc = p2m_set_range(p2m, gfn, nr, mfn, p2mt);
+ p2m_write_unlock(p2m);
+
+ return rc;
+}
--
2.51.0
^ permalink raw reply related [flat|nested] 62+ messages in thread
* Re: [PATCH v4 09/18] xen/riscv: implement function to map memory in guest p2m
2025-09-17 21:55 ` [PATCH v4 09/18] xen/riscv: implement function to map memory in guest p2m Oleksii Kurochko
@ 2025-09-19 23:12 ` Jan Beulich
0 siblings, 0 replies; 62+ messages in thread
From: Jan Beulich @ 2025-09-19 23:12 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 17.09.2025 23:55, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -96,6 +96,41 @@ void __init gstage_mode_detect(void)
> local_hfence_gvma_all();
> }
>
> +/*
> + * Force a synchronous P2M TLB flush.
> + *
> + * Must be called with the p2m lock held.
> + */
> +static void p2m_tlb_flush(struct p2m_domain *p2m)
> +{
> + const struct domain *d = p2m->domain;
> +
> + ASSERT(p2m_is_write_locked(p2m));
> +
> + sbi_remote_hfence_gvma(d->dirty_cpumask, 0, 0);
> +
> + p2m->need_flush = false;
While with the p2m lock held it shouldn't matter, purely for doc
purposes I would recommend to clear the flag before doing the flush.
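I.e. roughly:

    p2m->need_flush = false;
    sbi_remote_hfence_gvma(d->dirty_cpumask, 0, 0);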
> +/* Unlock the flush and do a P2M TLB flush if necessary */
Don't you mean "P2M" in place of the first "flush"?
Ideally with both adjustments:
Acked-by: Jan Beulich <jbeulich@suse.com>
Jan
^ permalink raw reply [flat|nested] 62+ messages in thread
* [PATCH v4 10/18] xen/riscv: implement p2m_set_range()
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
` (8 preceding siblings ...)
2025-09-17 21:55 ` [PATCH v4 09/18] xen/riscv: implement function to map memory in guest p2m Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-19 23:36 ` Jan Beulich
2025-09-17 21:55 ` [PATCH v4 11/18] xen/riscv: Implement p2m_free_subtree() and related helpers Oleksii Kurochko
` (7 subsequent siblings)
17 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
This patch introduces p2m_set_range() and its core helper p2m_set_entry() for
RISC-V, based loosely on the Arm implementation, with several RISC-V-specific
modifications.
The main changes are:
- Simplification of the Break-Before-Make (BBM) approach, since according to
the RISC-V spec:
It is permitted for multiple address-translation cache entries to co-exist
for the same address. This represents the fact that in a conventional
TLB hierarchy, it is possible for multiple entries to match a single
address if, for example, a page is upgraded to a superpage without first
clearing the original non-leaf PTE’s valid bit and executing an SFENCE.VMA
with rs1=x0, or if multiple TLBs exist in parallel at a given level of the
hierarchy. In this case, just as if an SFENCE.VMA is not executed between
a write to the memory-management tables and subsequent implicit read of the
same address: it is unpredictable whether the old non-leaf PTE or the new
leaf PTE is used, but the behavior is otherwise well defined.
In contrast to the Arm architecture, where BBM is mandatory and failing to
use it in some cases can lead to CPU instability, RISC-V guarantees
stability, and the behavior remains safe — though unpredictable in terms of
which translation will be used.
- Unlike Arm, the valid bit is not repurposed for other uses in this
implementation. Instead, entry validity is determined based solely on P2M
PTE's valid bit.
The main functionality is in p2m_set_entry(), which handles mappings aligned
to page table block entries (e.g., 1GB, 2MB, or 4KB with 4KB granularity).
p2m_set_range() breaks a region down into block-aligned mappings and calls
p2m_set_entry() accordingly.
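As a worked example (not from the patch): with gfn and mfn both 2 MiB aligned
and nr = 0x402 pages (4 MiB + 8 KiB), p2m_set_range() would call
p2m_set_entry() four times:

    1st call: order 9 (2 MiB) -> 0x200 pages
    2nd call: order 9 (2 MiB) -> 0x200 pages
    3rd call: order 0 (4 KiB) -> 1 page
    4th call: order 0 (4 KiB) -> 1 page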
Stub implementations (to be completed later) include:
- p2m_free_subtree()
- p2m_next_level()
- p2m_pte_from_mfn()
Note: Support for shattering block entries is not implemented in this
patch and will be added separately.
Additionally, some straightforward helper functions are now implemented:
- p2m_write_pte()
- p2m_clean_pte()
- p2m_get_root_pointer()
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- Introduce gstage_root_level and use it for defintion of P2M_ROOT_LEVEL.
- Introduce P2M_LEVEL_ORDER() macros and P2M_PAGETABLE_ENTRIES().
- Add the TODO comment in p2m_write_pte() about possible performance
optimization.
- Use compound literal for `pte` variable inside p2m_clean_pte().
- Fix the comment above p2m_next_level().
- Update ASSERT() inside p2m_set_entry() and leave only a check of the
target, as p2m_mapping_order() ensures that page_order will be correctly
aligned.
- Update the comment above declaration of `removing_mapping` in
p2m_set_entry().
- Stray blanks.
- Handle possible overflow of the amount of unmapped GFNs in case of
some failure in p2m_set_range().
- Handle the case when MFN 0 is mapped and removal of such an MFN happens in
p2m_set_entry().
- Fix p2m_get_root_pointer() to return correct pointer to root page table.
---
Changes in V3:
- Drop p2m_access_t connected stuff as it isn't going to be used, at least
now.
- Move definition of P2M_ROOT_ORDER and P2M_ROOT_PAGES to earlier patches.
- Update the comment above lowest_mapped_gfn declaration.
- Update the comment above p2m_get_root_pointer(): s/"...ofset of the root
table"/"...ofset into root table".
- s/p2m_remove_pte/p2m_clean_pte.
- Use plain 0 instead of 0x00 in p2m_clean_pte().
- s/p2m_entry_from_mfn/p2m_pte_from_mfn.
- s/GUEST_TABLE_*/P2M_TABLE_*.
- Update the comment above p2m_next_level(): "GFN entry" -> "the entry
corresponding to the GFN".
- s/__p2m_set_entry/_p2m_set_entry.
- drop "s" for sgfn and smfn prefixes of _p2m_set_entry()'s arguments
as this function works only with one GFN and one MFN.
- Return the correct return code when p2m_next_level() fails in _p2m_set_entry(),
also drop "else" and just handle case (rc != P2M_TABLE_NORMAL) separately.
- Code style fixes.
- Use unsigned int for "order" in p2m_set_entry().
- s/p2m_set_entry/p2m_free_subtree.
- Update ASSERT() in __p2m_set_entry() to check that page_order is properly
aligned.
- Return -EACCES instead of -ENOMEM in the case when the domain is dying and
someone calls p2m_set_entry().
- s/p2m_set_entry/p2m_set_range.
- s/__p2m_set_entry/p2m_set_entry
- s/p2me_is_valid/p2m_is_valid()
- Return the number of successfully mapped GFNs in case not all were mapped
in p2m_set_range().
- Use BIT(order, UL) instead of 1 << order.
- Drop IOMMU flushing code from p2m_set_entry().
- set p2m->need_flush=true when entry in p2m_set_entry() is changed.
- Introduce p2m_mapping_order() to support superpages.
- Drop p2m_is_valid() and use pte_is_valid() instead as there are no tricks
with copying of the valid bit anymore.
- Update p2m_pte_from_mfn() prototype: drop p2m argument.
---
Changes in V2:
- New patch. It was a part of a big patch "xen/riscv: implement p2m mapping
functionality" which was splitted to smaller.
- Update the way the p2m TLB is flushed:
- RISC-V doesn't require BBM, so there is no need to remove a PTE before
writing the new one; drop 'if /* pte_is_valid(orig_pte) */' and remove a PTE
only when removal has been requested.
- Drop p2m->need_flush |= !!pte_is_valid(orig_pte); for the case when a
PTE is being removed, as RISC-V could cache an invalid PTE and thereby
requires a flush each time, regardless of whether the PTE is valid at the
moment of removal.
- Drop the check of whether the PTE is valid when a PTE is modified; as
mentioned above, BBM isn't required, so TLB flushing can be deferred and
there is no need to do it before modifying the PTE.
- Drop p2m->need_flush as it seems like it will always be true.
- Drop foreign mapping things as it isn't necessary for RISC-V right now.
- s/p2m_is_valid/p2me_is_valid.
- Move definition and initialization of p2m->{max_mapped_gfn,lowest_mapped_gfn}
to this patch.
---
xen/arch/riscv/include/asm/p2m.h | 39 +++++
xen/arch/riscv/p2m.c | 281 ++++++++++++++++++++++++++++++-
2 files changed, 319 insertions(+), 1 deletion(-)
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index c98cf547f1..1a43736855 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -8,12 +8,41 @@
#include <xen/rwlock.h>
#include <xen/types.h>
+#include <asm/page.h>
#include <asm/page-bits.h>
extern unsigned long gstage_mode;
+extern unsigned int gstage_root_level;
#define P2M_ROOT_ORDER (ilog2(GSTAGE_ROOT_PAGE_TABLE_SIZE) - PAGE_SHIFT)
#define P2M_ROOT_PAGES BIT(P2M_ROOT_ORDER, U)
+#define P2M_ROOT_LEVEL gstage_root_level
+
+/*
+ * According to the RISC-V spec:
+ * When hgatp.MODE specifies a translation scheme of Sv32x4, Sv39x4, Sv48x4,
+ * or Sv57x4, G-stage address translation is a variation on the usual
+ * page-based virtual address translation scheme of Sv32, Sv39, Sv48, or
+ * Sv57, respectively. In each case, the size of the incoming address is
+ * widened by 2 bits (to 34, 41, 50, or 59 bits).
+ *
+ * P2M_LEVEL_ORDER(lvl) defines the bit position in the GFN from which
+ * the index for this level of the P2M page table starts. The extra 2
+ * bits added by the "x4" schemes only affect the root page table width.
+ *
+ * Therefore, this macro can safely reuse XEN_PT_LEVEL_ORDER() for all
+ * levels: the extra 2 bits do not change the indices of lower levels.
+ *
+ * The extra 2 bits are only relevant if one tried to address beyond the
+ * root level (i.e., P2M_LEVEL_ORDER(P2M_ROOT_LEVEL + 1)), which is
+ * invalid.
+ */
+#define P2M_LEVEL_ORDER(lvl) XEN_PT_LEVEL_ORDER(lvl)
+
+#define P2M_ROOT_EXTRA_BITS(lvl) (2 * ((lvl) == P2M_ROOT_LEVEL))
+
+#define P2M_PAGETABLE_ENTRIES(lvl) \
+ (BIT(PAGETABLE_ORDER + P2M_ROOT_EXTRA_BITS(lvl), UL))
#define paddr_bits PADDR_BITS
@@ -52,6 +81,16 @@ struct p2m_domain {
* when a page is needed to be fully cleared and cleaned.
*/
bool clean_dcache;
+
+ /* Highest guest frame that's ever been mapped in the p2m */
+ gfn_t max_mapped_gfn;
+
+ /*
+ * Lowest mapped gfn in the p2m. When releasing mapped gfn's in a
+ * preemptible manner this is updated to track where to resume
+ * the search. Apart from during teardown this can only decrease.
+ */
+ gfn_t lowest_mapped_gfn;
};
/*
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index d8b611961c..db9f7a77ff 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -16,6 +16,7 @@
#include <asm/riscv_encoding.h>
unsigned long __ro_after_init gstage_mode;
+unsigned int __ro_after_init gstage_root_level;
void __init gstage_mode_detect(void)
{
@@ -53,6 +54,7 @@ void __init gstage_mode_detect(void)
if ( MASK_EXTR(csr_read(CSR_HGATP), HGATP_MODE_MASK) == mode )
{
gstage_mode = mode;
+ gstage_root_level = modes[mode_idx].paging_levels - 1;
break;
}
}
@@ -210,6 +212,9 @@ int p2m_init(struct domain *d)
rwlock_init(&p2m->lock);
INIT_PAGE_LIST_HEAD(&p2m->pages);
+ p2m->max_mapped_gfn = _gfn(0);
+ p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
+
/*
* Currently, the infrastructure required to enable CONFIG_HAS_PASSTHROUGH
* is not ready for RISC-V support.
@@ -251,13 +256,287 @@ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
return rc;
}
+/*
+ * Find and map the root page table. The caller is responsible for
+ * unmapping the table.
+ *
+ * The function will return NULL if the offset into the root table is
+ * invalid.
+ */
+static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
+{
+ unsigned long root_table_indx;
+
+ root_table_indx = gfn_x(gfn) >> P2M_LEVEL_ORDER(P2M_ROOT_LEVEL);
+ if ( root_table_indx >= P2M_ROOT_PAGES )
+ return NULL;
+
+ /*
+ * The P2M root page table is extended by 2 bits, making its size 16KB
+ * (instead of 4KB for non-root page tables). Therefore, p2m->root is
+ * allocated as four consecutive 4KB pages (since alloc_domheap_pages()
+ * only allocates 4KB pages).
+ *
+ * To determine which of these four 4KB pages the root_table_indx falls
+ * into, we divide root_table_indx by
+ * P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL - 1).
+ */
+ root_table_indx /= P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL - 1);
+
+ return __map_domain_page(p2m->root + root_table_indx);
+}
+
+static inline void p2m_write_pte(pte_t *p, pte_t pte, bool clean_pte)
+{
+ write_pte(p, pte);
+
+ /*
+ * TODO: if multiple adjacent PTEs are written without releasing
+ * the lock, this then redundant cache flushing can be a
+ * performance issue.
+ */
+ if ( clean_pte )
+ clean_dcache_va_range(p, sizeof(*p));
+}
+
+static inline void p2m_clean_pte(pte_t *p, bool clean_pte)
+{
+ pte_t pte = { .pte = 0 };
+
+ p2m_write_pte(p, pte, clean_pte);
+}
+
+static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t)
+{
+ panic("%s: hasn't been implemented yet\n", __func__);
+
+ return (pte_t) { .pte = 0 };
+}
+
+#define P2M_TABLE_MAP_NONE 0
+#define P2M_TABLE_MAP_NOMEM 1
+#define P2M_TABLE_SUPER_PAGE 2
+#define P2M_TABLE_NORMAL 3
+
+/*
+ * Take the currently mapped table, find the entry corresponding to the GFN,
+ * and map the next-level table if available. The previous table will be
+ * unmapped if the next level was mapped (e.g., when P2M_TABLE_NORMAL is
+ * returned).
+ *
+ * `alloc_tbl` parameter indicates whether intermediate tables should
+ * be allocated when not present.
+ *
+ * Return values:
+ * P2M_TABLE_MAP_NONE: a table allocation isn't permitted.
+ * P2M_TABLE_MAP_NOMEM: allocating a new page failed.
+ * P2M_TABLE_SUPER_PAGE: the next entry points to a superpage.
+ * P2M_TABLE_NORMAL: the next-level table was mapped normally.
+ */
+static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
+ unsigned int level, pte_t **table,
+ unsigned int offset)
+{
+ panic("%s: hasn't been implemented yet\n", __func__);
+
+ return P2M_TABLE_MAP_NONE;
+}
+
+/* Free pte sub-tree behind an entry */
+static void p2m_free_subtree(struct p2m_domain *p2m,
+ pte_t entry, unsigned int level)
+{
+ panic("%s: hasn't been implemented yet\n", __func__);
+}
+
+/*
+ * Insert an entry in the p2m. This should be called with a mapping
+ * equal to a page/superpage.
+ */
+static int p2m_set_entry(struct p2m_domain *p2m,
+ gfn_t gfn,
+ unsigned long page_order,
+ mfn_t mfn,
+ p2m_type_t t)
+{
+ unsigned int level;
+ unsigned int target = page_order / PAGETABLE_ORDER;
+ pte_t *entry, *table, orig_pte;
+ int rc;
+ /*
+ * A mapping is removed only if the MFN is explicitly set to INVALID_MFN.
+ * Other MFNs that are considered invalid by mfn_valid() (e.g., MMIO)
+ * are still allowed.
+ */
+ bool removing_mapping = mfn_eq(mfn, INVALID_MFN);
+ DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
+
+ ASSERT(p2m_is_write_locked(p2m));
+
+ /*
+ * Check if the level target is valid: we only support
+ * 4K - 2M - 1G mapping.
+ */
+ ASSERT(target <= 2);
+
+ table = p2m_get_root_pointer(p2m, gfn);
+ if ( !table )
+ return -EINVAL;
+
+ for ( level = P2M_ROOT_LEVEL; level > target; level-- )
+ {
+ /*
+ * Don't try to allocate intermediate page table if the mapping
+ * is about to be removed.
+ */
+ rc = p2m_next_level(p2m, !removing_mapping,
+ level, &table, offsets[level]);
+ if ( (rc == P2M_TABLE_MAP_NONE) || (rc == P2M_TABLE_MAP_NOMEM) )
+ {
+ rc = (rc == P2M_TABLE_MAP_NONE) ? -ENOENT : -ENOMEM;
+ /*
+ * We are here because p2m_next_level has failed to map
+ * the intermediate page table (e.g. the table does not exist
+ * and the p2m tree is read-only). It is a valid case
+ * when removing a mapping as it may not exist in the
+ * page table. In this case, just ignore it.
+ */
+ rc = removing_mapping ? 0 : rc;
+ goto out;
+ }
+
+ if ( rc != P2M_TABLE_NORMAL )
+ break;
+ }
+
+ entry = table + offsets[level];
+
+ /*
+ * If we are here with level > target, we must be at a leaf node,
+ * and we need to break up the superpage.
+ */
+ if ( level > target )
+ {
+ panic("Shattering isn't implemented\n");
+ }
+
+ /*
+ * We should always be there with the correct level because all the
+ * intermediate tables have been installed if necessary.
+ */
+ ASSERT(level == target);
+
+ orig_pte = *entry;
+
+ if ( removing_mapping )
+ p2m_clean_pte(entry, p2m->clean_dcache);
+ else
+ {
+ pte_t pte = p2m_pte_from_mfn(mfn, t);
+
+ p2m_write_pte(entry, pte, p2m->clean_dcache);
+
+ p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn,
+ gfn_add(gfn, BIT(page_order, UL) - 1));
+ p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, gfn);
+ }
+
+ p2m->need_flush = true;
+
+ /*
+ * Currently, the infrastructure required to enable CONFIG_HAS_PASSTHROUGH
+ * is not ready for RISC-V support.
+ *
+ * When CONFIG_HAS_PASSTHROUGH=y, iommu_iotlb_flush() should be done
+ * here.
+ */
+#ifdef CONFIG_HAS_PASSTHROUGH
+# error "add code to flush IOMMU TLB"
+#endif
+
+ rc = 0;
+
+ /*
+ * Free the entry only if the original pte was valid and the base
+ * is different (to avoid freeing when permission is changed).
+ *
+ * If MFN 0 was previously mapped and is now being removed, then, since
+ * a cleared entry also encodes MFN 0, `entry` and `orig_pte` would
+ * carry the same MFN and p2m_free_subtree() wouldn't be called. This
+ * case is handled explicitly by the second part of the condition.
+ */
+ if ( pte_is_valid(orig_pte) &&
+ (!mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte)) ||
+ (removing_mapping && mfn_eq(pte_get_mfn(*entry), _mfn(0)))) )
+ p2m_free_subtree(p2m, orig_pte, level);
+
+ out:
+ unmap_domain_page(table);
+
+ return rc;
+}
+
+/* Return mapping order for given gfn, mfn and nr */
+static unsigned long p2m_mapping_order(gfn_t gfn, mfn_t mfn, unsigned long nr)
+{
+ unsigned long mask;
+ /* 1gb, 2mb, 4k mappings are supported */
+ unsigned int level = min(P2M_ROOT_LEVEL, _AC(2, U));
+ unsigned long order = 0;
+
+ mask = !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0;
+ mask |= gfn_x(gfn);
+
+ for ( ; level != 0; level-- )
+ {
+ if ( !(mask & (BIT(P2M_LEVEL_ORDER(level), UL) - 1)) &&
+ (nr >= BIT(P2M_LEVEL_ORDER(level), UL)) )
+ {
+ order = P2M_LEVEL_ORDER(level);
+ break;
+ }
+ }
+
+ return order;
+}
+
static int p2m_set_range(struct p2m_domain *p2m,
gfn_t sgfn,
unsigned long nr,
mfn_t smfn,
p2m_type_t t)
{
- return -EOPNOTSUPP;
+ int rc = 0;
+ unsigned long left = nr;
+
+ /*
+ * Any reference taken by the P2M mappings (e.g. foreign mapping) will
+ * be dropped in relinquish_p2m_mapping(). As the P2M will still
+ * be accessible after, we need to prevent mapping to be added when the
+ * domain is dying.
+ */
+ if ( unlikely(p2m->domain->is_dying) )
+ return -EACCES;
+
+ while ( left )
+ {
+ unsigned long order = p2m_mapping_order(sgfn, smfn, left);
+
+ rc = p2m_set_entry(p2m, sgfn, order, smfn, t);
+ if ( rc )
+ break;
+
+ sgfn = gfn_add(sgfn, BIT(order, UL));
+ if ( !mfn_eq(smfn, INVALID_MFN) )
+ smfn = mfn_add(smfn, BIT(order, UL));
+
+ left -= BIT(order, UL);
+ }
+
+ if ( left > INT_MAX )
+ rc = -EOVERFLOW;
+
+ return !left ? rc : left;
}
int map_regions_p2mt(struct domain *d,
--
2.51.0
^ permalink raw reply related [flat|nested] 62+ messages in thread
* Re: [PATCH v4 10/18] xen/riscv: implement p2m_set_range()
2025-09-17 21:55 ` [PATCH v4 10/18] xen/riscv: implement p2m_set_range() Oleksii Kurochko
@ 2025-09-19 23:36 ` Jan Beulich
2025-09-25 20:08 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2025-09-19 23:36 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 17.09.2025 23:55, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -16,6 +16,7 @@
> #include <asm/riscv_encoding.h>
>
> unsigned long __ro_after_init gstage_mode;
> +unsigned int __ro_after_init gstage_root_level;
>
> void __init gstage_mode_detect(void)
> {
> @@ -53,6 +54,7 @@ void __init gstage_mode_detect(void)
> if ( MASK_EXTR(csr_read(CSR_HGATP), HGATP_MODE_MASK) == mode )
> {
> gstage_mode = mode;
> + gstage_root_level = modes[mode_idx].paging_levels - 1;
> break;
> }
> }
> @@ -210,6 +212,9 @@ int p2m_init(struct domain *d)
> rwlock_init(&p2m->lock);
> INIT_PAGE_LIST_HEAD(&p2m->pages);
>
> + p2m->max_mapped_gfn = _gfn(0);
> + p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
> +
> /*
> * Currently, the infrastructure required to enable CONFIG_HAS_PASSTHROUGH
> * is not ready for RISC-V support.
> @@ -251,13 +256,287 @@ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
> return rc;
> }
>
> +/*
> + * Find and map the root page table. The caller is responsible for
> + * unmapping the table.
With the root table being 4 pages, "the root table" is slightly misleading
here: You never map the entire table.
> + * The function will return NULL if the offset into the root table is
> + * invalid.
> + */
> +static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
> +{
> + unsigned long root_table_indx;
> +
> + root_table_indx = gfn_x(gfn) >> P2M_LEVEL_ORDER(P2M_ROOT_LEVEL);
> + if ( root_table_indx >= P2M_ROOT_PAGES )
> + return NULL;
> +
> + /*
> + * The P2M root page table is extended by 2 bits, making its size 16KB
> + * (instead of 4KB for non-root page tables). Therefore, p2m->root is
> + * allocated as four consecutive 4KB pages (since alloc_domheap_pages()
> + * only allocates 4KB pages).
> + *
> + * To determine which of these four 4KB pages the root_table_indx falls
> + * into, we divide root_table_indx by
> + * P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL - 1).
> + */
> + root_table_indx /= P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL - 1);
The subtraction of 1 here feels odd: You're after the root table's
number of entries, i.e. I'd expect you to pass just P2M_ROOT_LEVEL.
And the way P2M_PAGETABLE_ENTRIES() works also suggests so.
> +/*
> + * Insert an entry in the p2m. This should be called with a mapping
> + * equal to a page/superpage.
> + */
I don't follow this comment: There isn't any mapping being passed in, is there?
> +static int p2m_set_entry(struct p2m_domain *p2m,
> + gfn_t gfn,
> + unsigned long page_order,
> + mfn_t mfn,
> + p2m_type_t t)
Nit: Indentation.
> +{
> + unsigned int level;
> + unsigned int target = page_order / PAGETABLE_ORDER;
> + pte_t *entry, *table, orig_pte;
> + int rc;
> + /*
> + * A mapping is removed only if the MFN is explicitly set to INVALID_MFN.
> + * Other MFNs that are considered invalid by mfn_valid() (e.g., MMIO)
> + * are still allowed.
> + */
> + bool removing_mapping = mfn_eq(mfn, INVALID_MFN);
> + DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
> +
> + ASSERT(p2m_is_write_locked(p2m));
> +
> + /*
> + * Check if the level target is valid: we only support
> + * 4K - 2M - 1G mapping.
> + */
> + ASSERT(target <= 2);
> +
> + table = p2m_get_root_pointer(p2m, gfn);
> + if ( !table )
> + return -EINVAL;
> +
> + for ( level = P2M_ROOT_LEVEL; level > target; level-- )
> + {
> + /*
> + * Don't try to allocate intermediate page table if the mapping
> + * is about to be removed.
> + */
> + rc = p2m_next_level(p2m, !removing_mapping,
> + level, &table, offsets[level]);
> + if ( (rc == P2M_TABLE_MAP_NONE) || (rc == P2M_TABLE_MAP_NOMEM) )
> + {
> + rc = (rc == P2M_TABLE_MAP_NONE) ? -ENOENT : -ENOMEM;
> + /*
> + * We are here because p2m_next_level has failed to map
> + * the intermediate page table (e.g the table does not exist
> + * and they p2m tree is read-only).
I thought I commented on this or something similar already: Calling the
p2m tree "read-only" is imo misleading.
> It is a valid case
> + * when removing a mapping as it may not exist in the
> + * page table. In this case, just ignore it.
I fear the "it" has no reference; aiui you mean "ignore the lookup failure",
but the comment isn't worded to refer to that by "it".
> + */
> + rc = removing_mapping ? 0 : rc;
> + goto out;
> + }
> +
> + if ( rc != P2M_TABLE_NORMAL )
> + break;
> + }
> +
> + entry = table + offsets[level];
> +
> + /*
> + * If we are here with level > target, we must be at a leaf node,
> + * and we need to break up the superpage.
> + */
> + if ( level > target )
> + {
> + panic("Shattering isn't implemented\n");
> + }
> +
> + /*
> + * We should always be there with the correct level because all the
> + * intermediate tables have been installed if necessary.
> + */
> + ASSERT(level == target);
> +
> + orig_pte = *entry;
> +
> + if ( removing_mapping )
> + p2m_clean_pte(entry, p2m->clean_dcache);
> + else
> + {
> + pte_t pte = p2m_pte_from_mfn(mfn, t);
> +
> + p2m_write_pte(entry, pte, p2m->clean_dcache);
> +
> + p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn,
> + gfn_add(gfn, BIT(page_order, UL) - 1));
> + p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, gfn);
> + }
> +
> + p2m->need_flush = true;
> +
> + /*
> + * Currently, the infrastructure required to enable CONFIG_HAS_PASSTHROUGH
> + * is not ready for RISC-V support.
> + *
> + * When CONFIG_HAS_PASSTHROUGH=y, iommu_iotlb_flush() should be done
> + * here.
> + */
> +#ifdef CONFIG_HAS_PASSTHROUGH
> +# error "add code to flush IOMMU TLB"
> +#endif
> +
> + rc = 0;
> +
> + /*
> + * Free the entry only if the original pte was valid and the base
> + * is different (to avoid freeing when permission is changed).
> + *
> + * If previously MFN 0 was mapped and it is going to be removed
> + * and considering that during removing MFN 0 is used then `entry`
> + * and `new_entry` will be the same and p2m_free_subtree() won't be
> + * called. This case is handled explicitly.
> + */
> + if ( pte_is_valid(orig_pte) &&
> + (!mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte)) ||
> + (removing_mapping && mfn_eq(pte_get_mfn(*entry), _mfn(0)))) )
> + p2m_free_subtree(p2m, orig_pte, level);
I continue to fail to understand why the MFN would matter here. Isn't the
need to free strictly tied to a VALID -> NOT VALID transition? A permission
change simply retains the VALID state of an entry.
Jan
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v4 10/18] xen/riscv: implement p2m_set_range()
2025-09-19 23:36 ` Jan Beulich
@ 2025-09-25 20:08 ` Oleksii Kurochko
2025-09-26 7:07 ` Jan Beulich
0 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-25 20:08 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 9/20/25 1:36 AM, Jan Beulich wrote:
> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>> --- a/xen/arch/riscv/p2m.c
>> +++ b/xen/arch/riscv/p2m.c
>> @@ -16,6 +16,7 @@
>> #include <asm/riscv_encoding.h>
>>
>> unsigned long __ro_after_init gstage_mode;
>> +unsigned int __ro_after_init gstage_root_level;
>>
>> void __init gstage_mode_detect(void)
>> {
>> @@ -53,6 +54,7 @@ void __init gstage_mode_detect(void)
>> if ( MASK_EXTR(csr_read(CSR_HGATP), HGATP_MODE_MASK) == mode )
>> {
>> gstage_mode = mode;
>> + gstage_root_level = modes[mode_idx].paging_levels - 1;
>> break;
>> }
>> }
>> @@ -210,6 +212,9 @@ int p2m_init(struct domain *d)
>> rwlock_init(&p2m->lock);
>> INIT_PAGE_LIST_HEAD(&p2m->pages);
>>
>> + p2m->max_mapped_gfn = _gfn(0);
>> + p2m->lowest_mapped_gfn = _gfn(ULONG_MAX);
>> +
>> /*
>> * Currently, the infrastructure required to enable CONFIG_HAS_PASSTHROUGH
>> * is not ready for RISC-V support.
>> @@ -251,13 +256,287 @@ int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
>> return rc;
>> }
>>
>> +/*
>> + * Find and map the root page table. The caller is responsible for
>> + * unmapping the table.
> With the root table being 4 pages, "the root table" is slightly misleading
> here: You never map the entire table.
I will update the comment then to:
/*
* Map one of the four root pages of the P2M root page table.
*
* The P2M root page table is larger than normal (16KB instead of 4KB),
* so it is allocated as four consecutive 4KB pages. This function selects
* the appropriate 4KB page based on the given GFN and returns a mapping
* to it.
*
* The caller is responsible for unmapping the page after use.
*
* Returns NULL if the calculated offset into the root table is invalid.
*/
>
>> + * The function will return NULL if the offset into the root table is
>> + * invalid.
>> + */
>> +static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
>> +{
>> + unsigned long root_table_indx;
>> +
>> + root_table_indx = gfn_x(gfn) >> P2M_LEVEL_ORDER(P2M_ROOT_LEVEL);
>> + if ( root_table_indx >= P2M_ROOT_PAGES )
>> + return NULL;
>> +
>> + /*
>> + * The P2M root page table is extended by 2 bits, making its size 16KB
>> + * (instead of 4KB for non-root page tables). Therefore, p2m->root is
>> + * allocated as four consecutive 4KB pages (since alloc_domheap_pages()
>> + * only allocates 4KB pages).
>> + *
>> + * To determine which of these four 4KB pages the root_table_indx falls
>> + * into, we divide root_table_indx by
>> + * P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL - 1).
>> + */
>> + root_table_indx /= P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL - 1);
> The subtraction of 1 here feels odd: You're after the root table's
> number of entries, i.e. I'd expect you to pass just P2M_ROOT_LEVEL.
> And the way P2M_PAGETABLE_ENTRIES() works also suggests so.
The purpose of this line is to select the page within the root table, which
consists of 4 consecutive pages. However, P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL)
returns 2048, so root_table_idx will always be 0 after division, which is not
what we want.
As an alternative, P2M_PAGETABLE_ENTRIES(0) could be used, since it always
returns 512. Dividing root_table_idx by 512 then yields the index of the page
within the root table, which is made up of 4 consecutive pages.
Does it make sense now?
The problem may occur with DECLARE_OFFSET(), which can produce an incorrect
index within the root page table. Since the index is in the range [0, 2047],
it becomes an issue if the value is greater than 511, because DECLARE_OFFSET()
does not account for the larger range of the root index.
I am not sure whether it is better to make DECLARE_OFFSET() generic enough
for both P2M and Xen page tables, or to provide a separate P2M_DECLARE_OFFSET()
and use it only in P2M-related code.
Another option could be to move DECLARE_OFFSET() from the asm/page.h header
to riscv/pt.c and define a separate DECLARE_OFFSETS() in riscv/p2m.c.
Do you have a preference?
>
>> +/*
>> + * Insert an entry in the p2m. This should be called with a mapping
>> + * equal to a page/superpage.
>> + */
> I don't follow this comment: There isn't any mapping being passed in, is there?
I think this comment should be dropped. It was meant to say that the requested
mapping must be equal to a page/superpage (4K, 2M, 1G), and the correct order
is always guaranteed by p2m_mapping_order().
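For example (assuming RV64, where P2M_LEVEL_ORDER(1) == 9): with gfn = 0x80200,
mfn = 0x100200 and nr = 1024, (gfn | mfn) has its low 9 bits clear and
nr >= 512, so p2m_mapping_order() picks level 1 and returns order 9 (a 2M
mapping); with gfn = 0x80201 (not 2M-aligned) no level matches and the order
falls back to 0 (a 4K mapping).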
>
>> +static int p2m_set_entry(struct p2m_domain *p2m,
>> + gfn_t gfn,
>> + unsigned long page_order,
>> + mfn_t mfn,
>> + p2m_type_t t)
> Nit: Indentation.
>
>> +{
>> + unsigned int level;
>> + unsigned int target = page_order / PAGETABLE_ORDER;
>> + pte_t *entry, *table, orig_pte;
>> + int rc;
>> + /*
>> + * A mapping is removed only if the MFN is explicitly set to INVALID_MFN.
>> + * Other MFNs that are considered invalid by mfn_valid() (e.g., MMIO)
>> + * are still allowed.
>> + */
>> + bool removing_mapping = mfn_eq(mfn, INVALID_MFN);
>> + DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
>> +
>> + ASSERT(p2m_is_write_locked(p2m));
>> +
>> + /*
>> + * Check if the level target is valid: we only support
>> + * 4K - 2M - 1G mapping.
>> + */
>> + ASSERT(target <= 2);
>> +
>> + table = p2m_get_root_pointer(p2m, gfn);
>> + if ( !table )
>> + return -EINVAL;
>> +
>> + for ( level = P2M_ROOT_LEVEL; level > target; level-- )
>> + {
>> + /*
>> + * Don't try to allocate intermediate page table if the mapping
>> + * is about to be removed.
>> + */
>> + rc = p2m_next_level(p2m, !removing_mapping,
>> + level, &table, offsets[level]);
>> + if ( (rc == P2M_TABLE_MAP_NONE) || (rc == P2M_TABLE_MAP_NOMEM) )
>> + {
>> + rc = (rc == P2M_TABLE_MAP_NONE) ? -ENOENT : -ENOMEM;
>> + /*
>> + * We are here because p2m_next_level has failed to map
>> + * the intermediate page table (e.g the table does not exist
>> + * and they p2m tree is read-only).
> I thought I commented on this or something similar already: Calling the
> p2m tree "read-only" is imo misleading.
I will change then "read-only" to "not allocatable".
>
>> It is a valid case
>> + * when removing a mapping as it may not exist in the
>> + * page table. In this case, just ignore it.
> I fear the "it" has no reference; aiui you mean "ignore the lookup failure",
> but the comment isn't worded to refer to that by "it".
I will update the comment correspondingly.
>
>> + */
>> + rc = removing_mapping ? 0 : rc;
>> + goto out;
>> + }
>> +
>> + if ( rc != P2M_TABLE_NORMAL )
>> + break;
>> + }
>> +
>> + entry = table + offsets[level];
>> +
>> + /*
>> + * If we are here with level > target, we must be at a leaf node,
>> + * and we need to break up the superpage.
>> + */
>> + if ( level > target )
>> + {
>> + panic("Shattering isn't implemented\n");
>> + }
>> +
>> + /*
>> + * We should always be there with the correct level because all the
>> + * intermediate tables have been installed if necessary.
>> + */
>> + ASSERT(level == target);
>> +
>> + orig_pte = *entry;
>> +
>> + if ( removing_mapping )
>> + p2m_clean_pte(entry, p2m->clean_dcache);
>> + else
>> + {
>> + pte_t pte = p2m_pte_from_mfn(mfn, t);
>> +
>> + p2m_write_pte(entry, pte, p2m->clean_dcache);
>> +
>> + p2m->max_mapped_gfn = gfn_max(p2m->max_mapped_gfn,
>> + gfn_add(gfn, BIT(page_order, UL) - 1));
>> + p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, gfn);
>> + }
>> +
>> + p2m->need_flush = true;
>> +
>> + /*
>> + * Currently, the infrastructure required to enable CONFIG_HAS_PASSTHROUGH
>> + * is not ready for RISC-V support.
>> + *
>> + * When CONFIG_HAS_PASSTHROUGH=y, iommu_iotlb_flush() should be done
>> + * here.
>> + */
>> +#ifdef CONFIG_HAS_PASSTHROUGH
>> +# error "add code to flush IOMMU TLB"
>> +#endif
>> +
>> + rc = 0;
>> +
>> + /*
>> + * Free the entry only if the original pte was valid and the base
>> + * is different (to avoid freeing when permission is changed).
>> + *
>> + * If previously MFN 0 was mapped and it is going to be removed
>> + * and considering that during removing MFN 0 is used then `entry`
>> + * and `new_entry` will be the same and p2m_free_subtree() won't be
>> + * called. This case is handled explicitly.
>> + */
>> + if ( pte_is_valid(orig_pte) &&
>> + (!mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte)) ||
>> + (removing_mapping && mfn_eq(pte_get_mfn(*entry), _mfn(0)))) )
>> + p2m_free_subtree(p2m, orig_pte, level);
> I continue to fail to understand why the MFN would matter here.
My understanding is that if, for the same GFN, the MFN changes from MFN_1 to
MFN_2, then we need to update any references on the page referenced by
orig_pte to ensure the proper reference counter is maintained for the page
pointed to by MFN_1.
> Isn't the
> need to free strictly tied to a VALID -> NOT VALID transition? A permission
> change simply retains the VALID state of an entry.
It covers a case when removing happens and probably in this case we don't need
to check specifically for mfn(0) case "mfn_eq(pte_get_mfn(*entry), _mfn(0))",
but it would be enough to check that pte_is_valid(entry) instead:
...
(removing_mapping && !pte_is_valid(entry)))) )
Or only check removing_mapping variable as `entry` would be invalidated by the
code above anyway. So we will get:
+ if ( pte_is_valid(orig_pte) &&
+ (!mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte)) || removing_mapping) )
+ p2m_free_subtree(p2m, orig_pte, level);
Does it make sense now?
~ Oleksii
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v4 10/18] xen/riscv: implement p2m_set_range()
2025-09-25 20:08 ` Oleksii Kurochko
@ 2025-09-26 7:07 ` Jan Beulich
2025-09-26 8:58 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2025-09-26 7:07 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 25.09.2025 22:08, Oleksii Kurochko wrote:
> On 9/20/25 1:36 AM, Jan Beulich wrote:
>> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>>> +static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
>>> +{
>>> + unsigned long root_table_indx;
>>> +
>>> + root_table_indx = gfn_x(gfn) >> P2M_LEVEL_ORDER(P2M_ROOT_LEVEL);
>>> + if ( root_table_indx >= P2M_ROOT_PAGES )
>>> + return NULL;
>>> +
>>> + /*
>>> + * The P2M root page table is extended by 2 bits, making its size 16KB
>>> + * (instead of 4KB for non-root page tables). Therefore, p2m->root is
>>> + * allocated as four consecutive 4KB pages (since alloc_domheap_pages()
>>> + * only allocates 4KB pages).
>>> + *
>>> + * To determine which of these four 4KB pages the root_table_indx falls
>>> + * into, we divide root_table_indx by
>>> + * P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL - 1).
>>> + */
>>> + root_table_indx /= P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL - 1);
>> The subtraction of 1 here feels odd: You're after the root table's
>> number of entries, i.e. I'd expect you to pass just P2M_ROOT_LEVEL.
>> And the way P2M_PAGETABLE_ENTRIES() works also suggests so.
>
> The purpose of this line is to select the page within the root table, which
> consists of 4 consecutive pages. However, P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL)
> returns 2048, so root_table_idx will always be 0 after division, which is not
> what we want.
>
> As an alternative, P2M_PAGETABLE_ENTRIES(0) could be used, since it always
> returns 512. Dividing root_table_idx by 512 then yields the index of the page
> within the root table, which is made up of 4 consecutive pages.
>
> Does it make sense now?
Yes and no. I understand what you're after, but that doesn't make the use of
P2M_PAGETABLE_ENTRIES() (with an arbitrary level as argument) correct. This
calculation wants doing by solely using properties of the top level.
> The problem may occur with DECLARE_OFFSET(), which can produce an incorrect
> index within the root page table. Since the index is in the range [0, 2047],
> it becomes an issue if the value is greater than 511, because DECLARE_OFFSET()
> does not account for the larger range of the root index.
>
> I am not sure whether it is better to make DECLARE_OFFSET() generic enough
> for both P2M and Xen page tables, or to provide a separate P2M_DECLARE_OFFSET()
> and use it only in P2M-related code.
> Also, it could be an option to move DECLARE_OFFSET() from asm/page.h header
> to riscv/pt.c and define another one DECLARE_OFFSETS in riscv/p2m.c.
>
> Do you have a preference?
Not really, no. I don't like DECLARE_OFFSETS() anyway.
>>> +static int p2m_set_entry(struct p2m_domain *p2m,
>>> + gfn_t gfn,
>>> + unsigned long page_order,
>>> + mfn_t mfn,
>>> + p2m_type_t t)
>> Nit: Indentation.
>>
>>> +{
>>> + unsigned int level;
>>> + unsigned int target = page_order / PAGETABLE_ORDER;
>>> + pte_t *entry, *table, orig_pte;
>>> + int rc;
>>> + /*
>>> + * A mapping is removed only if the MFN is explicitly set to INVALID_MFN.
>>> + * Other MFNs that are considered invalid by mfn_valid() (e.g., MMIO)
>>> + * are still allowed.
>>> + */
>>> + bool removing_mapping = mfn_eq(mfn, INVALID_MFN);
>>> + DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
>>> +
>>> + ASSERT(p2m_is_write_locked(p2m));
>>> +
>>> + /*
>>> + * Check if the level target is valid: we only support
>>> + * 4K - 2M - 1G mapping.
>>> + */
>>> + ASSERT(target <= 2);
>>> +
>>> + table = p2m_get_root_pointer(p2m, gfn);
>>> + if ( !table )
>>> + return -EINVAL;
>>> +
>>> + for ( level = P2M_ROOT_LEVEL; level > target; level-- )
>>> + {
>>> + /*
>>> + * Don't try to allocate intermediate page table if the mapping
>>> + * is about to be removed.
>>> + */
>>> + rc = p2m_next_level(p2m, !removing_mapping,
>>> + level, &table, offsets[level]);
>>> + if ( (rc == P2M_TABLE_MAP_NONE) || (rc == P2M_TABLE_MAP_NOMEM) )
>>> + {
>>> + rc = (rc == P2M_TABLE_MAP_NONE) ? -ENOENT : -ENOMEM;
>>> + /*
>>> + * We are here because p2m_next_level has failed to map
>>> + * the intermediate page table (e.g the table does not exist
>>> + * and they p2m tree is read-only).
>> I thought I commented on this or something similar already: Calling the
>> p2m tree "read-only" is imo misleading.
>
> I will change then "read-only" to "not allocatable".
That'll be only marginally better: What's "allocatable"? Why not something
like "... does not exist and none should be allocated"? Or maybe simply
omit this part of the comment?
>>> + /*
>>> + * Free the entry only if the original pte was valid and the base
>>> + * is different (to avoid freeing when permission is changed).
>>> + *
>>> + * If previously MFN 0 was mapped and it is going to be removed
>>> + * and considering that during removing MFN 0 is used then `entry`
>>> + * and `new_entry` will be the same and p2m_free_subtree() won't be
>>> + * called. This case is handled explicitly.
>>> + */
>>> + if ( pte_is_valid(orig_pte) &&
>>> + (!mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte)) ||
>>> + (removing_mapping && mfn_eq(pte_get_mfn(*entry), _mfn(0)))) )
>>> + p2m_free_subtree(p2m, orig_pte, level);
>> I continue to fail to understand why the MFN would matter here.
>
> My understanding is that if, for the same GFN, the MFN changes from MFN_1 to
> MFN_2, then we need to update any references on the page referenced by
> orig_pte to ensure the proper reference counter is maintained for the page
> pointed to by MFN_1.
>
>> Isn't the
>> need to free strictly tied to a VALID -> NOT VALID transition? A permission
>> change simply retains the VALID state of an entry.
>
> It covers a case when removing happens and probably in this case we don't need
> to check specifically for mfn(0) case "mfn_eq(pte_get_mfn(*entry), _mfn(0))",
> but it would be enough to check that pte_is_valid(entry) instead:
> ...
> (removing_mapping && !pte_is_valid(entry)))) )
>
> Or only check removing_mapping variable as `entry` would be invalided by the
> code above anyway. So we will get:
> + if ( pte_is_valid(orig_pte) &&
> + (!mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte)) || removing_mapping) )
> + p2m_free_subtree(p2m, orig_pte, level);
>
> Does it make sense now?
Not really, sorry. Imo the complicated condition indicates that something is
wrong (or at least inefficient) here.
Jan
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v4 10/18] xen/riscv: implement p2m_set_range()
2025-09-26 7:07 ` Jan Beulich
@ 2025-09-26 8:58 ` Oleksii Kurochko
2025-10-13 11:59 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-26 8:58 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 9/26/25 9:07 AM, Jan Beulich wrote:
> On 25.09.2025 22:08, Oleksii Kurochko wrote:
>> On 9/20/25 1:36 AM, Jan Beulich wrote:
>>> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>>>> +static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
>>>> +{
>>>> + unsigned long root_table_indx;
>>>> +
>>>> + root_table_indx = gfn_x(gfn) >> P2M_LEVEL_ORDER(P2M_ROOT_LEVEL);
>>>> + if ( root_table_indx >= P2M_ROOT_PAGES )
>>>> + return NULL;
>>>> +
>>>> + /*
>>>> + * The P2M root page table is extended by 2 bits, making its size 16KB
>>>> + * (instead of 4KB for non-root page tables). Therefore, p2m->root is
>>>> + * allocated as four consecutive 4KB pages (since alloc_domheap_pages()
>>>> + * only allocates 4KB pages).
>>>> + *
>>>> + * To determine which of these four 4KB pages the root_table_indx falls
>>>> + * into, we divide root_table_indx by
>>>> + * P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL - 1).
>>>> + */
>>>> + root_table_indx /= P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL - 1);
>>> The subtraction of 1 here feels odd: You're after the root table's
>>> number of entries, i.e. I'd expect you to pass just P2M_ROOT_LEVEL.
>>> And the way P2M_PAGETABLE_ENTRIES() works also suggests so.
>> The purpose of this line is to select the page within the root table, which
>> consists of 4 consecutive pages. However, P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL)
>> returns 2048, so root_table_idx will always be 0 after division, which is not
>> what we want.
>>
>> As an alternative, P2M_PAGETABLE_ENTRIES(0) could be used, since it always
>> returns 512. Dividing root_table_idx by 512 then yields the index of the page
>> within the root table, which is made up of 4 consecutive pages.
>>
>> Does it make sense now?
> Yes and no. I understand what you're after, but that doesn't make the use of
> P2M_PAGETABLE_ENTRIES() (with an arbitrary level as argument) correct. This
> calculation wants doing by solely using properties of the top level.
Got it, thanks. Then I will use solely properties of the top level.
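E.g., a minimal sketch of what I have in mind, reusing only macros already
present in the patch and assuming the root table always spans P2M_ROOT_PAGES
consecutive pages:

    root_table_indx /= P2M_PAGETABLE_ENTRIES(P2M_ROOT_LEVEL) / P2M_ROOT_PAGES;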
>>>> +static int p2m_set_entry(struct p2m_domain *p2m,
>>>> + gfn_t gfn,
>>>> + unsigned long page_order,
>>>> + mfn_t mfn,
>>>> + p2m_type_t t)
>>> Nit: Indentation.
>>>
>>>> +{
>>>> + unsigned int level;
>>>> + unsigned int target = page_order / PAGETABLE_ORDER;
>>>> + pte_t *entry, *table, orig_pte;
>>>> + int rc;
>>>> + /*
>>>> + * A mapping is removed only if the MFN is explicitly set to INVALID_MFN.
>>>> + * Other MFNs that are considered invalid by mfn_valid() (e.g., MMIO)
>>>> + * are still allowed.
>>>> + */
>>>> + bool removing_mapping = mfn_eq(mfn, INVALID_MFN);
>>>> + DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
>>>> +
>>>> + ASSERT(p2m_is_write_locked(p2m));
>>>> +
>>>> + /*
>>>> + * Check if the level target is valid: we only support
>>>> + * 4K - 2M - 1G mapping.
>>>> + */
>>>> + ASSERT(target <= 2);
>>>> +
>>>> + table = p2m_get_root_pointer(p2m, gfn);
>>>> + if ( !table )
>>>> + return -EINVAL;
>>>> +
>>>> + for ( level = P2M_ROOT_LEVEL; level > target; level-- )
>>>> + {
>>>> + /*
>>>> + * Don't try to allocate intermediate page table if the mapping
>>>> + * is about to be removed.
>>>> + */
>>>> + rc = p2m_next_level(p2m, !removing_mapping,
>>>> + level, &table, offsets[level]);
>>>> + if ( (rc == P2M_TABLE_MAP_NONE) || (rc == P2M_TABLE_MAP_NOMEM) )
>>>> + {
>>>> + rc = (rc == P2M_TABLE_MAP_NONE) ? -ENOENT : -ENOMEM;
>>>> + /*
>>>> + * We are here because p2m_next_level has failed to map
>>>> + * the intermediate page table (e.g the table does not exist
>>>> + * and they p2m tree is read-only).
>>> I thought I commented on this or something similar already: Calling the
>>> p2m tree "read-only" is imo misleading.
>> I will change then "read-only" to "not allocatable".
> That'll be only marginally better: What's "allocatable"? Why not something
> like "... does not exist and none should be allocated"? Or maybe simply
> omit this part of the comment?
Agree, "allocatable" could be also confusing. Perhaps, just omitting will
be fine.
>
>>>> + /*
>>>> + * Free the entry only if the original pte was valid and the base
>>>> + * is different (to avoid freeing when permission is changed).
>>>> + *
>>>> + * If previously MFN 0 was mapped and it is going to be removed
>>>> + * and considering that during removing MFN 0 is used then `entry`
>>>> + * and `new_entry` will be the same and p2m_free_subtree() won't be
>>>> + * called. This case is handled explicitly.
>>>> + */
>>>> + if ( pte_is_valid(orig_pte) &&
>>>> + (!mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte)) ||
>>>> + (removing_mapping && mfn_eq(pte_get_mfn(*entry), _mfn(0)))) )
>>>> + p2m_free_subtree(p2m, orig_pte, level);
>>> I continue to fail to understand why the MFN would matter here.
>> My understanding is that if, for the same GFN, the MFN changes from MFN_1 to
>> MFN_2, then we need to update any references on the page referenced by
>> orig_pte to ensure the proper reference counter is maintained for the page
>> pointed to by MFN_1.
>>
>>> Isn't the
>>> need to free strictly tied to a VALID -> NOT VALID transition? A permission
>>> change simply retains the VALID state of an entry.
>> It covers a case when removing happens and probably in this case we don't need
>> to check specifically for mfn(0) case "mfn_eq(pte_get_mfn(*entry), _mfn(0))",
>> but it would be enough to check that pte_is_valid(entry) instead:
>> ...
>> (removing_mapping && !pte_is_valid(entry)))) )
>>
>> Or only check removing_mapping variable as `entry` would be invalidated by the
>> code above anyway. So we will get:
>> + if ( pte_is_valid(orig_pte) &&
>> + (!mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte)) || removing_mapping) )
>> + p2m_free_subtree(p2m, orig_pte, level);
>>
>> Does it make sense now?
> Not really, sorry. Imo the complicated condition indicates that something is
> wrong (or at least inefficient) here.
Then, in the case of a VALID -> VALID transition, where the MFN is changed
for the same PTE, should something be done with the old MFN (e.g., calling
p2m_put_page() for it), or can freeing the old MFN be delayed until
domain_relinquish_resources() is called? If so, wouldn’t that lead to a
situation where many old MFNs accumulate and cannot be re-used until
domain_relinquish_resources() (or another function that explicitly frees
pages) is invoked?
If we only need to care about the VALID -> NOT VALID transition, doesn’t
that mean p2m_free_subtree() should be called only when a removal actually
occurs?
~ Oleksii
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v4 10/18] xen/riscv: implement p2m_set_range()
2025-09-26 8:58 ` Oleksii Kurochko
@ 2025-10-13 11:59 ` Oleksii Kurochko
0 siblings, 0 replies; 62+ messages in thread
From: Oleksii Kurochko @ 2025-10-13 11:59 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 9/26/25 10:58 AM, Oleksii Kurochko wrote:
>>>>> + /*
>>>>> + * Free the entry only if the original pte was valid and the base
>>>>> + * is different (to avoid freeing when permission is changed).
>>>>> + *
>>>>> + * If previously MFN 0 was mapped and it is going to be removed
>>>>> + * and considering that during removing MFN 0 is used then `entry`
>>>>> + * and `new_entry` will be the same and p2m_free_subtree() won't be
>>>>> + * called. This case is handled explicitly.
>>>>> + */
>>>>> + if ( pte_is_valid(orig_pte) &&
>>>>> + (!mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte)) ||
>>>>> + (removing_mapping && mfn_eq(pte_get_mfn(*entry), _mfn(0)))) )
>>>>> + p2m_free_subtree(p2m, orig_pte, level);
>>>> I continue to fail to understand why the MFN would matter here.
>>> My understanding is that if, for the same GFN, the MFN changes from MFN_1 to
>>> MFN_2, then we need to update any references on the page referenced by
>>> orig_pte to ensure the proper reference counter is maintained for the page
>>> pointed to by MFN_1.
>>>
>>>> Isn't the
>>>> need to free strictly tied to a VALID -> NOT VALID transition? A permission
>>>> change simply retains the VALID state of an entry.
>>> It covers a case when removing happens and probably in this case we don't need
>>> to check specifically for mfn(0) case "mfn_eq(pte_get_mfn(*entry), _mfn(0))",
>>> but it would be enough to check that pte_is_valid(entry) instead:
>>> ...
>>> (removing_mapping && !pte_is_valid(entry)))) )
>>>
>>> Or only check removing_mapping variable as `entry` would be invalidated by the
>>> code above anyway. So we will get:
>>> + if ( pte_is_valid(orig_pte) &&
>>> + (!mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte)) || removing_mapping) )
>>> + p2m_free_subtree(p2m, orig_pte, level);
>>>
>>> Does it make sense now?
>> Not really, sorry. Imo the complicated condition indicates that something is
>> wrong (or at least inefficient) here.
> Then, in the case of a VALID -> VALID transition, where the MFN is changed
> for the same PTE, should something be done with the old MFN (e.g., calling
> p2m_put_page() for it), or can freeing the old MFN be delayed until
> domain_relinquish_resources() is called? If so, wouldn’t that lead to a
> situation where many old MFNs accumulate and cannot be re-used until
> domain_relinquish_resources() (or another function that explicitly frees
> pages) is invoked?
> If we only need to care about the VALID -> NOT VALID transition, doesn’t
> that mean p2m_free_subtree() should be called only when a removal actually
> occurs?
I've decided to "simplify" the original condition to:
/*
* In case of a VALID -> INVALID transition, the original PTE should
* always be freed.
*
* In case of a VALID -> VALID transition, the original PTE should be
* freed only if the MFNs are different. If the MFNs are the same
* (i.e., only permissions differ), there is no need to free the
* original PTE.
*/
if ( pte_is_valid(orig_pte) &&
(!pte_is_valid(*entry) ||
!mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte))) )
{
I hope this makes more sense.
~ Oleksii
^ permalink raw reply [flat|nested] 62+ messages in thread
* [PATCH v4 11/18] xen/riscv: Implement p2m_free_subtree() and related helpers
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
` (9 preceding siblings ...)
2025-09-17 21:55 ` [PATCH v4 10/18] xen/riscv: implement p2m_set_range() Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-19 23:57 ` Jan Beulich
2025-09-17 21:55 ` [PATCH v4 12/18] xen/riscv: Implement p2m_pte_from_mfn() and support PBMT configuration Oleksii Kurochko
` (6 subsequent siblings)
17 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
This patch introduces a working implementation of p2m_free_subtree() for RISC-V
based on ARM's implementation of p2m_free_entry(), enabling proper cleanup
of page table entries in the P2M (physical-to-machine) mapping.
Only a few things are changed:
- Introduce and use p2m_get_type() to get a type of p2m entry as
RISC-V's PTE doesn't have enough space to store all necessary types so
a type is stored outside PTE. But, at the moment, handle only types
which fit into PTE's bits.
Key additions include:
- p2m_free_subtree(): Recursively frees page table entries at all levels. It
handles both regular and superpage mappings and ensures that TLB entries
are flushed before freeing intermediate tables.
- p2m_put_page() and helpers:
- p2m_put_4k_page(): Clears GFN from xenheap pages if applicable.
- p2m_put_2m_superpage(): Releases foreign page references in a 2MB
superpage.
- p2m_get_type(): Extracts the stored p2m_type from the PTE bits.
- p2m_free_page(): Returns a page to a domain's freelist.
- Introduce p2m_is_foreign() and the things connected to it.
Defines XEN_PT_ENTRIES in asm/page.h to simplify loops over page table
entries.
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- Stray blanks.
- Implement arch_flush_tlb_mask() to make the comment in p2m_put_foreign()
clear and explicit.
- Update the comment above p2m_is_ram() in p2m_put_4k_page() with an explanation
why p2m_is_ram() is used.
- Add a type check inside p2m_put_2m_superpage().
- Swap two conditions around in p2m_free_subtree():
if ( (level == 0) || pte_is_superpage(entry, level) )
- Add ASSERT() inside p2m_free_subtree() to check that level is <= 2; otherwise,
it could consume a lot of time and big memory usage because of recursion.
- Drop page_list_del() before p2m_free_page() as page_list_del() is called
inside p2m_free_page().
- Update p2m_freelist's total_pages when a page is added to p2m_freelist in
paging_free_page().
- Introduce P2M_SUPPORTED_LEVEL_MAPPING and use it in ASSERTs() which check
supported level.
- Use P2M_PAGETABLE_ENTRIES as XEN_PT_ENTRIES
doesn't takeinto into acount that G stage root page table is
extended by 2 bits.
- Update prototype of p2m_put_page() to not have unnecessary changes later.
---
Changes in V3:
- Use p2m_tlb_flush_sync(p2m) instead of p2m_force_tlb_flush_sync() in
p2m_free_subtree().
- Drop p2m_is_valid() implementation as pte_is_valid() is going to be used
instead.
- Drop p2m_is_superpage() and introduce pte_is_superpage() instead.
- s/p2m_free_entry/p2m_free_subtree.
- s/p2m_type_radix_get/p2m_get_type.
- Update implementation of p2m_get_type() to get type both from PTE bits,
other cases will be covered in a separate patch. This requires an
introduction of new P2M_TYPE_PTE_BITS_MASK macros.
- Drop p2m argument of p2m_get_type() as it isn't needed anymore.
- Put cheapest checks first in p2m_is_superpage().
- Use switch() in p2m_put_page().
- Update the comment in p2m_put_foreign_page().
- Code style fixes.
- Move p2m_foreign stuff to this commit.
- Drop p2m argument of p2m_put_page() as it isn't used anymore.
---
Changes in V2:
- New patch. It was part of a big patch "xen/riscv: implement p2m mapping
functionality" which was split into smaller ones.
- s/p2m_is_superpage/p2me_is_superpage.
---
xen/arch/riscv/include/asm/flushtlb.h | 6 +-
xen/arch/riscv/include/asm/p2m.h | 18 ++-
xen/arch/riscv/include/asm/page.h | 6 +
xen/arch/riscv/include/asm/paging.h | 2 +
xen/arch/riscv/p2m.c | 152 +++++++++++++++++++++++++-
xen/arch/riscv/paging.c | 8 ++
xen/arch/riscv/stubs.c | 5 -
7 files changed, 187 insertions(+), 10 deletions(-)
diff --git a/xen/arch/riscv/include/asm/flushtlb.h b/xen/arch/riscv/include/asm/flushtlb.h
index e70badae0c..ab32311568 100644
--- a/xen/arch/riscv/include/asm/flushtlb.h
+++ b/xen/arch/riscv/include/asm/flushtlb.h
@@ -41,8 +41,10 @@ static inline void page_set_tlbflush_timestamp(struct page_info *page)
BUG_ON("unimplemented");
}
-/* Flush specified CPUs' TLBs */
-void arch_flush_tlb_mask(const cpumask_t *mask);
+static inline void arch_flush_tlb_mask(const cpumask_t *mask)
+{
+ sbi_remote_hfence_gvma(mask, 0, 0);
+}
#endif /* ASM__RISCV__FLUSHTLB_H */
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 1a43736855..29685c7852 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -106,6 +106,8 @@ typedef enum {
p2m_mmio_direct_io, /* Read/write mapping of genuine Device MMIO area,
PTE_PBMT_IO will be used for such mappings */
p2m_ext_storage, /* Following types'll be stored outsude PTE bits: */
+ p2m_map_foreign_rw, /* Read/write RAM pages from foreign domain */
+ p2m_map_foreign_ro, /* Read-only RAM pages from foreign domain */
/* Sentinel — not a real type, just a marker for comparison */
p2m_first_external = p2m_ext_storage,
@@ -116,15 +118,29 @@ static inline p2m_type_t arch_dt_passthrough_p2m_type(void)
return p2m_mmio_direct_io;
}
+/*
+ * Bits 8 and 9 are reserved for use by supervisor software;
+ * the implementation shall ignore this field.
+ * We are going to use to save in these bits frequently used types to avoid
+ * get/set of a type from radix tree.
+ */
+#define P2M_TYPE_PTE_BITS_MASK 0x300
+
/* We use bitmaps and mask to handle groups of types */
#define p2m_to_mask(t) BIT(t, UL)
/* RAM types, which map to real machine frames */
#define P2M_RAM_TYPES (p2m_to_mask(p2m_ram_rw))
+/* Foreign mappings types */
+#define P2M_FOREIGN_TYPES (p2m_to_mask(p2m_map_foreign_rw) | \
+ p2m_to_mask(p2m_map_foreign_ro))
+
/* Useful predicates */
#define p2m_is_ram(t_) (p2m_to_mask(t_) & P2M_RAM_TYPES)
-#define p2m_is_any_ram(t_) (p2m_to_mask(t_) & P2M_RAM_TYPES)
+#define p2m_is_any_ram(t_) (p2m_to_mask(t_) & \
+ (P2M_RAM_TYPES | P2M_FOREIGN_TYPES))
+#define p2m_is_foreign(t) (p2m_to_mask(t) & P2M_FOREIGN_TYPES)
#include <xen/p2m-common.h>
diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
index 66cb192316..cb303af0c0 100644
--- a/xen/arch/riscv/include/asm/page.h
+++ b/xen/arch/riscv/include/asm/page.h
@@ -20,6 +20,7 @@
#define XEN_PT_LEVEL_SIZE(lvl) (_AT(paddr_t, 1) << XEN_PT_LEVEL_SHIFT(lvl))
#define XEN_PT_LEVEL_MAP_MASK(lvl) (~(XEN_PT_LEVEL_SIZE(lvl) - 1))
#define XEN_PT_LEVEL_MASK(lvl) (VPN_MASK << XEN_PT_LEVEL_SHIFT(lvl))
+#define XEN_PT_ENTRIES (_AT(unsigned int, 1) << PAGETABLE_ORDER)
/*
* PTE format:
@@ -182,6 +183,11 @@ static inline bool pte_is_mapping(pte_t p)
return (p.pte & PTE_VALID) && (p.pte & PTE_ACCESS_MASK);
}
+static inline bool pte_is_superpage(pte_t p, unsigned int level)
+{
+ return (level > 0) && pte_is_mapping(p);
+}
+
static inline int clean_and_invalidate_dcache_va_range(const void *p,
unsigned long size)
{
diff --git a/xen/arch/riscv/include/asm/paging.h b/xen/arch/riscv/include/asm/paging.h
index befad14f82..9712aa77c5 100644
--- a/xen/arch/riscv/include/asm/paging.h
+++ b/xen/arch/riscv/include/asm/paging.h
@@ -13,4 +13,6 @@ int paging_freelist_adjust(struct domain *d, unsigned long pages,
int paging_ret_pages_to_domheap(struct domain *d, unsigned int nr_pages);
int paging_ret_pages_to_freelist(struct domain *d, unsigned int nr_pages);
+void paging_free_page(struct domain *d, struct page_info *pg);
+
#endif /* ASM_RISCV_PAGING_H */
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index db9f7a77ff..10acfa0a9c 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -98,6 +98,8 @@ void __init gstage_mode_detect(void)
local_hfence_gvma_all();
}
+#define P2M_SUPPORTED_LEVEL_MAPPING 2
+
/*
* Force a synchronous P2M TLB flush.
*
@@ -286,6 +288,16 @@ static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
return __map_domain_page(p2m->root + root_table_indx);
}
+static p2m_type_t p2m_get_type(const pte_t pte)
+{
+ p2m_type_t type = MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK);
+
+ if ( type == p2m_ext_storage )
+ panic("unimplemented\n");
+
+ return type;
+}
+
static inline void p2m_write_pte(pte_t *p, pte_t pte, bool clean_pte)
{
write_pte(p, pte);
@@ -342,11 +354,147 @@ static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
return P2M_TABLE_MAP_NONE;
}
+static void p2m_put_foreign_page(struct page_info *pg)
+{
+ /*
+ * It’s safe to call put_page() here because arch_flush_tlb_mask()
+ * will be invoked if the page is reallocated before the end of
+ * this loop, which will trigger a flush of the guest TLBs.
+ */
+ put_page(pg);
+}
+
+/* Put any references on the single 4K page referenced by mfn. */
+static void p2m_put_4k_page(mfn_t mfn, p2m_type_t type)
+{
+ /* TODO: Handle other p2m types */
+
+ if ( p2m_is_foreign(type) )
+ {
+ ASSERT(mfn_valid(mfn));
+ p2m_put_foreign_page(mfn_to_page(mfn));
+ }
+}
+
+/* Put any references on the superpage referenced by mfn. */
+static void p2m_put_2m_superpage(mfn_t mfn, p2m_type_t type)
+{
+ struct page_info *pg;
+ unsigned int i;
+
+ /*
+ * TODO: Handle other p2m types, but be aware that any changes to handle
+ * different types should require an update on the relinquish code to
+ * handle preemption.
+ */
+ if ( !p2m_is_foreign(type) )
+ return;
+
+ ASSERT(mfn_valid(mfn));
+
+ pg = mfn_to_page(mfn);
+
+ for ( i = 0; i < P2M_PAGETABLE_ENTRIES(1); i++, pg++ )
+ p2m_put_foreign_page(pg);
+}
+
+/* Put any references on the page referenced by pte. */
+static void p2m_put_page(const pte_t pte, unsigned int level, p2m_type_t p2mt)
+{
+ mfn_t mfn = pte_get_mfn(pte);
+
+ ASSERT(pte_is_valid(pte));
+
+ /*
+ * TODO: Currently we don't handle level 2 super-page, Xen is not
+ * preemptible and therefore some work is needed to handle such
+ * superpages, for which at some point Xen might end up freeing memory
+ * and therefore for such a big mapping it could end up in a very long
+ * operation.
+ */
+ switch ( level )
+ {
+ case 1:
+ return p2m_put_2m_superpage(mfn, p2mt);
+
+ case 0:
+ return p2m_put_4k_page(mfn, p2mt);
+
+ default:
+ assert_failed("Unsupported level");
+ break;
+ }
+}
+
+static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg)
+{
+ page_list_del(pg, &p2m->pages);
+
+ paging_free_page(p2m->domain, pg);
+}
+
/* Free pte sub-tree behind an entry */
static void p2m_free_subtree(struct p2m_domain *p2m,
pte_t entry, unsigned int level)
{
- panic("%s: hasn't been implemented yet\n", __func__);
+ unsigned int i;
+ pte_t *table;
+ mfn_t mfn;
+ struct page_info *pg;
+
+ /*
+ * Check if the level is valid: only 4K - 2M - 1G mappings are supported.
+ * To support levels > 2, the implementation of p2m_free_subtree() would
+ * need to be updated, as the current recursive approach could consume
+ * excessive time and memory.
+ */
+ ASSERT(level <= P2M_SUPPORTED_LEVEL_MAPPING);
+
+ /* Nothing to do if the entry is invalid. */
+ if ( !pte_is_valid(entry) )
+ return;
+
+ if ( (level == 0) || pte_is_superpage(entry, level) )
+ {
+ p2m_type_t p2mt = p2m_get_type(entry);
+
+#ifdef CONFIG_IOREQ_SERVER
+ /*
+ * If this gets called then either the entry was replaced by an entry
+ * with a different base (valid case) or the shattering of a superpage
+ * has failed (error case).
+ * So, at worst, the spurious mapcache invalidation might be sent.
+ */
+ if ( p2m_is_ram(p2m_get_type(p2m, entry)) &&
+ domain_has_ioreq_server(p2m->domain) )
+ ioreq_request_mapcache_invalidate(p2m->domain);
+#endif
+
+ p2m_put_page(entry, level, p2mt);
+
+ return;
+ }
+
+ table = map_domain_page(pte_get_mfn(entry));
+ for ( i = 0; i < P2M_PAGETABLE_ENTRIES(level); i++ )
+ p2m_free_subtree(p2m, table[i], level - 1);
+
+ unmap_domain_page(table);
+
+ /*
+ * Make sure all the references in the TLB have been removed before
+ * freing the intermediate page table.
+ * XXX: Should we defer the free of the page table to avoid the
+ * flush?
+ */
+ p2m_tlb_flush_sync(p2m);
+
+ mfn = pte_get_mfn(entry);
+ ASSERT(mfn_valid(mfn));
+
+ pg = mfn_to_page(mfn);
+
+ p2m_free_page(p2m, pg);
}
/*
@@ -377,7 +525,7 @@ static int p2m_set_entry(struct p2m_domain *p2m,
* Check if the level target is valid: we only support
* 4K - 2M - 1G mapping.
*/
- ASSERT(target <= 2);
+ ASSERT(target <= P2M_SUPPORTED_LEVEL_MAPPING);
table = p2m_get_root_pointer(p2m, gfn);
if ( !table )
diff --git a/xen/arch/riscv/paging.c b/xen/arch/riscv/paging.c
index ed537fee07..049b850e03 100644
--- a/xen/arch/riscv/paging.c
+++ b/xen/arch/riscv/paging.c
@@ -107,6 +107,14 @@ int paging_ret_pages_to_domheap(struct domain *d, unsigned int nr_pages)
return 0;
}
+void paging_free_page(struct domain *d, struct page_info *pg)
+{
+ spin_lock(&d->arch.paging.lock);
+ page_list_add_tail(pg, &d->arch.paging.freelist);
+ ACCESS_ONCE(d->arch.paging.total_pages)++;
+ spin_unlock(&d->arch.paging.lock);
+}
+
/* Domain paging struct initialization. */
int paging_domain_init(struct domain *d)
{
diff --git a/xen/arch/riscv/stubs.c b/xen/arch/riscv/stubs.c
index 1a8c86cd8d..ad6fdbf501 100644
--- a/xen/arch/riscv/stubs.c
+++ b/xen/arch/riscv/stubs.c
@@ -65,11 +65,6 @@ int arch_monitor_domctl_event(struct domain *d,
/* smp.c */
-void arch_flush_tlb_mask(const cpumask_t *mask)
-{
- BUG_ON("unimplemented");
-}
-
void smp_send_event_check_mask(const cpumask_t *mask)
{
BUG_ON("unimplemented");
--
2.51.0
^ permalink raw reply related [flat|nested] 62+ messages in thread
* Re: [PATCH v4 11/18] xen/riscv: Implement p2m_free_subtree() and related helpers
2025-09-17 21:55 ` [PATCH v4 11/18] xen/riscv: Implement p2m_free_subtree() and related helpers Oleksii Kurochko
@ 2025-09-19 23:57 ` Jan Beulich
2025-09-26 15:33 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2025-09-19 23:57 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 17.09.2025 23:55, Oleksii Kurochko wrote:
> @@ -342,11 +354,147 @@ static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
> return P2M_TABLE_MAP_NONE;
> }
>
> +static void p2m_put_foreign_page(struct page_info *pg)
> +{
> + /*
> + * It’s safe to call put_page() here because arch_flush_tlb_mask()
> + * will be invoked if the page is reallocated before the end of
> + * this loop, which will trigger a flush of the guest TLBs.
> + */
> + put_page(pg);
> +}
What is "this loop" referring to in the comment? There's no loop here.
> +/* Put any references on the page referenced by pte. */
> +static void p2m_put_page(const pte_t pte, unsigned int level, p2m_type_t p2mt)
> +{
> + mfn_t mfn = pte_get_mfn(pte);
> +
> + ASSERT(pte_is_valid(pte));
> +
> + /*
> + * TODO: Currently we don't handle level 2 super-page, Xen is not
> + * preemptible and therefore some work is needed to handle such
> + * superpages, for which at some point Xen might end up freeing memory
> + * and therefore for such a big mapping it could end up in a very long
> + * operation.
> + */
> + switch ( level )
> + {
> + case 1:
> + return p2m_put_2m_superpage(mfn, p2mt);
> +
> + case 0:
> + return p2m_put_4k_page(mfn, p2mt);
> +
> + default:
> + assert_failed("Unsupported level");
I don't think assert_failed() is supposed to be used directly. What's
wrong with using ASSERT_UNREACHABLE() here?
> --- a/xen/arch/riscv/paging.c
> +++ b/xen/arch/riscv/paging.c
> @@ -107,6 +107,14 @@ int paging_ret_pages_to_domheap(struct domain *d, unsigned int nr_pages)
> return 0;
> }
>
> +void paging_free_page(struct domain *d, struct page_info *pg)
> +{
> + spin_lock(&d->arch.paging.lock);
> + page_list_add_tail(pg, &d->arch.paging.freelist);
> + ACCESS_ONCE(d->arch.paging.total_pages)++;
More a question to other REST maintainers than to you: Is this kind of
use of ACCESS_ONCE() okay? By the wording, one might assume a single
memory access, yet only x86 can actually carry out an increment (or
alike) of an item in memory in a single insn.
Jan
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v4 11/18] xen/riscv: Implement p2m_free_subtree() and related helpers
2025-09-19 23:57 ` Jan Beulich
@ 2025-09-26 15:33 ` Oleksii Kurochko
2025-09-28 14:30 ` Jan Beulich
0 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-26 15:33 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 9/20/25 1:57 AM, Jan Beulich wrote:
> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>> @@ -342,11 +354,147 @@ static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
>> return P2M_TABLE_MAP_NONE;
>> }
>>
>> +static void p2m_put_foreign_page(struct page_info *pg)
>> +{
>> + /*
>> + * It’s safe to call put_page() here because arch_flush_tlb_mask()
>> + * will be invoked if the page is reallocated before the end of
>> + * this loop, which will trigger a flush of the guest TLBs.
>> + */
>> + put_page(pg);
>> +}
> What is "this loop" referring to in the comment? There's no loop here.
The loop is inside the caller (p2m_put_2m_superpage()):
...
for ( i = 0; i < P2M_PAGETABLE_ENTRIES(1); i++, pg++ )
p2m_put_foreign_page(pg);
Agreed, that comment is pretty confusing. I am not sure it is necessary to
mention a specific loop; the comment would still be correct without
referring to "this loop". So I will rewrite the comment as:
/*
* It’s safe to call put_page() here because arch_flush_tlb_mask()
* will be invoked if the page is reallocated, which will trigger a
* flush of the guest TLBs.
*/
>
>> +/* Put any references on the page referenced by pte. */
>> +static void p2m_put_page(const pte_t pte, unsigned int level, p2m_type_t p2mt)
>> +{
>> + mfn_t mfn = pte_get_mfn(pte);
>> +
>> + ASSERT(pte_is_valid(pte));
>> +
>> + /*
>> + * TODO: Currently we don't handle level 2 super-page, Xen is not
>> + * preemptible and therefore some work is needed to handle such
>> + * superpages, for which at some point Xen might end up freeing memory
>> + * and therefore for such a big mapping it could end up in a very long
>> + * operation.
>> + */
>> + switch ( level )
>> + {
>> + case 1:
>> + return p2m_put_2m_superpage(mfn, p2mt);
>> +
>> + case 0:
>> + return p2m_put_4k_page(mfn, p2mt);
>> +
>> + default:
>> + assert_failed("Unsupported level");
> I don't think assert_failed() is supposed to be used directly. What's
> wrong with using ASSERT_UNREACHABLE() here?
Nothing, I just wanted to have some custom message. I am okay with
ASSERT_UNREACHABLE(), anyway it will print where ASSERT occurred.
>
>> --- a/xen/arch/riscv/paging.c
>> +++ b/xen/arch/riscv/paging.c
>> @@ -107,6 +107,14 @@ int paging_ret_pages_to_domheap(struct domain *d, unsigned int nr_pages)
>> return 0;
>> }
>>
>> +void paging_free_page(struct domain *d, struct page_info *pg)
>> +{
>> + spin_lock(&d->arch.paging.lock);
>> + page_list_add_tail(pg, &d->arch.paging.freelist);
>> + ACCESS_ONCE(d->arch.paging.total_pages)++;
> More a question to other REST maintainers than to you: Is this kind of
> use of ACCESS_ONCE() okay? By the wording, one might assume a single
> memory access, yet only x86 can actually carry out an increment (or
> alike) of an item in memory in a single insn.
I thought that ACCESS_ONCE() is more about preventing compiler optimizations
than about ensuring atomicity.
In this specific case, I don’t think ACCESS_ONCE() is really needed since
a spin lock is already being used.
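I.e., assuming the paging lock is what serializes all updates of total_pages,
something along these lines should be sufficient (just a sketch):

    void paging_free_page(struct domain *d, struct page_info *pg)
    {
        spin_lock(&d->arch.paging.lock);
        page_list_add_tail(pg, &d->arch.paging.freelist);
        /* Plain increment: already serialized by d->arch.paging.lock. */
        d->arch.paging.total_pages++;
        spin_unlock(&d->arch.paging.lock);
    }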
~ Oleksii
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v4 11/18] xen/riscv: Implement p2m_free_subtree() and related helpers
2025-09-26 15:33 ` Oleksii Kurochko
@ 2025-09-28 14:30 ` Jan Beulich
0 siblings, 0 replies; 62+ messages in thread
From: Jan Beulich @ 2025-09-28 14:30 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 26.09.2025 17:33, Oleksii Kurochko wrote:
> On 9/20/25 1:57 AM, Jan Beulich wrote:
>> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>>> +/* Put any references on the page referenced by pte. */
>>> +static void p2m_put_page(const pte_t pte, unsigned int level, p2m_type_t p2mt)
>>> +{
>>> + mfn_t mfn = pte_get_mfn(pte);
>>> +
>>> + ASSERT(pte_is_valid(pte));
>>> +
>>> + /*
>>> + * TODO: Currently we don't handle level 2 super-page, Xen is not
>>> + * preemptible and therefore some work is needed to handle such
>>> + * superpages, for which at some point Xen might end up freeing memory
>>> + * and therefore for such a big mapping it could end up in a very long
>>> + * operation.
>>> + */
>>> + switch ( level )
>>> + {
>>> + case 1:
>>> + return p2m_put_2m_superpage(mfn, p2mt);
>>> +
>>> + case 0:
>>> + return p2m_put_4k_page(mfn, p2mt);
>>> +
>>> + default:
>>> + assert_failed("Unsupported level");
>> I don't think assert_failed() is supposed to be used directly. What's
>> wrong with using ASSERT_UNREACHABLE() here?
>
> Nothing, I just wanted to have a custom message. I am okay with
> ASSERT_UNREACHABLE(); either way it will print where the assertion occurred.
Just fyi, the (kind of) "canonical" way of having a custom message emitted
from an assertion is ASSERT(!"<message text>").
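Applied to the switch in question, that would look like (just a sketch of the default case):

    default:
        ASSERT(!"Unsupported level");
        break;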
Jan
* [PATCH v4 12/18] xen/riscv: Implement p2m_pte_from_mfn() and support PBMT configuration
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
` (10 preceding siblings ...)
2025-09-17 21:55 ` [PATCH v4 11/18] xen/riscv: Implement p2m_free_subtree() and related helpers Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-22 16:28 ` Jan Beulich
2025-09-17 21:55 ` [PATCH v4 13/18] xen/riscv: implement p2m_next_level() Oleksii Kurochko
` (5 subsequent siblings)
17 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
This patch adds the initial logic for constructing PTEs from MFNs in the RISC-V
p2m subsystem. It includes:
- Implementation of p2m_pte_from_mfn(): Generates a valid PTE from the
given MFN and p2m_type_t, including permission encoding and PBMT attribute
setup.
- New helper p2m_set_permission(): Encodes access rights (r, w, x) into the
PTE based on both p2m type and access permissions.
- p2m_set_type(): Stores the p2m type in the PTE's bits. Storage of types
that don't fit into the PTE bits will be implemented separately later.
- Add detection of Svade extension to properly handle a possible page-fault
if A and D bits aren't set.
PBMT type encoding support:
- Introduces an enum pbmt_type to represent the PBMT field values.
- Maps p2m_mmio_direct_io to pbmt_io; other types default to pbmt_pma.
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- p2m_set_permission() updates:
- Update permissions for p2m_ram_rw case, make it also executable.
- Add permission setting for p2m_map_foreign_* types.
- Drop setting permissions for p2m_ext_storage.
- Only turn off PTE_VALID bit for p2m_invalid, don't touch other bits.
- p2m_pte_from_mfn() updates:
- Update ASSERT(), add a check that mfn isn't INVALID_MFN (1)
explicitly to avoid the case when PADDR_MASK isn't narrow enough to
catch the case (1).
- Drop the unnecessary check around the call of p2m_set_type(), as this check
is already included inside p2m_set_type().
- Introduce new p2m type p2m_first_external to detect that passed type
is stored in external storage.
- Add handling of the PTE's A and D bits in p2m_set_permission(). Also, set
the PTE_USER bit. For this, cpufeature.{h,c} were updated to be able
to detect availability of the Svade extension.
- Drop grant table related code as it isn't going to be used at the moment.
---
Changes in V3:
- s/p2m_entry_from_mfn/p2m_pte_from_mfn.
- s/pbmt_type_t/pbmt_type.
- s/pbmt_max/pbmt_count.
- s/p2m_type_radix_set/p2m_set_type.
- Rework p2m_set_type() to handle only the types which fit into the PTE's bits.
Other types will be covered separately.
Update the arguments of p2m_set_type(): there is no reason to pass p2m anymore.
- p2m_set_permissions() updates:
- Update the code in p2m_set_permission() for the p2m_ram_rw and
p2m_mmio_direct_io cases to set the proper type permissions.
- Add cases for p2m_grant_map_rw and p2m_grant_map_ro.
- Use ASSERT_UNREACHABLE() instead of BUG() in the switch cases of
p2m_set_permission().
- Add blank lines between non-fall-through case blocks in switch statements.
- Set MFN before permissions are set in p2m_pte_from_mfn().
- Update prototype of p2m_entry_from_mfn().
---
Changes in V2:
- New patch. It was part of a big patch "xen/riscv: implement p2m mapping
functionality" which was split into smaller ones.
---
xen/arch/riscv/cpufeature.c | 1 +
xen/arch/riscv/include/asm/cpufeature.h | 1 +
xen/arch/riscv/include/asm/page.h | 8 ++
xen/arch/riscv/p2m.c | 97 ++++++++++++++++++++++++-
4 files changed, 103 insertions(+), 4 deletions(-)
diff --git a/xen/arch/riscv/cpufeature.c b/xen/arch/riscv/cpufeature.c
index b846a106a3..02b68aeaa4 100644
--- a/xen/arch/riscv/cpufeature.c
+++ b/xen/arch/riscv/cpufeature.c
@@ -138,6 +138,7 @@ const struct riscv_isa_ext_data __initconst riscv_isa_ext[] = {
RISCV_ISA_EXT_DATA(zbs),
RISCV_ISA_EXT_DATA(smaia),
RISCV_ISA_EXT_DATA(ssaia),
+ RISCV_ISA_EXT_DATA(svade),
RISCV_ISA_EXT_DATA(svpbmt),
};
diff --git a/xen/arch/riscv/include/asm/cpufeature.h b/xen/arch/riscv/include/asm/cpufeature.h
index 768b84b769..5f756c76db 100644
--- a/xen/arch/riscv/include/asm/cpufeature.h
+++ b/xen/arch/riscv/include/asm/cpufeature.h
@@ -37,6 +37,7 @@ enum riscv_isa_ext_id {
RISCV_ISA_EXT_zbs,
RISCV_ISA_EXT_smaia,
RISCV_ISA_EXT_ssaia,
+ RISCV_ISA_EXT_svade,
RISCV_ISA_EXT_svpbmt,
RISCV_ISA_EXT_MAX
};
diff --git a/xen/arch/riscv/include/asm/page.h b/xen/arch/riscv/include/asm/page.h
index cb303af0c0..4fa0556073 100644
--- a/xen/arch/riscv/include/asm/page.h
+++ b/xen/arch/riscv/include/asm/page.h
@@ -74,6 +74,14 @@
#define PTE_SMALL BIT(10, UL)
#define PTE_POPULATE BIT(11, UL)
+enum pbmt_type {
+ pbmt_pma,
+ pbmt_nc,
+ pbmt_io,
+ pbmt_rsvd,
+ pbmt_count,
+};
+
#define PTE_ACCESS_MASK (PTE_READABLE | PTE_WRITABLE | PTE_EXECUTABLE)
#define PTE_PBMT_MASK (PTE_PBMT_NOCACHE | PTE_PBMT_IO)
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 10acfa0a9c..2d4433360d 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -10,6 +10,7 @@
#include <xen/sched.h>
#include <xen/sections.h>
+#include <asm/cpufeature.h>
#include <asm/csr.h>
#include <asm/flushtlb.h>
#include <asm/paging.h>
@@ -288,6 +289,18 @@ static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
return __map_domain_page(p2m->root + root_table_indx);
}
+static int p2m_set_type(pte_t *pte, p2m_type_t t)
+{
+ int rc = 0;
+
+ if ( t > p2m_first_external )
+ panic("unimplemeted\n");
+ else
+ pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
+
+ return rc;
+}
+
static p2m_type_t p2m_get_type(const pte_t pte)
{
p2m_type_t type = MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK);
@@ -318,11 +331,87 @@ static inline void p2m_clean_pte(pte_t *p, bool clean_pte)
p2m_write_pte(p, pte, clean_pte);
}
-static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t)
+static void p2m_set_permission(pte_t *e, p2m_type_t t)
{
- panic("%s: hasn't been implemented yet\n", __func__);
+ e->pte &= ~PTE_ACCESS_MASK;
+
+ e->pte |= PTE_USER;
+
+ /*
+ * Two schemes to manage the A and D bits are defined:
+ * • The Svade extension: when a virtual page is accessed and the A bit
+ * is clear, or is written and the D bit is clear, a page-fault
+ * exception is raised.
+ * • When the Svade extension is not implemented, the following scheme
+ * applies.
+ * When a virtual page is accessed and the A bit is clear, the PTE is
+ * updated to set the A bit. When the virtual page is written and the
+ * D bit is clear, the PTE is updated to set the D bit. When G-stage
+ * address translation is in use and is not Bare, the G-stage virtual
+ * pages may be accessed or written by implicit accesses to VS-level
+ * memory management data structures, such as page tables.
+ * Thereby to avoid a page-fault in case of Svade is available, it is
+ * necesssary to set A and D bits.
+ */
+ if ( riscv_isa_extension_available(NULL, RISCV_ISA_EXT_svade) )
+ e->pte |= PTE_ACCESSED | PTE_DIRTY;
+
+ switch ( t )
+ {
+ case p2m_map_foreign_rw:
+ case p2m_mmio_direct_io:
+ e->pte |= PTE_READABLE | PTE_WRITABLE;
+ break;
+
+ case p2m_ram_rw:
+ e->pte |= PTE_ACCESS_MASK;
+ break;
+
+ case p2m_invalid:
+ e->pte &= ~PTE_VALID;
+ break;
+
+ case p2m_map_foreign_ro:
+ e->pte |= PTE_READABLE;
+ break;
+
+ default:
+ ASSERT_UNREACHABLE();
+ break;
+ }
+}
+
+static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
+{
+ pte_t e = (pte_t) { PTE_VALID };
+
+ switch ( t )
+ {
+ case p2m_mmio_direct_io:
+ e.pte |= PTE_PBMT_IO;
+ break;
+
+ default:
+ break;
+ }
+
+ pte_set_mfn(&e, mfn);
+
+ ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK) || mfn_eq(mfn, INVALID_MFN));
+
+ if ( !is_table )
+ {
+ p2m_set_permission(&e, t);
+ p2m_set_type(&e, t);
+ }
+ else
+ /*
+ * According to the spec and table "Encoding of PTE R/W/X fields":
+ * X=W=R=0 -> Pointer to next level of page table.
+ */
+ e.pte &= ~PTE_ACCESS_MASK;
- return (pte_t) { .pte = 0 };
+ return e;
}
#define P2M_TABLE_MAP_NONE 0
@@ -580,7 +669,7 @@ static int p2m_set_entry(struct p2m_domain *p2m,
p2m_clean_pte(entry, p2m->clean_dcache);
else
{
- pte_t pte = p2m_pte_from_mfn(mfn, t);
+ pte_t pte = p2m_pte_from_mfn(mfn, t, false);
p2m_write_pte(entry, pte, p2m->clean_dcache);
--
2.51.0
* Re: [PATCH v4 12/18] xen/riscv: Implement p2m_pte_from_mfn() and support PBMT configuration
2025-09-17 21:55 ` [PATCH v4 12/18] xen/riscv: Implement p2m_pte_from_mfn() and support PBMT configuration Oleksii Kurochko
@ 2025-09-22 16:28 ` Jan Beulich
2025-09-29 13:30 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2025-09-22 16:28 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 17.09.2025 23:55, Oleksii Kurochko wrote:
> @@ -318,11 +331,87 @@ static inline void p2m_clean_pte(pte_t *p, bool clean_pte)
> p2m_write_pte(p, pte, clean_pte);
> }
>
> -static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t)
> +static void p2m_set_permission(pte_t *e, p2m_type_t t)
> {
> - panic("%s: hasn't been implemented yet\n", __func__);
> + e->pte &= ~PTE_ACCESS_MASK;
> +
> + e->pte |= PTE_USER;
> +
> + /*
> + * Two schemes to manage the A and D bits are defined:
> + * • The Svade extension: when a virtual page is accessed and the A bit
> + * is clear, or is written and the D bit is clear, a page-fault
> + * exception is raised.
> + * • When the Svade extension is not implemented, the following scheme
> + * applies.
> + * When a virtual page is accessed and the A bit is clear, the PTE is
> + * updated to set the A bit. When the virtual page is written and the
> + * D bit is clear, the PTE is updated to set the D bit. When G-stage
> + * address translation is in use and is not Bare, the G-stage virtual
> + * pages may be accessed or written by implicit accesses to VS-level
> + * memory management data structures, such as page tables.
> + * Thereby to avoid a page-fault in case of Svade is available, it is
> + * necesssary to set A and D bits.
> + */
> + if ( riscv_isa_extension_available(NULL, RISCV_ISA_EXT_svade) )
> + e->pte |= PTE_ACCESSED | PTE_DIRTY;
All of this depending on menvcfg.ADUE anyway, is this really needed? Isn't
machine mode software responsible for dealing with this kind of page faults
(just like the hypervisor is responsible for dealing with ones resulting
from henvcfg.ADUE being clear)?
> + switch ( t )
> + {
> + case p2m_map_foreign_rw:
> + case p2m_mmio_direct_io:
> + e->pte |= PTE_READABLE | PTE_WRITABLE;
> + break;
> +
> + case p2m_ram_rw:
> + e->pte |= PTE_ACCESS_MASK;
> + break;
> +
> + case p2m_invalid:
> + e->pte &= ~PTE_VALID;
> + break;
> +
> + case p2m_map_foreign_ro:
> + e->pte |= PTE_READABLE;
> + break;
> +
> + default:
> + ASSERT_UNREACHABLE();
> + break;
> + }
> +}
> +
> +static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
> +{
> + pte_t e = (pte_t) { PTE_VALID };
> +
> + switch ( t )
> + {
> + case p2m_mmio_direct_io:
> + e.pte |= PTE_PBMT_IO;
> + break;
Shouldn't this be limited to the !is_table case (just like you have it ...
> + default:
> + break;
> + }
> +
> + pte_set_mfn(&e, mfn);
> +
> + ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK) || mfn_eq(mfn, INVALID_MFN));
> +
> + if ( !is_table )
> + {
> + p2m_set_permission(&e, t);
... here? Or else at least ASSERT(!is_table) up there? Personally I think
all of this !is_table stuff would best be done here.
Jan
* Re: [PATCH v4 12/18] xen/riscv: Implement p2m_pte_from_mfn() and support PBMT configuration
2025-09-22 16:28 ` Jan Beulich
@ 2025-09-29 13:30 ` Oleksii Kurochko
2025-10-07 13:09 ` Jan Beulich
0 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-29 13:30 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 9/22/25 6:28 PM, Jan Beulich wrote:
> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>> @@ -318,11 +331,87 @@ static inline void p2m_clean_pte(pte_t *p, bool clean_pte)
>> p2m_write_pte(p, pte, clean_pte);
>> }
>>
>> -static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t)
>> +static void p2m_set_permission(pte_t *e, p2m_type_t t)
>> {
>> - panic("%s: hasn't been implemented yet\n", __func__);
>> + e->pte &= ~PTE_ACCESS_MASK;
>> +
>> + e->pte |= PTE_USER;
>> +
>> + /*
>> + * Two schemes to manage the A and D bits are defined:
>> + * • The Svade extension: when a virtual page is accessed and the A bit
>> + * is clear, or is written and the D bit is clear, a page-fault
>> + * exception is raised.
>> + * • When the Svade extension is not implemented, the following scheme
>> + * applies.
>> + * When a virtual page is accessed and the A bit is clear, the PTE is
>> + * updated to set the A bit. When the virtual page is written and the
>> + * D bit is clear, the PTE is updated to set the D bit. When G-stage
>> + * address translation is in use and is not Bare, the G-stage virtual
>> + * pages may be accessed or written by implicit accesses to VS-level
>> + * memory management data structures, such as page tables.
>> + * Thereby to avoid a page-fault in case of Svade is available, it is
>> + * necesssary to set A and D bits.
>> + */
>> + if ( riscv_isa_extension_available(NULL, RISCV_ISA_EXT_svade) )
>> + e->pte |= PTE_ACCESSED | PTE_DIRTY;
> All of this depending on menvcfg.ADUE anyway, is this really needed? Isn't
> machine mode software responsible for dealing with this kind of page faults
> (just like the hypervisor is reponsible for dealing with ones resulting
> from henvcfg.ADUE being clear)?
In general, I think you are right.
In this case, though, I just wanted to avoid unnecessary page faults for now.
My understanding is that having such faults handled by the hypervisor can indeed
be useful, for example to track which pages are being accessed. However, since we
currently don’t track page usage, handling these traps would only result in
setting the A and D bits and then returning control to the guest.
To avoid this overhead, I chose to set the bits up front.
>
>> + switch ( t )
>> + {
>> + case p2m_map_foreign_rw:
>> + case p2m_mmio_direct_io:
>> + e->pte |= PTE_READABLE | PTE_WRITABLE;
>> + break;
>> +
>> + case p2m_ram_rw:
>> + e->pte |= PTE_ACCESS_MASK;
>> + break;
>> +
>> + case p2m_invalid:
>> + e->pte &= ~PTE_VALID;
>> + break;
>> +
>> + case p2m_map_foreign_ro:
>> + e->pte |= PTE_READABLE;
>> + break;
>> +
>> + default:
>> + ASSERT_UNREACHABLE();
>> + break;
>> + }
>> +}
>> +
>> +static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
>> +{
>> + pte_t e = (pte_t) { PTE_VALID };
>> +
>> + switch ( t )
>> + {
>> + case p2m_mmio_direct_io:
>> + e.pte |= PTE_PBMT_IO;
>> + break;
> Shouldn't this be limited to the !is_table case (just like you have it ...
>
>> + default:
>> + break;
>> + }
>> +
>> + pte_set_mfn(&e, mfn);
>> +
>> + ASSERT(!(mfn_to_maddr(mfn) & ~PADDR_MASK) || mfn_eq(mfn, INVALID_MFN));
>> +
>> + if ( !is_table )
>> + {
>> + p2m_set_permission(&e, t);
> ... here? Or else at least ASSERT(!is_table) up there? Personally I think
> all of this !is_table stuff would best be done here.
Agreed, this should be done only for leaf PTEs.
I think I will move that inside p2m_set_permission(), where the
p2m_mmio_direct_io case is handled.
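A rough sketch of that rearrangement (an excerpt of the switch only, not final code;
whether the PBMT bits really belong in a "permission" helper is a naming question
left aside):

    /* Inside p2m_set_permission(), applied to leaf entries only: */
    switch ( t )
    {
    case p2m_map_foreign_rw:
        e->pte |= PTE_READABLE | PTE_WRITABLE;
        break;

    case p2m_mmio_direct_io:
        /* PBMT=IO is set together with the RW permissions. */
        e->pte |= PTE_PBMT_IO | PTE_READABLE | PTE_WRITABLE;
        break;

    case p2m_ram_rw:
        e->pte |= PTE_ACCESS_MASK;
        break;

    /* ... remaining cases unchanged ... */
    }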
Thanks.
~ Oleksii
* Re: [PATCH v4 12/18] xen/riscv: Implement p2m_pte_from_mfn() and support PBMT configuration
2025-09-29 13:30 ` Oleksii Kurochko
@ 2025-10-07 13:09 ` Jan Beulich
2025-10-09 9:21 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2025-10-07 13:09 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 29.09.2025 15:30, Oleksii Kurochko wrote:
>
> On 9/22/25 6:28 PM, Jan Beulich wrote:
>> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>>> @@ -318,11 +331,87 @@ static inline void p2m_clean_pte(pte_t *p, bool clean_pte)
>>> p2m_write_pte(p, pte, clean_pte);
>>> }
>>>
>>> -static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t)
>>> +static void p2m_set_permission(pte_t *e, p2m_type_t t)
>>> {
>>> - panic("%s: hasn't been implemented yet\n", __func__);
>>> + e->pte &= ~PTE_ACCESS_MASK;
>>> +
>>> + e->pte |= PTE_USER;
>>> +
>>> + /*
>>> + * Two schemes to manage the A and D bits are defined:
>>> + * • The Svade extension: when a virtual page is accessed and the A bit
>>> + * is clear, or is written and the D bit is clear, a page-fault
>>> + * exception is raised.
>>> + * • When the Svade extension is not implemented, the following scheme
>>> + * applies.
>>> + * When a virtual page is accessed and the A bit is clear, the PTE is
>>> + * updated to set the A bit. When the virtual page is written and the
>>> + * D bit is clear, the PTE is updated to set the D bit. When G-stage
>>> + * address translation is in use and is not Bare, the G-stage virtual
>>> + * pages may be accessed or written by implicit accesses to VS-level
>>> + * memory management data structures, such as page tables.
>>> + * Thereby to avoid a page-fault in case of Svade is available, it is
>>> + * necesssary to set A and D bits.
>>> + */
>>> + if ( riscv_isa_extension_available(NULL, RISCV_ISA_EXT_svade) )
>>> + e->pte |= PTE_ACCESSED | PTE_DIRTY;
>> All of this depending on menvcfg.ADUE anyway, is this really needed? Isn't
>> machine mode software responsible for dealing with this kind of page faults
>> (just like the hypervisor is reponsible for dealing with ones resulting
>> from henvcfg.ADUE being clear)?
>
> In general, I think you are right.
>
> In this case, though, I just wanted to avoid unnecessary page faults for now.
> My understanding is that having such faults handled by the hypervisor can indeed
> be useful, for example to track which pages are being accessed. However, since we
> currently don’t track page usage, handling these traps would only result in
> setting the A and D bits and then returning control to the guest.
Yet that would still be machine-mode software, aiui. By always setting the bits we'd
undermine whatever purpose _they_ have enabled the extension for, wouldn't we?
> To avoid this overhead, I chose to set the bits up front.
Irrespective to the answer to the question above, if you mean to do so, I think
all of this needs explaining better in the comment.
Jan
* Re: [PATCH v4 12/18] xen/riscv: Implement p2m_pte_from_mfn() and support PBMT configuration
2025-10-07 13:09 ` Jan Beulich
@ 2025-10-09 9:21 ` Oleksii Kurochko
2025-10-09 12:06 ` Jan Beulich
0 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-10-09 9:21 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 10/7/25 3:09 PM, Jan Beulich wrote:
> On 29.09.2025 15:30, Oleksii Kurochko wrote:
>> On 9/22/25 6:28 PM, Jan Beulich wrote:
>>> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>>>> @@ -318,11 +331,87 @@ static inline void p2m_clean_pte(pte_t *p, bool clean_pte)
>>>> p2m_write_pte(p, pte, clean_pte);
>>>> }
>>>>
>>>> -static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t)
>>>> +static void p2m_set_permission(pte_t *e, p2m_type_t t)
>>>> {
>>>> - panic("%s: hasn't been implemented yet\n", __func__);
>>>> + e->pte &= ~PTE_ACCESS_MASK;
>>>> +
>>>> + e->pte |= PTE_USER;
>>>> +
>>>> + /*
>>>> + * Two schemes to manage the A and D bits are defined:
>>>> + * • The Svade extension: when a virtual page is accessed and the A bit
>>>> + * is clear, or is written and the D bit is clear, a page-fault
>>>> + * exception is raised.
>>>> + * • When the Svade extension is not implemented, the following scheme
>>>> + * applies.
>>>> + * When a virtual page is accessed and the A bit is clear, the PTE is
>>>> + * updated to set the A bit. When the virtual page is written and the
>>>> + * D bit is clear, the PTE is updated to set the D bit. When G-stage
>>>> + * address translation is in use and is not Bare, the G-stage virtual
>>>> + * pages may be accessed or written by implicit accesses to VS-level
>>>> + * memory management data structures, such as page tables.
>>>> + * Thereby to avoid a page-fault in case of Svade is available, it is
>>>> + * necesssary to set A and D bits.
>>>> + */
>>>> + if ( riscv_isa_extension_available(NULL, RISCV_ISA_EXT_svade) )
>>>> + e->pte |= PTE_ACCESSED | PTE_DIRTY;
>>> All of this depending on menvcfg.ADUE anyway, is this really needed? Isn't
>>> machine mode software responsible for dealing with this kind of page faults
>>> (just like the hypervisor is reponsible for dealing with ones resulting
>>> from henvcfg.ADUE being clear)?
>> In general, I think you are right.
>>
>> In this case, though, I just wanted to avoid unnecessary page faults for now.
>> My understanding is that having such faults handled by the hypervisor can indeed
>> be useful, for example to track which pages are being accessed. However, since we
>> currently don’t track page usage, handling these traps would only result in
>> setting the A and D bits and then returning control to the guest.
> Yet that still be be machine-mode software aiui. By always setting the bits we'd
> undermine whatever purpose _they_ have enabled the extension for, wouldn't we?
It’s a good point, and from an architectural perspective, it’s possible that
machine-mode software might want to handle page faults.
However, looking at OpenSBI, it delegates (otherwise all traps/interrupts by
default are going to machine-mode) page faults [1] to lower modes, and I expect
that other machine-mode software does the same (but of course there is no such
guarantee).
Therefore, considering that OpenSBI delegates page faults to lower modes and
does not set the A and D bits for p2m (guest) PTEs, this will result in a page
fault being handled by the hypervisor. As a result, we don’t affect the behavior
of machine-mode software at all.
If we want to avoid depending on how OpenSBI or other machine-mode software is
implemented, we might instead want to have our own page fault handler in Xen,
and then set the A and D bits within this handler.
Do you think it would be better to do it this way from the start? If yes, then
we would also want to drop setting the A and D bits for Xen's PTEs [3] to allow M-mode to
handle S/HS-mode page faults.
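Purely to illustrate that alternative (the lookup helper named below is hypothetical;
nothing like this exists in the series yet, and a real handler would also have to deal
with faults that are not A/D related):

    /* Hypothetical sketch: emulate hardware A/D updates from the fault handler. */
    static bool gstage_fixup_ad(struct p2m_domain *p2m, gfn_t gfn, bool is_write)
    {
        pte_t *entry = p2m_lookup_leaf(p2m, gfn);  /* hypothetical lookup helper */
        bool fixed = false;

        if ( entry && pte_is_valid(*entry) )
        {
            pte_t pte = *entry;

            pte.pte |= PTE_ACCESSED;
            if ( is_write )
                pte.pte |= PTE_DIRTY;

            if ( pte.pte != entry->pte )
            {
                p2m_write_pte(entry, pte, p2m->clean_dcache);
                fixed = true;
            }
        }

        return fixed; /* false: not an A/D-only fault, handle/forward normally */
    }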
Interestingly, OpenSBI doesn’t allow hypervisor mode to decide whether to
support Svade or not [2]. By doing so, we can’t set henvcfg.adue = 1 to disable
it as menvcfg.adue=0 has more power, which is not very flexible.
[1]https://github.com/riscv-software-src/opensbi/blob/master/lib/sbi/sbi_hart.c#L209
[2]https://github.com/riscv-software-src/opensbi/blob/master/lib/sbi/sbi_hart.c#L168
[3]https://gitlab.com/xen-project/xen/-/blob/staging/xen/arch/riscv/pt.c?ref_type=heads#L343
>> To avoid this overhead, I chose to set the bits up front.
> Irrespective to the answer to the question above, if you mean to do so, I think
> all of this needs explaining better in the comment.
Sure, I will add the comment if the current one approach of setting A and D bits
will be chosen.
~ Oleksii
* Re: [PATCH v4 12/18] xen/riscv: Implement p2m_pte_from_mfn() and support PBMT configuration
2025-10-09 9:21 ` Oleksii Kurochko
@ 2025-10-09 12:06 ` Jan Beulich
2025-10-10 8:29 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2025-10-09 12:06 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 09.10.2025 11:21, Oleksii Kurochko wrote:
>
> On 10/7/25 3:09 PM, Jan Beulich wrote:
>> On 29.09.2025 15:30, Oleksii Kurochko wrote:
>>> On 9/22/25 6:28 PM, Jan Beulich wrote:
>>>> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>>>>> @@ -318,11 +331,87 @@ static inline void p2m_clean_pte(pte_t *p, bool clean_pte)
>>>>> p2m_write_pte(p, pte, clean_pte);
>>>>> }
>>>>>
>>>>> -static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t)
>>>>> +static void p2m_set_permission(pte_t *e, p2m_type_t t)
>>>>> {
>>>>> - panic("%s: hasn't been implemented yet\n", __func__);
>>>>> + e->pte &= ~PTE_ACCESS_MASK;
>>>>> +
>>>>> + e->pte |= PTE_USER;
>>>>> +
>>>>> + /*
>>>>> + * Two schemes to manage the A and D bits are defined:
>>>>> + * • The Svade extension: when a virtual page is accessed and the A bit
>>>>> + * is clear, or is written and the D bit is clear, a page-fault
>>>>> + * exception is raised.
>>>>> + * • When the Svade extension is not implemented, the following scheme
>>>>> + * applies.
>>>>> + * When a virtual page is accessed and the A bit is clear, the PTE is
>>>>> + * updated to set the A bit. When the virtual page is written and the
>>>>> + * D bit is clear, the PTE is updated to set the D bit. When G-stage
>>>>> + * address translation is in use and is not Bare, the G-stage virtual
>>>>> + * pages may be accessed or written by implicit accesses to VS-level
>>>>> + * memory management data structures, such as page tables.
>>>>> + * Thereby to avoid a page-fault in case of Svade is available, it is
>>>>> + * necesssary to set A and D bits.
>>>>> + */
>>>>> + if ( riscv_isa_extension_available(NULL, RISCV_ISA_EXT_svade) )
>>>>> + e->pte |= PTE_ACCESSED | PTE_DIRTY;
>>>> All of this depending on menvcfg.ADUE anyway, is this really needed? Isn't
>>>> machine mode software responsible for dealing with this kind of page faults
>>>> (just like the hypervisor is reponsible for dealing with ones resulting
>>>> from henvcfg.ADUE being clear)?
>>> In general, I think you are right.
>>>
>>> In this case, though, I just wanted to avoid unnecessary page faults for now.
>>> My understanding is that having such faults handled by the hypervisor can indeed
>>> be useful, for example to track which pages are being accessed. However, since we
>>> currently don’t track page usage, handling these traps would only result in
>>> setting the A and D bits and then returning control to the guest.
>> Yet that still be be machine-mode software aiui. By always setting the bits we'd
>> undermine whatever purpose _they_ have enabled the extension for, wouldn't we?
>
> It’s a good point, and from an architectural perspective, it’s possible that
> machine-mode software might want to handle page faults.
> However, looking at OpenSBI, it delegates (otherwise all traps/interrupts by
> default are going to machine-mode) page faults [1] to lower modes, and I expect
> that other machine-mode software does the same (but of course there is no such
> guarantee).
>
> Therefore, considering that OpenSBI delegates page faults to lower modes and
> does not set the A and D bits for p2m (guest) PTEs, this will result in a page
> fault being handled by the hypervisor. As a result, we don’t affect the behavior
> of machine-mode software at all.
>
> If we want to avoid depending on how OpenSBI or other machine-mode software is
> implemented, we might instead want to have our own page fault handler in Xen,
> and then set the A and D bits within this handler.
Won't Xen need its own page fault handler anyway?
> Do you think it would be better to do in this way from the start? If yes, then
> we also want drop setting of A and D bits for Xen's PTEs [3] to allow M-mode to
> handle S/HS-mode page faults.
What I don't really understand is what the intended use of that extension is.
Surely every entity should be responsible for its own A/D bits, with lower
layers coming into play only when certain things need e.g. emulating. This
lack of understanding on my part extends to ...
> Interestingly, OpenSBI doesn’t allow hypervisor mode to decide whether to
> support Svade or not [2]. By doing so, we can’t set henvcfg.adue = 1 to disable
> it as menvcfg.adue=0 has more power, which is not very flexible.
... this point, which I was also wondering about before.
Jan
* Re: [PATCH v4 12/18] xen/riscv: Implement p2m_pte_from_mfn() and support PBMT configuration
2025-10-09 12:06 ` Jan Beulich
@ 2025-10-10 8:29 ` Oleksii Kurochko
0 siblings, 0 replies; 62+ messages in thread
From: Oleksii Kurochko @ 2025-10-10 8:29 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 10/9/25 2:06 PM, Jan Beulich wrote:
> On 09.10.2025 11:21, Oleksii Kurochko wrote:
>> On 10/7/25 3:09 PM, Jan Beulich wrote:
>>> On 29.09.2025 15:30, Oleksii Kurochko wrote:
>>>> On 9/22/25 6:28 PM, Jan Beulich wrote:
>>>>> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>>>>>> @@ -318,11 +331,87 @@ static inline void p2m_clean_pte(pte_t *p, bool clean_pte)
>>>>>> p2m_write_pte(p, pte, clean_pte);
>>>>>> }
>>>>>>
>>>>>> -static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t)
>>>>>> +static void p2m_set_permission(pte_t *e, p2m_type_t t)
>>>>>> {
>>>>>> - panic("%s: hasn't been implemented yet\n", __func__);
>>>>>> + e->pte &= ~PTE_ACCESS_MASK;
>>>>>> +
>>>>>> + e->pte |= PTE_USER;
>>>>>> +
>>>>>> + /*
>>>>>> + * Two schemes to manage the A and D bits are defined:
>>>>>> + * • The Svade extension: when a virtual page is accessed and the A bit
>>>>>> + * is clear, or is written and the D bit is clear, a page-fault
>>>>>> + * exception is raised.
>>>>>> + * • When the Svade extension is not implemented, the following scheme
>>>>>> + * applies.
>>>>>> + * When a virtual page is accessed and the A bit is clear, the PTE is
>>>>>> + * updated to set the A bit. When the virtual page is written and the
>>>>>> + * D bit is clear, the PTE is updated to set the D bit. When G-stage
>>>>>> + * address translation is in use and is not Bare, the G-stage virtual
>>>>>> + * pages may be accessed or written by implicit accesses to VS-level
>>>>>> + * memory management data structures, such as page tables.
>>>>>> + * Thereby to avoid a page-fault in case of Svade is available, it is
>>>>>> + * necesssary to set A and D bits.
>>>>>> + */
>>>>>> + if ( riscv_isa_extension_available(NULL, RISCV_ISA_EXT_svade) )
>>>>>> + e->pte |= PTE_ACCESSED | PTE_DIRTY;
>>>>> All of this depending on menvcfg.ADUE anyway, is this really needed? Isn't
>>>>> machine mode software responsible for dealing with this kind of page faults
>>>>> (just like the hypervisor is reponsible for dealing with ones resulting
>>>>> from henvcfg.ADUE being clear)?
>>>> In general, I think you are right.
>>>>
>>>> In this case, though, I just wanted to avoid unnecessary page faults for now.
>>>> My understanding is that having such faults handled by the hypervisor can indeed
>>>> be useful, for example to track which pages are being accessed. However, since we
>>>> currently don’t track page usage, handling these traps would only result in
>>>> setting the A and D bits and then returning control to the guest.
>>> Yet that still be be machine-mode software aiui. By always setting the bits we'd
>>> undermine whatever purpose _they_ have enabled the extension for, wouldn't we?
>> It’s a good point, and from an architectural perspective, it’s possible that
>> machine-mode software might want to handle page faults.
>> However, looking at OpenSBI, it delegates (otherwise all traps/interrupts by
>> default are going to machine-mode) page faults [1] to lower modes, and I expect
>> that other machine-mode software does the same (but of course there is no such
>> guarantee).
>>
>> Therefore, considering that OpenSBI delegates page faults to lower modes and
>> does not set the A and D bits for p2m (guest) PTEs, this will result in a page
>> fault being handled by the hypervisor. As a result, we don’t affect the behavior
>> of machine-mode software at all.
>>
>> If we want to avoid depending on how OpenSBI or other machine-mode software is
>> implemented, we might instead want to have our own page fault handler in Xen,
>> and then set the A and D bits within this handler.
> Won't Xen need its own page fault handler anyway?
Of course, it will.
I just meant that it won’t need it solely for the purpose of setting the A and
D bits.
Considering that Svade is mandatory for RVAxx profiles, and that at some point
we may want to implement certain optimizations (mentioned below), it would make
sense to handle the A/D bits in the page fault handler.
However, for now, for the sake of simplicity (none of the optimizations
mentioned below are currently implemented, and OpenSBI delegates page fault
handling to the hypervisor, so it isn't going to deal with the A/D bits
itself), I think we can set the A/D bits during PTE creation, with a comment
explaining why it’s done this way, as suggested before.
Later, when additional optimizations that rely on A/D bits are needed, we can
remove this initial setting and add proper A/D handling in the page fault
handler.
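A possible shape for that comment and the accompanying code, roughly (the wording
is only a suggestion):

    /*
     * With Svade implemented, a clear A bit (on access) or D bit (on write)
     * raises a guest page fault instead of being set by hardware. Xen does
     * not currently track guest page accesses, so handling such faults would
     * only set the bits and resume the guest. Avoid that overhead by
     * pre-setting A and D when the leaf PTE is created; revisit once
     * access/dirty tracking is actually wanted.
     */
    if ( riscv_isa_extension_available(NULL, RISCV_ISA_EXT_svade) )
        e->pte |= PTE_ACCESSED | PTE_DIRTY;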
>
>> Do you think it would be better to do in this way from the start? If yes, then
>> we also want drop setting of A and D bits for Xen's PTEs [3] to allow M-mode to
>> handle S/HS-mode page faults.
> What I don't really understand is what the intended use of that extension is.
I think this is mainly for software-managed PTE A/D bit updates, which could be
useful for several use cases such as demand paging, cache flushing optimizations,
and memory access tracking.
Also, from a hardware perspective, it’s probably simpler to let software manage
the PTE A/D bits (using the Svade extension) rather than implementing the
Svadu extension for hardware-managed updates.
~ Oleksii
> Surely every entity should be responsible for its own A/D bits, with lower
> layers coming into play only when certain things need e.g. emulating. This
> lack of understanding on my part extends to ...
>
>> Interestingly, OpenSBI doesn’t allow hypervisor mode to decide whether to
>> support Svade or not [2]. By doing so, we can’t set henvcfg.adue = 1 to disable
>> it as menvcfg.adue=0 has more power, which is not very flexible.
> ... this point, which I was also wondering about before.
* [PATCH v4 13/18] xen/riscv: implement p2m_next_level()
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
` (11 preceding siblings ...)
2025-09-17 21:55 ` [PATCH v4 12/18] xen/riscv: Implement p2m_pte_from_mfn() and support PBMT configuration Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-22 17:35 ` Jan Beulich
2025-09-17 21:55 ` [PATCH v4 14/18] xen/riscv: Implement superpage splitting for p2m mappings Oleksii Kurochko
` (4 subsequent siblings)
17 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
Implement the p2m_next_level() function, which enables traversal and dynamic
allocation of intermediate levels (if necessary) in the RISC-V
p2m (physical-to-machine) page table hierarchy.
To support this, the following helpers are introduced:
- page_to_p2m_table(): Constructs non-leaf PTEs pointing to next-level page
tables with correct attributes.
- p2m_alloc_page(): Allocates page table pages, supporting both hardware and
guest domains.
- p2m_create_table(): Allocates and initializes a new page table page and
installs it into the hierarchy.
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- make `page` argument of page_to_p2m_table pointer-to-const.
- Move p2m_next_level()'s local variable `ret` to the narrower scope where
it is really used.
- Drop stale ASSERT() in p2m_next_level().
- Stray blank after * in declaration of paging_alloc_page().
- Decrease p2m_freelist.total_pages when a page is taken from the p2m freelist.
---
Changes in V3:
- s/p2me_is_mapping/p2m_is_mapping to be in sync with other p2m_is_*() functions.
- clear_and_clean_page() in p2m_create_table() instead of clear_page() to be
sure that page is cleared and d-cache is flushed for it.
- Move ASSERT(level != 0) in p2m_next_level() ahead of trying to allocate a
page table.
- Update p2m_create_table() to allocate metadata page to store p2m type in it
for each entry of page table.
- Introduce paging_alloc_page() and use it inside p2m_alloc_page().
- Add allocated page to p2m->pages list in p2m_alloc_page() to simplify
the caller's code a little bit.
- Drop p2m_is_mapping() and use pte_is_mapping() instead as P2M PTE's valid
bit doesn't have another purpose anymore.
- Update an implementation and prototype of page_to_p2m_table(), it is enough
to pass only a page as an argument.
---
Changes in V2:
- New patch. It was part of a big patch "xen/riscv: implement p2m mapping
functionality" which was split into smaller ones.
- s/p2m_is_mapping/p2m_is_mapping.
---
xen/arch/riscv/include/asm/paging.h | 2 +
xen/arch/riscv/p2m.c | 79 ++++++++++++++++++++++++++++-
xen/arch/riscv/paging.c | 12 +++++
3 files changed, 91 insertions(+), 2 deletions(-)
diff --git a/xen/arch/riscv/include/asm/paging.h b/xen/arch/riscv/include/asm/paging.h
index 9712aa77c5..69cb414962 100644
--- a/xen/arch/riscv/include/asm/paging.h
+++ b/xen/arch/riscv/include/asm/paging.h
@@ -15,4 +15,6 @@ int paging_ret_pages_to_freelist(struct domain *d, unsigned int nr_pages);
void paging_free_page(struct domain *d, struct page_info *pg);
+struct page_info * paging_alloc_page(struct domain *d);
+
#endif /* ASM_RISCV_PAGING_H */
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 2d4433360d..bf4945e99f 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -414,6 +414,48 @@ static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
return e;
}
+/* Generate table entry with correct attributes. */
+static pte_t page_to_p2m_table(const struct page_info *page)
+{
+ /*
+ * p2m_invalid will be ignored inside p2m_pte_from_mfn() as is_table is
+ * set to true and p2m_type_t shouldn't be applied for PTEs which
+ * describe an intermediate table.
+ */
+ return p2m_pte_from_mfn(page_to_mfn(page), p2m_invalid, true);
+}
+
+static struct page_info *p2m_alloc_page(struct p2m_domain *p2m)
+{
+ struct page_info *pg = paging_alloc_page(p2m->domain);
+
+ if ( pg )
+ page_list_add(pg, &p2m->pages);
+
+ return pg;
+}
+
+/*
+ * Allocate a new page table page with an extra metadata page and hook it
+ * in via the given entry.
+ */
+static int p2m_create_table(struct p2m_domain *p2m, pte_t *entry)
+{
+ struct page_info *page;
+
+ ASSERT(!pte_is_valid(*entry));
+
+ page = p2m_alloc_page(p2m);
+ if ( page == NULL )
+ return -ENOMEM;
+
+ clear_and_clean_page(page, p2m->clean_dcache);
+
+ p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_dcache);
+
+ return 0;
+}
+
#define P2M_TABLE_MAP_NONE 0
#define P2M_TABLE_MAP_NOMEM 1
#define P2M_TABLE_SUPER_PAGE 2
@@ -438,9 +480,42 @@ static int p2m_next_level(struct p2m_domain *p2m, bool alloc_tbl,
unsigned int level, pte_t **table,
unsigned int offset)
{
- panic("%s: hasn't been implemented yet\n", __func__);
+ pte_t *entry;
+ mfn_t mfn;
+
+ /* The function p2m_next_level() is never called at the last level */
+ ASSERT(level != 0);
+
+ entry = *table + offset;
+
+ if ( !pte_is_valid(*entry) )
+ {
+ int ret;
+
+ if ( !alloc_tbl )
+ return P2M_TABLE_MAP_NONE;
+
+ ret = p2m_create_table(p2m, entry);
+ if ( ret )
+ return P2M_TABLE_MAP_NOMEM;
+ }
+
+ if ( pte_is_mapping(*entry) )
+ return P2M_TABLE_SUPER_PAGE;
+
+ mfn = mfn_from_pte(*entry);
+
+ unmap_domain_page(*table);
+
+ /*
+ * TODO: There's an inefficiency here:
+ * In p2m_create_table(), the page is mapped to clear it.
+ * Then that mapping is torn down in p2m_create_table(),
+ * only to be re-established here.
+ */
+ *table = map_domain_page(mfn);
- return P2M_TABLE_MAP_NONE;
+ return P2M_TABLE_NORMAL;
}
static void p2m_put_foreign_page(struct page_info *pg)
diff --git a/xen/arch/riscv/paging.c b/xen/arch/riscv/paging.c
index 049b850e03..803b026f34 100644
--- a/xen/arch/riscv/paging.c
+++ b/xen/arch/riscv/paging.c
@@ -115,6 +115,18 @@ void paging_free_page(struct domain *d, struct page_info *pg)
spin_unlock(&d->arch.paging.lock);
}
+struct page_info *paging_alloc_page(struct domain *d)
+{
+ struct page_info *pg;
+
+ spin_lock(&d->arch.paging.lock);
+ pg = page_list_remove_head(&d->arch.paging.freelist);
+ ACCESS_ONCE(d->arch.paging.total_pages)--;
+ spin_unlock(&d->arch.paging.lock);
+
+ return pg;
+}
+
/* Domain paging struct initialization. */
int paging_domain_init(struct domain *d)
{
--
2.51.0
* Re: [PATCH v4 13/18] xen/riscv: implement p2m_next_level()
2025-09-17 21:55 ` [PATCH v4 13/18] xen/riscv: implement p2m_next_level() Oleksii Kurochko
@ 2025-09-22 17:35 ` Jan Beulich
2025-09-29 14:23 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2025-09-22 17:35 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 17.09.2025 23:55, Oleksii Kurochko wrote:
> Implement the p2m_next_level() function, which enables traversal and dynamic
> allocation of intermediate levels (if necessary) in the RISC-V
> p2m (physical-to-machine) page table hierarchy.
>
> To support this, the following helpers are introduced:
> - page_to_p2m_table(): Constructs non-leaf PTEs pointing to next-level page
> tables with correct attributes.
> - p2m_alloc_page(): Allocates page table pages, supporting both hardware and
> guest domains.
> - p2m_create_table(): Allocates and initializes a new page table page and
> installs it into the hierarchy.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes in V4:
> - make `page` argument of page_to_p2m_table pointer-to-const.
> - Move p2m_next_level()'s local variable `ret` to the more narrow space where
> it is really used.
> - Drop stale ASSERT() in p2m_next_level().
> - Stray blank after * in declaration of paging_alloc_page().
When you deal with comments like this, can you please make sure you
apply them to at least a patch as a whole, if not the entire series?
I notice ...
> --- a/xen/arch/riscv/include/asm/paging.h
> +++ b/xen/arch/riscv/include/asm/paging.h
> @@ -15,4 +15,6 @@ int paging_ret_pages_to_freelist(struct domain *d, unsigned int nr_pages);
>
> void paging_free_page(struct domain *d, struct page_info *pg);
>
> +struct page_info * paging_alloc_page(struct domain *d);
... there's still a stray blank here. With this dropped:
Acked-by: Jan Beulich <jbeulich@suse.com>
I have one other question, though:
> +/*
> + * Allocate a new page table page with an extra metadata page and hook it
> + * in via the given entry.
> + */
> +static int p2m_create_table(struct p2m_domain *p2m, pte_t *entry)
> +{
> + struct page_info *page;
> +
> + ASSERT(!pte_is_valid(*entry));
Isn't this going to get in the way of splitting superpages? The caller
will need to initialize *entry just for this assertion to not trigger.
Jan
* Re: [PATCH v4 13/18] xen/riscv: implement p2m_next_level()
2025-09-22 17:35 ` Jan Beulich
@ 2025-09-29 14:23 ` Oleksii Kurochko
0 siblings, 0 replies; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-29 14:23 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 9/22/25 7:35 PM, Jan Beulich wrote:
> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>> Implement the p2m_next_level() function, which enables traversal and dynamic
>> allocation of intermediate levels (if necessary) in the RISC-V
>> p2m (physical-to-machine) page table hierarchy.
>>
>> To support this, the following helpers are introduced:
>> - page_to_p2m_table(): Constructs non-leaf PTEs pointing to next-level page
>> tables with correct attributes.
>> - p2m_alloc_page(): Allocates page table pages, supporting both hardware and
>> guest domains.
>> - p2m_create_table(): Allocates and initializes a new page table page and
>> installs it into the hierarchy.
>>
>> Signed-off-by: Oleksii Kurochko<oleksii.kurochko@gmail.com>
>> ---
>> Changes in V4:
>> - make `page` argument of page_to_p2m_table pointer-to-const.
>> - Move p2m_next_level()'s local variable `ret` to the more narrow space where
>> it is really used.
>> - Drop stale ASSERT() in p2m_next_level().
>> - Stray blank after * in declaration of paging_alloc_page().
> When you deal with comments like this, can you please make sure you
> apply them to at least a patch as a whole, if not the entire series?
> I notice ...
>
>> --- a/xen/arch/riscv/include/asm/paging.h
>> +++ b/xen/arch/riscv/include/asm/paging.h
>> @@ -15,4 +15,6 @@ int paging_ret_pages_to_freelist(struct domain *d, unsigned int nr_pages);
>>
>> void paging_free_page(struct domain *d, struct page_info *pg);
>>
>> +struct page_info * paging_alloc_page(struct domain *d);
> ... there's still a stray blank here. With this dropped:
> Acked-by: Jan Beulich<jbeulich@suse.com>
Thanks.
> I have one other question, though:
>
>> +/*
>> + * Allocate a new page table page with an extra metadata page and hook it
>> + * in via the given entry.
>> + */
>> +static int p2m_create_table(struct p2m_domain *p2m, pte_t *entry)
>> +{
>> + struct page_info *page;
>> +
>> + ASSERT(!pte_is_valid(*entry));
> Isn't this going to get in the way of splitting superpages? The caller
> will need to initialize *entry just for this assertion to not trigger.
The superpage splitting function doesn’t use p2m_create_table(). It calls
p2m_alloc_page(), then fills the table, and finally updates the entry
using p2m_write_pte(). So this shouldn’t be an issue.
Ohh, I just noticed, the comment should be updated, since an extra metadata
page is no longer allocated here.
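For reference, the updated comment could then simply read (sketch):

    /*
     * Allocate a new page table page and hook it in via the given entry.
     */
    static int p2m_create_table(struct p2m_domain *p2m, pte_t *entry)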
~ Oleksii
* [PATCH v4 14/18] xen/riscv: Implement superpage splitting for p2m mappings
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
` (12 preceding siblings ...)
2025-09-17 21:55 ` [PATCH v4 13/18] xen/riscv: implement p2m_next_level() Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-22 17:55 ` Jan Beulich
2025-09-17 21:55 ` [PATCH v4 15/18] xen/riscv: implement put_page() Oleksii Kurochko
` (3 subsequent siblings)
17 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
Add support for breaking down large memory mappings ("superpages") in the RISC-V
p2m mapping so that smaller, more precise mappings ("finer-grained entries")
can be inserted into lower levels of the page table hierarchy.
To implement that, the following is done:
- Introduce p2m_split_superpage(): Recursively shatters a superpage into
smaller page table entries down to the target level, preserving original
permissions and attributes.
- p2m_set_entry() updated to invoke superpage splitting when inserting
entries at lower levels within a superpage-mapped region.
This implementation is based on the ARM code, with modifications to the part
that follows the BBM (break-before-make) approach, some parts are simplified
as according to RISC-V spec:
It is permitted for multiple address-translation cache entries to co-exist
for the same address. This represents the fact that in a conventional
TLB hierarchy, it is possible for multiple entries to match a single
address if, for example, a page is upgraded to a superpage without first
clearing the original non-leaf PTE’s valid bit and executing an SFENCE.VMA
with rs1=x0, or if multiple TLBs exist in parallel at a given level of the
hierarchy. In this case, just as if an SFENCE.VMA is not executed between
a write to the memory-management tables and subsequent implicit read of the
same address: it is unpredictable whether the old non-leaf PTE or the new
leaf PTE is used, but the behavior is otherwise well defined.
In contrast to the Arm architecture, where BBM is mandatory and failing to
use it in some cases can lead to CPU instability, RISC-V guarantees
stability, and the behavior remains safe — though unpredictable in terms of
which translation will be used.
Additionally, the page table walk logic has been adjusted, as ARM uses the
opposite level numbering compared to RISC-V.
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- s/number of levels/level numbering in the commit message.
- s/permissions/attributes.
- Remove redundant comment in p2m_split_superpage() about page
splitting.
- Use P2M_PAGETABLE_ENTRIES as XEN_PT_ENTRIES
doesn't take into account that the G-stage root page table is
extended by 2 bits.
- Use earlier introduced P2M_LEVEL_ORDER().
---
Changes in V3:
- Move page_list_add(page, &p2m->pages) inside p2m_alloc_page().
- Use 'unsigned long' for local variable 'i' in p2m_split_superpage().
- Update the comment above if ( next_level != target ) in p2m_split_superpage().
- Reverse the loop iterating through page table levels in p2m_set_entry().
- Update p2m_split_superpage() with the same changes which are done in the
patch "P2M: Don't try to free the existing PTE if we can't allocate a new table".
---
Changes in V2:
- New patch. It was part of a big patch "xen/riscv: implement p2m mapping
functionality" which was split into smaller ones.
- Update the comment above the loop which creates the new page table, as
RISC-V traverses page tables in the opposite order to ARM.
- RISC-V doesn't require BBM, so there is no need for invalidation
and TLB flushing before updating the PTE.
---
xen/arch/riscv/p2m.c | 114 ++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 113 insertions(+), 1 deletion(-)
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index bf4945e99f..1577b09b15 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -661,6 +661,87 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
p2m_free_page(p2m, pg);
}
+static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
+ unsigned int level, unsigned int target,
+ const unsigned int *offsets)
+{
+ struct page_info *page;
+ unsigned long i;
+ pte_t pte, *table;
+ bool rv = true;
+
+ /* Convenience aliases */
+ mfn_t mfn = pte_get_mfn(*entry);
+ unsigned int next_level = level - 1;
+ unsigned int level_order = P2M_LEVEL_ORDER(next_level);
+
+ /*
+ * This should only be called with target != level and the entry is
+ * a superpage.
+ */
+ ASSERT(level > target);
+ ASSERT(pte_is_superpage(*entry, level));
+
+ page = p2m_alloc_page(p2m);
+ if ( !page )
+ {
+ /*
+ * The caller is in charge to free the sub-tree.
+ * As we didn't manage to allocate anything, just tell the
+ * caller there is nothing to free by invalidating the PTE.
+ */
+ memset(entry, 0, sizeof(*entry));
+ return false;
+ }
+
+ table = __map_domain_page(page);
+
+ for ( i = 0; i < P2M_PAGETABLE_ENTRIES(next_level); i++ )
+ {
+ pte_t *new_entry = table + i;
+
+ /*
+ * Use the content of the superpage entry and override
+ * the necessary fields. So the correct attributes are kept.
+ */
+ pte = *entry;
+ pte_set_mfn(&pte, mfn_add(mfn, i << level_order));
+
+ write_pte(new_entry, pte);
+ }
+
+ /*
+ * Shatter superpage in the page to the level we want to make the
+ * changes.
+ * This is done outside the loop to avoid checking the offset
+ * for every entry to know whether the entry should be shattered.
+ */
+ if ( next_level != target )
+ rv = p2m_split_superpage(p2m, table + offsets[next_level],
+ level - 1, target, offsets);
+
+ if ( p2m->clean_dcache )
+ clean_dcache_va_range(table, PAGE_SIZE);
+
+ /*
+ * TODO: an inefficiency here: the caller almost certainly wants to map
+ * the same page again, to update the one entry that caused the
+ * request to shatter the page.
+ */
+ unmap_domain_page(table);
+
+ /*
+ * Even if we failed, we should (according to the current implementation
+ * of the way the sub-tree is freed if p2m_split_superpage() hasn't been
+ * fully finished) install the newly allocated PTE
+ * entry.
+ * The caller will be in charge to free the sub-tree.
+ */
+ p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_dcache);
+
+ return rv;
+}
+
/*
* Insert an entry in the p2m. This should be called with a mapping
* equal to a page/superpage.
@@ -729,7 +810,38 @@ static int p2m_set_entry(struct p2m_domain *p2m,
*/
if ( level > target )
{
- panic("Shattering isn't implemented\n");
+ /* We need to split the original page. */
+ pte_t split_pte = *entry;
+
+ ASSERT(pte_is_superpage(*entry, level));
+
+ if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets) )
+ {
+ /* Free the allocated sub-tree */
+ p2m_free_subtree(p2m, split_pte, level);
+
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ p2m_write_pte(entry, split_pte, p2m->clean_dcache);
+
+ p2m->need_flush = true;
+
+ /* Then move to the level we want to make real changes */
+ for ( ; level > target; level-- )
+ {
+ rc = p2m_next_level(p2m, true, level, &table, offsets[level]);
+
+ /*
+ * The entry should be found and either be a table
+ * or a superpage if level 0 is not targeted
+ */
+ ASSERT(rc == P2M_TABLE_NORMAL ||
+ (rc == P2M_TABLE_SUPER_PAGE && target > 0));
+ }
+
+ entry = table + offsets[level];
}
/*
--
2.51.0
* Re: [PATCH v4 14/18] xen/riscv: Implement superpage splitting for p2m mappings
2025-09-17 21:55 ` [PATCH v4 14/18] xen/riscv: Implement superpage splitting for p2m mappings Oleksii Kurochko
@ 2025-09-22 17:55 ` Jan Beulich
0 siblings, 0 replies; 62+ messages in thread
From: Jan Beulich @ 2025-09-22 17:55 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 17.09.2025 23:55, Oleksii Kurochko wrote:
> Add support for breaking down large memory mappings ("superpages") in the RISC-V
> p2m mapping so that smaller, more precise mappings ("finer-grained entries")
> can be inserted into lower levels of the page table hierarchy.
>
> To implement that the following is done:
> - Introduce p2m_split_superpage(): Recursively shatters a superpage into
> smaller page table entries down to the target level, preserving original
> permissions and attributes.
> - p2m_set_entry() updated to invoke superpage splitting when inserting
> entries at lower levels within a superpage-mapped region.
>
> This implementation is based on the ARM code, with modifications to the part
> that follows the BBM (break-before-make) approach; some parts are simplified
> because, according to the RISC-V spec:
> It is permitted for multiple address-translation cache entries to co-exist
> for the same address. This represents the fact that in a conventional
> TLB hierarchy, it is possible for multiple entries to match a single
> address if, for example, a page is upgraded to a superpage without first
> clearing the original non-leaf PTE’s valid bit and executing an SFENCE.VMA
> with rs1=x0, or if multiple TLBs exist in parallel at a given level of the
> hierarchy. In this case, just as if an SFENCE.VMA is not executed between
> a write to the memory-management tables and subsequent implicit read of the
> same address: it is unpredictable whether the old non-leaf PTE or the new
> leaf PTE is used, but the behavior is otherwise well defined.
> In contrast to the Arm architecture, where BBM is mandatory and failing to
> use it in some cases can lead to CPU instability, RISC-V guarantees
> stability, and the behavior remains safe — though unpredictable in terms of
> which translation will be used.
>
> Additionally, the page table walk logic has been adjusted, as ARM uses the
> opposite level numbering compared to RISC-V.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
* [PATCH v4 15/18] xen/riscv: implement put_page()
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
` (13 preceding siblings ...)
2025-09-17 21:55 ` [PATCH v4 14/18] xen/riscv: Implement superpage splitting for p2m mappings Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-22 19:54 ` Jan Beulich
2025-09-17 21:55 ` [PATCH v4 16/18] xen/riscv: implement mfn_valid() and page reference, ownership handling helpers Oleksii Kurochko
` (2 subsequent siblings)
17 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
Implement put_page(), as it will be used by p2m_put_*-related code.
Although CONFIG_STATIC_MEMORY has not yet been introduced for RISC-V,
a stub for PGC_static is added to avoid cluttering the code of
put_page() with #ifdefs.
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- Update the commit message:
s/p2m_put_code/p2m_put_*-related code.
s/put_page_nr/put_page.
---
xen/arch/riscv/include/asm/mm.h | 7 +++++++
xen/arch/riscv/mm.c | 25 ++++++++++++++++++++-----
2 files changed, 27 insertions(+), 5 deletions(-)
diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index dd8cdc9782..0503c92e6c 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -264,6 +264,13 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
/* Page is Xen heap? */
#define _PGC_xen_heap PG_shift(2)
#define PGC_xen_heap PG_mask(1, 2)
+#ifdef CONFIG_STATIC_MEMORY
+/* Page is static memory */
+#define _PGC_static PG_shift(3)
+#define PGC_static PG_mask(1, 3)
+#else
+#define PGC_static 0
+#endif
/* Page is broken? */
#define _PGC_broken PG_shift(7)
#define PGC_broken PG_mask(1, 7)
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 1ef015f179..3cac16f1b7 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -362,11 +362,6 @@ unsigned long __init calc_phys_offset(void)
return phys_offset;
}
-void put_page(struct page_info *page)
-{
- BUG_ON("unimplemented");
-}
-
void arch_dump_shared_mem_info(void)
{
BUG_ON("unimplemented");
@@ -627,3 +622,23 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icache)
if ( sync_icache )
invalidate_icache();
}
+
+void put_page(struct page_info *page)
+{
+ unsigned long nx, x, y = page->count_info;
+
+ do {
+ ASSERT((y & PGC_count_mask) >= 1);
+ x = y;
+ nx = x - 1;
+ }
+ while ( unlikely((y = cmpxchg(&page->count_info, x, nx)) != x) );
+
+ if ( unlikely((nx & PGC_count_mask) == 0) )
+ {
+ if ( unlikely(nx & PGC_static) )
+ free_domstatic_page(page);
+ else
+ free_domheap_page(page);
+ }
+}
--
2.51.0
* Re: [PATCH v4 15/18] xen/riscv: implement put_page()
2025-09-17 21:55 ` [PATCH v4 15/18] xen/riscv: implement put_page() Oleksii Kurochko
@ 2025-09-22 19:54 ` Jan Beulich
0 siblings, 0 replies; 62+ messages in thread
From: Jan Beulich @ 2025-09-22 19:54 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 17.09.2025 23:55, Oleksii Kurochko wrote:
> Implement put_page(), as it will be used by p2m_put_*-related code.
>
> Although CONFIG_STATIC_MEMORY has not yet been introduced for RISC-V,
> a stub for PGC_static is added to avoid cluttering the code of
> put_page() with #ifdefs.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
with ...
> @@ -627,3 +622,23 @@ void flush_page_to_ram(unsigned long mfn, bool sync_icache)
> if ( sync_icache )
> invalidate_icache();
> }
> +
> +void put_page(struct page_info *page)
> +{
> + unsigned long nx, x, y = page->count_info;
> +
> + do {
> + ASSERT((y & PGC_count_mask) >= 1);
> + x = y;
> + nx = x - 1;
> + }
> + while ( unlikely((y = cmpxchg(&page->count_info, x, nx)) != x) );
... style corrected here (just like for "do" the figure brace here also doesn't
want to go onto its own line).
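I.e. something along these lines (illustration only, re-using the code from
the patch):

    do {
        ASSERT((y & PGC_count_mask) >= 1);
        x = y;
        nx = x - 1;
    } while ( unlikely((y = cmpxchg(&page->count_info, x, nx)) != x) );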
Jan
* [PATCH v4 16/18] xen/riscv: implement mfn_valid() and page reference, ownership handling helpers
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
` (14 preceding siblings ...)
2025-09-17 21:55 ` [PATCH v4 15/18] xen/riscv: implement put_page() Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-22 20:02 ` Jan Beulich
2025-09-17 21:55 ` [PATCH v4 17/18] xen/riscv: add support of page lookup by GFN Oleksii Kurochko
2025-09-17 21:55 ` [PATCH v4 18/18] xen/riscv: introduce metadata table to store P2M type Oleksii Kurochko
17 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
Implement the mfn_valid() macro to verify whether a given MFN is valid by
checking that it falls within the range [start_page, max_page).
These bounds are initialized based on the start and end addresses of RAM.
As part of this patch, start_page is introduced and initialized with the
PFN of the first RAM page.
Also, initialize pdx_group_valid() by calling set_pdx_range() when
memory banks are being mapped.
Also, after providing a non-stub implementation of the mfn_valid() macro,
the following compilation errors started to occur:
riscv64-linux-gnu-ld: prelink.o: in function `alloc_heap_pages':
/build/xen/common/page_alloc.c:1054: undefined reference to `page_is_offlinable'
riscv64-linux-gnu-ld: /build/xen/common/page_alloc.c:1035: undefined reference to `page_is_offlinable'
riscv64-linux-gnu-ld: prelink.o: in function `reserve_offlined_page':
/build/xen/common/page_alloc.c:1151: undefined reference to `page_is_offlinable'
riscv64-linux-gnu-ld: ./.xen-syms.0: hidden symbol `page_is_offlinable' isn't defined
riscv64-linux-gnu-ld: final link failed: bad value
make[2]: *** [arch/riscv/Makefile:28: xen-syms] Error 1
To resolve these errors, the following functions have also been introduced,
based on their Arm counterparts:
- page_get_owner_and_reference() and its variant to safely acquire a
reference to a page and retrieve its owner.
- Implement page_is_offlinable() to return false for RISC-V.
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- Rebase the patch on top of patch series "[PATCH v2 0/2] constrain page_is_ram_type() to x86".
- Add implementation of page_is_offlinable() instead of page_is_ram().
- Update the commit message.
---
Changes in V3:
- Update definition of mfn_valid().
- Use __ro_after_init for variable start_page.
- Drop ASSERT_UNREACHABLE() in page_get_owner_and_nr_reference().
- Update the comment inside do/while in page_get_owner_and_nr_reference().
- Define _PGC_static and drop "#ifdef CONFIG_STATIC_MEMORY" in put_page_nr().
- Initialize pdx_group_valid() by calling set_pdx_range() when memory banks are mapped.
- Drop page_get_owner_and_nr_reference() and implement page_get_owner_and_reference()
without reusing page_get_owner_and_nr_reference(), to avoid potential dead code.
- Move definition of get_page() to "xen/riscv: add support of page lookup by GFN", where
it is really used.
---
Changes in V2:
- New patch.
---
xen/arch/riscv/include/asm/mm.h | 9 +++++++--
xen/arch/riscv/mm.c | 33 +++++++++++++++++++++++++++++++++
2 files changed, 40 insertions(+), 2 deletions(-)
diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 0503c92e6c..1b16809749 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -5,6 +5,7 @@
#include <public/xen.h>
#include <xen/bug.h>
+#include <xen/compiler.h>
#include <xen/const.h>
#include <xen/mm-frame.h>
#include <xen/pdx.h>
@@ -300,8 +301,12 @@ static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
#define page_get_owner(p) (p)->v.inuse.domain
#define page_set_owner(p, d) ((p)->v.inuse.domain = (d))
-/* TODO: implement */
-#define mfn_valid(mfn) ({ (void)(mfn); 0; })
+extern unsigned long start_page;
+
+#define mfn_valid(mfn) ({ \
+ unsigned long tmp_mfn = mfn_x(mfn); \
+ likely((tmp_mfn >= start_page)) && likely(__mfn_valid(tmp_mfn)); \
+})
#define domain_set_alloc_bitsize(d) ((void)(d))
#define domain_clamp_alloc_bitsize(d, b) ((void)(d), (b))
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 3cac16f1b7..8c6e8075f3 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -521,6 +521,8 @@ static void __init setup_directmap_mappings(unsigned long base_mfn,
#error setup_{directmap,frametable}_mapping() should be implemented for RV_32
#endif
+unsigned long __ro_after_init start_page;
+
/*
* Setup memory management
*
@@ -570,9 +572,13 @@ void __init setup_mm(void)
ram_end = max(ram_end, bank_end);
setup_directmap_mappings(PFN_DOWN(bank_start), PFN_DOWN(bank_size));
+
+ set_pdx_range(paddr_to_pfn(bank_start), paddr_to_pfn(bank_end));
}
setup_frametable_mappings(ram_start, ram_end);
+
+ start_page = PFN_DOWN(ram_start);
max_page = PFN_DOWN(ram_end);
}
@@ -642,3 +648,30 @@ void put_page(struct page_info *page)
free_domheap_page(page);
}
}
+
+bool page_is_offlinable(mfn_t mfn)
+{
+ return false;
+}
+
+struct domain *page_get_owner_and_reference(struct page_info *page)
+{
+ unsigned long x, y = page->count_info;
+ struct domain *owner;
+
+ do {
+ x = y;
+ /*
+ * Count == 0: Page is not allocated, so we cannot take a reference.
+ * Count == -1: Reference count would wrap, which is invalid.
+ */
+ if ( unlikely(((x + 1) & PGC_count_mask) <= 1) )
+ return NULL;
+ }
+ while ( (y = cmpxchg(&page->count_info, x, x + 1)) != x );
+
+ owner = page_get_owner(page);
+ ASSERT(owner);
+
+ return owner;
+}
--
2.51.0
* Re: [PATCH v4 16/18] xen/riscv: implement mfn_valid() and page reference, ownership handling helpers
2025-09-17 21:55 ` [PATCH v4 16/18] xen/riscv: implement mfn_valid() and page reference, ownership handling helpers Oleksii Kurochko
@ 2025-09-22 20:02 ` Jan Beulich
0 siblings, 0 replies; 62+ messages in thread
From: Jan Beulich @ 2025-09-22 20:02 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 17.09.2025 23:55, Oleksii Kurochko wrote:
> Implement the mfn_valid() macro to verify whether a given MFN is valid by
> checking that it falls within the range [start_page, max_page).
> These bounds are initialized based on the start and end addresses of RAM.
>
> As part of this patch, start_page is introduced and initialized with the
> PFN of the first RAM page.
> Also, initialize pdx_group_valid() by calling set_pdx_range() when
> memory banks are being mapped.
>
> Also, after providing a non-stub implementation of the mfn_valid() macro,
> the following compilation errors started to occur:
> riscv64-linux-gnu-ld: prelink.o: in function `alloc_heap_pages':
> /build/xen/common/page_alloc.c:1054: undefined reference to `page_is_offlinable'
> riscv64-linux-gnu-ld: /build/xen/common/page_alloc.c:1035: undefined reference to `page_is_offlinable'
> riscv64-linux-gnu-ld: prelink.o: in function `reserve_offlined_page':
> /build/xen/common/page_alloc.c:1151: undefined reference to `page_is_offlinable'
> riscv64-linux-gnu-ld: ./.xen-syms.0: hidden symbol `page_is_offlinable' isn't defined
> riscv64-linux-gnu-ld: final link failed: bad value
> make[2]: *** [arch/riscv/Makefile:28: xen-syms] Error 1
>
> To resolve these errors, the following functions have also been introduced,
> based on their Arm counterparts:
> - page_get_owner_and_reference() and its variant to safely acquire a
> reference to a page and retrieve its owner.
> - Implement page_is_offlinable() to return false for RISC-V.
>
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
with two cosmetic adjustments:
> @@ -642,3 +648,30 @@ void put_page(struct page_info *page)
> free_domheap_page(page);
> }
> }
> +
> +bool page_is_offlinable(mfn_t mfn)
> +{
> + return false;
> +}
I think this wants to move elsewhere, or ...
> +struct domain *page_get_owner_and_reference(struct page_info *page)
... this wants to move up, such that the "get" and "put" logic are next
to each other.
> +{
> + unsigned long x, y = page->count_info;
> + struct domain *owner;
> +
> + do {
> + x = y;
> + /*
> + * Count == 0: Page is not allocated, so we cannot take a reference.
> + * Count == -1: Reference count would wrap, which is invalid.
> + */
> + if ( unlikely(((x + 1) & PGC_count_mask) <= 1) )
> + return NULL;
> + }
> + while ( (y = cmpxchg(&page->count_info, x, x + 1)) != x );
This again wants the figure brace placement corrected.
Jan
* [PATCH v4 17/18] xen/riscv: add support of page lookup by GFN
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
` (15 preceding siblings ...)
2025-09-17 21:55 ` [PATCH v4 16/18] xen/riscv: implement mfn_valid() and page reference, ownership handling helpers Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-22 20:46 ` Jan Beulich
2025-09-17 21:55 ` [PATCH v4 18/18] xen/riscv: introduce metadata table to store P2M type Oleksii Kurochko
17 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
Introduce helper functions for safely querying the P2M (physical-to-machine)
mapping:
- add p2m_read_lock(), p2m_read_unlock(), and p2m_is_locked() for managing
P2M lock state.
- Implement p2m_get_entry() to retrieve mapping details for a given GFN,
including MFN, page order, and validity.
- Add p2m_lookup() to encapsulate read-locked MFN retrieval.
- Introduce p2m_get_page_from_gfn() to convert a GFN into a page_info
pointer, acquiring a reference to the page if valid.
- Introduce get_page().
Implementations are based on Arm's functions with some minor modifications:
- p2m_get_entry():
- Reverse traversal of page tables, as RISC-V uses the opposite level
numbering compared to Arm.
- Removed the return of p2m_access_t from p2m_get_entry() since
mem_access_settings is not introduced for RISC-V.
- Updated BUILD_BUG_ON() to check using the level 0 mask, which corresponds
to Arm's THIRD_MASK.
- Replaced open-coded bit shifts with the BIT() macro.
- Other minor changes, such as using RISC-V-specific functions to validate
P2M PTEs, and replacing Arm-specific GUEST_* macros with their RISC-V
equivalents.
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- Update prototype of p2m_is_locked() to return bool and accept pointer-to-const.
- Correct the comment above p2m_get_entry().
- Drop the check "BUILD_BUG_ON(XEN_PT_LEVEL_MAP_MASK(0) != PAGE_MASK);" inside
p2m_get_entry() as it is stale; it was needed to ensure that 4k page(s) are
used at L3 (in Arm terms), which is true for RISC-V (unless special extensions
are used). Arm had another reason to have it (and I copied it to RISC-V),
but that reason doesn't apply to RISC-V (some details can be found in the
responses to the patch).
- Style fixes.
- Add an explanatory comment describing what the loop inside "gfn is higher than
the highest p2m mapping" does. Move this loop to a separate function
check_outside_boundary() to cover both boundaries (lowest_mapped_gfn and
max_mapped_gfn).
- There is no need to allocate a page table, as p2m_get_entry() is normally
expected to be called after a corresponding p2m_set_entry() was called. So
change 'true' to 'false' in the page table walking loop inside
p2m_get_entry().
- Correct handling of p2m_is_foreign case inside p2m_get_page_from_gfn().
- Introduce and use P2M_LEVEL_MASK instead of XEN_PT_LEVEL_MASK, as the latter
doesn't take into account the two extra bits for the root table in the P2M case.
- Drop stale item from "change in v3" - Add is_p2m_foreign() macro and connected stuff.
- Add p2m_read_(un)lock().
---
Changes in V3:
- Change struct domain *d argument of p2m_get_page_from_gfn() to
struct p2m_domain.
- Update the comment above p2m_get_entry().
- s/_t/p2mt for local variable in p2m_get_entry().
- Drop local variable addr in p2m_get_entry() and use gfn_to_gaddr(gfn)
to define offsets array.
- Code style fixes.
- Update a check of rc code from p2m_next_level() in p2m_get_entry()
and drop "else" case.
- Do not call p2m_get_type() if p2m_get_entry()'s t argument is NULL.
- Use struct p2m_domain instead of struct domain for p2m_lookup() and
p2m_get_page_from_gfn().
- Move definition of get_page() from "xen/riscv: implement mfn_valid() and page reference, ownership handling helpers"
---
Changes in V2:
- New patch.
---
xen/arch/riscv/include/asm/p2m.h | 24 ++++
xen/arch/riscv/mm.c | 13 +++
xen/arch/riscv/p2m.c | 186 +++++++++++++++++++++++++++++++
3 files changed, 223 insertions(+)
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 29685c7852..2d0b0375d5 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -44,6 +44,12 @@ extern unsigned int gstage_root_level;
#define P2M_PAGETABLE_ENTRIES(lvl) \
(BIT(PAGETABLE_ORDER + P2M_ROOT_EXTRA_BITS(lvl), UL))
+#define GFN_MASK(lvl) (P2M_PAGETABLE_ENTRIES(lvl) - 1UL)
+
+#define P2M_LEVEL_SHIFT(lvl) (P2M_LEVEL_ORDER(lvl) + PAGE_SHIFT)
+
+#define P2M_LEVEL_MASK(lvl) (GFN_MASK(lvl) << P2M_LEVEL_SHIFT(lvl))
+
#define paddr_bits PADDR_BITS
/* Get host p2m table */
@@ -229,6 +235,24 @@ static inline bool p2m_is_write_locked(struct p2m_domain *p2m)
unsigned long construct_hgatp(struct p2m_domain *p2m, uint16_t vmid);
+static inline void p2m_read_lock(struct p2m_domain *p2m)
+{
+ read_lock(&p2m->lock);
+}
+
+static inline void p2m_read_unlock(struct p2m_domain *p2m)
+{
+ read_unlock(&p2m->lock);
+}
+
+static inline bool p2m_is_locked(const struct p2m_domain *p2m)
+{
+ return rw_is_locked(&p2m->lock);
+}
+
+struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
+ p2m_type_t *t);
+
#endif /* ASM__RISCV__P2M_H */
/*
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 8c6e8075f3..e34b1b674a 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -675,3 +675,16 @@ struct domain *page_get_owner_and_reference(struct page_info *page)
return owner;
}
+
+bool get_page(struct page_info *page, const struct domain *domain)
+{
+ const struct domain *owner = page_get_owner_and_reference(page);
+
+ if ( likely(owner == domain) )
+ return true;
+
+ if ( owner != NULL )
+ put_page(page);
+
+ return false;
+}
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 1577b09b15..a5ea61fe61 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -978,3 +978,189 @@ int map_regions_p2mt(struct domain *d,
return rc;
}
+
+
+/*
+ * p2m_get_entry() should always return the correct order value, even if an
+ * entry is not present (i.e. the GFN is outside the range):
+ * [p2m->lowest_mapped_gfn, p2m->max_mapped_gfn]). (1)
+ *
+ * This ensures that callers of p2m_get_entry() can determine what range of
+ * address space would be altered by a corresponding p2m_set_entry().
+ * Also, it would help to avoid costly page walks for GFNs outside range (1).
+ *
+ * Therefore, this function returns true for GFNs outside range (1), and in
+ * that case the corresponding level is returned via the level_out argument.
+ * Otherwise, it returns false and p2m_get_entry() performs a page walk to
+ * find the proper entry.
+ */
+static bool check_outside_boundary(gfn_t gfn, gfn_t boundary, bool is_lower,
+ unsigned int *level_out)
+{
+ unsigned int level;
+
+ if ( (is_lower && gfn_x(gfn) < gfn_x(boundary)) ||
+ (!is_lower && gfn_x(gfn) > gfn_x(boundary)) )
+ {
+ for ( level = P2M_ROOT_LEVEL; level; level-- )
+ {
+ unsigned long mask = PFN_DOWN(P2M_LEVEL_MASK(level));
+
+ if ( (is_lower && ((gfn_x(gfn) & mask) < gfn_x(boundary))) ||
+ (!is_lower && ((gfn_x(gfn) & mask) > gfn_x(boundary))) )
+ {
+ *level_out = level;
+ return true;
+ }
+ }
+ }
+
+ return false;
+}
+
+/*
+ * Get the details of a given gfn.
+ *
+ * If the entry is present, the associated MFN will be returned and the
+ * p2m type of the mapping.
+ * The page_order will correspond to the order of the mapping in the page
+ * table (i.e it could be a superpage).
+ *
+ * If the entry is not present, INVALID_MFN will be returned and the
+ * page_order will be set according to the order of the invalid range.
+ *
+ * valid will contain the value of bit[0] (i.e. the valid bit) of the
+ * entry.
+ */
+static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
+ p2m_type_t *t,
+ unsigned int *page_order,
+ bool *valid)
+{
+ unsigned int level = 0;
+ pte_t entry, *table;
+ int rc;
+ mfn_t mfn = INVALID_MFN;
+ DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
+
+ ASSERT(p2m_is_locked(p2m));
+
+ if ( valid )
+ *valid = false;
+
+ if ( check_outside_boundary(gfn, p2m->lowest_mapped_gfn, true, &level) )
+ goto out;
+
+ if ( check_outside_boundary(gfn, p2m->max_mapped_gfn, false, &level) )
+ goto out;
+
+ table = p2m_get_root_pointer(p2m, gfn);
+
+ /*
+ * The table should always be non-NULL because the gfn is below
+ * p2m->max_mapped_gfn and the root table pages are always present.
+ */
+ if ( !table )
+ {
+ ASSERT_UNREACHABLE();
+ level = P2M_ROOT_LEVEL;
+ goto out;
+ }
+
+ for ( level = P2M_ROOT_LEVEL; level; level-- )
+ {
+ rc = p2m_next_level(p2m, false, level, &table, offsets[level]);
+ if ( (rc == P2M_TABLE_MAP_NONE) || (rc == P2M_TABLE_MAP_NOMEM) )
+ goto out_unmap;
+
+ if ( rc != P2M_TABLE_NORMAL )
+ break;
+ }
+
+ entry = table[offsets[level]];
+
+ if ( pte_is_valid(entry) )
+ {
+ if ( t )
+ *t = p2m_get_type(entry);
+
+ mfn = pte_get_mfn(entry);
+ /*
+ * The entry may point to a superpage. Find the MFN associated
+ * to the GFN.
+ */
+ mfn = mfn_add(mfn,
+ gfn_x(gfn) & (BIT(P2M_LEVEL_ORDER(level), UL) - 1));
+
+ if ( valid )
+ *valid = pte_is_valid(entry);
+ }
+
+ out_unmap:
+ unmap_domain_page(table);
+
+ out:
+ if ( page_order )
+ *page_order = P2M_LEVEL_ORDER(level);
+
+ return mfn;
+}
+
+static mfn_t p2m_lookup(struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t)
+{
+ mfn_t mfn;
+
+ p2m_read_lock(p2m);
+ mfn = p2m_get_entry(p2m, gfn, t, NULL, NULL);
+ p2m_read_unlock(p2m);
+
+ return mfn;
+}
+
+struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
+ p2m_type_t *t)
+{
+ struct page_info *page;
+ p2m_type_t p2mt = p2m_invalid;
+ mfn_t mfn;
+
+ p2m_read_lock(p2m);
+ mfn = p2m_lookup(p2m, gfn, t);
+
+ if ( !mfn_valid(mfn) )
+ {
+ p2m_read_unlock(p2m);
+ return NULL;
+ }
+
+ if ( t )
+ p2mt = *t;
+
+ page = mfn_to_page(mfn);
+
+ /*
+ * get_page won't work on foreign mapping because the page doesn't
+ * belong to the current domain.
+ */
+ if ( unlikely(p2m_is_foreign(p2mt)) )
+ {
+ const struct domain *fdom = page_get_owner_and_reference(page);
+
+ p2m_read_unlock(p2m);
+
+ if ( fdom )
+ {
+ if ( likely(fdom != p2m->domain) )
+ return page;
+
+ ASSERT_UNREACHABLE();
+ put_page(page);
+ }
+
+ return NULL;
+ }
+
+ p2m_read_unlock(p2m);
+
+ return get_page(page, p2m->domain) ? page : NULL;
+}
--
2.51.0
* Re: [PATCH v4 17/18] xen/riscv: add support of page lookup by GFN
2025-09-17 21:55 ` [PATCH v4 17/18] xen/riscv: add support of page lookup by GFN Oleksii Kurochko
@ 2025-09-22 20:46 ` Jan Beulich
2025-09-30 15:37 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2025-09-22 20:46 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 17.09.2025 23:55, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -978,3 +978,189 @@ int map_regions_p2mt(struct domain *d,
>
> return rc;
> }
> +
> +
Nit: No double blank lines please.
> +/*
> + * p2m_get_entry() should always return the correct order value, even if an
> + * entry is not present (i.e. the GFN is outside the range):
> + * [p2m->lowest_mapped_gfn, p2m->max_mapped_gfn]). (1)
> + *
> + * This ensures that callers of p2m_get_entry() can determine what range of
> + * address space would be altered by a corresponding p2m_set_entry().
> + * Also, it would help to avoid cost page walks for GFNs outside range (1).
> + *
> + * Therefore, this function returns true for GFNs outside range (1), and in
> + * that case the corresponding level is returned via the level_out argument.
> + * Otherwise, it returns false and p2m_get_entry() performs a page walk to
> + * find the proper entry.
> + */
> +static bool check_outside_boundary(gfn_t gfn, gfn_t boundary, bool is_lower,
> + unsigned int *level_out)
> +{
> + unsigned int level;
> +
> + if ( (is_lower && gfn_x(gfn) < gfn_x(boundary)) ||
> + (!is_lower && gfn_x(gfn) > gfn_x(boundary)) )
I understand people write things this way, but personally I find it confusing
to read. Why not simply use a conditional operator here (and again below):
    if ( is_lower ? gfn_x(gfn) < gfn_x(boundary)
                  : gfn_x(gfn) > gfn_x(boundary) )
> + {
> + for ( level = P2M_ROOT_LEVEL; level; level-- )
> + {
> + unsigned long mask = PFN_DOWN(P2M_LEVEL_MASK(level));
Don't you need to accumulate the mask to use across loop iterations here
(or calculate it accordingly)? Else ...
> + if ( (is_lower && ((gfn_x(gfn) & mask) < gfn_x(boundary))) ||
> + (!is_lower && ((gfn_x(gfn) & mask) > gfn_x(boundary))) )
... here you'll compare some middle part of the original GFN against the
boundary.
> + {
> + *level_out = level;
> + return true;
> + }
> + }
> + }
> +
> + return false;
> +}
> +
> +/*
> + * Get the details of a given gfn.
> + *
> + * If the entry is present, the associated MFN will be returned and the
> + * p2m type of the mapping.
> + * The page_order will correspond to the order of the mapping in the page
> + * table (i.e it could be a superpage).
> + *
> + * If the entry is not present, INVALID_MFN will be returned and the
> + * page_order will be set according to the order of the invalid range.
> + *
> + * valid will contain the value of bit[0] (e.g valid bit) of the
> + * entry.
> + */
> +static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
> + p2m_type_t *t,
> + unsigned int *page_order,
> + bool *valid)
> +{
> + unsigned int level = 0;
> + pte_t entry, *table;
> + int rc;
> + mfn_t mfn = INVALID_MFN;
> + DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
> +
> + ASSERT(p2m_is_locked(p2m));
> +
> + if ( valid )
> + *valid = false;
Wouldn't you better similarly set *t to some "default" value?
> + if ( check_outside_boundary(gfn, p2m->lowest_mapped_gfn, true, &level) )
> + goto out;
> +
> + if ( check_outside_boundary(gfn, p2m->max_mapped_gfn, false, &level) )
> + goto out;
> +
> + table = p2m_get_root_pointer(p2m, gfn);
> +
> + /*
> + * The table should always be non-NULL because the gfn is below
> + * p2m->max_mapped_gfn and the root table pages are always present.
> + */
> + if ( !table )
> + {
> + ASSERT_UNREACHABLE();
> + level = P2M_ROOT_LEVEL;
> + goto out;
> + }
> +
> + for ( level = P2M_ROOT_LEVEL; level; level-- )
> + {
> + rc = p2m_next_level(p2m, false, level, &table, offsets[level]);
> + if ( (rc == P2M_TABLE_MAP_NONE) || (rc == P2M_TABLE_MAP_NOMEM) )
> + goto out_unmap;
Getting back P2M_TABLE_MAP_NOMEM here is a bug, not really a loop exit
condition.
> + if ( rc != P2M_TABLE_NORMAL )
> + break;
> + }
> +
> + entry = table[offsets[level]];
> +
> + if ( pte_is_valid(entry) )
> + {
> + if ( t )
> + *t = p2m_get_type(entry);
> +
> + mfn = pte_get_mfn(entry);
> + /*
> + * The entry may point to a superpage. Find the MFN associated
> + * to the GFN.
> + */
> + mfn = mfn_add(mfn,
> + gfn_x(gfn) & (BIT(P2M_LEVEL_ORDER(level), UL) - 1));
May want to assert that the respective bits of "mfn" are actually clear
before this calculation.
> + if ( valid )
> + *valid = pte_is_valid(entry);
> + }
> +
> + out_unmap:
> + unmap_domain_page(table);
> +
> + out:
> + if ( page_order )
> + *page_order = P2M_LEVEL_ORDER(level);
> +
> + return mfn;
> +}
> +
> +static mfn_t p2m_lookup(struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t)
> +{
> + mfn_t mfn;
> +
> + p2m_read_lock(p2m);
> + mfn = p2m_get_entry(p2m, gfn, t, NULL, NULL);
Seeing the two NULLs here I wonder: What use is the "valid" parameter of that
function? And what use is the function here when it doesn't also return the
order? IOW I'm not sure having this helper is actually worthwhile. This is
even more so that ...
> + p2m_read_unlock(p2m);
> +
> + return mfn;
> +}
> +
> +struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
> + p2m_type_t *t)
> +{
> + struct page_info *page;
> + p2m_type_t p2mt = p2m_invalid;
> + mfn_t mfn;
> +
> + p2m_read_lock(p2m);
> + mfn = p2m_lookup(p2m, gfn, t);
... there's a locking problem here: You cannot acquire a read lock in a
nested fashion - that's a recipe for a deadlock when between the first
acquire and the 2nd acquire attempt another CPU tries to acquire the
lock for writing (which will result in no further readers being allowed
in). It wasn't all that long ago that in the security team we actually
audited the code base for the absence of such a pattern.
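To illustrate the problematic interleaving (a sketch, assuming the usual
rwlock behaviour where a waiting writer holds off new readers):

    /*
     * CPU A                          CPU B
     * p2m_read_lock(p2m);
     *                                p2m_write_lock(p2m); <- waits; new readers
     *                                                        are now held off
     * p2m_read_lock(p2m); <- nested attempt never succeeds => deadlock
     */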
Jan
* Re: [PATCH v4 17/18] xen/riscv: add support of page lookup by GFN
2025-09-22 20:46 ` Jan Beulich
@ 2025-09-30 15:37 ` Oleksii Kurochko
2025-10-07 13:14 ` Jan Beulich
0 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-30 15:37 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 9/22/25 10:46 PM, Jan Beulich wrote:
> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>> --- a/xen/arch/riscv/p2m.c
>> +++ b/xen/arch/riscv/p2m.c
>> @@ -978,3 +978,189 @@ int map_regions_p2mt(struct domain *d,
>>
>> return rc;
>> }
>> +
>> +
> Nit: No double blank lines please.
>
>> +/*
>> + * p2m_get_entry() should always return the correct order value, even if an
>> + * entry is not present (i.e. the GFN is outside the range):
>> + * [p2m->lowest_mapped_gfn, p2m->max_mapped_gfn]). (1)
>> + *
>> + * This ensures that callers of p2m_get_entry() can determine what range of
>> + * address space would be altered by a corresponding p2m_set_entry().
>> + * Also, it would help to avoid cost page walks for GFNs outside range (1).
>> + *
>> + * Therefore, this function returns true for GFNs outside range (1), and in
>> + * that case the corresponding level is returned via the level_out argument.
>> + * Otherwise, it returns false and p2m_get_entry() performs a page walk to
>> + * find the proper entry.
>> + */
>> +static bool check_outside_boundary(gfn_t gfn, gfn_t boundary, bool is_lower,
>> + unsigned int *level_out)
>> +{
>> + unsigned int level;
>> +
>> + if ( (is_lower && gfn_x(gfn) < gfn_x(boundary)) ||
>> + (!is_lower && gfn_x(gfn) > gfn_x(boundary)) )
> I understand people write things this way, but personally I find it confusing
> to read. Why not simply use a conditional operator here (and again below):
>
> if ( is_lower ? gfn_x(gfn) < gfn_x(boundary)
> : gfn_x(gfn) > gfn_x(boundary) )
I am okay with both options. If you think the second one is more readable then I
will use it.
>> + {
>> + for ( level = P2M_ROOT_LEVEL; level; level-- )
>> + {
>> + unsigned long mask = PFN_DOWN(P2M_LEVEL_MASK(level));
> Don't you need to accumulate the mask to use across loop iterations here
> (or calculate it accordingly)? Else ...
>
>> + if ( (is_lower && ((gfn_x(gfn) & mask) < gfn_x(boundary))) ||
>> + (!is_lower && ((gfn_x(gfn) & mask) > gfn_x(boundary))) )
> ... here you'll compare some middle part of the original GFN against the
> boundary.
Agree, accumulation of the mask should be done here.
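Something along these lines, I suppose (just a sketch, accumulating the bits
of all levels walked so far):

    unsigned long mask = 0;

    for ( level = P2M_ROOT_LEVEL; level; level-- )
    {
        /* Accumulate the index bits from the root level down to this one. */
        mask |= PFN_DOWN(P2M_LEVEL_MASK(level));

        if ( is_lower ? (gfn_x(gfn) & mask) < gfn_x(boundary)
                      : (gfn_x(gfn) & mask) > gfn_x(boundary) )
        {
            *level_out = level;
            return true;
        }
    }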
>> + {
>> + *level_out = level;
>> + return true;
>> + }
>> + }
>> + }
>> +
>> + return false;
>> +}
>> +
>> +/*
>> + * Get the details of a given gfn.
>> + *
>> + * If the entry is present, the associated MFN will be returned and the
>> + * p2m type of the mapping.
>> + * The page_order will correspond to the order of the mapping in the page
>> + * table (i.e it could be a superpage).
>> + *
>> + * If the entry is not present, INVALID_MFN will be returned and the
>> + * page_order will be set according to the order of the invalid range.
>> + *
>> + * valid will contain the value of bit[0] (e.g valid bit) of the
>> + * entry.
>> + */
>> +static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
>> + p2m_type_t *t,
>> + unsigned int *page_order,
>> + bool *valid)
>> +{
>> + unsigned int level = 0;
>> + pte_t entry, *table;
>> + int rc;
>> + mfn_t mfn = INVALID_MFN;
>> + DECLARE_OFFSETS(offsets, gfn_to_gaddr(gfn));
>> +
>> + ASSERT(p2m_is_locked(p2m));
>> +
>> + if ( valid )
>> + *valid = false;
> Wouldn't you better similarly set *t to some "default" value?
I think it makes sense. I will set it to p2m_invalid.
>> + if ( check_outside_boundary(gfn, p2m->lowest_mapped_gfn, true, &level) )
>> + goto out;
>> +
>> + if ( check_outside_boundary(gfn, p2m->max_mapped_gfn, false, &level) )
>> + goto out;
>> +
>> + table = p2m_get_root_pointer(p2m, gfn);
>> +
>> + /*
>> + * The table should always be non-NULL because the gfn is below
>> + * p2m->max_mapped_gfn and the root table pages are always present.
>> + */
>> + if ( !table )
>> + {
>> + ASSERT_UNREACHABLE();
>> + level = P2M_ROOT_LEVEL;
>> + goto out;
>> + }
>> +
>> + for ( level = P2M_ROOT_LEVEL; level; level-- )
>> + {
>> + rc = p2m_next_level(p2m, false, level, &table, offsets[level]);
>> + if ( (rc == P2M_TABLE_MAP_NONE) || (rc == P2M_TABLE_MAP_NOMEM) )
>> + goto out_unmap;
> Getting back P2M_TABLE_MAP_NOMEM here is a bug, not really a loop exit
> condition.
Oh, I agree. With the second argument set to false, rc = P2M_TABLE_MAP_NOMEM
will never be returned, so it can simply be dropped.
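I.e. the exit check in the walking loop would then simply become (sketch):

    rc = p2m_next_level(p2m, false, level, &table, offsets[level]);
    if ( rc == P2M_TABLE_MAP_NONE )
        goto out_unmap;

    if ( rc != P2M_TABLE_NORMAL )
        break;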
>
>> + if ( rc != P2M_TABLE_NORMAL )
>> + break;
>> + }
>> +
>> + entry = table[offsets[level]];
>> +
>> + if ( pte_is_valid(entry) )
>> + {
>> + if ( t )
>> + *t = p2m_get_type(entry);
>> +
>> + mfn = pte_get_mfn(entry);
>> + /*
>> + * The entry may point to a superpage. Find the MFN associated
>> + * to the GFN.
>> + */
>> + mfn = mfn_add(mfn,
>> + gfn_x(gfn) & (BIT(P2M_LEVEL_ORDER(level), UL) - 1));
> May want to assert that the respective bits of "mfn" are actually clear
> before this calculation.
ASSERT(!(mfn & (BIT(P2M_LEVEL_ORDER(level), UL) - 1)));
Do you mean something like that?
I am not 100% sure that there is really a need for that, as a page-fault exception
is raised if the PA is insufficiently aligned:
Any level of PTE may be a leaf PTE, so in addition to 4 KiB pages, Sv39 supports
2 MiB megapages and 1 GiB gigapages, each of which must be virtually and
physically aligned to a boundary equal to its size. A page-fault exception is
raised if the physical address is insufficiently aligned.
>
>> + if ( valid )
>> + *valid = pte_is_valid(entry);
>> + }
>> +
>> + out_unmap:
>> + unmap_domain_page(table);
>> +
>> + out:
>> + if ( page_order )
>> + *page_order = P2M_LEVEL_ORDER(level);
>> +
>> + return mfn;
>> +}
>> +
>> +static mfn_t p2m_lookup(struct p2m_domain *p2m, gfn_t gfn, p2m_type_t *t)
>> +{
>> + mfn_t mfn;
>> +
>> + p2m_read_lock(p2m);
>> + mfn = p2m_get_entry(p2m, gfn, t, NULL, NULL);
> Seeing the two NULLs here I wonder: What use is the "valid" parameter of that
> function?
The `valid` parameter isn't really needed anymore. It was needed when I kept a copy
of the valid bit while the real (in-PTE) valid bit was set to 0, to track which
pages are used.
I will drop `valid` parameter.
> And what use is the function here when it doesn't also return the
> order?
It could be used for gfn_to_mfn(), but p2m_get_entry() could be used there too; one
just has to remember to wrap every p2m_get_entry() call in p2m_read_(un)lock().
Probably it makes sense to put p2m_read_(un)lock() inside p2m_get_entry().
I think we can keep only p2m_get_entry() and drop p2m_lookup().
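E.g. (only a sketch of the latter option, using this version's signature of
p2m_get_entry(); the foreign-type and reference handling from this patch would
stay, just under a single lock acquisition):

    struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
                                            p2m_type_t *t)
    {
        struct page_info *page;
        mfn_t mfn;

        p2m_read_lock(p2m);

        /* p2m_get_entry() itself only ASSERTs that the lock is held. */
        mfn = p2m_get_entry(p2m, gfn, t, NULL, NULL);

        if ( !mfn_valid(mfn) )
        {
            p2m_read_unlock(p2m);
            return NULL;
        }

        page = mfn_to_page(mfn);

        /* Foreign mappings would be handled here, as in the patch above. */

        p2m_read_unlock(p2m);

        return get_page(page, p2m->domain) ? page : NULL;
    }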
> IOW I'm not sure having this helper is actually worthwhile. This is
> even more so that ...
>> + p2m_read_unlock(p2m);
>> +
>> + return mfn;
>> +}
>> +
>> +struct page_info *p2m_get_page_from_gfn(struct p2m_domain *p2m, gfn_t gfn,
>> + p2m_type_t *t)
>> +{
>> + struct page_info *page;
>> + p2m_type_t p2mt = p2m_invalid;
>> + mfn_t mfn;
>> +
>> + p2m_read_lock(p2m);
>> + mfn = p2m_lookup(p2m, gfn, t);
> ... there's a locking problem here: You cannot acquire a read lock in a
> nested fashion - that's a recipe for a deadlock when between the first
> acquire and the 2nd acquire attempt another CPU tries to acquire the
> lock for writing (which will result in no further readers being allowed
> in). It wasn't all that long ago that in the security team we actually
> audited the code base for the absence of such a pattern.
Oh, I missed that case. Thanks for the explanation and review.
~ Oleksii
* Re: [PATCH v4 17/18] xen/riscv: add support of page lookup by GFN
2025-09-30 15:37 ` Oleksii Kurochko
@ 2025-10-07 13:14 ` Jan Beulich
0 siblings, 0 replies; 62+ messages in thread
From: Jan Beulich @ 2025-10-07 13:14 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 30.09.2025 17:37, Oleksii Kurochko wrote:
> On 9/22/25 10:46 PM, Jan Beulich wrote:
>> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>>> + if ( rc != P2M_TABLE_NORMAL )
>>> + break;
>>> + }
>>> +
>>> + entry = table[offsets[level]];
>>> +
>>> + if ( pte_is_valid(entry) )
>>> + {
>>> + if ( t )
>>> + *t = p2m_get_type(entry);
>>> +
>>> + mfn = pte_get_mfn(entry);
>>> + /*
>>> + * The entry may point to a superpage. Find the MFN associated
>>> + * to the GFN.
>>> + */
>>> + mfn = mfn_add(mfn,
>>> + gfn_x(gfn) & (BIT(P2M_LEVEL_ORDER(level), UL) - 1));
>> May want to assert that the respective bits of "mfn" are actually clear
>> before this calculation.
>
> ASSERT(!(mfn & (BIT(P2M_LEVEL_ORDER(level), UL) - 1)));
> Do you mean something like that?
Yes.
> I am not 100% sure that there is really need for that as page-fault exception
> is raised if the PA is insufficienlty aligned:
> Any level of PTE may be a leaf PTE, so in addition to 4 KiB pages, Sv39 supports
> 2 MiB megapages and 1 GiB gigapages, each of which must be virtually and
> physically aligned to a boundary equal to its size. A page-fault exception is
> raised if the physical address is insufficiently aligned.
But that would be raised only when a page walk encounters such a PTE. You may
be altering a PTE here which never was involved in a page walk, though.
Jan
* [PATCH v4 18/18] xen/riscv: introduce metadata table to store P2M type
2025-09-17 21:55 [PATCH v4 00/18 for 4.22] xen/riscv: introduce p2m functionality Oleksii Kurochko
` (16 preceding siblings ...)
2025-09-17 21:55 ` [PATCH v4 17/18] xen/riscv: add support of page lookup by GFN Oleksii Kurochko
@ 2025-09-17 21:55 ` Oleksii Kurochko
2025-09-22 22:41 ` Jan Beulich
17 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-09-17 21:55 UTC (permalink / raw)
To: xen-devel
Cc: Oleksii Kurochko, Alistair Francis, Bob Eshleman, Connor Davis,
Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
Julien Grall, Roger Pau Monné, Stefano Stabellini
RISC-V's PTE has only two available bits that can be used to store the P2M
type. This is insufficient to represent all the current RISC-V P2M types.
Therefore, some P2M types must be stored outside the PTE bits.
To address this, a metadata table is introduced to store P2M types that
cannot fit in the PTE itself. Not all P2M types are stored in the
metadata table—only those that require it.
The metadata table is linked to the intermediate page table via the
`struct page_info`'s v.md.metadata field of the corresponding intermediate
page.
Such pages are allocated with MEMF_no_owner, which allows us to use
the v field for the purpose of storing the metadata table.
To simplify the allocation and linking of intermediate and metadata page
tables, `p2m_{alloc,free}_table()` functions are implemented.
These changes impact `p2m_split_superpage()`, since when a superpage is
split, it is necessary to update the metadata table of the new
intermediate page table — if the entry being split has its P2M type set
to `p2m_ext_storage` in its `P2M_TYPES` bits. In addition to updating
the metadata of the new intermediate page table, the corresponding entry
in the metadata for the original superpage is invalidated.
Also, update p2m_{get,set}_type to work with P2M types which don't fit
into PTE bits.
Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes in V4:
- Add Suggested-by: Jan Beulich <jbeulich@suse.com>.
- Update the comment above declaration of md structure inside struct page_info to:
"Page is used as an intermediate P2M page table".
- Allocate metadata table on demand to save some memory. (1)
- Rework p2m_set_type():
- Add allocation of metadata page only if needed.
- Move a check what kind of type we are handling inside p2m_set_type().
- Move mapping of metadata page inside p2m_get_type() as it is needed only
in case if PTE's type is equal to p2m_ext_storage.
- Add some description to p2m_get_type() function.
- Drop blank after return type of p2m_alloc_table().
- Drop allocation of metadata page inside p2m_alloc_table because of (1).
- Fix p2m_free_table() to free metadata page only if it was allocated.
---
Changes in V3:
- Add is_p2m_foreign() macro and connected stuff.
- Change struct domain *d argument of p2m_get_page_from_gfn() to
struct p2m_domain.
- Update the comment above p2m_get_entry().
- s/_t/p2mt for local variable in p2m_get_entry().
- Drop local variable addr in p2m_get_entry() and use gfn_to_gaddr(gfn)
to define offsets array.
- Code style fixes.
- Update a check of rc code from p2m_next_level() in p2m_get_entry()
and drop "else" case.
- Do not call p2m_get_type() if p2m_get_entry()'s t argument is NULL.
- Use struct p2m_domain instead of struct domain for p2m_lookup() and
p2m_get_page_from_gfn().
- Move definition of get_page() from "xen/riscv: implement mfn_valid() and page reference, ownership handling helpers"
---
Changes in V2:
- New patch.
---
xen/arch/riscv/include/asm/mm.h | 9 ++
xen/arch/riscv/p2m.c | 247 +++++++++++++++++++++++++++-----
2 files changed, 218 insertions(+), 38 deletions(-)
diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 1b16809749..1464119b6f 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -149,6 +149,15 @@ struct page_info
/* Order-size of the free chunk this page is the head of. */
unsigned int order;
} free;
+
+ /* Page is used as an intermediate P2M page table */
+ struct {
+ /*
+ * Pointer to a page which store metadata for an intermediate page
+ * table.
+ */
+ struct page_info *metadata;
+ } md;
} v;
union {
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index a5ea61fe61..14809bd089 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -16,6 +16,16 @@
#include <asm/paging.h>
#include <asm/riscv_encoding.h>
+/*
+ * P2M PTE context is used only when a PTE's P2M type is p2m_ext_storage.
+ * In this case, the P2M type is stored separately in the metadata page.
+ */
+struct p2m_pte_ctx {
+ struct page_info *pt_page; /* Page table page containing the PTE. */
+ unsigned int index; /* Index of the PTE within that page. */
+ unsigned int level; /* Paging level at which the PTE resides. */
+};
+
unsigned long __ro_after_init gstage_mode;
unsigned int __ro_after_init gstage_root_level;
@@ -289,24 +299,98 @@ static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
return __map_domain_page(p2m->root + root_table_indx);
}
-static int p2m_set_type(pte_t *pte, p2m_type_t t)
+static struct page_info * p2m_alloc_table(struct p2m_domain *p2m);
+
+/*
+ * `pte` – PTE entry for which the type `t` will be stored.
+ *
+ * If `t` is `p2m_ext_storage`, both `ctx` and `p2m` must be provided;
+ * otherwise, they may be NULL.
+ */
+static void p2m_set_type(pte_t *pte, const p2m_type_t t,
+ struct p2m_pte_ctx *ctx,
+ struct p2m_domain *p2m)
{
- int rc = 0;
+ /*
+ * For the root page table (16 KB in size), we need to select the correct
+ * metadata table, since allocations are 4 KB each. In total, there are
+ * 4 tables of 4 KB each.
+ * For a non-root page table, the index into ->pt_page[] will always be 0,
+ * as the index won't be higher than 511. The ASSERT() below verifies that.
+ */
+ struct page_info **md_pg =
+ &ctx->pt_page[ctx->index / PAGETABLE_ENTRIES].v.md.metadata;
+ pte_t *metadata = NULL;
+
+ /* Be sure that an index corresponding to the page level is passed. */
+ ASSERT(ctx->index <= P2M_PAGETABLE_ENTRIES(ctx->level));
+
+ if ( !*md_pg && (t >= p2m_first_external) )
+ {
+ /*
+ * Ensure that when `t` is stored outside the PTE bits
+ * (i.e. `t == p2m_ext_storage` or higher),
+ * both `ctx` and `p2m` are provided.
+ */
+ ASSERT(p2m && ctx);
- if ( t > p2m_first_external )
- panic("unimplemeted\n");
- else
+ if ( ctx->level <= P2M_SUPPORTED_LEVEL_MAPPING )
+ {
+ struct domain *d = p2m->domain;
+
+ *md_pg = p2m_alloc_table(p2m);
+ if ( !*md_pg )
+ {
+ printk("%s: can't allocate extra memory for dom%d\n",
+ __func__, d->domain_id);
+ domain_crash(d);
+ }
+ }
+ else
+ /*
+ * It is not legal to set a type for an entry which shouldn't
+ * be mapped.
+ */
+ ASSERT_UNREACHABLE();
+ }
+
+ if ( *md_pg )
+ metadata = __map_domain_page(*md_pg);
+
+ if ( t < p2m_first_external )
+ {
pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
- return rc;
+ if ( metadata )
+ metadata[ctx->index].pte = p2m_invalid;
+ }
+ else
+ {
+ pte->pte |= MASK_INSR(p2m_ext_storage, P2M_TYPE_PTE_BITS_MASK);
+
+ metadata[ctx->index].pte = t;
+ }
+
+ if ( metadata )
+ unmap_domain_page(metadata);
}
-static p2m_type_t p2m_get_type(const pte_t pte)
+/*
+ * `pte` -> PTE entry that stores the PTE's type.
+ *
+ * If the PTE's type is `p2m_ext_storage`, `ctx` should be provided;
+ * otherwise it could be NULL.
+ */
+static p2m_type_t p2m_get_type(const pte_t pte, const struct p2m_pte_ctx *ctx)
{
p2m_type_t type = MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK);
if ( type == p2m_ext_storage )
- panic("unimplemented\n");
+ {
+ pte_t *md = __map_domain_page(ctx->pt_page->v.md.metadata);
+ type = md[ctx->index].pte;
+ unmap_domain_page(ctx->pt_page->v.md.metadata);
+ }
return type;
}
@@ -381,7 +465,10 @@ static void p2m_set_permission(pte_t *e, p2m_type_t t)
}
}
-static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
+static pte_t p2m_pte_from_mfn(const mfn_t mfn, const p2m_type_t t,
+ struct p2m_pte_ctx *p2m_pte_ctx,
+ const bool is_table,
+ struct p2m_domain *p2m)
{
pte_t e = (pte_t) { PTE_VALID };
@@ -402,7 +489,7 @@ static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
if ( !is_table )
{
p2m_set_permission(&e, t);
- p2m_set_type(&e, t);
+ p2m_set_type(&e, t, p2m_pte_ctx, p2m);
}
else
/*
@@ -421,8 +508,13 @@ static pte_t page_to_p2m_table(const struct page_info *page)
* p2m_invalid will be ignored inside p2m_pte_from_mfn() as is_table is
* set to true and p2m_type_t shouldn't be applied for PTEs which
* describe an intermidiate table.
+ * That is also the reason why the `p2m_pte_ctx` argument is NULL, as a
+ * type isn't set for P2M tables.
+ * p2m_pte_from_mfn()'s last argument is necessary only when a type
+ * should be set. For a P2M table we don't set a type, so it is okay
+ * to pass NULL for this argument.
*/
- return p2m_pte_from_mfn(page_to_mfn(page), p2m_invalid, true);
+ return p2m_pte_from_mfn(page_to_mfn(page), p2m_invalid, NULL, true, NULL);
}
static struct page_info *p2m_alloc_page(struct p2m_domain *p2m)
@@ -435,22 +527,47 @@ static struct page_info *p2m_alloc_page(struct p2m_domain *p2m)
return pg;
}
+static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg);
+
+/*
+ * Allocate a page table with an additional extra page to store
+ * metadata for each entry of the page table.
+ * Link this metadata page to page table page's list field.
+ */
+static struct page_info *p2m_alloc_table(struct p2m_domain *p2m)
+{
+ struct page_info *page_tbl = p2m_alloc_page(p2m);
+
+ if ( !page_tbl )
+ return NULL;
+
+ clear_and_clean_page(page_tbl, p2m->clean_dcache);
+
+ return page_tbl;
+}
+
+/*
+ * Free page table's page and metadata page linked to page table's page.
+ */
+static void p2m_free_table(struct p2m_domain *p2m, struct page_info *tbl_pg)
+{
+ ASSERT(tbl_pg->v.md.metadata);
+
+ if ( tbl_pg->v.md.metadata )
+ p2m_free_page(p2m, tbl_pg->v.md.metadata);
+ p2m_free_page(p2m, tbl_pg);
+}
+
/*
* Allocate a new page table page with an extra metadata page and hook it
* in via the given entry.
*/
static int p2m_create_table(struct p2m_domain *p2m, pte_t *entry)
{
- struct page_info *page;
+ struct page_info *page = p2m_alloc_table(p2m);
ASSERT(!pte_is_valid(*entry));
- page = p2m_alloc_page(p2m);
- if ( page == NULL )
- return -ENOMEM;
-
- clear_and_clean_page(page, p2m->clean_dcache);
-
p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_dcache);
return 0;
@@ -599,12 +716,14 @@ static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg)
/* Free pte sub-tree behind an entry */
static void p2m_free_subtree(struct p2m_domain *p2m,
- pte_t entry, unsigned int level)
+ pte_t entry,
+ const struct p2m_pte_ctx *p2m_pte_ctx)
{
unsigned int i;
pte_t *table;
mfn_t mfn;
struct page_info *pg;
+ unsigned int level = p2m_pte_ctx->level;
/*
* Check if the level is valid: only 4K - 2M - 1G mappings are supported.
@@ -620,7 +739,7 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
if ( (level == 0) || pte_is_superpage(entry, level) )
{
- p2m_type_t p2mt = p2m_get_type(entry);
+ p2m_type_t p2mt = p2m_get_type(entry, p2m_pte_ctx);
#ifdef CONFIG_IOREQ_SERVER
/*
@@ -629,7 +748,7 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
* has failed (error case).
* So, at worst, the spurious mapcache invalidation might be sent.
*/
- if ( p2m_is_ram(p2m_get_type(p2m, entry)) &&
+ if ( p2m_is_ram(p2mt) &&
domain_has_ioreq_server(p2m->domain) )
ioreq_request_mapcache_invalidate(p2m->domain);
#endif
@@ -639,9 +758,21 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
return;
}
- table = map_domain_page(pte_get_mfn(entry));
+ mfn = pte_get_mfn(entry);
+ ASSERT(mfn_valid(mfn));
+ table = map_domain_page(mfn);
+ pg = mfn_to_page(mfn);
+
for ( i = 0; i < P2M_PAGETABLE_ENTRIES(level); i++ )
- p2m_free_subtree(p2m, table[i], level - 1);
+ {
+ struct p2m_pte_ctx tmp_ctx = {
+ .pt_page = pg,
+ .index = i,
+ .level = level - 1
+ };
+
+ p2m_free_subtree(p2m, table[i], &tmp_ctx);
+ }
unmap_domain_page(table);
@@ -653,17 +784,13 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
*/
p2m_tlb_flush_sync(p2m);
- mfn = pte_get_mfn(entry);
- ASSERT(mfn_valid(mfn));
-
- pg = mfn_to_page(mfn);
-
- p2m_free_page(p2m, pg);
+ p2m_free_table(p2m, pg);
}
static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
unsigned int level, unsigned int target,
- const unsigned int *offsets)
+ const unsigned int *offsets,
+ struct page_info *tbl_pg)
{
struct page_info *page;
unsigned long i;
@@ -682,7 +809,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
ASSERT(level > target);
ASSERT(pte_is_superpage(*entry, level));
- page = p2m_alloc_page(p2m);
+ page = p2m_alloc_table(p2m);
if ( !page )
{
/*
@@ -707,6 +834,22 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
pte = *entry;
pte_set_mfn(&pte, mfn_add(mfn, i << level_order));
+ if ( MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK) == p2m_ext_storage )
+ {
+ struct p2m_pte_ctx p2m_pte_ctx = {
+ .pt_page = tbl_pg,
+ .index = offsets[level],
+ };
+
+ p2m_type_t old_type = p2m_get_type(pte, &p2m_pte_ctx);
+
+ p2m_pte_ctx.pt_page = page;
+ p2m_pte_ctx.index = i;
+ p2m_pte_ctx.level = level;
+
+ p2m_set_type(&pte, old_type, &p2m_pte_ctx, p2m);
+ }
+
write_pte(new_entry, pte);
}
@@ -718,7 +861,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
*/
if ( next_level != target )
rv = p2m_split_superpage(p2m, table + offsets[next_level],
- level - 1, target, offsets);
+ level - 1, target, offsets, page);
if ( p2m->clean_dcache )
clean_dcache_va_range(table, PAGE_SIZE);
@@ -812,13 +955,21 @@ static int p2m_set_entry(struct p2m_domain *p2m,
{
/* We need to split the original page. */
pte_t split_pte = *entry;
+ struct page_info *tbl_pg = virt_to_page(table);
ASSERT(pte_is_superpage(*entry, level));
- if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets) )
+ if ( !p2m_split_superpage(p2m, &split_pte, level, target, offsets,
+ tbl_pg) )
{
+ struct p2m_pte_ctx tmp_ctx = {
+ .pt_page = tbl_pg,
+ .index = offsets[level],
+ .level = level,
+ };
+
/* Free the allocated sub-tree */
- p2m_free_subtree(p2m, split_pte, level);
+ p2m_free_subtree(p2m, split_pte, &tmp_ctx);
rc = -ENOMEM;
goto out;
@@ -856,7 +1007,13 @@ static int p2m_set_entry(struct p2m_domain *p2m,
p2m_clean_pte(entry, p2m->clean_dcache);
else
{
- pte_t pte = p2m_pte_from_mfn(mfn, t, false);
+ struct p2m_pte_ctx tmp_ctx = {
+ .pt_page = virt_to_page(table),
+ .index = offsets[level],
+ .level = level,
+ };
+
+ pte_t pte = p2m_pte_from_mfn(mfn, t, &tmp_ctx, false, p2m);
p2m_write_pte(entry, pte, p2m->clean_dcache);
@@ -892,7 +1049,15 @@ static int p2m_set_entry(struct p2m_domain *p2m,
if ( pte_is_valid(orig_pte) &&
(!mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte)) ||
(removing_mapping && mfn_eq(pte_get_mfn(*entry), _mfn(0)))) )
- p2m_free_subtree(p2m, orig_pte, level);
+ {
+ struct p2m_pte_ctx tmp_ctx = {
+ .pt_page = virt_to_page(table),
+ .index = offsets[level],
+ .level = level,
+ };
+
+ p2m_free_subtree(p2m, orig_pte, &tmp_ctx);
+ }
out:
unmap_domain_page(table);
@@ -979,7 +1144,6 @@ int map_regions_p2mt(struct domain *d,
return rc;
}
-
/*
* p2m_get_entry() should always return the correct order value, even if an
* entry is not present (i.e. the GFN is outside the range):
@@ -1082,7 +1246,14 @@ static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
if ( pte_is_valid(entry) )
{
if ( t )
- *t = p2m_get_type(entry);
+ {
+ struct p2m_pte_ctx p2m_pte_ctx = {
+ .pt_page = virt_to_page(table),
+ .index = offsets[level],
+ };
+
+ *t = p2m_get_type(entry, &p2m_pte_ctx);
+ }
mfn = pte_get_mfn(entry);
/*
--
2.51.0
* Re: [PATCH v4 18/18] xen/riscv: introduce metadata table to store P2M type
2025-09-17 21:55 ` [PATCH v4 18/18] xen/riscv: introduce metadata table to store P2M type Oleksii Kurochko
@ 2025-09-22 22:41 ` Jan Beulich
2025-10-01 16:00 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2025-09-22 22:41 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 17.09.2025 23:55, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/include/asm/mm.h
> +++ b/xen/arch/riscv/include/asm/mm.h
> @@ -149,6 +149,15 @@ struct page_info
> /* Order-size of the free chunk this page is the head of. */
> unsigned int order;
> } free;
> +
> + /* Page is used as an intermediate P2M page table */
> + struct {
> + /*
> + * Pointer to a page which store metadata for an intermediate page
> + * table.
> + */
> + struct page_info *metadata;
Any reason for this to not be "page" or "pg"? The metadata aspect is already
covered ...
> + } md;
... by the "md" here.
> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -16,6 +16,16 @@
> #include <asm/paging.h>
> #include <asm/riscv_encoding.h>
>
> +/*
> + * P2M PTE context is used only when a PTE's P2M type is p2m_ext_storage.
> + * In this case, the P2M type is stored separately in the metadata page.
> + */
> +struct p2m_pte_ctx {
> + struct page_info *pt_page; /* Page table page containing the PTE. */
> + unsigned int index; /* Index of the PTE within that page. */
> + unsigned int level; /* Paging level at which the PTE resides. */
> +};
> +
> unsigned long __ro_after_init gstage_mode;
> unsigned int __ro_after_init gstage_root_level;
>
> @@ -289,24 +299,98 @@ static pte_t *p2m_get_root_pointer(struct p2m_domain *p2m, gfn_t gfn)
> return __map_domain_page(p2m->root + root_table_indx);
> }
>
> -static int p2m_set_type(pte_t *pte, p2m_type_t t)
> +static struct page_info * p2m_alloc_table(struct p2m_domain *p2m);
Nit: Stray blank again.
> +/*
> + * `pte` – PTE entry for which the type `t` will be stored.
> + *
> + * If `t` is `p2m_ext_storage`, both `ctx` and `p2m` must be provided;
> + * otherwise, they may be NULL.
> + */
> +static void p2m_set_type(pte_t *pte, const p2m_type_t t,
> + struct p2m_pte_ctx *ctx,
> + struct p2m_domain *p2m)
> {
> - int rc = 0;
> + /*
> + * For the root page table (16 KB in size), we need to select the correct
> + * metadata table, since allocations are 4 KB each. In total, there are
> + * 4 tables of 4 KB each.
> + * For none-root page table index of ->pt_page[] will be always 0 as
> + * index won't be higher then 511. ASSERT() below verifies that.
> + */
> + struct page_info **md_pg =
> + &ctx->pt_page[ctx->index / PAGETABLE_ENTRIES].v.md.metadata;
> + pte_t *metadata = NULL;
> +
> + /* Be sure that an index correspondent to page level is passed. */
> + ASSERT(ctx->index <= P2M_PAGETABLE_ENTRIES(ctx->level));
Doesn't this need to be < ?
> + if ( !*md_pg && (t >= p2m_first_external) )
> + {
> + /*
> + * Ensure that when `t` is stored outside the PTE bits
> + * (i.e. `t == p2m_ext_storage` or higher),
> + * both `ctx` and `p2m` are provided.
> + */
> + ASSERT(p2m && ctx);
Imo this would want to be checked whenever t > p2m_first_external, no
matter whether a metadata page was already allocated.
> - if ( t > p2m_first_external )
> - panic("unimplemeted\n");
> - else
> + if ( ctx->level <= P2M_SUPPORTED_LEVEL_MAPPING )
> + {
> + struct domain *d = p2m->domain;
> +
> + *md_pg = p2m_alloc_table(p2m);
> + if ( !*md_pg )
> + {
> + printk("%s: can't allocate extra memory for dom%d\n",
> + __func__, d->domain_id);
> + domain_crash(d);
> + }
> + }
> + else
> + /*
> + * It is not legal to set a type for an entry which shouldn't
> + * be mapped.
> + */
> + ASSERT_UNREACHABLE();
Something not being legal doesn't mean it can't happen. Imo in this case
BUG_ON() (in place of the if() above) would be better.
> + }
> +
> + if ( *md_pg )
> + metadata = __map_domain_page(*md_pg);
> +
> + if ( t < p2m_first_external )
> + {
> pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
>
> - return rc;
> + if ( metadata )
> + metadata[ctx->index].pte = p2m_invalid;
> + }
> + else
> + {
> + pte->pte |= MASK_INSR(p2m_ext_storage, P2M_TYPE_PTE_BITS_MASK);
> +
> + metadata[ctx->index].pte = t;
Afaict metadata can still be NULL when you get here.
> + }
> +
> + if ( metadata )
> + unmap_domain_page(metadata);
According to the x86 implementation, passing NULL here ought to be fine,
so no if() needed.
> }
>
> -static p2m_type_t p2m_get_type(const pte_t pte)
> +/*
> + * `pte` -> PTE entry that stores the PTE's type.
> + *
> + * If the PTE's type is `p2m_ext_storage`, `ctx` should be provided;
> + * otherwise it could be NULL.
> + */
> +static p2m_type_t p2m_get_type(const pte_t pte, const struct p2m_pte_ctx *ctx)
> {
> p2m_type_t type = MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK);
>
> if ( type == p2m_ext_storage )
> - panic("unimplemented\n");
> + {
> + pte_t *md = __map_domain_page(ctx->pt_page->v.md.metadata);
Pointer-to-const?
> + type = md[ctx->index].pte;
> + unmap_domain_page(ctx->pt_page->v.md.metadata);
I'm pretty sure you want to pass md here, not the pointer you passed
into __map_domain_page().
> @@ -381,7 +465,10 @@ static void p2m_set_permission(pte_t *e, p2m_type_t t)
> }
> }
>
> -static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
> +static pte_t p2m_pte_from_mfn(const mfn_t mfn, const p2m_type_t t,
> + struct p2m_pte_ctx *p2m_pte_ctx,
> + const bool is_table,
Do you really need both "is_table" and the context pointer? Couldn't
the "is intermediate page table" case be identified by a NULL context
and/or p2m pointer?
Also why "const" all of the sudden?
> @@ -435,22 +527,47 @@ static struct page_info *p2m_alloc_page(struct p2m_domain *p2m)
> return pg;
> }
>
> +static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg);
> +
> +/*
> + * Allocate a page table with an additional extra page to store
> + * metadata for each entry of the page table.
Isn't this stale now? At which point the question is whether ...
> + * Link this metadata page to page table page's list field.
> + */
> +static struct page_info *p2m_alloc_table(struct p2m_domain *p2m)
> +{
> + struct page_info *page_tbl = p2m_alloc_page(p2m);
> +
> + if ( !page_tbl )
> + return NULL;
> +
> + clear_and_clean_page(page_tbl, p2m->clean_dcache);
> +
> + return page_tbl;
> +}
... the function is needed in the first place.
> +/*
> + * Free page table's page and metadata page linked to page table's page.
> + */
> +static void p2m_free_table(struct p2m_domain *p2m, struct page_info *tbl_pg)
> +{
> + ASSERT(tbl_pg->v.md.metadata);
Why, when you no longer unconditionally alloc that page?
> + if ( tbl_pg->v.md.metadata )
> + p2m_free_page(p2m, tbl_pg->v.md.metadata);
> + p2m_free_page(p2m, tbl_pg);
> +}
> +
> /*
> * Allocate a new page table page with an extra metadata page and hook it
> * in via the given entry.
> */
This comment looks to have been inapplicable already when it was introduced.
> static int p2m_create_table(struct p2m_domain *p2m, pte_t *entry)
> {
> - struct page_info *page;
> + struct page_info *page = p2m_alloc_table(p2m);
>
> ASSERT(!pte_is_valid(*entry));
>
> - page = p2m_alloc_page(p2m);
> - if ( page == NULL )
> - return -ENOMEM;
> -
> - clear_and_clean_page(page, p2m->clean_dcache);
> -
> p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_dcache);
>
> return 0;
As per above I don't think any change is needed here.
> @@ -629,7 +748,7 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
> * has failed (error case).
> * So, at worst, the spurious mapcache invalidation might be sent.
> */
> - if ( p2m_is_ram(p2m_get_type(p2m, entry)) &&
> + if ( p2m_is_ram(p2mt) &&
> domain_has_ioreq_server(p2m->domain) )
> ioreq_request_mapcache_invalidate(p2m->domain);
> #endif
This change wants making right in the earlier patch, where "p2mt" is
being introduced.
> @@ -639,9 +758,21 @@ static void p2m_free_subtree(struct p2m_domain *p2m,
> return;
> }
>
> - table = map_domain_page(pte_get_mfn(entry));
> + mfn = pte_get_mfn(entry);
> + ASSERT(mfn_valid(mfn));
> + table = map_domain_page(mfn);
> + pg = mfn_to_page(mfn);
> +
> for ( i = 0; i < P2M_PAGETABLE_ENTRIES(level); i++ )
> - p2m_free_subtree(p2m, table[i], level - 1);
> + {
> + struct p2m_pte_ctx tmp_ctx = {
> + .pt_page = pg,
> + .index = i,
> + .level = level -1
Nit: Missing blank after - . Also it is generally better to end such
initialization with a comma.
> @@ -707,6 +834,22 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
> pte = *entry;
> pte_set_mfn(&pte, mfn_add(mfn, i << level_order));
>
> + if ( MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK) == p2m_ext_storage )
> + {
> + struct p2m_pte_ctx p2m_pte_ctx = {
> + .pt_page = tbl_pg,
> + .index = offsets[level],
> + };
Assuming using "level" is correct here (which it looks like it is), ...
> + p2m_type_t old_type = p2m_get_type(pte, &p2m_pte_ctx);
... can't this move ahead of the loop?
> + p2m_pte_ctx.pt_page = page;
> + p2m_pte_ctx.index = i;
> + p2m_pte_ctx.level = level;
Whereas - doesn't this need to be "next_level"?
> @@ -718,7 +861,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
> */
> if ( next_level != target )
> rv = p2m_split_superpage(p2m, table + offsets[next_level],
> - level - 1, target, offsets);
> + level - 1, target, offsets, page);
And btw (already in the earlier patch introducing this code) - why isn't
it "next_level" here, instead of "level - 1" (if already you have that
variable)?
> @@ -812,13 +955,21 @@ static int p2m_set_entry(struct p2m_domain *p2m,
> {
> /* We need to split the original page. */
> pte_t split_pte = *entry;
> + struct page_info *tbl_pg = virt_to_page(table);
This isn't valid on a pointer obtained from map_domain_page().
> @@ -892,7 +1049,15 @@ static int p2m_set_entry(struct p2m_domain *p2m,
> if ( pte_is_valid(orig_pte) &&
> (!mfn_eq(pte_get_mfn(*entry), pte_get_mfn(orig_pte)) ||
> (removing_mapping && mfn_eq(pte_get_mfn(*entry), _mfn(0)))) )
> - p2m_free_subtree(p2m, orig_pte, level);
> + {
> + struct p2m_pte_ctx tmp_ctx = {
> + .pt_page = virt_to_page(table),
> + .index = offsets[level],
> + .level = level,
Nit: Indentation is off here.
> @@ -1082,7 +1246,14 @@ static mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
> if ( pte_is_valid(entry) )
> {
> if ( t )
> - *t = p2m_get_type(entry);
> + {
> + struct p2m_pte_ctx p2m_pte_ctx = {
> + .pt_page = virt_to_page(table),
> + .index = offsets[level],
> + };
> +
> + *t = p2m_get_type(entry,&p2m_pte_ctx);
Nit: Blank after comma please.
Jan
* Re: [PATCH v4 18/18] xen/riscv: introduce metadata table to store P2M type
2025-09-22 22:41 ` Jan Beulich
@ 2025-10-01 16:00 ` Oleksii Kurochko
2025-10-07 13:25 ` Jan Beulich
0 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-10-01 16:00 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 9/23/25 12:41 AM, Jan Beulich wrote:
> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>
>> +/*
>> + * `pte` – PTE entry for which the type `t` will be stored.
>> + *
>> + * If `t` is `p2m_ext_storage`, both `ctx` and `p2m` must be provided;
>> + * otherwise, they may be NULL.
>> + */
>> +static void p2m_set_type(pte_t *pte, const p2m_type_t t,
>> + struct p2m_pte_ctx *ctx,
>> + struct p2m_domain *p2m)
>> {
>> - int rc = 0;
>> + /*
>> + * For the root page table (16 KB in size), we need to select the correct
>> + * metadata table, since allocations are 4 KB each. In total, there are
>> + * 4 tables of 4 KB each.
>> + * For none-root page table index of ->pt_page[] will be always 0 as
>> + * index won't be higher then 511. ASSERT() below verifies that.
>> + */
>> + struct page_info **md_pg =
>> + &ctx->pt_page[ctx->index / PAGETABLE_ENTRIES].v.md.metadata;
>> + pte_t *metadata = NULL;
>> +
>> + /* Be sure that an index correspondent to page level is passed. */
>> + ASSERT(ctx->index <= P2M_PAGETABLE_ENTRIES(ctx->level));
> Doesn't this need to be < ?
Yeah, it should be <.
>
>> + if ( !*md_pg && (t >= p2m_first_external) )
>> + {
>> + /*
>> + * Ensure that when `t` is stored outside the PTE bits
>> + * (i.e. `t == p2m_ext_storage` or higher),
>> + * both `ctx` and `p2m` are provided.
>> + */
>> + ASSERT(p2m && ctx);
> Imo this would want to be checked whenever t > p2m_first_external, no
> matter whether a metadata page was already allocated.
I think that ctx should be checked before this if condition, since it is
used to obtain the proper metadata page.
The check for p2m can remain inside the if condition, as it is essentially
only needed for allocating a metadata page.
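Roughly what I have in mind (just a sketch against the hunk quoted above, not the
final code):

    /* ctx is always dereferenced to locate the metadata slot, so check it first. */
    ASSERT(ctx);

    struct page_info **md_pg =
        &ctx->pt_page[ctx->index / PAGETABLE_ENTRIES].v.md.metadata;
    pte_t *metadata = NULL;

    if ( !*md_pg && (t >= p2m_first_external) )
    {
        /* p2m is only needed when a metadata page has to be allocated. */
        ASSERT(p2m);
        ...
    }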
>
>> - if ( t > p2m_first_external )
>> - panic("unimplemeted\n");
>> - else
>> + if ( ctx->level <= P2M_SUPPORTED_LEVEL_MAPPING )
>> + {
>> + struct domain *d = p2m->domain;
>> +
>> + *md_pg = p2m_alloc_table(p2m);
>> + if ( !*md_pg )
>> + {
>> + printk("%s: can't allocate extra memory for dom%d\n",
>> + __func__, d->domain_id);
>> + domain_crash(d);
>> + }
>> + }
>> + else
>> + /*
>> + * It is not legal to set a type for an entry which shouldn't
>> + * be mapped.
>> + */
>> + ASSERT_UNREACHABLE();
> Something not being legal doesn't mean it can't happen. Imo in this case
> BUG_ON() (in place of the if() above) would be better.
>
>> + }
>> +
>> + if ( *md_pg )
>> + metadata = __map_domain_page(*md_pg);
>> +
>> + if ( t < p2m_first_external )
>> + {
>> pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
>>
>> - return rc;
>> + if ( metadata )
>> + metadata[ctx->index].pte = p2m_invalid;
>> + }
>> + else
>> + {
>> + pte->pte |= MASK_INSR(p2m_ext_storage, P2M_TYPE_PTE_BITS_MASK);
>> +
>> + metadata[ctx->index].pte = t;
> Afaict metadata can still be NULL when you get here.
It shouldn't be, because when this line is executed, the metadata page already
exists or was allocated at the start of p2m_set_type().
>
>> + }
>> +
>> + if ( metadata )
>> + unmap_domain_page(metadata);
> According to the x86 implementation, passing NULL here ought to be fine,
> so no if() needed.
With the current implementation for RISC-V (CONFIG_ARCH_MAP_DOMAIN_PAGE=n, so
unmap_domain_page() does nothing), it is fine too.
>
>> }
>>
>> -static p2m_type_t p2m_get_type(const pte_t pte)
>> +/*
>> + * `pte` -> PTE entry that stores the PTE's type.
>> + *
>> + * If the PTE's type is `p2m_ext_storage`, `ctx` should be provided;
>> + * otherwise it could be NULL.
>> + */
>> +static p2m_type_t p2m_get_type(const pte_t pte, const struct p2m_pte_ctx *ctx)
>> {
>> p2m_type_t type = MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK);
>>
>> if ( type == p2m_ext_storage )
>> - panic("unimplemented\n");
>> + {
>> + pte_t *md = __map_domain_page(ctx->pt_page->v.md.metadata);
> Pointer-to-const?
>
>> + type = md[ctx->index].pte;
>> + unmap_domain_page(ctx->pt_page->v.md.metadata);
> I'm pretty sure you want to pass md here, not the pointer you passed
> into __map_domain_page().
Oh, right. It should be `md`.
>
>> @@ -381,7 +465,10 @@ static void p2m_set_permission(pte_t *e, p2m_type_t t)
>> }
>> }
>>
>> -static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t, bool is_table)
>> +static pte_t p2m_pte_from_mfn(const mfn_t mfn, const p2m_type_t t,
>> + struct p2m_pte_ctx *p2m_pte_ctx,
>> + const bool is_table,
> Do you really need both "is_table" and the context pointer? Couldn't
> the "is intermediate page table" case be identified by a NULL context
> and/or p2m pointer?
Good point. I will drop is_table.
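Roughly (only a sketch of the direction, not the final signature):

    static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t,
                                  struct p2m_pte_ctx *ctx,
                                  struct p2m_domain *p2m)
    {
        /* A NULL ctx means the PTE references an intermediate page table. */
        bool is_table = !ctx;
        ...
    }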
>
> Also why "const" all of the sudden?
Because none of that is going to be changed in p2m_pte_from_mfn(). To keep the
diff clearer, I can revert these changes.
>
>> @@ -435,22 +527,47 @@ static struct page_info *p2m_alloc_page(struct p2m_domain *p2m)
>> return pg;
>> }
>>
>> + * Link this metadata page to page table page's list field.
>> + */
>> +static struct page_info *p2m_alloc_table(struct p2m_domain *p2m)
>> +{
>> + struct page_info *page_tbl = p2m_alloc_page(p2m);
>> +
>> + if ( !page_tbl )
>> + return NULL;
>> +
>> + clear_and_clean_page(page_tbl, p2m->clean_dcache);
>> +
>> + return page_tbl;
>> +}
> ... the function is needed in the first place.
On one hand, it may not seem strictly necessary, but on the
other hand, without it we would need to repeat the pattern of
allocating, clearing, and cleaning a page each time a page table
is allocated. At the moment, I prefer to keep it.
But considering your other comment below ...
>
>> +/*
>> + * Free page table's page and metadata page linked to page table's page.
>> + */
>> +static void p2m_free_table(struct p2m_domain *p2m, struct page_info *tbl_pg)
>> +{
>> + ASSERT(tbl_pg->v.md.metadata);
> Why, when you no longer unconditionally alloc that page?
Agreed, there is no need for this ASSERT(), as "lazy allocation" is used for
the metadata.
>> static int p2m_create_table(struct p2m_domain *p2m, pte_t *entry)
>> {
>> - struct page_info *page;
>> + struct page_info *page = p2m_alloc_table(p2m);
>>
>> ASSERT(!pte_is_valid(*entry));
>>
>> - page = p2m_alloc_page(p2m);
>> - if ( page == NULL )
>> - return -ENOMEM;
>> -
>> - clear_and_clean_page(page, p2m->clean_dcache);
>> -
>> p2m_write_pte(entry, page_to_p2m_table(page), p2m->clean_dcache);
>>
>> return 0;
> As per above I don't think any change is needed here.
There are some places in the code where it isn’t necessary to immediately
write the address of a newly allocated page table page into a PTE:
- During superpage splitting: a new page is first allocated for the new
page table, then it is filled, and only afterwards is the PTE updated
with the new page table address.
- In p2m_set_type(): when a table is allocated for storing metadata
(although I think p2m_alloc_page() would work fine here as well),
there is no need to update any PTE correspondingly.
...
So, I think I can agree that p2m_alloc_table() isn’t really needed.
It should be sufficient to move the clear_and_clean_page(page_tbl, p2m->clean_dcache)
call from p2m_alloc_table() into p2m_alloc_page(), and then just use
p2m_alloc_page() everywhere.
Does the last paragraph make sense?
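I.e. something along these lines (sketch only):

    static struct page_info *p2m_alloc_page(struct p2m_domain *p2m)
    {
        struct page_info *pg;

        /* ... existing allocation from the P2M pool, unchanged ... */

        if ( pg )
            clear_and_clean_page(pg, p2m->clean_dcache);

        return pg;
    }

with p2m_alloc_table() dropped and its callers (p2m_create_table(), p2m_set_type(),
the superpage split path) calling p2m_alloc_page() directly.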
>> @@ -707,6 +834,22 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
>> pte = *entry;
>> pte_set_mfn(&pte, mfn_add(mfn, i << level_order));
>>
>> + if ( MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK) == p2m_ext_storage )
>> + {
>> + struct p2m_pte_ctx p2m_pte_ctx = {
>> + .pt_page = tbl_pg,
>> + .index = offsets[level],
>> + };
> Assuming using "level" is correct here (which it looks like it is), ...
>
>> + p2m_type_t old_type = p2m_get_type(pte, &p2m_pte_ctx);
> ... can't this move ahead of the loop?
Considering that old_type is expected to be the same for all new PTEs, I think
we can move that ahead of the loop. I'll do that.
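Roughly (a sketch, also switching to next_level per your comment below):

    struct p2m_pte_ctx p2m_pte_ctx = {
        .pt_page = tbl_pg,
        .index = offsets[level],
    };
    /* The superpage being split has a single type, so read it only once. */
    p2m_type_t old_type = p2m_get_type(*entry, &p2m_pte_ctx);

    for ( i = 0; ...; i++ )
    {
        ...
        if ( MASK_EXTR(entry->pte, P2M_TYPE_PTE_BITS_MASK) == p2m_ext_storage )
        {
            p2m_pte_ctx.pt_page = page;
            p2m_pte_ctx.index = i;
            p2m_pte_ctx.level = next_level;
            p2m_set_type(&pte, old_type, &p2m_pte_ctx, p2m);
        }
        ...
    }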
>
>> + p2m_pte_ctx.pt_page = page;
>> + p2m_pte_ctx.index = i;
>> + p2m_pte_ctx.level = level;
> Whereas - doesn't this need to be "next_level"?
Yes, it should be next_level.
>
>> @@ -718,7 +861,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, pte_t *entry,
>> */
>> if ( next_level != target )
>> rv = p2m_split_superpage(p2m, table + offsets[next_level],
>> - level - 1, target, offsets);
>> + level - 1, target, offsets, page);
> And btw (alredy in the earlier patch introducing this code) - why isn't
> it "next_level" here, instead of "level - 1" (if already you have that
> variable)?
I missed updating that part. next_level should be used instead of level - 1.
>
>> @@ -812,13 +955,21 @@ static int p2m_set_entry(struct p2m_domain *p2m,
>> {
>> /* We need to split the original page. */
>> pte_t split_pte = *entry;
>> + struct page_info *tbl_pg = virt_to_page(table);
> This isn't valid on a pointer obtained from map_domain_page().
Oh, sure — virt_to_page() and page_to_virt() should be used only for Xen
heap addresses.
By the way, do we have any documentation, comments, or notes describing
what should be allocated and from where?
Since map_domain_page() returns an address from the direct map region,
should we instead use maddr_to_page(virt_to_maddr(table))?
Thanks for review.
~ Oleksii
* Re: [PATCH v4 18/18] xen/riscv: introduce metadata table to store P2M type
2025-10-01 16:00 ` Oleksii Kurochko
@ 2025-10-07 13:25 ` Jan Beulich
2025-10-09 11:34 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2025-10-07 13:25 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 01.10.2025 18:00, Oleksii Kurochko wrote:
> On 9/23/25 12:41 AM, Jan Beulich wrote:
>> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>>
>>> +/*
>>> + * `pte` – PTE entry for which the type `t` will be stored.
>>> + *
>>> + * If `t` is `p2m_ext_storage`, both `ctx` and `p2m` must be provided;
>>> + * otherwise, they may be NULL.
>>> + */
>>> +static void p2m_set_type(pte_t *pte, const p2m_type_t t,
>>> + struct p2m_pte_ctx *ctx,
>>> + struct p2m_domain *p2m)
>>> {
>>> - int rc = 0;
>>> + /*
>>> + * For the root page table (16 KB in size), we need to select the correct
>>> + * metadata table, since allocations are 4 KB each. In total, there are
>>> + * 4 tables of 4 KB each.
>>> + * For none-root page table index of ->pt_page[] will be always 0 as
>>> + * index won't be higher then 511. ASSERT() below verifies that.
>>> + */
>>> + struct page_info **md_pg =
>>> + &ctx->pt_page[ctx->index / PAGETABLE_ENTRIES].v.md.metadata;
>>> + pte_t *metadata = NULL;
>>> +
>>> + /* Be sure that an index correspondent to page level is passed. */
>>> + ASSERT(ctx->index <= P2M_PAGETABLE_ENTRIES(ctx->level));
>> Doesn't this need to be < ?
>
> Yeah, it should be <.
>
>>
>>> + if ( !*md_pg && (t >= p2m_first_external) )
>>> + {
>>> + /*
>>> + * Ensure that when `t` is stored outside the PTE bits
>>> + * (i.e. `t == p2m_ext_storage` or higher),
>>> + * both `ctx` and `p2m` are provided.
>>> + */
>>> + ASSERT(p2m && ctx);
>> Imo this would want to be checked whenever t > p2m_first_external, no
>> matter whether a metadata page was already allocated.
>
> I think that ctx should be checked before this if condition, since it is
> used to obtain the proper metadata page.
>
> The check for p2m can remain inside the if condition, as it is essentially
> only needed for allocating a metadata page.
That is, you want to allow callers to pass in NULL for the "p2m" parameter?
Isn't this going to be risky?
>>> - if ( t > p2m_first_external )
>>> - panic("unimplemeted\n");
>>> - else
>>> + if ( ctx->level <= P2M_SUPPORTED_LEVEL_MAPPING )
>>> + {
>>> + struct domain *d = p2m->domain;
>>> +
>>> + *md_pg = p2m_alloc_table(p2m);
>>> + if ( !*md_pg )
>>> + {
>>> + printk("%s: can't allocate extra memory for dom%d\n",
>>> + __func__, d->domain_id);
>>> + domain_crash(d);
>>> + }
>>> + }
>>> + else
>>> + /*
>>> + * It is not legal to set a type for an entry which shouldn't
>>> + * be mapped.
>>> + */
>>> + ASSERT_UNREACHABLE();
>> Something not being legal doesn't mean it can't happen. Imo in this case
>> BUG_ON() (in place of the if() above) would be better.
>>
>>> + }
>>> +
>>> + if ( *md_pg )
>>> + metadata = __map_domain_page(*md_pg);
Note this conditional assignment for ...
>>> + if ( t < p2m_first_external )
>>> + {
>>> pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
>>>
>>> - return rc;
>>> + if ( metadata )
>>> + metadata[ctx->index].pte = p2m_invalid;
>>> + }
>>> + else
>>> + {
>>> + pte->pte |= MASK_INSR(p2m_ext_storage, P2M_TYPE_PTE_BITS_MASK);
>>> +
>>> + metadata[ctx->index].pte = t;
>> Afaict metadata can still be NULL when you get here.
>
> It shouldn't be, because when this line is executed, the metadata page already
> exists or was allocated at the start of p2m_set_type().
... this reply of yours. And the condition there can be false, in case you
took the domain_crash() path.
>>> @@ -812,13 +955,21 @@ static int p2m_set_entry(struct p2m_domain *p2m,
>>> {
>>> /* We need to split the original page. */
>>> pte_t split_pte = *entry;
>>> + struct page_info *tbl_pg = virt_to_page(table);
>> This isn't valid on a pointer obtained from map_domain_page().
>
> Oh, sure — virt_to_page() and page_to_virt() should be used only for Xen
> heap addresses.
>
> By the way, do we have any documentation, comments, or notes describing
> what should be allocated and from where?
>
> Since map_domain_page() returns an address from the direct map region,
> should we instead use maddr_to_page(virt_to_maddr(table))?
How would that be any better? Even if right now you only build RISC-V code
with map_domain_page() having a trivial expansion, you will want to avoid
any assumptions along those lines. Or else you could avoid the use of that
abstraction altogether. It exists so when you need to support memory
amounts beyond what the directmap can cover, you can provide a suitable
implementation of the function and be done. You (or whoever else) in
particular shouldn't be required to go audit all the places where
map_domain_page() (and the pointers it returns) is (are) used.
Jan
* Re: [PATCH v4 18/18] xen/riscv: introduce metadata table to store P2M type
2025-10-07 13:25 ` Jan Beulich
@ 2025-10-09 11:34 ` Oleksii Kurochko
2025-10-09 12:10 ` Jan Beulich
0 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-10-09 11:34 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 10/7/25 3:25 PM, Jan Beulich wrote:
> On 01.10.2025 18:00, Oleksii Kurochko wrote:
>> On 9/23/25 12:41 AM, Jan Beulich wrote:
>>> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>>>
>>>> +/*
>>>> + * `pte` – PTE entry for which the type `t` will be stored.
>>>> + *
>>>> + * If `t` is `p2m_ext_storage`, both `ctx` and `p2m` must be provided;
>>>> + * otherwise, they may be NULL.
>>>> + */
>>>> +static void p2m_set_type(pte_t *pte, const p2m_type_t t,
>>>> + struct p2m_pte_ctx *ctx,
>>>> + struct p2m_domain *p2m)
>>>> {
>>>> - int rc = 0;
>>>> + /*
>>>> + * For the root page table (16 KB in size), we need to select the correct
>>>> + * metadata table, since allocations are 4 KB each. In total, there are
>>>> + * 4 tables of 4 KB each.
>>>> + * For none-root page table index of ->pt_page[] will be always 0 as
>>>> + * index won't be higher then 511. ASSERT() below verifies that.
>>>> + */
>>>> + struct page_info **md_pg =
>>>> + &ctx->pt_page[ctx->index / PAGETABLE_ENTRIES].v.md.metadata;
>>>> + pte_t *metadata = NULL;
>>>> +
>>>> + /* Be sure that an index correspondent to page level is passed. */
>>>> + ASSERT(ctx->index <= P2M_PAGETABLE_ENTRIES(ctx->level));
>>> Doesn't this need to be < ?
>> Yeah, it should be <.
>>
>>>> + if ( !*md_pg && (t >= p2m_first_external) )
>>>> + {
>>>> + /*
>>>> + * Ensure that when `t` is stored outside the PTE bits
>>>> + * (i.e. `t == p2m_ext_storage` or higher),
>>>> + * both `ctx` and `p2m` are provided.
>>>> + */
>>>> + ASSERT(p2m && ctx);
>>> Imo this would want to be checked whenever t > p2m_first_external, no
>>> matter whether a metadata page was already allocated.
>> I think that ctx should be checked before this if condition, since it is
>> used to obtain the proper metadata page.
>>
>> The check for p2m can remain inside the if condition, as it is essentially
>> only needed for allocating a metadata page.
> That is, you want to allow callers to pass in NULL for the "p2m" parameter?
> Isn't this going to be risky?
With the current implementation it is not risky; initially I thought that p2m
could be passed as NULL for the types stored within the PTE, since for those
types the p2m argument isn't really needed.
But to be sure that nothing gets broken by future changes, let's move
ASSERT(p2m) to the top of the function.
>
>>>> - if ( t > p2m_first_external )
>>>> - panic("unimplemeted\n");
>>>> - else
>>>> + if ( ctx->level <= P2M_SUPPORTED_LEVEL_MAPPING )
>>>> + {
>>>> + struct domain *d = p2m->domain;
>>>> +
>>>> + *md_pg = p2m_alloc_table(p2m);
>>>> + if ( !*md_pg )
>>>> + {
>>>> + printk("%s: can't allocate extra memory for dom%d\n",
>>>> + __func__, d->domain_id);
>>>> + domain_crash(d);
>>>> + }
>>>> + }
>>>> + else
>>>> + /*
>>>> + * It is not legal to set a type for an entry which shouldn't
>>>> + * be mapped.
>>>> + */
>>>> + ASSERT_UNREACHABLE();
>>> Something not being legal doesn't mean it can't happen. Imo in this case
>>> BUG_ON() (in place of the if() above) would be better.
>>>
>>>> + }
>>>> +
>>>> + if ( *md_pg )
>>>> + metadata = __map_domain_page(*md_pg);
> Note this conditional assignment for ...
>
>>>> + if ( t < p2m_first_external )
>>>> + {
>>>> pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
>>>>
>>>> - return rc;
>>>> + if ( metadata )
>>>> + metadata[ctx->index].pte = p2m_invalid;
>>>> + }
>>>> + else
>>>> + {
>>>> + pte->pte |= MASK_INSR(p2m_ext_storage, P2M_TYPE_PTE_BITS_MASK);
>>>> +
>>>> + metadata[ctx->index].pte = t;
>>> Afaict metadata can still be NULL when you get here.
>> It shouldn't be, because when this line is executed, the metadata page already
>> exists or was allocated at the start of p2m_set_type().
> ... this reply of yours. And the condition there can be false, in case you
> took the domain_crash() path.
Oh, right, for some reason, I thought we didn’t return from domain_crash().
I’m curious whether calling domain_crash() might break something, as some useful
data could be freed and negatively affect the internals of map_regions_p2mt().
It might make more sense to use panic() here instead.
Do you have any thoughts or suggestions on this?
>
>>>> @@ -812,13 +955,21 @@ static int p2m_set_entry(struct p2m_domain *p2m,
>>>> {
>>>> /* We need to split the original page. */
>>>> pte_t split_pte = *entry;
>>>> + struct page_info *tbl_pg = virt_to_page(table);
>>> This isn't valid on a pointer obtained from map_domain_page().
>> Oh, sure — virt_to_page() and page_to_virt() should be used only for Xen
>> heap addresses.
>>
>> By the way, do we have any documentation, comments, or notes describing
>> what should be allocated and from where?
>>
>> Since map_domain_page() returns an address from the direct map region,
>> should we instead use maddr_to_page(virt_to_maddr(table))?
> How would that be any better? Even if right now you only build RISC-V code
> with map_domain_page() having a trivial expansion, you will want to avoid
> any assumptions along those lines. Or else you could avoid the use of that
> abstraction altogether. It exists so when you need to support memory
> amounts beyond what the directmap can cover, you can provide a suitable
> implementation of the function and be done. You (or whoever else) in
> particular shouldn't be required to go audit all the places where
> map_domain_page() (and the pointers it returns) is (are) used.
Then domain_page_map_to_mfn() is the appropriate function to use, and to
get a page, mfn_to_page(domain_page_map_to_mfn(virt)) should be called.
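I.e. (sketch):

    struct page_info *tbl_pg = mfn_to_page(domain_page_map_to_mfn(table));

instead of the virt_to_page(table) used in this version of the patch.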
Thanks.
~ Oleksii
* Re: [PATCH v4 18/18] xen/riscv: introduce metadata table to store P2M type
2025-10-09 11:34 ` Oleksii Kurochko
@ 2025-10-09 12:10 ` Jan Beulich
2025-10-10 8:42 ` Oleksii Kurochko
0 siblings, 1 reply; 62+ messages in thread
From: Jan Beulich @ 2025-10-09 12:10 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 09.10.2025 13:34, Oleksii Kurochko wrote:
> On 10/7/25 3:25 PM, Jan Beulich wrote:
>> On 01.10.2025 18:00, Oleksii Kurochko wrote:
>>> On 9/23/25 12:41 AM, Jan Beulich wrote:
>>>> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>>>>> + if ( *md_pg )
>>>>> + metadata = __map_domain_page(*md_pg);
>> Note this conditional assignment for ...
>>
>>>>> + if ( t < p2m_first_external )
>>>>> + {
>>>>> pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
>>>>>
>>>>> - return rc;
>>>>> + if ( metadata )
>>>>> + metadata[ctx->index].pte = p2m_invalid;
>>>>> + }
>>>>> + else
>>>>> + {
>>>>> + pte->pte |= MASK_INSR(p2m_ext_storage, P2M_TYPE_PTE_BITS_MASK);
>>>>> +
>>>>> + metadata[ctx->index].pte = t;
>>>> Afaict metadata can still be NULL when you get here.
>>> It shouldn't be, because when this line is executed, the metadata page already
>>> exists or was allocated at the start of p2m_set_type().
>> ... this reply of yours. And the condition there can be false, in case you
>> took the domain_crash() path.
>
> Oh, right, for some reason, I thought we didn’t return from domain_crash().
> I’m curious whether calling domain_crash() might break something, as some useful
> data could be freed and negatively affect the internals of map_regions_p2mt().
>
> It might make more sense to use panic() here instead.
> Do you have any thoughts or suggestions on this?
domain_crash() is generally preferable over crashing the system as a whole.
I don't follow what negative effects you're alluding to. Did you look at
what domain_crash() does? It doesn't start tearing down the domain, that'll
still need invoking from the toolstack. A crashed domain will stay around
with all its resources allocated.
Jan
* Re: [PATCH v4 18/18] xen/riscv: introduce metadata table to store P2M type
2025-10-09 12:10 ` Jan Beulich
@ 2025-10-10 8:42 ` Oleksii Kurochko
2025-10-10 11:00 ` Jan Beulich
0 siblings, 1 reply; 62+ messages in thread
From: Oleksii Kurochko @ 2025-10-10 8:42 UTC (permalink / raw)
To: Jan Beulich
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 10/9/25 2:10 PM, Jan Beulich wrote:
> On 09.10.2025 13:34, Oleksii Kurochko wrote:
>> On 10/7/25 3:25 PM, Jan Beulich wrote:
>>> On 01.10.2025 18:00, Oleksii Kurochko wrote:
>>>> On 9/23/25 12:41 AM, Jan Beulich wrote:
>>>>> On 17.09.2025 23:55, Oleksii Kurochko wrote:
>>>>>> + if ( *md_pg )
>>>>>> + metadata = __map_domain_page(*md_pg);
>>> Note this conditional assignment for ...
>>>
>>>>>> + if ( t < p2m_first_external )
>>>>>> + {
>>>>>> pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
>>>>>>
>>>>>> - return rc;
>>>>>> + if ( metadata )
>>>>>> + metadata[ctx->index].pte = p2m_invalid;
>>>>>> + }
>>>>>> + else
>>>>>> + {
>>>>>> + pte->pte |= MASK_INSR(p2m_ext_storage, P2M_TYPE_PTE_BITS_MASK);
>>>>>> +
>>>>>> + metadata[ctx->index].pte = t;
>>>>> Afaict metadata can still be NULL when you get here.
>>>> It shouldn't be, because when this line is executed, the metadata page already
>>>> exists or was allocated at the start of p2m_set_type().
>>> ... this reply of yours. And the condition there can be false, in case you
>>> took the domain_crash() path.
>> Oh, right, for some reason, I thought we didn’t return from domain_crash().
>> I’m curious whether calling domain_crash() might break something, as some useful
>> data could be freed and negatively affect the internals of map_regions_p2mt().
>>
>> It might make more sense to use panic() here instead.
>> Do you have any thoughts or suggestions on this?
> domain_crash() is generally preferable over crashing the system as a whole.
> I don't follow what negative effects you're alluding to. Did you look at
> what domain_crash() does? It doesn't start tearing down the domain, that'll
> still need invoking from the toolstack. A crashed domain will stay around
> with all its resources allocated.
I was confused by arch_domain_shutdown(), which is called somewhere inside
domain_crash(), since the function name suggests that some resource cleanup
might happen there. There’s also no comment explaining what
arch_domain_shutdown() is expected to do or not to do.
However, since it’s an architecture-specific function, we can control its
behavior for a given architecture.
So, if it doesn’t actually start tearing down the domain, I don’t see any
other negative effects.
Anyway, if domain_crash() is called, I’m not really sure we need to set the
PTE type afterward. We could simply add a return; right after the
domain_crash() call, and then we won't have a NULL pointer dereference.
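I.e. something like this (sketch, based on the hunk quoted earlier in the thread):

    *md_pg = p2m_alloc_table(p2m);
    if ( !*md_pg )
    {
        printk("%s: can't allocate extra memory for dom%d\n",
               __func__, d->domain_id);
        domain_crash(d);
        return;    /* no metadata page to update below */
    }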
Thanks.
~ Oleksii
* Re: [PATCH v4 18/18] xen/riscv: introduce metadata table to store P2M type
2025-10-10 8:42 ` Oleksii Kurochko
@ 2025-10-10 11:00 ` Jan Beulich
0 siblings, 0 replies; 62+ messages in thread
From: Jan Beulich @ 2025-10-10 11:00 UTC (permalink / raw)
To: Oleksii Kurochko
Cc: Alistair Francis, Bob Eshleman, Connor Davis, Andrew Cooper,
Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
Stefano Stabellini, xen-devel
On 10.10.2025 10:42, Oleksii Kurochko wrote:
> Anyway, if domain_crash() is called, I’m not really sure we need to set the
> PTE type afterward. We could simply add a return; right after the
> domain_crash() call, and then we won't have a NULL pointer dereference.
That's indeed preferable, so long as it won't cause other issues in the
caller.
Jan