All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH v1 0/6] RISCV: enable DOMAIN_BUILD_HELPERS
@ 2026-02-12 16:21 Oleksii Kurochko
  2026-02-12 16:21 ` [PATCH v1 1/6] xen/riscv: implement get_page_from_gfn() Oleksii Kurochko
                   ` (5 more replies)
  0 siblings, 6 replies; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-12 16:21 UTC (permalink / raw)
  To: xen-devel
  Cc: Romain Caritey, Oleksii Kurochko, Alistair Francis, Connor Davis,
	Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
	Julien Grall, Roger Pau Monné, Stefano Stabellini,
	Bertrand Marquis, Volodymyr Babchuk

Introduce the pieces necessary to enable the DOMAIN_BUILD_HELPERS config for RISC-V.

Generally, this patch series is independent of [1] and [2], but depending on
which patches go in first, there could be some merge conflicts.

At the moment, the patch series is rebased on top of [2]:
  staging -> [1] -> [2] -> current patch series.

[1] https://lore.kernel.org/xen-devel/cover.1770650552.git.oleksii.kurochko@gmail.com/T/#t
[2] https://lore.kernel.org/xen-devel/cover.1770739000.git.oleksii.kurochko@gmail.com/T/#t

Oleksii Kurochko (6):
  xen/riscv: implement get_page_from_gfn()
  xen/riscv: implement copy_to_guest_phys()
  xen/riscv: add zImage kernel loading support
  xen: move declaration of fw_unreserved_regions() to common header
  xen: move domain_use_host_layout() to common header
  xen/riscv: enable DOMAIN_BUILD_HELPERS

 xen/arch/arm/include/asm/domain.h         |  14 --
 xen/arch/arm/include/asm/setup.h          |   3 -
 xen/arch/riscv/Kconfig                    |   1 +
 xen/arch/riscv/Makefile                   |   2 +
 xen/arch/riscv/guestcopy.c                | 112 ++++++++++++++++
 xen/arch/riscv/include/asm/config.h       |  13 ++
 xen/arch/riscv/include/asm/guest_access.h |   7 +
 xen/arch/riscv/include/asm/p2m.h          |  11 +-
 xen/arch/riscv/kernel.c                   | 156 ++++++++++++++++++++++
 xen/arch/riscv/p2m.c                      |  34 +++++
 xen/include/public/arch-riscv.h           |   8 ++
 xen/include/xen/bootinfo.h                |   4 +
 xen/include/xen/domain.h                  |  16 +++
 13 files changed, 358 insertions(+), 23 deletions(-)
 create mode 100644 xen/arch/riscv/guestcopy.c
 create mode 100644 xen/arch/riscv/kernel.c

-- 
2.52.0



^ permalink raw reply	[flat|nested] 39+ messages in thread

* [PATCH v1 1/6] xen/riscv: implement get_page_from_gfn()
  2026-02-12 16:21 [PATCH v1 0/6] RISCV: enable DOMAIN_BUILD_HELPERS Oleksii Kurochko
@ 2026-02-12 16:21 ` Oleksii Kurochko
  2026-02-16 12:38   ` Jan Beulich
  2026-02-12 16:21 ` [PATCH v1 2/6] xen/riscv: implement copy_to_guest_phys() Oleksii Kurochko
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-12 16:21 UTC (permalink / raw)
  To: xen-devel
  Cc: Romain Caritey, Oleksii Kurochko, Alistair Francis, Connor Davis,
	Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
	Julien Grall, Roger Pau Monné, Stefano Stabellini

Provide a RISC-V implementation of get_page_from_gfn(), matching the
semantics used by other architectures.

For translated guests, this is implemented as a wrapper around
p2m_get_page_from_gfn(). For DOMID_XEN, which is not auto-translated,
provide a 1:1 RAM/MMIO mapping and perform the required validation and
reference counting.

The function is implemented out-of-line rather than as a static inline,
to avoid header ordering issues where struct domain is incomplete when
asm/p2m.h is included, leading to build failures:
  In file included from ./arch/riscv/include/asm/domain.h:10,
                   from ./include/xen/domain.h:16,
                   from ./include/xen/sched.h:11,
                   from ./include/xen/event.h:12,
                   from common/cpu.c:3:
  ./arch/riscv/include/asm/p2m.h: In function 'get_page_from_gfn':
  ./arch/riscv/include/asm/p2m.h:50:33: error: invalid use of undefined type 'struct domain'
     50 | #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
        |                                 ^~
  ./arch/riscv/include/asm/p2m.h:180:38: note: in expansion of macro 'p2m_get_hostp2m'
    180 |         return p2m_get_page_from_gfn(p2m_get_hostp2m(d), _gfn(gfn), t);
        |                                      ^~~~~~~~~~~~~~~
  make[2]: *** [Rules.mk:253: common/cpu.o] Error 1
  make[1]: *** [build.mk:72: common] Error 2
  make: *** [Makefile:623: xen] Error 2

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Does it make sense to make this function almost fully generic?

It looks like most of the logic here is identical across architectures,
except for the following points:

1. ```
   if ( likely(d != dom_xen) )
   ```

   This could be made generic by introducing paging_mode_translate() for ARM
   and defining it as `(d != dom_xen)` there.

2. ```
   if ( t )
       *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct_io;
   ```

   Here, only `p2m_mmio_direct_io` appears to be architecture-specific. This
   could be abstracted via a helper such as `dom_io_p2m_type()` and used here
   instead.
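To make the proposal concrete, here is a minimal, self-contained C sketch of what the generic shape could look like. The helpers paging_mode_translate() and dom_io_p2m_type() are the hypothetical abstractions suggested above (they do not exist in this form in the tree), and Xen's types are reduced to plain stubs; the p2m lookup and refcounting are elided so only the type classification is modelled.

```c
#include <assert.h>

/* Reduced stand-ins for Xen's real types (struct domain, p2m_type_t,
 * struct page_info): just enough to show the control flow. */
typedef enum { p2m_invalid, p2m_ram_rw, p2m_mmio_direct_io } p2m_type_t;

struct domain { int id; };

static struct domain xen_d = { 0 }, io_d = { 1 }, guest_d = { 2 };
static struct domain *dom_xen = &xen_d, *dom_io = &io_d;

/* Hypothetical helper 1: on Arm/RISC-V this could simply be (d != dom_xen). */
static int paging_mode_translate(const struct domain *d)
{
    return d != dom_xen;
}

/* Hypothetical helper 2: hides the arch-specific p2m type used for dom_io. */
static p2m_type_t dom_io_p2m_type(void)
{
    return p2m_mmio_direct_io;
}

/*
 * Shape of a generic get_page_from_gfn(): everything except the two
 * helpers above would be arch-independent.
 */
static p2m_type_t classify_gfn(struct domain *d)
{
    if (paging_mode_translate(d))
        return p2m_ram_rw;  /* stands in for p2m_get_page_from_gfn() */

    /* Non-translated: 1:1 RAM/MMIO mapping. */
    return d != dom_io ? p2m_ram_rw : dom_io_p2m_type();
}
```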
---
 xen/arch/riscv/include/asm/p2m.h |  8 ++------
 xen/arch/riscv/p2m.c             | 28 ++++++++++++++++++++++++++++
 2 files changed, 30 insertions(+), 6 deletions(-)

diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index 0cdd3dc44683..c68494593fd9 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -171,12 +171,8 @@ typedef unsigned int p2m_query_t;
 #define P2M_ALLOC    (1u<<0)   /* Populate PoD and paged-out entries */
 #define P2M_UNSHARE  (1u<<1)   /* Break CoW sharing */
 
-static inline struct page_info *get_page_from_gfn(
-    struct domain *d, unsigned long gfn, p2m_type_t *t, p2m_query_t q)
-{
-    BUG_ON("unimplemented");
-    return NULL;
-}
+struct page_info *get_page_from_gfn(struct domain *d, unsigned long gfn,
+                                    p2m_type_t *t, p2m_query_t q);
 
 static inline void memory_type_changed(struct domain *d)
 {
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index 275c38020ae2..f5b03e1e3264 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -1557,3 +1557,31 @@ void p2m_handle_vmenter(void)
         flush_tlb_guest_local();
     }
 }
+
+struct page_info *get_page_from_gfn(struct domain *d, unsigned long gfn,
+                                    p2m_type_t *t, p2m_query_t q)
+{
+    struct page_info *page;
+
+    /*
+     * Special case for DOMID_XEN as it is the only domain so far that is
+     * not auto-translated.
+     */
+    if ( likely(d != dom_xen) )
+        return p2m_get_page_from_gfn(p2m_get_hostp2m(d), _gfn(gfn), t);
+
+    /* Non-translated guests see 1-1 RAM / MMIO mappings everywhere */
+
+    if ( t )
+        *t = p2m_invalid;
+
+    page = mfn_to_page(_mfn(gfn));
+
+    if ( !mfn_valid(_mfn(gfn)) || !get_page(page, d) )
+        return NULL;
+
+    if ( t )
+        *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct_io;
+
+    return page;
+}
-- 
2.52.0



^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v1 2/6] xen/riscv: implement copy_to_guest_phys()
  2026-02-12 16:21 [PATCH v1 0/6] RISCV: enable DOMAIN_BUILD_HELPERS Oleksii Kurochko
  2026-02-12 16:21 ` [PATCH v1 1/6] xen/riscv: implement get_page_from_gfn() Oleksii Kurochko
@ 2026-02-12 16:21 ` Oleksii Kurochko
  2026-02-16 14:57   ` Jan Beulich
  2026-02-12 16:21 ` [PATCH v1 3/6] xen/riscv: add zImage kernel loading support Oleksii Kurochko
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-12 16:21 UTC (permalink / raw)
  To: xen-devel
  Cc: Romain Caritey, Oleksii Kurochko, Alistair Francis, Connor Davis,
	Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
	Julien Grall, Roger Pau Monné, Stefano Stabellini

Introduce copy_to_guest_phys() for RISC-V, based on the Arm implementation.

Add a generic copy_guest() helper for copying to and from guest physical
(and potentially virtual addresses in the future), and implement
translate_get_page() to translate a guest physical address into a struct
page_info via the domain p2m.

Compared to the Arm code:
- Drop COPY_flush_dcache(), as no such use cases exist on RISC-V.
- Do not implement the linear mapping case, which is currently unused.
- Use PAGE_OFFSET() to initialize the local offset variable in copy_guest().
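The page-by-page chunking done by copy_guest() can be illustrated with a self-contained sketch: a flat array stands in for mapped guest pages, the "translation failure" path is reduced to a bounds check, and the PAGE_OFFSET handling mirrors the patch. copy_to_guest_sketch() is a made-up name for the illustration, not Xen's API.

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE         4096U
#define PAGE_OFFSET(addr) ((unsigned int)((addr) & (PAGE_SIZE - 1)))

/* "Guest memory": a flat array standing in for mapped domain pages. */
static uint8_t guest_mem[4 * PAGE_SIZE];

/*
 * Minimal analogue of copy_guest(..., COPY_to_guest): copy (or zero,
 * when buf is NULL) len bytes to guest "physical" address addr, at
 * most one page per iteration.  Returns the number of bytes NOT copied.
 */
static unsigned long copy_to_guest_sketch(const uint8_t *buf,
                                          uint64_t addr, unsigned int len)
{
    unsigned int offset = PAGE_OFFSET(addr);

    while (len) {
        unsigned int size = len < PAGE_SIZE - offset ? len
                                                     : PAGE_SIZE - offset;

        if (addr + size > sizeof(guest_mem))
            return len;             /* translation-failure analogue */

        if (buf) {
            memcpy(&guest_mem[addr], buf, size);
            buf += size;
        } else {
            memset(&guest_mem[addr], 0, size);
        }

        len -= size;
        addr += size;
        offset = 0;  /* subsequent iterations are page-aligned */
    }

    return 0;
}
```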

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/Makefile                   |   1 +
 xen/arch/riscv/guestcopy.c                | 112 ++++++++++++++++++++++
 xen/arch/riscv/include/asm/guest_access.h |   7 ++
 3 files changed, 120 insertions(+)
 create mode 100644 xen/arch/riscv/guestcopy.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 7439d029cc45..90210799e038 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -3,6 +3,7 @@ obj-y += cpufeature.o
 obj-y += domain.o
 obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-y += entry.o
+obj-y += guestcopy.o
 obj-y += imsic.o
 obj-y += intc.o
 obj-y += irq.o
diff --git a/xen/arch/riscv/guestcopy.c b/xen/arch/riscv/guestcopy.c
new file mode 100644
index 000000000000..19b681c30b1b
--- /dev/null
+++ b/xen/arch/riscv/guestcopy.c
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include <xen/domain_page.h>
+#include <xen/page-size.h>
+#include <xen/sched.h>
+#include <xen/string.h>
+
+#include <asm/guest_access.h>
+
+#define COPY_from_guest     (0U << 0)
+#define COPY_to_guest       (1U << 0)
+#define COPY_ipa            (0U << 1)
+#define COPY_linear         (1U << 1)
+
+typedef union
+{
+    struct
+    {
+        struct vcpu *v;
+    } gva;
+
+    struct
+    {
+        struct domain *d;
+    } gpa;
+} copy_info_t;
+
+#define GVA_INFO(vcpu) ((copy_info_t) { .gva = { vcpu } })
+#define GPA_INFO(domain) ((copy_info_t) { .gpa = { domain } })
+
+static struct page_info *translate_get_page(copy_info_t info, uint64_t addr,
+                                            bool linear, bool write)
+{
+    p2m_type_t p2mt;
+    struct page_info *page;
+
+    if ( linear )
+        BUG_ON("unimplemented\n");
+
+    page = get_page_from_gfn(info.gpa.d, paddr_to_pfn(addr), &p2mt, P2M_ALLOC);
+
+    if ( !page )
+        return NULL;
+
+    if ( !p2m_is_ram(p2mt) )
+    {
+        put_page(page);
+        return NULL;
+    }
+
+    return page;
+}
+
+static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
+                                copy_info_t info, unsigned int flags)
+{
+    unsigned int offset = PAGE_OFFSET(addr);
+
+    BUILD_BUG_ON((sizeof(addr)) < sizeof(vaddr_t));
+    BUILD_BUG_ON((sizeof(addr)) < sizeof(paddr_t));
+
+    while ( len )
+    {
+        void *p;
+        unsigned int size = min(len, (unsigned int)PAGE_SIZE - offset);
+        struct page_info *page;
+
+        page = translate_get_page(info, addr, flags & COPY_linear,
+                                  flags & COPY_to_guest);
+        if ( page == NULL )
+            return len;
+
+        p = __map_domain_page(page);
+        p += offset;
+        if ( flags & COPY_to_guest )
+        {
+            /*
+             * buf will be NULL when the caller requests to zero the
+             * guest memory.
+             */
+            if ( buf )
+                memcpy(p, buf, size);
+            else
+                memset(p, 0, size);
+        }
+        else
+            memcpy(buf, p, size);
+
+        unmap_domain_page(p - offset);
+        put_page(page);
+        len -= size;
+        buf += size;
+        addr += size;
+
+        /*
+         * After the first iteration, the guest address is correctly
+         * aligned to PAGE_SIZE.
+         */
+        offset = 0;
+    }
+
+    return 0;
+}
+
+unsigned long copy_to_guest_phys(struct domain *d,
+                                 paddr_t gpa,
+                                 void *buf,
+                                 unsigned int len)
+{
+    return copy_guest(buf, gpa, len, GPA_INFO(d),
+                      COPY_to_guest | COPY_ipa);
+}
diff --git a/xen/arch/riscv/include/asm/guest_access.h b/xen/arch/riscv/include/asm/guest_access.h
index 7cd51fbbdead..024e29b4c9f9 100644
--- a/xen/arch/riscv/include/asm/guest_access.h
+++ b/xen/arch/riscv/include/asm/guest_access.h
@@ -2,6 +2,10 @@
 #ifndef ASM__RISCV__GUEST_ACCESS_H
 #define ASM__RISCV__GUEST_ACCESS_H
 
+#include <xen/types.h>
+
+struct domain;
+
 unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len);
 unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
 unsigned long raw_clear_guest(void *to, unsigned int len);
@@ -18,6 +22,9 @@ unsigned long raw_clear_guest(void *to, unsigned int len);
 #define guest_handle_okay(hnd, nr) (1)
 #define guest_handle_subrange_okay(hnd, first, last) (1)
 
+unsigned long copy_to_guest_phys(struct domain *d, paddr_t gpa, void *buf,
+                                 unsigned int len);
+
 #endif /* ASM__RISCV__GUEST_ACCESS_H */
 /*
  * Local variables:
-- 
2.52.0



^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v1 3/6] xen/riscv: add zImage kernel loading support
  2026-02-12 16:21 [PATCH v1 0/6] RISCV: enable DOMAIN_BUILD_HELPERS Oleksii Kurochko
  2026-02-12 16:21 ` [PATCH v1 1/6] xen/riscv: implement get_page_from_gfn() Oleksii Kurochko
  2026-02-12 16:21 ` [PATCH v1 2/6] xen/riscv: implement copy_to_guest_phys() Oleksii Kurochko
@ 2026-02-12 16:21 ` Oleksii Kurochko
  2026-02-16 16:31   ` Jan Beulich
  2026-02-12 16:21 ` [PATCH v1 4/6] xen: move declaration of fw_unreserved_regions() to common header Oleksii Kurochko
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-12 16:21 UTC (permalink / raw)
  To: xen-devel
  Cc: Romain Caritey, Oleksii Kurochko, Alistair Francis, Connor Davis,
	Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
	Julien Grall, Roger Pau Monné, Stefano Stabellini

Introduce support for loading a Linux zImage kernel on RISC-V.

Note that panic() is used instead of returning an error, as the common code
doesn't expect a return code; changing that is something that should be done
separately.

This prepares the RISC-V port for booting Linux guests using the common
domain build infrastructure.

The code is based on Xen Arm code.
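The probe logic being added boils down to reading the RISC-V image header and checking the little-endian "RSC\x05" magic. A hedged, self-contained sketch (fixed-width types instead of Xen's u32/u64, a memcpy() from a buffer standing in for copy_from_paddr(), and probe_riscv_image() being a made-up name) could look like:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define ZIMAGE64_MAGIC_V2 0x05435352U /* Magic number 2, le, "RSC\x05" */

/* Mirrors riscv/boot-image-header.rst, as in the patch. */
struct riscv_image_header {
    uint32_t code0;        /* Executable code */
    uint32_t code1;        /* Executable code */
    uint64_t text_offset;  /* Image load offset, little endian */
    uint64_t image_size;   /* Effective Image size, little endian */
    uint64_t flags;        /* Kernel flags, little endian */
    uint32_t version;      /* Version of this header */
    uint32_t res1;         /* Reserved */
    uint64_t res2;         /* Reserved */
    uint64_t magic;        /* Deprecated magic, "RISCV" */
    uint32_t magic2;       /* Magic number 2, "RSC\x05" */
    uint32_t res3;         /* Reserved for PE/COFF offset */
};

/* Returns 0 if the buffer plausibly holds a RISC-V Image, -1 otherwise. */
static int probe_riscv_image(const void *img, size_t size)
{
    struct riscv_image_header hdr;

    if (size < sizeof(hdr))
        return -1;

    memcpy(&hdr, img, sizeof(hdr));  /* copy_from_paddr() analogue */

    /* Magic v1 is deprecated; only v2 is checked. */
    return hdr.magic2 == ZIMAGE64_MAGIC_V2 ? 0 : -1;
}
```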

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/Makefile             |   1 +
 xen/arch/riscv/include/asm/config.h |  13 +++
 xen/arch/riscv/kernel.c             | 156 ++++++++++++++++++++++++++++
 3 files changed, 170 insertions(+)
 create mode 100644 xen/arch/riscv/kernel.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 90210799e038..2e15f894fdd4 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -7,6 +7,7 @@ obj-y += guestcopy.o
 obj-y += imsic.o
 obj-y += intc.o
 obj-y += irq.o
+obj-y += kernel.o
 obj-y += mm.o
 obj-y += p2m.o
 obj-y += paging.o
diff --git a/xen/arch/riscv/include/asm/config.h b/xen/arch/riscv/include/asm/config.h
index 86a95df018b5..d24b54d656b8 100644
--- a/xen/arch/riscv/include/asm/config.h
+++ b/xen/arch/riscv/include/asm/config.h
@@ -152,6 +152,19 @@
 extern unsigned long phys_offset; /* = load_start - XEN_VIRT_START */
 #endif
 
+/*
+ * KERNEL_LOAD_ADDR_ALIGNMENT is defined based on the "Kernel location"
+ * section of boot.rst:
+ * https://docs.kernel.org/arch/riscv/boot.html#kernel-location
+ */
+#if defined(CONFIG_RISCV_32)
+#define KERNEL_LOAD_ADDR_ALIGNMENT MB(4)
+#elif defined(CONFIG_RISCV_64)
+#define KERNEL_LOAD_ADDR_ALIGNMENT MB(2)
+#else
+#error "Define KERNEL_LOAD_ADDR_ALIGNMENT"
+#endif
+
 #endif /* ASM__RISCV__CONFIG_H */
 /*
  * Local variables:
diff --git a/xen/arch/riscv/kernel.c b/xen/arch/riscv/kernel.c
new file mode 100644
index 000000000000..f91e9ada8a9c
--- /dev/null
+++ b/xen/arch/riscv/kernel.c
@@ -0,0 +1,156 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#include <xen/bug.h>
+#include <xen/compiler.h>
+#include <xen/errno.h>
+#include <xen/fdt-kernel.h>
+#include <xen/guest_access.h>
+#include <xen/init.h>
+#include <xen/libfdt/libfdt.h>
+#include <xen/mm.h>
+#include <xen/types.h>
+#include <xen/vmap.h>
+
+#include <asm/setup.h>
+
+#define ZIMAGE64_MAGIC_V2 0x05435352 /* Magic number 2, le, "RSC\x05" */
+
+static void __init place_modules(struct kernel_info *info, paddr_t kernbase,
+                                 paddr_t kernend)
+{
+    const struct boot_module *mod = info->bd.initrd;
+
+    const paddr_t initrd_len = ROUNDUP(mod ? mod->size : 0, MB(2));
+    const paddr_t dtb_len = ROUNDUP(fdt_totalsize(info->fdt), MB(2));
+    const paddr_t modsize = initrd_len + dtb_len;
+
+    const paddr_t ramsize = info->mem.bank[0].size;
+    const paddr_t kernsize = ROUNDUP(kernend, MB(2)) - kernbase;
+
+    if ( modsize + kernsize > ramsize )
+        panic("Not enough memory in the first bank for the kernel+dtb+initrd\n");
+
+    info->dtb_paddr = ROUNDUP(kernend, MB(2));
+
+    info->initrd_paddr = info->dtb_paddr + dtb_len;
+}
+
+static paddr_t __init kernel_zimage_place(struct kernel_info *info)
+{
+    paddr_t load_addr;
+
+    /*
+     * At the moment, the RISC-V Linux kernel should always be position
+     * independent, based on the "Pre-MMU execution" section of boot.rst:
+     *   https://docs.kernel.org/arch/riscv/boot.html#pre-mmu-execution
+     *
+     * But in case the RISC-V Linux kernel isn't position independent,
+     * the load address needs to be taken from info->zimage.start.
+     *
+     * If `start` is zero, the zImage is position independent.
+     */
+    if ( likely(!info->zimage.start) )
+        /*
+         * According to boot.rst kernel load address should be properly
+         * aligned:
+         *   https://docs.kernel.org/arch/riscv/boot.html#kernel-location
+         */
+        load_addr = ROUNDUP(info->mem.bank[0].start, KERNEL_LOAD_ADDR_ALIGNMENT);
+    else
+        load_addr = info->zimage.start;
+
+    return load_addr;
+}
+
+static void __init kernel_zimage_load(struct kernel_info *info)
+{
+    int rc;
+    paddr_t load_addr = kernel_zimage_place(info);
+    paddr_t paddr = info->zimage.kernel_addr;
+    paddr_t len = info->zimage.len;
+    void *kernel;
+
+    info->entry = load_addr;
+
+    place_modules(info, load_addr, load_addr + len);
+
+    printk("Loading zImage from %"PRIpaddr" to %"PRIpaddr"-%"PRIpaddr"\n",
+            paddr, load_addr, load_addr + len);
+
+    kernel = ioremap_wc(paddr, len);
+
+    if ( !kernel )
+        panic("Unable to map kernel\n");
+
+    /* Move kernel to proper location in guest phys map */
+    rc = copy_to_guest_phys(info->bd.d, load_addr, kernel, len);
+
+    if ( rc )
+        panic("Unable to copy kernel to proper guest location\n");
+
+    iounmap(kernel);
+}
+
+/* Check if the image is a 64-bit Image */
+static int __init kernel_zimage64_probe(struct kernel_info *info,
+                                        paddr_t addr, paddr_t size)
+{
+    /* riscv/boot-image-header.rst */
+    struct {
+        u32 code0;		  /* Executable code */
+        u32 code1;		  /* Executable code */
+        u64 text_offset;  /* Image load offset, little endian */
+        u64 image_size;	  /* Effective Image size, little endian */
+        u64 flags;		  /* kernel flags, little endian */
+        u32 version;	  /* Version of this header */
+        u32 res1;		  /* Reserved */
+        u64 res2;		  /* Reserved */
+        u64 magic;        /* Deprecated: Magic number, little endian, "RISCV" */
+        u32 magic2;       /* Magic number 2, little endian, "RSC\x05" */
+        u32 res3;		  /* Reserved for PE COFF offset */
+    } zimage;
+    uint64_t start, end;
+
+    if ( size < sizeof(zimage) )
+        return -EINVAL;
+
+    copy_from_paddr(&zimage, addr, sizeof(zimage));
+
+    /* Magic v1 is deprecated and may be removed.  Only use v2 */
+    if ( zimage.magic2 != ZIMAGE64_MAGIC_V2 )
+        return -EINVAL;
+
+    /* Currently there is no length in the header, so just use the size */
+    start = 0;
+    end = size;
+
+    /*
+     * Given the above this check is a bit pointless, but leave it
+     * here in case someone adds a length field in the future.
+     */
+    if ( (end - start) > size )
+        return -EINVAL;
+
+    info->zimage.kernel_addr = addr;
+    info->zimage.len = end - start;
+    info->zimage.text_offset = zimage.text_offset;
+    info->zimage.start = 0;
+
+    info->load = kernel_zimage_load;
+
+    return 0;
+}
+
+int __init kernel_zimage_probe(struct kernel_info *info, paddr_t addr,
+                               paddr_t size)
+{
+    int rc;
+
+#ifdef CONFIG_RISCV_64
+    rc = kernel_zimage64_probe(info, addr, size);
+    if ( rc < 0 )
+#endif
+        panic("only RISC-V 64 is supported\n");
+
+    return rc;
+}
-- 
2.52.0



^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v1 4/6] xen: move declaration of fw_unreserved_regions() to common header
  2026-02-12 16:21 [PATCH v1 0/6] RISCV: enable DOMAIN_BUILD_HELPERS Oleksii Kurochko
                   ` (2 preceding siblings ...)
  2026-02-12 16:21 ` [PATCH v1 3/6] xen/riscv: add zImage kernel loading support Oleksii Kurochko
@ 2026-02-12 16:21 ` Oleksii Kurochko
  2026-02-12 16:21 ` [PATCH v1 5/6] xen: move domain_use_host_layout() " Oleksii Kurochko
  2026-02-12 16:21 ` [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS Oleksii Kurochko
  5 siblings, 0 replies; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-12 16:21 UTC (permalink / raw)
  To: xen-devel
  Cc: Romain Caritey, Oleksii Kurochko, Stefano Stabellini,
	Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk,
	Andrew Cooper, Anthony PERARD, Jan Beulich, Roger Pau Monné

Since the implementation of fw_unreserved_regions() is in common code, move
its declaration to xen/bootinfo.h.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/arm/include/asm/setup.h | 3 ---
 xen/include/xen/bootinfo.h       | 4 ++++
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/include/asm/setup.h b/xen/arch/arm/include/asm/setup.h
index 899e33925ca4..0d29b46ea52b 100644
--- a/xen/arch/arm/include/asm/setup.h
+++ b/xen/arch/arm/include/asm/setup.h
@@ -43,9 +43,6 @@ int acpi_make_efi_nodes(void *fdt, struct membank tbl_add[]);
 void create_dom0(void);
 
 void discard_initial_modules(void);
-void fw_unreserved_regions(paddr_t s, paddr_t e,
-                           void (*cb)(paddr_t ps, paddr_t pe),
-                           unsigned int first);
 
 void init_pdx(void);
 void setup_mm(void);
diff --git a/xen/include/xen/bootinfo.h b/xen/include/xen/bootinfo.h
index f834f1957155..dbf492c2e36e 100644
--- a/xen/include/xen/bootinfo.h
+++ b/xen/include/xen/bootinfo.h
@@ -210,4 +210,8 @@ static inline struct membanks *membanks_xzalloc(unsigned int nr,
     return banks;
 }
 
+void fw_unreserved_regions(paddr_t s, paddr_t e,
+                           void (*cb)(paddr_t ps, paddr_t pe),
+                           unsigned int first);
+
 #endif /* XEN_BOOTINFO_H */
-- 
2.52.0



^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
  2026-02-12 16:21 [PATCH v1 0/6] RISCV: enable DOMAIN_BUILD_HELPERS Oleksii Kurochko
                   ` (3 preceding siblings ...)
  2026-02-12 16:21 ` [PATCH v1 4/6] xen: move declaration of fw_unreserved_regions() to common header Oleksii Kurochko
@ 2026-02-12 16:21 ` Oleksii Kurochko
  2026-02-16 16:36   ` Jan Beulich
  2026-02-12 16:21 ` [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS Oleksii Kurochko
  5 siblings, 1 reply; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-12 16:21 UTC (permalink / raw)
  To: xen-devel
  Cc: Romain Caritey, Oleksii Kurochko, Stefano Stabellini,
	Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk,
	Andrew Cooper, Anthony PERARD, Jan Beulich, Roger Pau Monné

domain_use_host_layout() is generic enough to be moved to the
common header xen/domain.h.

Wrap domain_use_host_layout() with "#ifndef domain_use_host_layout"
to allow architectures to override it if needed.

No functional change.
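The "#ifndef" override pattern can be shown with a tiny self-contained sketch. The struct, field names, and the arch-side definition below are made up purely for illustration; the point is that a prior arch definition suppresses the common default.

```c
struct dom {
    int arch_override;
    int direct_mapped;
    int hwdom;
};

/* Pretend an arch header (e.g. asm/domain.h) already provided its own
 * definition before the common header is included: */
#define domain_use_host_layout(d) ((d)->arch_override)

/* The common-header pattern from this patch then only supplies the
 * generic default when the arch didn't define one: */
#ifndef domain_use_host_layout
#define domain_use_host_layout(d) \
    ((d)->direct_mapped || (d)->hwdom)
#endif
```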

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/arm/include/asm/domain.h | 14 --------------
 xen/include/xen/domain.h          | 16 ++++++++++++++++
 2 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/domain.h
index 758ad807e461..1a04fe658c97 100644
--- a/xen/arch/arm/include/asm/domain.h
+++ b/xen/arch/arm/include/asm/domain.h
@@ -29,20 +29,6 @@ enum domain_type {
 #define is_64bit_domain(d) (0)
 #endif
 
-/*
- * Is the domain using the host memory layout?
- *
- * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
- * To avoid any trouble finding space, it is easier to force using the
- * host memory layout.
- *
- * The hardware domain will use the host layout regardless of
- * direct-mapped because some OS may rely on a specific address ranges
- * for the devices.
- */
-#define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
-                                   is_hardware_domain(d))
-
 struct vtimer {
     struct vcpu *v;
     int irq;
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 93c0fd00c1d7..40487825ad91 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -62,6 +62,22 @@ void domid_free(domid_t domid);
 #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
 #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
 
+/*
+ * Is the domain using the host memory layout?
+ *
+ * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
+ * To avoid any trouble finding space, it is easier to force using the
+ * host memory layout.
+ *
+ * The hardware domain will use the host layout regardless of
+ * direct-mapped because some OS may rely on a specific address ranges
+ * for the devices.
+ */
+#ifndef domain_use_host_layout
+# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
+                                    is_hardware_domain(d))
+#endif
+
 /*
  * Arch-specifics.
  */
-- 
2.52.0



^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS
  2026-02-12 16:21 [PATCH v1 0/6] RISCV: enable DOMAIN_BUILD_HELPERS Oleksii Kurochko
                   ` (4 preceding siblings ...)
  2026-02-12 16:21 ` [PATCH v1 5/6] xen: move domain_use_host_layout() " Oleksii Kurochko
@ 2026-02-12 16:21 ` Oleksii Kurochko
  2026-02-12 16:39   ` Jan Beulich
  5 siblings, 1 reply; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-12 16:21 UTC (permalink / raw)
  To: xen-devel
  Cc: Romain Caritey, Oleksii Kurochko, Alistair Francis, Connor Davis,
	Andrew Cooper, Anthony PERARD, Michal Orzel, Jan Beulich,
	Julien Grall, Roger Pau Monné, Stefano Stabellini

To enable DOMAIN_BUILD_HELPERS for RISC-V the following is introduced:
- Add a global p2m_ipa_bits variable, initialized to PADDR_BITS, to
  represent the maximum supported IPA size as find_unallocated_memory()
  requires it.
- Define default guest RAM layout parameters in the public RISC-V
  header as it is required by allocate_memory().
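As a sanity sketch of how such bank macros are typically consumed (the loop below is illustrative, not Xen's actual allocate_memory() code; xen_mk_ullong is redefined locally so the fragment is self-contained):

```c
#include <stdint.h>

#define xen_mk_ullong(x) x##ULL

#define GUEST_RAM_BANKS   1

#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)

#define GUEST_RAM_BANK_BASES   { GUEST_RAM0_BASE }
#define GUEST_RAM_BANK_SIZES   { GUEST_RAM0_SIZE }

static const uint64_t bankbase[] = GUEST_RAM_BANK_BASES;
static const uint64_t banksize[] = GUEST_RAM_BANK_SIZES;

/* Total guest RAM covered by the declared banks. */
static uint64_t guest_ram_total(void)
{
    uint64_t total = 0;

    for (unsigned int i = 0; i < GUEST_RAM_BANKS; i++)
        total += banksize[i];

    return total;
}
```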

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/riscv/Kconfig           | 1 +
 xen/arch/riscv/include/asm/p2m.h | 3 +++
 xen/arch/riscv/p2m.c             | 6 ++++++
 xen/include/public/arch-riscv.h  | 8 ++++++++
 4 files changed, 18 insertions(+)

diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
index 89876b32175d..12b337365f1f 100644
--- a/xen/arch/riscv/Kconfig
+++ b/xen/arch/riscv/Kconfig
@@ -1,5 +1,6 @@
 config RISCV
 	def_bool y
+	select DOMAIN_BUILD_HELPERS
 	select FUNCTION_ALIGNMENT_16B
 	select GENERIC_BUG_FRAME
 	select GENERIC_UART_INIT
diff --git a/xen/arch/riscv/include/asm/p2m.h b/xen/arch/riscv/include/asm/p2m.h
index c68494593fd9..083549ef9640 100644
--- a/xen/arch/riscv/include/asm/p2m.h
+++ b/xen/arch/riscv/include/asm/p2m.h
@@ -44,6 +44,9 @@
 #define P2M_LEVEL_MASK(p2m, lvl) \
     (P2M_TABLE_OFFSET(p2m, lvl) << P2M_GFN_LEVEL_SHIFT(lvl))
 
+/* Holds the bit size of IPAs in p2m tables */
+extern unsigned int p2m_ipa_bits;
+
 #define paddr_bits PADDR_BITS
 
 /* Get host p2m table */
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
index f5b03e1e3264..62bd8a2f602f 100644
--- a/xen/arch/riscv/p2m.c
+++ b/xen/arch/riscv/p2m.c
@@ -51,6 +51,12 @@ static struct gstage_mode_desc __ro_after_init max_gstage_mode = {
     .name = "Bare",
 };
 
+/*
+ * Set to the maximum configured support for IPA bits, so the number of IPA bits can be
+ * restricted by external entity (e.g. IOMMU).
+ */
+unsigned int __read_mostly p2m_ipa_bits = PADDR_BITS;
+
 static void p2m_free_page(struct p2m_domain *p2m, struct page_info *pg);
 
 static inline void p2m_free_metadata_page(struct p2m_domain *p2m,
diff --git a/xen/include/public/arch-riscv.h b/xen/include/public/arch-riscv.h
index 360d8e6871ba..91cee3096041 100644
--- a/xen/include/public/arch-riscv.h
+++ b/xen/include/public/arch-riscv.h
@@ -50,6 +50,14 @@ typedef uint64_t xen_ulong_t;
 
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 
+#define GUEST_RAM_BANKS   1
+
+#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
+#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)
+
+#define GUEST_RAM_BANK_BASES   { GUEST_RAM0_BASE }
+#define GUEST_RAM_BANK_SIZES   { GUEST_RAM0_SIZE }
+
 struct vcpu_guest_context {
 };
 typedef struct vcpu_guest_context vcpu_guest_context_t;
-- 
2.52.0



^ permalink raw reply related	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS
  2026-02-12 16:21 ` [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS Oleksii Kurochko
@ 2026-02-12 16:39   ` Jan Beulich
  2026-02-13 12:54     ` Oleksii Kurochko
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2026-02-12 16:39 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel

On 12.02.2026 17:21, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/include/asm/p2m.h
> +++ b/xen/arch/riscv/include/asm/p2m.h
> @@ -44,6 +44,9 @@
>  #define P2M_LEVEL_MASK(p2m, lvl) \
>      (P2M_TABLE_OFFSET(p2m, lvl) << P2M_GFN_LEVEL_SHIFT(lvl))
>  
> +/* Holds the bit size of IPAs in p2m tables */
> +extern unsigned int p2m_ipa_bits;

Hmm, I can spot a declaration and ...

> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -51,6 +51,12 @@ static struct gstage_mode_desc __ro_after_init max_gstage_mode = {
>      .name = "Bare",
>  };
>  
> +/*
> + * Set to the maximum configured support for IPA bits, so the number of IPA bits can be
> + * restricted by external entity (e.g. IOMMU).
> + */
> +unsigned int __read_mostly p2m_ipa_bits = PADDR_BITS;

... a definition, but neither a use nor a place where the variable would
be set. Hmm, yes, I see common/device-tree/domain-build.c uses it. Then
the following questions arise:
- What does "ipa" stand for? Is this a term sensible in RISC-V context at
  all? Judging from the comment at the decl, isn't it PPN width (plus
  PAGE_SHIFT) that it describes?
- With there not being anyone writing to the variable, why is it not
  const (or even a #define), or at the very least __ro_after_init?
And no, "Arm has it like this" doesn't count as an answer. Considering
all the review comments you've got so far you should know by now that you
shouldn't copy things blindly.

> --- a/xen/include/public/arch-riscv.h
> +++ b/xen/include/public/arch-riscv.h
> @@ -50,6 +50,14 @@ typedef uint64_t xen_ulong_t;
>  
>  #if defined(__XEN__) || defined(__XEN_TOOLS__)
>  
> +#define GUEST_RAM_BANKS   1
> +
> +#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
> +#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)
> +
> +#define GUEST_RAM_BANK_BASES   { GUEST_RAM0_BASE }
> +#define GUEST_RAM_BANK_SIZES   { GUEST_RAM0_SIZE }

Hmm, does RISC-V really want to go with compile-time constants here? And
if so, why would guests be limited to just 2 Gb? That may more efficiently
be RV32 guests then, with perhaps just an RV32 hypervisor.

Jan


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS
  2026-02-12 16:39   ` Jan Beulich
@ 2026-02-13 12:54     ` Oleksii Kurochko
  2026-02-13 13:11       ` Jan Beulich
  0 siblings, 1 reply; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-13 12:54 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel


On 2/12/26 5:39 PM, Jan Beulich wrote:
> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>> --- a/xen/arch/riscv/include/asm/p2m.h
>> +++ b/xen/arch/riscv/include/asm/p2m.h
>> @@ -44,6 +44,9 @@
>>   #define P2M_LEVEL_MASK(p2m, lvl) \
>>       (P2M_TABLE_OFFSET(p2m, lvl) << P2M_GFN_LEVEL_SHIFT(lvl))
>>   
>> +/* Holds the bit size of IPAs in p2m tables */
>> +extern unsigned int p2m_ipa_bits;
> Hmm, I can spot a declaration and ...
>
>> --- a/xen/arch/riscv/p2m.c
>> +++ b/xen/arch/riscv/p2m.c
>> @@ -51,6 +51,12 @@ static struct gstage_mode_desc __ro_after_init max_gstage_mode = {
>>       .name = "Bare",
>>   };
>>   
>> +/*
>> + * Set to the maximum configured support for IPA bits, so the number of IPA bits can be
>> + * restricted by external entity (e.g. IOMMU).
>> + */
>> +unsigned int __read_mostly p2m_ipa_bits = PADDR_BITS;
> ... a definition, but neither a use nor a place where the variable would
> be set. Hmm, yes, I see common/device-tree/domain-build.c uses it. Then
> the following questions arise:
> - What does "ipa" stand for? Is this a term sensible in RISC-V context at
>    all?

IPA is the Arm term for what RISC-V calls a GPA (maybe it would be better to rename it to gpa in the common code too).
It was used because the common code uses p2m_ipa_bits.

Yes, I missed setting p2m_ipa_bits properly in p2m_init(), where the G-stage MMU mode is set.

> Judging from the comment at the decl, isn't it PPN width (plus
>    PAGE_SHIFT) that it describes?

It is PPN width + PAGE_SHIFT, which is equal to PADDR_BITS (44-bit PPN + 12-bit PAGE_SHIFT).

> - With there not being anyone writing to the variable, why is it not
>    const (or even a #define), or at the very least __ro_after_init?
> And no, "Arm has it like this" doesn't count as an answer. Considering
> all the review comments you've got so far you should know by now that you
> shouldn't copy things blindly.

It was added because of the usage in common/device-tree/domain-build.c.

It was done in the same way because it is also possible that an IOMMU shares the P2M page
tables with the CPU's G-stage (stage-2) translation, so the GPA size must not exceed what
the IOMMU can handle (or the G-stage address limit, if that is smaller than the IOMMU's).

(a) It could be that the MMU uses Sv57 and the IOMMU uses Sv39; in this case, if the IOMMU and MMU
share G-stage page tables, it is necessary to respect the guest address limitation.

But considering that according to RISC-V IOMMU spec ... :
   The IOMMU must support all the virtual memory extensions that are supported by
   any of the harts in the system.
... (a) isn't a real issue, as we could always program the IOMMU to use the same mode as the MMU,
and then p2m_ipa_bits as __ro_after_init should work well. It can't be const because, as I mentioned
above, I missed initializing it properly in p2m_init(). (It is also the case on RISC-V that the
IOMMU could use an x4 mode, so the MMU uses Sv57 and the IOMMU uses Sv57x4.)

>> --- a/xen/include/public/arch-riscv.h
>> +++ b/xen/include/public/arch-riscv.h
>> @@ -50,6 +50,14 @@ typedef uint64_t xen_ulong_t;
>>   
>>   #if defined(__XEN__) || defined(__XEN_TOOLS__)
>>   
>> +#define GUEST_RAM_BANKS   1
>> +
>> +#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>> +#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)
>> +
>> +#define GUEST_RAM_BANK_BASES   { GUEST_RAM0_BASE }
>> +#define GUEST_RAM_BANK_SIZES   { GUEST_RAM0_SIZE }
> Hmm, does RISC-V really want to go with compile-time constants here?

It is needed by allocate_memory() for guest domains, so it is expected
to be a compile-time constant with the current common dom0less code.

It represents the start of the RAM address space for a DomU and the maximum RAM size
(the actual size will be calculated based on what is specified in the DomU node
in the dts), and it will then be used to generate the memory node for the DomU (GUEST_RAM0_BASE
as the RAM start address and min(GUEST_RAM0_SIZE, dts->domU->memory->size) as the
RAM size).

>   And
> if so, why would guests be limited to just 2 Gb?

It is enough for the guest domain I am using in dom0less mode.

> That may more efficiently
> be RV32 guests then, with perhaps just an RV32 hypervisor.

I didn't get this point. Could you please explain in a different way what you
mean?

~ Oleksii



^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS
  2026-02-13 12:54     ` Oleksii Kurochko
@ 2026-02-13 13:11       ` Jan Beulich
  2026-02-18 10:39         ` Oleksii Kurochko
  2026-03-17 12:49         ` Oleksii Kurochko
  0 siblings, 2 replies; 39+ messages in thread
From: Jan Beulich @ 2026-02-13 13:11 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel

On 13.02.2026 13:54, Oleksii Kurochko wrote:
> On 2/12/26 5:39 PM, Jan Beulich wrote:
>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>> --- a/xen/arch/riscv/include/asm/p2m.h
>>> +++ b/xen/arch/riscv/include/asm/p2m.h
>>> @@ -44,6 +44,9 @@
>>>   #define P2M_LEVEL_MASK(p2m, lvl) \
>>>       (P2M_TABLE_OFFSET(p2m, lvl) << P2M_GFN_LEVEL_SHIFT(lvl))
>>>   
>>> +/* Holds the bit size of IPAs in p2m tables */
>>> +extern unsigned int p2m_ipa_bits;
>> Hmm, I can spot a declaration and ...
>>
>>> --- a/xen/arch/riscv/p2m.c
>>> +++ b/xen/arch/riscv/p2m.c
>>> @@ -51,6 +51,12 @@ static struct gstage_mode_desc __ro_after_init max_gstage_mode = {
>>>       .name = "Bare",
>>>   };
>>>   
>>> +/*
>>> + * Set to the maximum configured support for IPA bits, so the number of IPA bits can be
>>> + * restricted by external entity (e.g. IOMMU).
>>> + */
>>> +unsigned int __read_mostly p2m_ipa_bits = PADDR_BITS;
>> ... a definition, but neither a use nor a place where the variable would
>> be set. Hmm, yes, I see common/device-tree/domain-build.c uses it. Then
>> the following questions arise:
>> - What does "ipa" stand for? Is this a term sensible in RISC-V context at
>>    all?
> 
> IPA is the Arm term for what RISC-V calls a GPA (maybe it would be better to rename it to gpa in the common code too).
> It was used because the common code uses p2m_ipa_bits.
> 
> Yes, I missed setting p2m_ipa_bits properly in p2m_init(), where the G-stage MMU mode is set.
> 
>> Judging from the comment at the decl, isn't it PPN width (plus
>>    PAGE_SHIFT) that it describes?
> 
> It is PPN width + PAGE_SHIFT, which is equal to PADDR_BITS (44-bit PPN + 12-bit PAGE_SHIFT).
> 
>> - With there not being anyone writing to the variable, why is it not
>>    const (or even a #define), or at the very least __ro_after_init?
>> And no, "Arm has it like this" doesn't count as an answer. Considering
>> all the review comments you've got so far you should know by now that you
>> shouldn't copy things blindly.
> 
> It was added because of the usage in common/device-tree/domain-build.c.

Well, I understand that, but this isn't the way to do it. And you've been through
such before. Anything you want to share between arch-es that isn't shared yet,
will want some suitable abstraction done. Like giving variables names which
are appropriate independent of the arch.

>>> --- a/xen/include/public/arch-riscv.h
>>> +++ b/xen/include/public/arch-riscv.h
>>> @@ -50,6 +50,14 @@ typedef uint64_t xen_ulong_t;
>>>   
>>>   #if defined(__XEN__) || defined(__XEN_TOOLS__)
>>>   
>>> +#define GUEST_RAM_BANKS   1
>>> +
>>> +#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>> +#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)
>>> +
>>> +#define GUEST_RAM_BANK_BASES   { GUEST_RAM0_BASE }
>>> +#define GUEST_RAM_BANK_SIZES   { GUEST_RAM0_SIZE }
>> Hmm, does RISC-V really want to go with compile-time constants here?
> 
> It is needed by allocate_memory() for guest domains, so it is expected
> to be a compile-time constant with the current common dom0less code.
> 
> It represents the start of the RAM address space for a DomU and the maximum RAM size
> (the actual size will be calculated based on what is specified in the DomU node
> in the dts), and it will then be used to generate the memory node for the DomU (GUEST_RAM0_BASE
> as the RAM start address and min(GUEST_RAM0_SIZE, dts->domU->memory->size) as the
> RAM size).
> 
>>   And
>> if so, why would guests be limited to just 2 Gb?
> 
> It is enough for the guest domain I am using in dom0less mode.

And what others may want to use RISC-V for once it actually becomes usable
isn't relevant? As you start adding things to the public headers, you will
need to understand that you can't change easily what once was put there.
Everything there is part of the ABI, and the ABI needs to remain stable
(within certain limits).

>> That may more efficiently
>> be RV32 guests then, with perhaps just an RV32 hypervisor.
> 
> I didn't get this point. Could you please explain in a different way what you
> mean?

If all you want are 2Gb guests, why would such guests be 64-bit? And with
(iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
even a 32-bit hypervisor would suffice?

Jan


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 1/6] xen/riscv: implement get_page_from_gfn()
  2026-02-12 16:21 ` [PATCH v1 1/6] xen/riscv: implement get_page_from_gfn() Oleksii Kurochko
@ 2026-02-16 12:38   ` Jan Beulich
  2026-02-16 12:41     ` Jan Beulich
  2026-02-17  9:01     ` Oleksii Kurochko
  0 siblings, 2 replies; 39+ messages in thread
From: Jan Beulich @ 2026-02-16 12:38 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel

On 12.02.2026 17:21, Oleksii Kurochko wrote:
> Provide a RISC-V implementation of get_page_from_gfn(), matching the
> semantics used by other architectures.
> 
> For translated guests, this is implemented as a wrapper around
> p2m_get_page_from_gfn(). For DOMID_XEN, which is not auto-translated,
> provide a 1:1 RAM/MMIO mapping and perform the required validation and
> reference counting.
> 
> The function is implemented out-of-line rather than as a static inline,
> to avoid header ordering issues where struct domain is incomplete when
> asm/p2m.h is included, leading to build failures:
>   In file included from ./arch/riscv/include/asm/domain.h:10,
>                    from ./include/xen/domain.h:16,
>                    from ./include/xen/sched.h:11,
>                    from ./include/xen/event.h:12,
>                    from common/cpu.c:3:
>   ./arch/riscv/include/asm/p2m.h: In function 'get_page_from_gfn':
>   ./arch/riscv/include/asm/p2m.h:50:33: error: invalid use of undefined type 'struct domain'
>      50 | #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
>         |                                 ^~
>   ./arch/riscv/include/asm/p2m.h:180:38: note: in expansion of macro 'p2m_get_hostp2m'
>     180 |         return p2m_get_page_from_gfn(p2m_get_hostp2m(d), _gfn(gfn), t);
>         |                                      ^~~~~~~~~~~~~~~
>   make[2]: *** [Rules.mk:253: common/cpu.o] Error 1
>   make[1]: *** [build.mk:72: common] Error 2
>   make: *** [Makefile:623: xen] Error 2

Surely this can be addressed, when x86 and Arm have the function as inline?

> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Does it make sense to make this function almost fully generic?
> 
> It looks like most of the logic here is architecture-independent and identical
> across architectures, except for the following points:
> 
> 1. ```
>    if ( likely(d != dom_xen) )
>    ```
> 
>    This could be made generic by introducing paging_mode_translate() for ARM
>    and defining it as `(d != dom_xen)` there.
> 
> 2. ```
>    if ( t )
>        *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct_io;
>    ```
> 
>    Here, only `p2m_mmio_direct_io` appears to be architecture-specific. This
>    could be abstracted via a helper such as `dom_io_p2m_type()` and used here
>    instead.

With P2M stuff I'd be careful. Abstracting the two aspects above may make
future arch-specific changes there more difficult.

> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -1557,3 +1557,31 @@ void p2m_handle_vmenter(void)
>          flush_tlb_guest_local();
>      }
>  }
> +
> +struct page_info *get_page_from_gfn(struct domain *d, unsigned long gfn,
> +                                    p2m_type_t *t, p2m_query_t q)
> +{
> +    struct page_info *page;
> +
> +    /*
> +     * Special case for DOMID_XEN as it is the only domain so far that is
> +     * not auto-translated.
> +     */

Once again something taken verbatim from Arm. Yes, dom_xen can in fact appear
here, but it's not a real domain, has no memory truly assigned to it, has no
GFN space, and hence calling it translated (or not) is simply wrong (at best:
misleading). IOW ...

> +    if ( likely(d != dom_xen) )
> +        return p2m_get_page_from_gfn(p2m_get_hostp2m(d), _gfn(gfn), t);
> +
> +    /* Non-translated guests see 1-1 RAM / MMIO mappings everywhere */

... this comment would also want re-wording.

> +    if ( t )
> +        *t = p2m_invalid;
> +
> +    page = mfn_to_page(_mfn(gfn));
> +
> +    if ( !mfn_valid(_mfn(gfn)) || !get_page(page, d) )
> +        return NULL;
> +
> +    if ( t )
> +        *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct_io;

If only dom_xen can make it here, why the check for dom_io?

Jan


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 1/6] xen/riscv: implement get_page_from_gfn()
  2026-02-16 12:38   ` Jan Beulich
@ 2026-02-16 12:41     ` Jan Beulich
  2026-02-17  9:01     ` Oleksii Kurochko
  1 sibling, 0 replies; 39+ messages in thread
From: Jan Beulich @ 2026-02-16 12:41 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel

On 16.02.2026 13:38, Jan Beulich wrote:
> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>> Provide a RISC-V implementation of get_page_from_gfn(), matching the
>> semantics used by other architectures.
>>
>> For translated guests, this is implemented as a wrapper around
>> p2m_get_page_from_gfn(). For DOMID_XEN, which is not auto-translated,
>> provide a 1:1 RAM/MMIO mapping and perform the required validation and
>> reference counting.
>>
>> The function is implemented out-of-line rather than as a static inline,
>> to avoid header ordering issues where struct domain is incomplete when
>> asm/p2m.h is included, leading to build failures:
>>   In file included from ./arch/riscv/include/asm/domain.h:10,
>>                    from ./include/xen/domain.h:16,
>>                    from ./include/xen/sched.h:11,
>>                    from ./include/xen/event.h:12,
>>                    from common/cpu.c:3:
>>   ./arch/riscv/include/asm/p2m.h: In function 'get_page_from_gfn':
>>   ./arch/riscv/include/asm/p2m.h:50:33: error: invalid use of undefined type 'struct domain'
>>      50 | #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
>>         |                                 ^~
>>   ./arch/riscv/include/asm/p2m.h:180:38: note: in expansion of macro 'p2m_get_hostp2m'
>>     180 |         return p2m_get_page_from_gfn(p2m_get_hostp2m(d), _gfn(gfn), t);
>>         |                                      ^~~~~~~~~~~~~~~
>>   make[2]: *** [Rules.mk:253: common/cpu.o] Error 1
>>   make[1]: *** [build.mk:72: common] Error 2
>>   make: *** [Makefile:623: xen] Error 2
> 
> Surely this can be addressed, when x86 and Arm have the function as inline?
> 
>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>> ---
>> Does it make sense to make this function almost fully generic?
>>
>> It looks like most of the logic here is architecture-independent and identical
>> across architectures, except for the following points:
>>
>> 1. ```
>>    if ( likely(d != dom_xen) )
>>    ```
>>
>>    This could be made generic by introducing paging_mode_translate() for ARM
>>    and defining it as `(d != dom_xen)` there.
>>
>> 2. ```
>>    if ( t )
>>        *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct_io;
>>    ```
>>
>>    Here, only `p2m_mmio_direct_io` appears to be architecture-specific. This
>>    could be abstracted via a helper such as `dom_io_p2m_type()` and used here
>>    instead.
> 
> With P2M stuff I'd be careful. Abstracting the two aspects above may make
> future arch-specific changes there more difficult.
> 
>> --- a/xen/arch/riscv/p2m.c
>> +++ b/xen/arch/riscv/p2m.c
>> @@ -1557,3 +1557,31 @@ void p2m_handle_vmenter(void)
>>          flush_tlb_guest_local();
>>      }
>>  }
>> +
>> +struct page_info *get_page_from_gfn(struct domain *d, unsigned long gfn,
>> +                                    p2m_type_t *t, p2m_query_t q)
>> +{
>> +    struct page_info *page;
>> +
>> +    /*
>> +     * Special case for DOMID_XEN as it is the only domain so far that is
>> +     * not auto-translated.
>> +     */
> 
> Once again something taken verbatim from Arm.

Actually it's a mix, up to ...

> Yes, dom_xen can in fact appear
> here, but it's not a real domain, has no memory truly assigned to it, has no
> GFN space, and hence calling it translated (or not) is simply wrong (at best:
> misleading). IOW ...
> 
>> +    if ( likely(d != dom_xen) )
>> +        return p2m_get_page_from_gfn(p2m_get_hostp2m(d), _gfn(gfn), t);

... here it's Arm code, but what follows is x86 code. Why did you create such
a mix?

Jan

>> +    /* Non-translated guests see 1-1 RAM / MMIO mappings everywhere */
> 
> ... this comment would also want re-wording.
> 
>> +    if ( t )
>> +        *t = p2m_invalid;
>> +
>> +    page = mfn_to_page(_mfn(gfn));
>> +
>> +    if ( !mfn_valid(_mfn(gfn)) || !get_page(page, d) )
>> +        return NULL;
>> +
>> +    if ( t )
>> +        *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct_io;
> 
> If only dom_xen can make it here, why the check for dom_io?
> 
> Jan



^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 2/6] xen/riscv: implement copy_to_guest_phys()
  2026-02-12 16:21 ` [PATCH v1 2/6] xen/riscv: implement copy_to_guest_phys() Oleksii Kurochko
@ 2026-02-16 14:57   ` Jan Beulich
  2026-02-17 10:25     ` Oleksii Kurochko
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2026-02-16 14:57 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel

On 12.02.2026 17:21, Oleksii Kurochko wrote:
> --- /dev/null
> +++ b/xen/arch/riscv/guestcopy.c
> @@ -0,0 +1,112 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
> +#include <xen/domain_page.h>
> +#include <xen/page-size.h>
> +#include <xen/sched.h>
> +#include <xen/string.h>
> +
> +#include <asm/guest_access.h>
> +
> +#define COPY_from_guest     (0U << 0)
> +#define COPY_to_guest       (1U << 0)
> +#define COPY_ipa            (0U << 1)

Like already asked elsewhere - is "ipa" a term commonly in use on RISC-V?
To me it's Arm terminology, which you don't want to copy as is.

Also, don't you prefer to use BIT() everywhere else?

> +#define COPY_linear         (1U << 1)
> +
> +typedef union
> +{
> +    struct
> +    {
> +        struct vcpu *v;
> +    } gva;
> +
> +    struct
> +    {
> +        struct domain *d;
> +    } gpa;
> +} copy_info_t;
> +
> +#define GVA_INFO(vcpu) ((copy_info_t) { .gva = { vcpu } })
> +#define GPA_INFO(domain) ((copy_info_t) { .gpa = { domain } })
> +
> +static struct page_info *translate_get_page(copy_info_t info, uint64_t addr,

The caller has to pass in a domain here. I therefore recommend against
use of copy_info_t for this function. Or wait, this is misleading, as
the consuming part ...

> +                                            bool linear, bool write)
> +{
> +    p2m_type_t p2mt;
> +    struct page_info *page;
> +
> +    if ( linear )
> +        BUG_ON("unimplemeted\n");

... of "linear" is missing here.

In any event, this one please shorter as:

    BUG_ON(linear);

> +    page = get_page_from_gfn(info.gpa.d, paddr_to_pfn(addr), &p2mt, P2M_ALLOC);
> +
> +    if ( !page )
> +        return NULL;
> +
> +    if ( !p2m_is_ram(p2mt) )
> +    {
> +        put_page(page);
> +        return NULL;
> +    }
> +
> +    return page;
> +}

The "write" function parameter also is unused, but there's no BUG_ON() for
that one? Imo the p2m_is_ram() check isn't thorough enough (on the Arm
original): p2m_ram_ro shouldn't be allowed when "write" is true. As soon
as you gain p2m_ram_ro on RISC-V, things will need updating here as well.
Perhaps best to leave a note.

> +static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
> +                                copy_info_t info, unsigned int flags)

Why an "unsigned long" return value when ...

> +{
> +    unsigned int offset = PAGE_OFFSET(addr);
> +
> +    BUILD_BUG_ON((sizeof(addr)) < sizeof(vaddr_t));
> +    BUILD_BUG_ON((sizeof(addr)) < sizeof(paddr_t));
> +
> +    while ( len )
> +    {
> +        void *p;
> +        unsigned int size = min(len, (unsigned int)PAGE_SIZE - offset);
> +        struct page_info *page;
> +
> +        page = translate_get_page(info, addr, flags & COPY_linear,
> +                                  flags & COPY_to_guest);
> +        if ( page == NULL )
> +            return len;

... only an "unsigned int" (or 0 further down) is returned? Same
question for copy_to_guest_phys() below then.

> +        p = __map_domain_page(page);
> +        p += offset;
> +        if ( flags & COPY_to_guest )
> +        {
> +            /*
> +             * buf will be NULL when the caller request to zero the
> +             * guest memory.
> +             */
> +            if ( buf )
> +                memcpy(p, buf, size);
> +            else
> +                memset(p, 0, size);
> +        }
> +        else
> +            memcpy(buf, p, size);
> +
> +        unmap_domain_page(p - offset);
> +        put_page(page);
> +        len -= size;
> +        buf += size;
> +        addr += size;
> +
> +        /*
> +         * After the first iteration, guest virtual address is correctly
> +         * aligned to PAGE_SIZE.
> +         */
> +        offset = 0;
> +    }
> +
> +    return 0;
> +}
> +
> +unsigned long copy_to_guest_phys(struct domain *d,
> +                                 paddr_t gpa,
> +                                 void *buf,
> +                                 unsigned int len)

May I suggest to make good use of line length, just like how copy_guest()
does?

Jan


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 3/6] xen/riscv: add zImage kernel loading support
  2026-02-12 16:21 ` [PATCH v1 3/6] xen/riscv: add zImage kernel loading support Oleksii Kurochko
@ 2026-02-16 16:31   ` Jan Beulich
  2026-02-17 11:58     ` Oleksii Kurochko
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2026-02-16 16:31 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel

On 12.02.2026 17:21, Oleksii Kurochko wrote:
> Introduce support for loading a Linux zImage kernel on RISC-V.

Before I look here in any detail - where would a zImage come from? I can't
spot any support for it in Linux'es arch/riscv/Makefile (whereas
arch/arm/Makefile has such).

> Note that if panic() is used instead of returning an error as common code
> doesn't expect to have return code and it is something that should be
> done separately.

Is the "if" in this sentence a leftover from some editing of earlier
different text? I can't make sense of it. Also, which "common code" do you
mean? kernel_zimage_probe()'s sole caller does respect the return value
(handing it on).

> This prepares the RISC-V port for booting Linux guests using the common
> domain build infrastructure.

Again, what's "common" here? Not something x86 uses, afaict.

Jan


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
  2026-02-12 16:21 ` [PATCH v1 5/6] xen: move domain_use_host_layout() " Oleksii Kurochko
@ 2026-02-16 16:36   ` Jan Beulich
  2026-02-16 18:42     ` Stefano Stabellini
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2026-02-16 16:36 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Romain Caritey, Stefano Stabellini, Julien Grall,
	Bertrand Marquis, Michal Orzel, Volodymyr Babchuk, Andrew Cooper,
	Anthony PERARD, Roger Pau Monné, xen-devel

On 12.02.2026 17:21, Oleksii Kurochko wrote:
> domain_use_host_layout() is generic enough to be moved to the
> common header xen/domain.h.

Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...

> --- a/xen/include/xen/domain.h
> +++ b/xen/include/xen/domain.h
> @@ -62,6 +62,22 @@ void domid_free(domid_t domid);
>  #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
>  #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
>  
> +/*
> + * Is the domain using the host memory layout?
> + *
> + * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
> + * To avoid any trouble finding space, it is easier to force using the
> + * host memory layout.
> + *
> + * The hardware domain will use the host layout regardless of
> + * direct-mapped because some OS may rely on a specific address ranges
> + * for the devices.
> + */
> +#ifndef domain_use_host_layout
> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
> +                                    is_hardware_domain(d))

... is_domain_direct_mapped() isn't something that I'd like to see further
proliferate in common (non-DT) code.

Jan


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
  2026-02-16 16:36   ` Jan Beulich
@ 2026-02-16 18:42     ` Stefano Stabellini
  2026-02-17  7:34       ` Jan Beulich
  0 siblings, 1 reply; 39+ messages in thread
From: Stefano Stabellini @ 2026-02-16 18:42 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Oleksii Kurochko, Romain Caritey, Stefano Stabellini,
	Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk,
	Andrew Cooper, Anthony PERARD, Roger Pau Monné, xen-devel

On Mon, 16 Feb 2026, Jan Beulich wrote:
> On 12.02.2026 17:21, Oleksii Kurochko wrote:
> > domain_use_host_layout() is generic enough to be moved to the
> > common header xen/domain.h.
> 
> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
> 
> > --- a/xen/include/xen/domain.h
> > +++ b/xen/include/xen/domain.h
> > @@ -62,6 +62,22 @@ void domid_free(domid_t domid);
> >  #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
> >  #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
> >  
> > +/*
> > + * Is the domain using the host memory layout?
> > + *
> > + * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
> > + * To avoid any trouble finding space, it is easier to force using the
> > + * host memory layout.
> > + *
> > + * The hardware domain will use the host layout regardless of
> > + * direct-mapped because some OS may rely on a specific address ranges
> > + * for the devices.
> > + */
> > +#ifndef domain_use_host_layout
> > +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
> > +                                    is_hardware_domain(d))
> 
> ... is_domain_direct_mapped() isn't something that I'd like to see further
> proliferate in common (non-DT) code.

Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
domain) on x86 as well. In fact, we already have a working prototype,
although it is not suitable for upstream yet.

In addition to the PSP use case that we discussed a few months ago,
where the PSP is not behind an IOMMU and therefore exchanged addresses
must be 1:1 mapped, we also have a new use case. We are running the full
Xen-based automotive stack on an Azure instance where SVM (vmentry and
vmexit) is available, but an IOMMU is not present. All virtual machines
are configured as PVH.


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
  2026-02-16 18:42     ` Stefano Stabellini
@ 2026-02-17  7:34       ` Jan Beulich
  2026-02-18 12:58         ` Oleksii Kurochko
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2026-02-17  7:34 UTC (permalink / raw)
  To: Stefano Stabellini, Oleksii Kurochko
  Cc: Romain Caritey, Julien Grall, Bertrand Marquis, Michal Orzel,
	Volodymyr Babchuk, Andrew Cooper, Anthony PERARD,
	Roger Pau Monné, xen-devel

On 16.02.2026 19:42, Stefano Stabellini wrote:
> On Mon, 16 Feb 2026, Jan Beulich wrote:
>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>> domain_use_host_layout() is generic enough to be moved to the
>>> common header xen/domain.h.
>>
>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
>>
>>> --- a/xen/include/xen/domain.h
>>> +++ b/xen/include/xen/domain.h
>>> @@ -62,6 +62,22 @@ void domid_free(domid_t domid);
>>>  #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
>>>  #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
>>>  
>>> +/*
>>> + * Is the domain using the host memory layout?
>>> + *
>>> + * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
>>> + * To avoid any trouble finding space, it is easier to force using the
>>> + * host memory layout.
>>> + *
>>> + * The hardware domain will use the host layout regardless of
>>> + * direct-mapped because some OS may rely on a specific address ranges
>>> + * for the devices.
>>> + */
>>> +#ifndef domain_use_host_layout
>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>>> +                                    is_hardware_domain(d))
>>
>> ... is_domain_direct_mapped() isn't something that I'd like to see further
>> proliferate in common (non-DT) code.
> 
> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
> domain) on x86 as well. In fact, we already have a working prototype,
> although it is not suitable for upstream yet.
> 
> In addition to the PSP use case that we discussed a few months ago,
> where the PSP is not behind an IOMMU and therefore exchanged addresses
> must be 1:1 mapped, we also have a new use case. We are running the full
> Xen-based automotive stack on an Azure instance where SVM (vmentry and
> vmexit) is available, but an IOMMU is not present. All virtual machines
> are configured as PVH.

Hmm. Then adjustments need making, for commentary and macro to be correct
on x86. First and foremost none of what is there is true for PV.

Jan


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 1/6] xen/riscv: implement get_page_from_gfn()
  2026-02-16 12:38   ` Jan Beulich
  2026-02-16 12:41     ` Jan Beulich
@ 2026-02-17  9:01     ` Oleksii Kurochko
  2026-02-17  9:10       ` Jan Beulich
  1 sibling, 1 reply; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-17  9:01 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel


On 2/16/26 1:38 PM, Jan Beulich wrote:
> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>> Provide a RISC-V implementation of get_page_from_gfn(), matching the
>> semantics used by other architectures.
>>
>> For translated guests, this is implemented as a wrapper around
>> p2m_get_page_from_gfn(). For DOMID_XEN, which is not auto-translated,
>> provide a 1:1 RAM/MMIO mapping and perform the required validation and
>> reference counting.
>>
>> The function is implemented out-of-line rather than as a static inline,
>> to avoid header ordering issues where struct domain is incomplete when
>> asm/p2m.h is included, leading to build failures:
>>    In file included from ./arch/riscv/include/asm/domain.h:10,
>>                     from ./include/xen/domain.h:16,
>>                     from ./include/xen/sched.h:11,
>>                     from ./include/xen/event.h:12,
>>                     from common/cpu.c:3:
>>    ./arch/riscv/include/asm/p2m.h: In function 'get_page_from_gfn':
>>    ./arch/riscv/include/asm/p2m.h:50:33: error: invalid use of undefined type 'struct domain'
>>       50 | #define p2m_get_hostp2m(d) (&(d)->arch.p2m)
>>          |                                 ^~
>>    ./arch/riscv/include/asm/p2m.h:180:38: note: in expansion of macro 'p2m_get_hostp2m'
>>      180 |         return p2m_get_page_from_gfn(p2m_get_hostp2m(d), _gfn(gfn), t);
>>          |                                      ^~~~~~~~~~~~~~~
>>    make[2]: *** [Rules.mk:253: common/cpu.o] Error 1
>>    make[1]: *** [build.mk:72: common] Error 2
>>    make: *** [Makefile:623: xen] Error 2
> Surely this can be addressed, when x86 and Arm have the function as inline?

Yes, it should be possible. The reason it currently works for x86 and Arm is:
1. Arm only passes a pointer to struct domain to p2m_get_page_from_gfn(), so a
    forward declaration of struct domain is enough.
2. x86 keeps a pointer to p2m_domain in arch_domain, so there is no need to
    include asm/p2m.h in asm/domain.h and a forward declaration suffices. There
    is therefore no dependency between xen/sched.h and asm/p2m.h through
    asm/domain.h, which is what leads to the issue mentioned in the commit message.

RISC-V could in principle follow the x86 pattern (avoid including asm/p2m.h),
but the current out-of-line approach is also acceptable: it is simpler and more
robust against future header reordering problems.

>> Signed-off-by: Oleksii Kurochko<oleksii.kurochko@gmail.com>
>> ---
>> Does it make sense to make this function almost fully generic?
>>
>> It looks like most of the logic here is architecture-independent and identical
>> across architectures, except for the following points:
>>
>> 1. ```
>>     if ( likely(d != dom_xen) )
>>     ```
>>
>>     This could be made generic by introducing paging_mode_translate() for ARM
>>     and defining it as `(d != dom_xen)` there.
>>
>> 2. ```
>>     if ( t )
>>         *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct_io;
>>     ```
>>
>>     Here, only `p2m_mmio_direct_io` appears to be architecture-specific. This
>>     could be abstracted via a helper such as `dom_io_p2m_type()` and used here
>>     instead.
> With P2M stuff I'd be careful. Abstracting the two aspects above may make
> future arch-specific changes there more difficult.
>
>> --- a/xen/arch/riscv/p2m.c
>> +++ b/xen/arch/riscv/p2m.c
>> @@ -1557,3 +1557,31 @@ void p2m_handle_vmenter(void)
>>           flush_tlb_guest_local();
>>       }
>>   }
>> +
>> +struct page_info *get_page_from_gfn(struct domain *d, unsigned long gfn,
>> +                                    p2m_type_t *t, p2m_query_t q)
>> +{
>> +    struct page_info *page;
>> +
>> +    /*
>> +     * Special case for DOMID_XEN as it is the only domain so far that is
>> +     * not auto-translated.
>> +     */
> Once again something taken verbatim from Arm. Yes, dom_xen can in fact appear
> here, but it's not a real domain, has no memory truly assigned to it, has no
> GFN space, and hence calling it translated (or not) is simply wrong (at best:
> misleading). IOW ...
>
>> +    if ( likely(d != dom_xen) )
>> +        return p2m_get_page_from_gfn(p2m_get_hostp2m(d), _gfn(gfn), t);
>> +
>> +    /* Non-translated guests see 1-1 RAM / MMIO mappings everywhere */
> ... this comment would also want re-wording.

As you mentioned in another reply to this patch, I mixed up the x86 and Arm
implementations in a bad way, so DOMID_XEN should be used here instead of
"Non-translated".

Based on your reply, it seems the first comment should also be rephrased,
as you mentioned that DOMID_XEN can't be called "not auto-translated" either.
I think it would be better to write the following:
  /*
   * Special case for DOMID_XEN as it is the only domain so far that has
   * no GFN space.
   */


>
>> +    if ( t )
>> +        *t = p2m_invalid;
>> +
>> +    page = mfn_to_page(_mfn(gfn));
>> +
>> +    if ( !mfn_valid(_mfn(gfn)) || !get_page(page, d) )
>> +        return NULL;
>> +
>> +    if ( t )
>> +        *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct_io;
> If only dom_xen can make it here, why the check for dom_io?

Incorrectly copied from x86. It should be just:
  *t = p2m_ram_rw
here, as on RISC-V the owner of MMIO pages isn't set to dom_io (and the same
is true for Arm, I think).

Thanks.

~ Oleksii




* Re: [PATCH v1 1/6] xen/riscv: implement get_page_from_gfn()
  2026-02-17  9:01     ` Oleksii Kurochko
@ 2026-02-17  9:10       ` Jan Beulich
  2026-02-17  9:58         ` Oleksii Kurochko
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2026-02-17  9:10 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel

On 17.02.2026 10:01, Oleksii Kurochko wrote:
> On 2/16/26 1:38 PM, Jan Beulich wrote:
>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>> --- a/xen/arch/riscv/p2m.c
>>> +++ b/xen/arch/riscv/p2m.c
>>> @@ -1557,3 +1557,31 @@ void p2m_handle_vmenter(void)
>>>           flush_tlb_guest_local();
>>>       }
>>>   }
>>> +
>>> +struct page_info *get_page_from_gfn(struct domain *d, unsigned long gfn,
>>> +                                    p2m_type_t *t, p2m_query_t q)
>>> +{
>>> +    struct page_info *page;
>>> +
>>> +    /*
>>> +     * Special case for DOMID_XEN as it is the only domain so far that is
>>> +     * not auto-translated.
>>> +     */
>> Once again something taken verbatim from Arm. Yes, dom_xen can in fact appear
>> here, but it's not a real domain, has no memory truly assigned to it, has no
>> GFN space, and hence calling it translated (or not) is simply wrong (at best:
>> misleading). IOW ...
>>
>>> +    if ( likely(d != dom_xen) )
>>> +        return p2m_get_page_from_gfn(p2m_get_hostp2m(d), _gfn(gfn), t);
>>> +
>>> +    /* Non-translated guests see 1-1 RAM / MMIO mappings everywhere */
>> ... this comment would also want re-wording.
> 
> As you mentioned in another reply to this patch, I mixed up the x86 and Arm
> implementations in a bad way, so DOMID_XEN should be used here instead of
> "Non-translated".
> 
> Based on your reply, it seems the first comment should also be rephrased,
> as you mentioned that DOMID_XEN can't be called "not auto-translated" either.
> I think it would be better to write the following:
>   /*
>    * Special case for DOMID_XEN as it is the only domain so far that has
>    * no GFN space.
>    */

Simply say that dom_xen isn't a "normal" domain?

>>> +    if ( t )
>>> +        *t = p2m_invalid;
>>> +
>>> +    page = mfn_to_page(_mfn(gfn));
>>> +
>>> +    if ( !mfn_valid(_mfn(gfn)) || !get_page(page, d) )
>>> +        return NULL;
>>> +
>>> +    if ( t )
>>> +        *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct_io;
>> If only dom_xen can make it here, why the check for dom_io?
> 
> Incorrectly copied from x86. It should be just:
>   *t = p2m_ram_rw
> here, as on RISC-V the owner of MMIO pages isn't set to dom_io (and the same
> is true for Arm, I think).

May I suggest that right away you use the construct that I suggested Arm to
switch to (you were Cc-ed on that patch, I think)? Despite the absence of
p2m_ram_ro on RISC-V, that'll be usable, and it will allow keeping the code
untouched when p2m_ram_ro is introduced (sooner or later you will need it,
I expect).

Jan



* Re: [PATCH v1 1/6] xen/riscv: implement get_page_from_gfn()
  2026-02-17  9:10       ` Jan Beulich
@ 2026-02-17  9:58         ` Oleksii Kurochko
  2026-02-17 10:40           ` Jan Beulich
  0 siblings, 1 reply; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-17  9:58 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel


On 2/17/26 10:10 AM, Jan Beulich wrote:
> On 17.02.2026 10:01, Oleksii Kurochko wrote:
>> On 2/16/26 1:38 PM, Jan Beulich wrote:
>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>> --- a/xen/arch/riscv/p2m.c
>>>> +++ b/xen/arch/riscv/p2m.c
>>>> @@ -1557,3 +1557,31 @@ void p2m_handle_vmenter(void)
>>>>            flush_tlb_guest_local();
>>>>        }
>>>>    }
>>>> +
>>>> +struct page_info *get_page_from_gfn(struct domain *d, unsigned long gfn,
>>>> +                                    p2m_type_t *t, p2m_query_t q)
>>>> +{
>>>> +    struct page_info *page;
>>>> +
>>>> +    /*
>>>> +     * Special case for DOMID_XEN as it is the only domain so far that is
>>>> +     * not auto-translated.
>>>> +     */
>>> Once again something taken verbatim from Arm. Yes, dom_xen can in fact appear
>>> here, but it's not a real domain, has no memory truly assigned to it, has no
>>> GFN space, and hence calling it translated (or not) is simply wrong (at best:
>>> misleading). IOW ...
>>>
>>>> +    if ( likely(d != dom_xen) )
>>>> +        return p2m_get_page_from_gfn(p2m_get_hostp2m(d), _gfn(gfn), t);
>>>> +
>>>> +    /* Non-translated guests see 1-1 RAM / MMIO mappings everywhere */
>>> ... this comment would also want re-wording.
>> As you mentioned in the another reply to this patch, I messed up x86 and Arm
>> implementation in a bad way, so here should be DOMID_XEN used instead of
>> "Non-translated".
>>
>> Based on your reply it seems like the first comment should be also rephrased
>> as you mentioned that DOMID_XEN can't be called also "not auto-translated".
>> I think it would be better to write the following:
>>    /*
>>     * Special case for DOMID_XEN as it is the only domain so far that has
>>     * no GFN space.
>>     */
> Simply say that dom_xen isn't a "normal" domain?
>
>>>> +    if ( t )
>>>> +        *t = p2m_invalid;
>>>> +
>>>> +    page = mfn_to_page(_mfn(gfn));
>>>> +
>>>> +    if ( !mfn_valid(_mfn(gfn)) || !get_page(page, d) )
>>>> +        return NULL;
>>>> +
>>>> +    if ( t )
>>>> +        *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct_io;
>>> If only dom_xen can make it here, why the check for dom_io?
>> Incorrectly copied from x86. It should be just:
>>    *t = p2m_ram_rw
>> here as in RISC-V for MMIO pages owner isn't set to dom_io (and the same is
>> true for Arm I think).
> May I suggest that right away you use the construct that I suggested Arm to
> switch to (you were Cc-ed on that patch, I think)? Despite the absence of
> p2m_ram_ro on RISC-V, that'll be usable, and it will allow keeping the code
> untouched when p2m_ram_ro is introduced (sooner or later you will need it,
> I expect).

Sure, but isn't that patch connected to another function (translate_get_page()),
and doesn't it just fix what get_page_from_gfn() returns in *t?

For get_page_from_gfn() not to miss the case when a new type is introduced, it
makes sense to do the following:
     if ( page->u.inuse.type_info & PGT_writable_page )
         *t = p2m_ram_rw;
     else
	BUG_ON("unimplemented");

~ Oleksii




* Re: [PATCH v1 2/6] xen/riscv: implement copy_to_guest_phys()
  2026-02-16 14:57   ` Jan Beulich
@ 2026-02-17 10:25     ` Oleksii Kurochko
  2026-02-17 10:42       ` Jan Beulich
  0 siblings, 1 reply; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-17 10:25 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel


On 2/16/26 3:57 PM, Jan Beulich wrote:
> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>> --- /dev/null
>> +++ b/xen/arch/riscv/guestcopy.c
>> @@ -0,0 +1,112 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +
>> +#include <xen/domain_page.h>
>> +#include <xen/page-size.h>
>> +#include <xen/sched.h>
>> +#include <xen/string.h>
>> +
>> +#include <asm/guest_access.h>
>> +
>> +#define COPY_from_guest     (0U << 0)
>> +#define COPY_to_guest       (1U << 0)
>> +#define COPY_ipa            (0U << 1)
> Like already asked elsewhere - is "ipa" a term commonly in use on RISC-V?
> To me it's Arm terminology, which you don't want to copy as is.

As we discussed in another patch thread, IPA isn't really used for RISC-V
and I will rename it to GPA.

>
> Also, don't you prefer to use BIT() everywhere else?

Yes, BIT() would be better for consistency.


>
>> +#define COPY_linear         (1U << 1)
>> +
>> +typedef union
>> +{
>> +    struct
>> +    {
>> +        struct vcpu *v;
>> +    } gva;
>> +
>> +    struct
>> +    {
>> +        struct domain *d;
>> +    } gpa;
>> +} copy_info_t;
>> +
>> +#define GVA_INFO(vcpu) ((copy_info_t) { .gva = { vcpu } })
>> +#define GPA_INFO(domain) ((copy_info_t) { .gpa = { domain } })
>> +
>> +static struct page_info *translate_get_page(copy_info_t info, uint64_t addr,
> The caller has to pass in a domain here. I therefore recommend against
> use of copy_info_t for this function. Or wait, this is misleading, as
> the consuming part ...
>
>> +                                            bool linear, bool write)
>> +{
>> +    p2m_type_t p2mt;
>> +    struct page_info *page;
>> +
>> +    if ( linear )
>> +        BUG_ON("unimplemeted\n");
> ... of "linear" is missing here.

Yes, for this one case the vcpu will be used as the argument passed via "copy_info_t info".
I will add a comment above the BUG_ON(linear) suggested below.

Btw, I think it makes sense to rename "linear" to GVA, to be closer to the RISC-V spec?

>
> In any event, this one please shorter as:
>
>      BUG_ON(linear);
>
>> +    page = get_page_from_gfn(info.gpa.d, paddr_to_pfn(addr), &p2mt, P2M_ALLOC);
>> +
>> +    if ( !page )
>> +        return NULL;
>> +
>> +    if ( !p2m_is_ram(p2mt) )
>> +    {
>> +        put_page(page);
>> +        return NULL;
>> +    }
>> +
>> +    return page;
>> +}
> The "write" function parameter also is unused, but there's no BUG_ON() for
> that one? Imo the p2m_is_ram() check isn't thorough enough (on the Arm
> original): p2m_ram_ro shouldn't be allowed when "write" is true. As soon
> as you gain p2m_ram_ro on RISC-V, things will need updating here as well.
> Perhaps best to leave a note.

I will apply your changes from suggested for Arm patch (Arm: tighten
translate_get_page()) so write will be used and also no extra updates will
be needed here.


>
>> +static unsigned long copy_guest(void *buf, uint64_t addr, unsigned int len,
>> +                                copy_info_t info, unsigned int flags)
> Why an "unsigned long" return value when ...
>
>> +{
>> +    unsigned int offset = PAGE_OFFSET(addr);
>> +
>> +    BUILD_BUG_ON((sizeof(addr)) < sizeof(vaddr_t));
>> +    BUILD_BUG_ON((sizeof(addr)) < sizeof(paddr_t));
>> +
>> +    while ( len )
>> +    {
>> +        void *p;
>> +        unsigned int size = min(len, (unsigned int)PAGE_SIZE - offset);
>> +        struct page_info *page;
>> +
>> +        page = translate_get_page(info, addr, flags & COPY_linear,
>> +                                  flags & COPY_to_guest);
>> +        if ( page == NULL )
>> +            return len;
> ... only an "unsigned int" (or 0 further down) is returned? Same
> question for copy_to_guest_phys() below then.

Agree, unsigned int should be enough.

>
>> +        p = __map_domain_page(page);
>> +        p += offset;
>> +        if ( flags & COPY_to_guest )
>> +        {
>> +            /*
>> +             * buf will be NULL when the caller request to zero the
>> +             * guest memory.
>> +             */
>> +            if ( buf )
>> +                memcpy(p, buf, size);
>> +            else
>> +                memset(p, 0, size);
>> +        }
>> +        else
>> +            memcpy(buf, p, size);
>> +
>> +        unmap_domain_page(p - offset);
>> +        put_page(page);
>> +        len -= size;
>> +        buf += size;
>> +        addr += size;
>> +
>> +        /*
>> +         * After the first iteration, guest virtual address is correctly
>> +         * aligned to PAGE_SIZE.
>> +         */
>> +        offset = 0;
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>> +unsigned long copy_to_guest_phys(struct domain *d,
>> +                                 paddr_t gpa,
>> +                                 void *buf,
>> +                                 unsigned int len)
> May I suggest to make good use of line length, just like how copy_guest()
> does?

Sure, I will do that.

Thanks.

~ Oleksii




* Re: [PATCH v1 1/6] xen/riscv: implement get_page_from_gfn()
  2026-02-17  9:58         ` Oleksii Kurochko
@ 2026-02-17 10:40           ` Jan Beulich
  0 siblings, 0 replies; 39+ messages in thread
From: Jan Beulich @ 2026-02-17 10:40 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel

On 17.02.2026 10:58, Oleksii Kurochko wrote:
> 
> On 2/17/26 10:10 AM, Jan Beulich wrote:
>> On 17.02.2026 10:01, Oleksii Kurochko wrote:
>>> On 2/16/26 1:38 PM, Jan Beulich wrote:
>>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>>> --- a/xen/arch/riscv/p2m.c
>>>>> +++ b/xen/arch/riscv/p2m.c
>>>>> @@ -1557,3 +1557,31 @@ void p2m_handle_vmenter(void)
>>>>>            flush_tlb_guest_local();
>>>>>        }
>>>>>    }
>>>>> +
>>>>> +struct page_info *get_page_from_gfn(struct domain *d, unsigned long gfn,
>>>>> +                                    p2m_type_t *t, p2m_query_t q)
>>>>> +{
>>>>> +    struct page_info *page;
>>>>> +
>>>>> +    /*
>>>>> +     * Special case for DOMID_XEN as it is the only domain so far that is
>>>>> +     * not auto-translated.
>>>>> +     */
>>>> Once again something taken verbatim from Arm. Yes, dom_xen can in fact appear
>>>> here, but it's not a real domain, has no memory truly assigned to it, has no
>>>> GFN space, and hence calling it translated (or not) is simply wrong (at best:
>>>> misleading). IOW ...
>>>>
>>>>> +    if ( likely(d != dom_xen) )
>>>>> +        return p2m_get_page_from_gfn(p2m_get_hostp2m(d), _gfn(gfn), t);
>>>>> +
>>>>> +    /* Non-translated guests see 1-1 RAM / MMIO mappings everywhere */
>>>> ... this comment would also want re-wording.
>>> As you mentioned in another reply to this patch, I mixed up the x86 and Arm
>>> implementations in a bad way, so DOMID_XEN should be used here instead of
>>> "Non-translated".
>>>
>>> Based on your reply, it seems the first comment should also be rephrased,
>>> as you mentioned that DOMID_XEN can't be called "not auto-translated" either.
>>> I think it would be better to write the following:
>>>    /*
>>>     * Special case for DOMID_XEN as it is the only domain so far that has
>>>     * no GFN space.
>>>     */
>> Simply say that dom_xen isn't a "normal" domain?
>>
>>>>> +    if ( t )
>>>>> +        *t = p2m_invalid;
>>>>> +
>>>>> +    page = mfn_to_page(_mfn(gfn));
>>>>> +
>>>>> +    if ( !mfn_valid(_mfn(gfn)) || !get_page(page, d) )
>>>>> +        return NULL;
>>>>> +
>>>>> +    if ( t )
>>>>> +        *t = likely(d != dom_io) ? p2m_ram_rw : p2m_mmio_direct_io;
>>>> If only dom_xen can make it here, why the check for dom_io?
>>> Incorrectly copied from x86. It should be just:
>>>    *t = p2m_ram_rw
>>> here, as on RISC-V the owner of MMIO pages isn't set to dom_io (and the same
>>> is true for Arm, I think).
>> May I suggest that right away you use the construct that I suggested Arm to
>> switch to (you were Cc-ed on that patch, I think)? Despite the absence of
>> p2m_ram_ro on RISC-V, that'll be usable, and it will allow keeping the code
>> untouched when p2m_ram_ro is introduced (sooner or later you will need it,
>> I expect).
> 
> Sure, but isn't that patch connected to another function (translate_get_page()),
> and doesn't it just fix what get_page_from_gfn() returns in *t?

Oh, sorry, I should have made explicit that the request was for patch 2.
Here indeed ...

> For get_page_from_gfn() not to miss the case when a new type is introduced, it
> makes sense to do the following:
>      if ( page->u.inuse.type_info & PGT_writable_page )
>          *t = p2m_ram_rw;
>      else
> 	BUG_ON("unimplemented");

... this may be the best you can do right now (unless you want to introduce
p2m_ram_ro).

Jan



* Re: [PATCH v1 2/6] xen/riscv: implement copy_to_guest_phys()
  2026-02-17 10:25     ` Oleksii Kurochko
@ 2026-02-17 10:42       ` Jan Beulich
  0 siblings, 0 replies; 39+ messages in thread
From: Jan Beulich @ 2026-02-17 10:42 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel

On 17.02.2026 11:25, Oleksii Kurochko wrote:
> On 2/16/26 3:57 PM, Jan Beulich wrote:
>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>> --- /dev/null
>>> +++ b/xen/arch/riscv/guestcopy.c
>>> @@ -0,0 +1,112 @@
>>> +/* SPDX-License-Identifier: GPL-2.0-only */
>>> +
>>> +#include <xen/domain_page.h>
>>> +#include <xen/page-size.h>
>>> +#include <xen/sched.h>
>>> +#include <xen/string.h>
>>> +
>>> +#include <asm/guest_access.h>
>>> +
>>> +#define COPY_from_guest     (0U << 0)
>>> +#define COPY_to_guest       (1U << 0)
>>> +#define COPY_ipa            (0U << 1)
>> Like already asked elsewhere - is "ipa" a term commonly in use on RISC-V?
>> To me it's Arm terminology, which you don't want to copy as is.
> 
> As we discussed in another patch thread, IPA isn't really used for RISC-V
> and I will rename it to GPA.
> 
>> Also, don't you prefer to use BIT() everywhere else?
> 
> Yes, BIT() would be better for consistency.
> 
>>> +#define COPY_linear         (1U << 1)
>>> +
>>> +typedef union
>>> +{
>>> +    struct
>>> +    {
>>> +        struct vcpu *v;
>>> +    } gva;
>>> +
>>> +    struct
>>> +    {
>>> +        struct domain *d;
>>> +    } gpa;
>>> +} copy_info_t;
>>> +
>>> +#define GVA_INFO(vcpu) ((copy_info_t) { .gva = { vcpu } })
>>> +#define GPA_INFO(domain) ((copy_info_t) { .gpa = { domain } })
>>> +
>>> +static struct page_info *translate_get_page(copy_info_t info, uint64_t addr,
>> The caller has to pass in a domain here. I therefore recommend against
>> use of copy_info_t for this function. Or wait, this is misleading, as
>> the consuming part ...
>>
>>> +                                            bool linear, bool write)
>>> +{
>>> +    p2m_type_t p2mt;
>>> +    struct page_info *page;
>>> +
>>> +    if ( linear )
>>> +        BUG_ON("unimplemeted\n");
>> ... of "linear" is missing here.
> 
> Yes, for this once cases it will be used vcpu as an argument passed by "copy_info_t info".
> I will add the comment above suggested below BUG_ON(linear).
> 
> Btw, I think it makes sense to change linear to GVA to be more close to RISC-V spec?

And to better match the rename to GPA that you talk about above.

Jan



* Re: [PATCH v1 3/6] xen/riscv: add zImage kernel loading support
  2026-02-16 16:31   ` Jan Beulich
@ 2026-02-17 11:58     ` Oleksii Kurochko
  2026-02-17 13:02       ` Jan Beulich
  0 siblings, 1 reply; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-17 11:58 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel


On 2/16/26 5:31 PM, Jan Beulich wrote:
> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>> Introduce support for loading a Linux zImage kernel on RISC-V.
> Before I look here in any detail - where would a zImage come from? I can't
> spot any support for it in Linux'es arch/riscv/Makefile (whereas
> arch/arm/Makefile has such).

Good point.

It is something that should be renamed, as neither Arm64 (Arm32 really does
have such a target) nor RISC-V really works with zImage. They use Image, plus
Image.gz as the compressed image.

Maybe it would be better to rename kernel_zimage_probe() to something more
generic, e.g. kernel_image_probe().

>
>> Note that if panic() is used instead of returning an error as common code
>> doesn't expect to have return code and it is something that should be
>> done separately.
> Is the "if" in this sentence a leftover from some editing of earlier
> different text? I can't make sense of it. Also, which "common code" do you
> mean? kernel_zimage_probe()'s sole caller does respect the return value
> (handing it on).

It is about kernel_zimage_load(), which is used to set:
   struct kernel_info->load = kernel_zimage_load
in kernel_zimage64_probe(), and which is called from common code:
   void __init kernel_load(struct kernel_info *info)
   {
       ASSERT(info && info->load);

       info->load(info);
   }


>
>> This prepares the RISC-V port for booting Linux guests using the common
>> domain build infrastructure.
> Again, what's "common" here? Not something x86 uses, afaict.

By "common" here I meant the dom0less common code, which may use functions
from this file. I will update that part of the description to be more
specific instead of saying "common".

~ Oleksii




* Re: [PATCH v1 3/6] xen/riscv: add zImage kernel loading support
  2026-02-17 11:58     ` Oleksii Kurochko
@ 2026-02-17 13:02       ` Jan Beulich
  2026-02-17 15:28         ` Oleksii Kurochko
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2026-02-17 13:02 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel

On 17.02.2026 12:58, Oleksii Kurochko wrote:
> On 2/16/26 5:31 PM, Jan Beulich wrote:
>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>> Introduce support for loading a Linux zImage kernel on RISC-V.
>> Before I look here in any detail - where would a zImage come from? I can't
>> spot any support for it in Linux'es arch/riscv/Makefile (whereas
>> arch/arm/Makefile has such).
> 
> Good point.
> 
> It is something that should be renamed, as neither Arm64 (Arm32 really does
> have such a target) nor RISC-V really works with zImage. They use Image, plus
> Image.gz as the compressed image.
> 
> Maybe it would be better to rename kernel_zimage_probe() to something more
> generic, e.g. kernel_image_probe().

Well, it's two things. In the description you explicitly say zImage. That's
simply misleading. Renaming the function (if indeed it copes with more than
just zImage) would likely be a good thing too, but needs sorting with its
maintainers.

Jan



* Re: [PATCH v1 3/6] xen/riscv: add zImage kernel loading support
  2026-02-17 13:02       ` Jan Beulich
@ 2026-02-17 15:28         ` Oleksii Kurochko
  0 siblings, 0 replies; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-17 15:28 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel


On 2/17/26 2:02 PM, Jan Beulich wrote:
> On 17.02.2026 12:58, Oleksii Kurochko wrote:
>> On 2/16/26 5:31 PM, Jan Beulich wrote:
>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>> Introduce support for loading a Linux zImage kernel on RISC-V.
>>> Before I look here in any detail - where would a zImage come from? I can't
>>> spot any support for it in Linux'es arch/riscv/Makefile (whereas
>>> arch/arm/Makefile has such).
>> Good point.
>>
>> It is something that should be renamed as not Arm64 (Arm32 really has such
>> target), not RISC-V doesn't really work with zImage. They are using Image plus
>> Image.gz as compressed image.
>>
>> Maybe it would be better to rename kernel_zimage_probe() to something more
>> generic kernel_image_probe().
> Well, it's two things. In the description you explicitly say zImage. That's
> simply misleading.

Agreed, it should be just Image; I'll update that part of the commit
description in the next version.

>   Renaming the function (if indeed it copes with more than
> just zImage) would likely be a good thing too, but needs sorting with its
> maintainers.

I will then suggest that in a separate patch.

~ Oleksii




* Re: [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS
  2026-02-13 13:11       ` Jan Beulich
@ 2026-02-18 10:39         ` Oleksii Kurochko
  2026-02-18 10:45           ` Jan Beulich
  2026-03-17 12:49         ` Oleksii Kurochko
  1 sibling, 1 reply; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-18 10:39 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel


On 2/13/26 2:11 PM, Jan Beulich wrote:
> On 13.02.2026 13:54, Oleksii Kurochko wrote:
>> On 2/12/26 5:39 PM, Jan Beulich wrote:
>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>> --- a/xen/include/public/arch-riscv.h
>>>> +++ b/xen/include/public/arch-riscv.h
>>>> @@ -50,6 +50,14 @@ typedef uint64_t xen_ulong_t;
>>>>    
>>>>    #if defined(__XEN__) || defined(__XEN_TOOLS__)
>>>>    
>>>> +#define GUEST_RAM_BANKS   1
>>>> +
>>>> +#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>>> +#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)
>>>> +
>>>> +#define GUEST_RAM_BANK_BASES   { GUEST_RAM0_BASE }
>>>> +#define GUEST_RAM_BANK_SIZES   { GUEST_RAM0_SIZE }
>>> Hmm, does RISC-V really want to go with compile-time constants here?
>> It is needed for allocate_memory() for guest domains, so it is expected
>> to be compile-time constant with the current code of common dom0less
>> approach.
>>
>> It represents the start of RAM address for DomU and the maximum RAM size
>> (the actual size will be calculated based on what is mentioned in DomU node
>> in dts) and then will be used to generate memory node for DomU (GUEST_RAM0_BASE
>> as RAM start address and min(GUEST_RAM0_SIZE, dts->domU->memory->size) as a
>> RAM size).
>>
>>>    And
>>> if so, why would guests be limited to just 2 Gb?
>> It is enough for guest domain I am using in dom0less mode.
> And what others may want to use RISC-V for once it actually becomes usable
> isn't relevant? As you start adding things to the public headers, you will
> need to understand that you can't change easily what once was put there.
> Everything there is part of the ABI, and the ABI needs to remain stable
> (within certain limits).

Considering this ...

>
>>> That may more efficiently
>>> be RV32 guests then, with perhaps just an RV32 hypervisor.
>> I  didn't get this point. Could you please explain differently what do you
>> mean?
> If all you want are 2Gb guests, why would such guests be 64-bit? And with
> (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
> even a 32-bit hypervisor would suffice?

... now I can agree that Xen should permit a bigger amount of RAM. At least
(2^34 - 1) should be allowed for RV32, and likewise for RV64, so it could be
used as a base for both of them. As RV64 allows up to (2^56 - 1), it makes
sense to add another bank to cover the range from 2^34 to (2^56 - 1) for RV64
(and #ifdef this second bank for RV64).

Would it be better?
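Not part of the patch, but to make the two-bank idea concrete, here is a standalone sketch of what the #defines could look like. The second bank's base/size values and the CONFIG_RISCV_64 guard are illustrative assumptions only, not what the series would necessarily use:

```c
#include <assert.h>
#include <stdint.h>

#define xen_mk_ullong(x) x##ULL
#define CONFIG_RISCV_64 1  /* stand-in: pretend we build for RV64 */

#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* low RAM @ 2GB */
#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)

#ifdef CONFIG_RISCV_64
/* Hypothetical second bank starting at 2^34; the size is an example only. */
# define GUEST_RAM_BANKS       2
# define GUEST_RAM1_BASE       xen_mk_ullong(0x400000000)  /* 2^34 */
# define GUEST_RAM1_SIZE       xen_mk_ullong(0xFC00000000)
# define GUEST_RAM_BANK_BASES  { GUEST_RAM0_BASE, GUEST_RAM1_BASE }
# define GUEST_RAM_BANK_SIZES  { GUEST_RAM0_SIZE, GUEST_RAM1_SIZE }
#else
# define GUEST_RAM_BANKS       1
# define GUEST_RAM_BANK_BASES  { GUEST_RAM0_BASE }
# define GUEST_RAM_BANK_SIZES  { GUEST_RAM0_SIZE }
#endif

/* Materialize the bank lists, as allocate_memory() would consume them. */
static const uint64_t guest_ram_bank_bases[] = GUEST_RAM_BANK_BASES;
static const uint64_t guest_ram_bank_sizes[] = GUEST_RAM_BANK_SIZES;
```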

~ Oleksii



^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS
  2026-02-18 10:39         ` Oleksii Kurochko
@ 2026-02-18 10:45           ` Jan Beulich
  0 siblings, 0 replies; 39+ messages in thread
From: Jan Beulich @ 2026-02-18 10:45 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel

On 18.02.2026 11:39, Oleksii Kurochko wrote:
> 
> On 2/13/26 2:11 PM, Jan Beulich wrote:
>> On 13.02.2026 13:54, Oleksii Kurochko wrote:
>>> On 2/12/26 5:39 PM, Jan Beulich wrote:
>>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>>> --- a/xen/include/public/arch-riscv.h
>>>>> +++ b/xen/include/public/arch-riscv.h
>>>>> @@ -50,6 +50,14 @@ typedef uint64_t xen_ulong_t;
>>>>>    
>>>>>    #if defined(__XEN__) || defined(__XEN_TOOLS__)
>>>>>    
>>>>> +#define GUEST_RAM_BANKS   1
>>>>> +
>>>>> +#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>>>> +#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)
>>>>> +
>>>>> +#define GUEST_RAM_BANK_BASES   { GUEST_RAM0_BASE }
>>>>> +#define GUEST_RAM_BANK_SIZES   { GUEST_RAM0_SIZE }
>>>> Hmm, does RISC-V really want to go with compile-time constants here?
>>> It is needed for allocate_memory() for guest domains, so it is expected
>>> to be compile-time constant with the current code of common dom0less
>>> approach.
>>>
>>> It represents the start of RAM address for DomU and the maximum RAM size
>>> (the actual size will be calculated based on what is mentioned in DomU node
>>> in dts) and then will be used to generate memory node for DomU (GUEST_RAM0_BASE
>>> as RAM start address and min(GUEST_RAM0_SIZE, dts->domU->memory->size) as a
>>> RAM size).
>>>
>>>>    And
>>>> if so, why would guests be limited to just 2 Gb?
>>> It is enough for guest domain I am using in dom0less mode.
>> And what others may want to use RISC-V for once it actually becomes usable
>> isn't relevant? As you start adding things to the public headers, you will
>> need to understand that you can't change easily what once was put there.
>> Everything there is part of the ABI, and the ABI needs to remain stable
>> (within certain limits).
> 
> Considering this ...
> 
>>>> That may more efficiently
>>>> be RV32 guests then, with perhaps just an RV32 hypervisor.
>>> I  didn't get this point. Could you please explain differently what do you
>>> mean?
>> If all you want are 2Gb guests, why would such guests be 64-bit? And with
>> (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
>> even a 32-bit hypervisor would suffice?
> 
> ... now I can agree that Xen should permit bigger amount of RAM. At least,
> (2^34-1) should be allowed for RV32 and so for RV64 so it could be used
> like a base for both of them. As RV64 allows (2^56 - 1) it makes sense
> to add another bank to cover range from 2^34 to (2^56 -1) for RV64 (and ifdef
> this second bank for  RV64).
> 
> Would it be better?

Having a 2nd bank right away for RV64 would seem better to me, yes. Whether
that means going all the way up to 2^56 I don't know.

As to whether a public header can be changed, it also matters whether these
#define-s actually are meant to be exposed to guests (vs. merely the tool
stack). Longer-term, however, this is going to change (as we intend to move
to a fully stable ABI).

Jan


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
  2026-02-17  7:34       ` Jan Beulich
@ 2026-02-18 12:58         ` Oleksii Kurochko
  2026-02-18 13:12           ` Jan Beulich
  0 siblings, 1 reply; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-18 12:58 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Romain Caritey, Julien Grall, Bertrand Marquis, Michal Orzel,
	Volodymyr Babchuk, Andrew Cooper, Anthony PERARD,
	Roger Pau Monné, xen-devel, Stefano Stabellini


On 2/17/26 8:34 AM, Jan Beulich wrote:
> On 16.02.2026 19:42, Stefano Stabellini wrote:
>> On Mon, 16 Feb 2026, Jan Beulich wrote:
>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>> domain_use_host_layout() is generic enough to be moved to the
>>>> common header xen/domain.h.
>>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
>>>
>>>> --- a/xen/include/xen/domain.h
>>>> +++ b/xen/include/xen/domain.h
>>>> @@ -62,6 +62,22 @@ void domid_free(domid_t domid);
>>>>   #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
>>>>   #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
>>>>   
>>>> +/*
>>>> + * Is the domain using the host memory layout?
>>>> + *
>>>> + * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
>>>> + * To avoid any trouble finding space, it is easier to force using the
>>>> + * host memory layout.
>>>> + *
>>>> + * The hardware domain will use the host layout regardless of
>>>> + * direct-mapped because some OS may rely on a specific address ranges
>>>> + * for the devices.
>>>> + */
>>>> +#ifndef domain_use_host_layout
>>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>>>> +                                    is_hardware_domain(d))
>>> ... is_domain_direct_mapped() isn't something that I'd like to see further
>>> proliferate in common (non-DT) code.
>> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
>> domain) on x86 as well. In fact, we already have a working prototype,
>> although it is not suitable for upstream yet.
>>
>> In addition to the PSP use case that we discussed a few months ago,
>> where the PSP is not behind an IOMMU and therefore exchanged addresses
>> must be 1:1 mapped, we also have a new use case. We are running the full
>> Xen-based automotive stack on an Azure instance where SVM (vmentry and
>> vmexit) is available, but an IOMMU is not present. All virtual machines
>> are configured as PVH.
> Hmm. Then adjustments need making, for commentary and macro to be correct
> on x86. First and foremost none of what is there is true for PV.

As is_domain_direct_mapped() always returns false on x86, the
domain_use_host_layout macro will return an incorrect value for non-hardware
domains (dom0?). And since PV domains are not auto-translated domains, they
are always direct-mapped, so technically is_domain_direct_mapped() (or
domain_use_host_layout()) should return true in that case.

(I assume it is also true for every domain except HVM, according to the comment
/* HVM guests are translated.  PV guests are not. */ in xc_dom_translated and
the comment above the definition of XENFEAT_direct_mapped: /* ...not auto_translated
domains (x86 only) are always direct-mapped */).

Is my understanding correct?

Then isn't that a problem with how is_domain_direct_mapped() is defined
for x86? Shouldn't it be defined like:
   #define is_domain_direct_mapped(d) (!paging_mode_translate(d) || ((d)->cdf & CDF_directmap))

Or would it be better to move "!paging_mode_translate(d) || " into the
definition of domain_use_host_layout()?

Could you please explain what is wrong with the comment? Probably, except for:
   * To avoid any trouble finding space, it is easier to force using the
   * host memory layout.
everything else should be true for x86.

~ Oleksii



^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
  2026-02-18 12:58         ` Oleksii Kurochko
@ 2026-02-18 13:12           ` Jan Beulich
  2026-02-18 14:38             ` Oleksii Kurochko
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2026-02-18 13:12 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Romain Caritey, Julien Grall, Bertrand Marquis, Michal Orzel,
	Volodymyr Babchuk, Andrew Cooper, Anthony PERARD,
	Roger Pau Monné, xen-devel, Stefano Stabellini

On 18.02.2026 13:58, Oleksii Kurochko wrote:
> 
> On 2/17/26 8:34 AM, Jan Beulich wrote:
>> On 16.02.2026 19:42, Stefano Stabellini wrote:
>>> On Mon, 16 Feb 2026, Jan Beulich wrote:
>>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>>> domain_use_host_layout() is generic enough to be moved to the
>>>>> common header xen/domain.h.
>>>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
>>>>
>>>>> --- a/xen/include/xen/domain.h
>>>>> +++ b/xen/include/xen/domain.h
>>>>> @@ -62,6 +62,22 @@ void domid_free(domid_t domid);
>>>>>   #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
>>>>>   #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
>>>>>   
>>>>> +/*
>>>>> + * Is the domain using the host memory layout?
>>>>> + *
>>>>> + * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
>>>>> + * To avoid any trouble finding space, it is easier to force using the
>>>>> + * host memory layout.
>>>>> + *
>>>>> + * The hardware domain will use the host layout regardless of
>>>>> + * direct-mapped because some OS may rely on a specific address ranges
>>>>> + * for the devices.
>>>>> + */
>>>>> +#ifndef domain_use_host_layout
>>>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>>>>> +                                    is_hardware_domain(d))
>>>> ... is_domain_direct_mapped() isn't something that I'd like to see further
>>>> proliferate in common (non-DT) code.
>>> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
>>> domain) on x86 as well. In fact, we already have a working prototype,
>>> although it is not suitable for upstream yet.
>>>
>>> In addition to the PSP use case that we discussed a few months ago,
>>> where the PSP is not behind an IOMMU and therefore exchanged addresses
>>> must be 1:1 mapped, we also have a new use case. We are running the full
>>> Xen-based automotive stack on an Azure instance where SVM (vmentry and
>>> vmexit) is available, but an IOMMU is not present. All virtual machines
>>> are configured as PVH.
>> Hmm. Then adjustments need making, for commentary and macro to be correct
>> on x86. First and foremost none of what is there is true for PV.
> 
> As is_domain_direct_mapped() returns always false for x86, so
> domain_use_host_layout macro will return incorrect value for non-hardware
> domains (dom0?). And as PV domains are not auto_translated domains so are
> always direct-mapped, so technically is_domain_direct_mapped() (or
> domain_use_host_layout()) should return true in such case.

Hmm? PV domains aren't direct-mapped. Direct-map was introduced by Arm for
some special purpose (absence of IOMMU iirc).

> (I assume it is also true for every domain except HVM according to the comment
> /* HVM guests are translated.  PV guests are not. */ in xc_dom_translated and
> the comment above definition of XENFEAT_direct_mapped: /* ...not auto_translated
> domains (x86 only) are always direct-mapped*/).
> 
> Is my understanding correct?
> 
> Then isn't that a problem of how is_domain_direct_mapped() is defined
> for x86? Shouldn't it be defined like:
>    #define is_domain_direct_mapped(d) (!paging_mode_translate(d) || ((d)->cdf & CDF_directmap))
> 
> Would it be better to move "!paging_mode_translate(d) || " to the definition
> of domain_use_host_layout()?
> 
> Could you please explain what is wrong with the comment? Probably, except:
>    * To avoid any trouble finding space, it is easier to force using the
>    * host memory layout.
> everything else should be true for x86.

"The hardware domain will use ..." isn't true for PV Dom0.

Jan


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
  2026-02-18 13:12           ` Jan Beulich
@ 2026-02-18 14:38             ` Oleksii Kurochko
  2026-02-18 14:50               ` Jan Beulich
  0 siblings, 1 reply; 39+ messages in thread
From: Oleksii Kurochko @ 2026-02-18 14:38 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Romain Caritey, Julien Grall, Bertrand Marquis, Michal Orzel,
	Volodymyr Babchuk, Andrew Cooper, Anthony PERARD,
	Roger Pau Monné, xen-devel, Stefano Stabellini


On 2/18/26 2:12 PM, Jan Beulich wrote:
> On 18.02.2026 13:58, Oleksii Kurochko wrote:
>> On 2/17/26 8:34 AM, Jan Beulich wrote:
>>> On 16.02.2026 19:42, Stefano Stabellini wrote:
>>>> On Mon, 16 Feb 2026, Jan Beulich wrote:
>>>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>>>> domain_use_host_layout() is generic enough to be moved to the
>>>>>> common header xen/domain.h.
>>>>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
>>>>>
>>>>>> --- a/xen/include/xen/domain.h
>>>>>> +++ b/xen/include/xen/domain.h
>>>>>> @@ -62,6 +62,22 @@ void domid_free(domid_t domid);
>>>>>>    #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
>>>>>>    #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
>>>>>>    
>>>>>> +/*
>>>>>> + * Is the domain using the host memory layout?
>>>>>> + *
>>>>>> + * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
>>>>>> + * To avoid any trouble finding space, it is easier to force using the
>>>>>> + * host memory layout.
>>>>>> + *
>>>>>> + * The hardware domain will use the host layout regardless of
>>>>>> + * direct-mapped because some OS may rely on a specific address ranges
>>>>>> + * for the devices.
>>>>>> + */
>>>>>> +#ifndef domain_use_host_layout
>>>>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>>>>>> +                                    is_hardware_domain(d))
>>>>> ... is_domain_direct_mapped() isn't something that I'd like to see further
>>>>> proliferate in common (non-DT) code.
>>>> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
>>>> domain) on x86 as well. In fact, we already have a working prototype,
>>>> although it is not suitable for upstream yet.
>>>>
>>>> In addition to the PSP use case that we discussed a few months ago,
>>>> where the PSP is not behind an IOMMU and therefore exchanged addresses
>>>> must be 1:1 mapped, we also have a new use case. We are running the full
>>>> Xen-based automotive stack on an Azure instance where SVM (vmentry and
>>>> vmexit) is available, but an IOMMU is not present. All virtual machines
>>>> are configured as PVH.
>>> Hmm. Then adjustments need making, for commentary and macro to be correct
>>> on x86. First and foremost none of what is there is true for PV.
>> As is_domain_direct_mapped() returns always false for x86, so
>> domain_use_host_layout macro will return incorrect value for non-hardware
>> domains (dom0?). And as PV domains are not auto_translated domains so are
>> always direct-mapped, so technically is_domain_direct_mapped() (or
>> domain_use_host_layout()) should return true in such case.
> Hmm? PV domains aren't direct-mapped. Direct-map was introduced by Arm for
> some special purpose (absence of IOMMU iirc).

I came to that conclusion because of the comments in the code mentioned below:
  - https://elixir.bootlin.com/xen/v4.21.0/source/tools/libs/guest/xg_dom_x86.c#L1880
  - https://elixir.bootlin.com/xen/v4.21.0/source/xen/include/public/features.h#L107

Also, the commit that introduced it (d66bf122c0a "xen: introduce XENFEAT_direct_mapped and XENFEAT_not_direct_mapped")
mentions that:
   XENFEAT_direct_mapped is always set for not auto-translated guests.

>
>> (I assume it is also true for every domain except HVM according to the comment
>> /* HVM guests are translated.  PV guests are not. */ in xc_dom_translated and
>> the comment above definition of XENFEAT_direct_mapped: /* ...not auto_translated
>> domains (x86 only) are always direct-mapped*/).
>>
>> Is my understanding correct?
>>
>> Then isn't that a problem of how is_domain_direct_mapped() is defined
>> for x86? Shouldn't it be defined like:
>>     #define is_domain_direct_mapped(d) (!paging_mode_translate(d) || ((d)->cdf & CDF_directmap))
>>
>> Would it be better to move "!paging_mode_translate(d) || " to the definition
>> of domain_use_host_layout()?
>>
>> Could you please explain what is wrong with the comment? Probably, except:
>>     * To avoid any trouble finding space, it is easier to force using the
>>     * host memory layout.
>> everything else should be true for x86.
> "The hardware domain will use ..." isn't true for PV Dom0.

And then plain is_hardware_domain(d) inside the macro isn't correct either, right?
So it should be (... || (!is_pv_domain(d) && is_hardware_domain(d)))

~ Oleksii



^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
  2026-02-18 14:38             ` Oleksii Kurochko
@ 2026-02-18 14:50               ` Jan Beulich
  2026-02-28  1:42                 ` Stefano Stabellini
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2026-02-18 14:50 UTC (permalink / raw)
  To: Oleksii Kurochko, Stefano Stabellini
  Cc: Romain Caritey, Julien Grall, Bertrand Marquis, Michal Orzel,
	Volodymyr Babchuk, Andrew Cooper, Anthony PERARD,
	Roger Pau Monné, xen-devel

On 18.02.2026 15:38, Oleksii Kurochko wrote:
> On 2/18/26 2:12 PM, Jan Beulich wrote:
>> On 18.02.2026 13:58, Oleksii Kurochko wrote:
>>> On 2/17/26 8:34 AM, Jan Beulich wrote:
>>>> On 16.02.2026 19:42, Stefano Stabellini wrote:
>>>>> On Mon, 16 Feb 2026, Jan Beulich wrote:
>>>>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
>>>>>>> domain_use_host_layout() is generic enough to be moved to the
>>>>>>> common header xen/domain.h.
>>>>>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
>>>>>>
>>>>>>> --- a/xen/include/xen/domain.h
>>>>>>> +++ b/xen/include/xen/domain.h
>>>>>>> @@ -62,6 +62,22 @@ void domid_free(domid_t domid);
>>>>>>>    #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
>>>>>>>    #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
>>>>>>>    
>>>>>>> +/*
>>>>>>> + * Is the domain using the host memory layout?
>>>>>>> + *
>>>>>>> + * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
>>>>>>> + * To avoid any trouble finding space, it is easier to force using the
>>>>>>> + * host memory layout.
>>>>>>> + *
>>>>>>> + * The hardware domain will use the host layout regardless of
>>>>>>> + * direct-mapped because some OS may rely on a specific address ranges
>>>>>>> + * for the devices.
>>>>>>> + */
>>>>>>> +#ifndef domain_use_host_layout
>>>>>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>>>>>>> +                                    is_hardware_domain(d))
>>>>>> ... is_domain_direct_mapped() isn't something that I'd like to see further
>>>>>> proliferate in common (non-DT) code.
>>>>> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
>>>>> domain) on x86 as well. In fact, we already have a working prototype,
>>>>> although it is not suitable for upstream yet.
>>>>>
>>>>> In addition to the PSP use case that we discussed a few months ago,
>>>>> where the PSP is not behind an IOMMU and therefore exchanged addresses
>>>>> must be 1:1 mapped, we also have a new use case. We are running the full
>>>>> Xen-based automotive stack on an Azure instance where SVM (vmentry and
>>>>> vmexit) is available, but an IOMMU is not present. All virtual machines
>>>>> are configured as PVH.
>>>> Hmm. Then adjustments need making, for commentary and macro to be correct
>>>> on x86. First and foremost none of what is there is true for PV.
>>> As is_domain_direct_mapped() returns always false for x86, so
>>> domain_use_host_layout macro will return incorrect value for non-hardware
>>> domains (dom0?). And as PV domains are not auto_translated domains so are
>>> always direct-mapped, so technically is_domain_direct_mapped() (or
>>> domain_use_host_layout()) should return true in such case.
>> Hmm? PV domains aren't direct-mapped. Direct-map was introduced by Arm for
>> some special purpose (absence of IOMMU iirc).
> 
> I made such conclusion because of the comments in the code mentioned below:
>   - https://elixir.bootlin.com/xen/v4.21.0/source/tools/libs/guest/xg_dom_x86.c#L1880
>   - https://elixir.bootlin.com/xen/v4.21.0/source/xen/include/public/features.h#L107
> 
> Also, in the comment where it is introduced (d66bf122c0a "xen: introduce XENFEAT_direct_mapped and XENFEAT_not_direct_mapped")
> is mentioned that:
>    XENFEAT_direct_mapped is always set for not auto-translated guests.

Hmm, this you're right with, and XENVER_get_features handling indeed has

            if ( !paging_mode_translate(d) || is_domain_direct_mapped(d) )
                fi.submap |= (1U << XENFEAT_direct_mapped);

Which now I have a vague recollection of not having been happy with back at
the time. Based solely on the GFN == MFN statement this may be correct, but
"GFN" is a questionable term for PV in the first place. See how e.g.
common/memory.c resorts to using GPFN and GMFN, in line with commentary in
public/memory.h.

What the above demonstrates quite well though is that there's no direct
relationship between XENFEAT_direct_mapped and is_domain_direct_mapped().

>>> (I assume it is also true for every domain except HVM according to the comment
>>> /* HVM guests are translated.  PV guests are not. */ in xc_dom_translated and
>>> the comment above definition of XENFEAT_direct_mapped: /* ...not auto_translated
>>> domains (x86 only) are always direct-mapped*/).
>>>
>>> Is my understanding correct?
>>>
>>> Then isn't that a problem of how is_domain_direct_mapped() is defined
>>> for x86? Shouldn't it be defined like:
>>>     #define is_domain_direct_mapped(d) (!paging_mode_translate(d) || ((d)->cdf & CDF_directmap))
>>>
>>> Would it be better to move "!paging_mode_translate(d) || " to the definition
>>> of domain_use_host_layout()?
>>>
>>> Could you please explain what is wrong with the comment? Probably, except:
>>>     * To avoid any trouble finding space, it is easier to force using the
>>>     * host memory layout.
>>> everything else should be true for x86.
>> "The hardware domain will use ..." isn't true for PV Dom0.
> 
> And then just pure is_hardware_domain(d) inside macros isn't correct too, right?
> So it should be (... || (!is_pv_domain(d) && is_hardware_domain(d)))

Stefano, please can you guide Oleksii to put there something which is both
correct and will cover your intended use case as well?

Jan


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
  2026-02-18 14:50               ` Jan Beulich
@ 2026-02-28  1:42                 ` Stefano Stabellini
  2026-02-28  1:59                   ` Stefano Stabellini
  0 siblings, 1 reply; 39+ messages in thread
From: Stefano Stabellini @ 2026-02-28  1:42 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Oleksii Kurochko, Stefano Stabellini, Romain Caritey,
	Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk,
	Andrew Cooper, Anthony PERARD, Roger Pau Monné, xen-devel

On Wed, 18 Feb 2026, Jan Beulich wrote:
> On 18.02.2026 15:38, Oleksii Kurochko wrote:
> > On 2/18/26 2:12 PM, Jan Beulich wrote:
> >> On 18.02.2026 13:58, Oleksii Kurochko wrote:
> >>> On 2/17/26 8:34 AM, Jan Beulich wrote:
> >>>> On 16.02.2026 19:42, Stefano Stabellini wrote:
> >>>>> On Mon, 16 Feb 2026, Jan Beulich wrote:
> >>>>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
> >>>>>>> domain_use_host_layout() is generic enough to be moved to the
> >>>>>>> common header xen/domain.h.
> >>>>>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
> >>>>>>
> >>>>>>> --- a/xen/include/xen/domain.h
> >>>>>>> +++ b/xen/include/xen/domain.h
> >>>>>>> @@ -62,6 +62,22 @@ void domid_free(domid_t domid);
> >>>>>>>    #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
> >>>>>>>    #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
> >>>>>>>    
> >>>>>>> +/*
> >>>>>>> + * Is the domain using the host memory layout?
> >>>>>>> + *
> >>>>>>> + * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
> >>>>>>> + * To avoid any trouble finding space, it is easier to force using the
> >>>>>>> + * host memory layout.
> >>>>>>> + *
> >>>>>>> + * The hardware domain will use the host layout regardless of
> >>>>>>> + * direct-mapped because some OS may rely on a specific address ranges
> >>>>>>> + * for the devices.
> >>>>>>> + */
> >>>>>>> +#ifndef domain_use_host_layout
> >>>>>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
> >>>>>>> +                                    is_hardware_domain(d))
> >>>>>> ... is_domain_direct_mapped() isn't something that I'd like to see further
> >>>>>> proliferate in common (non-DT) code.
> >>>>> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
> >>>>> domain) on x86 as well. In fact, we already have a working prototype,
> >>>>> although it is not suitable for upstream yet.
> >>>>>
> >>>>> In addition to the PSP use case that we discussed a few months ago,
> >>>>> where the PSP is not behind an IOMMU and therefore exchanged addresses
> >>>>> must be 1:1 mapped, we also have a new use case. We are running the full
> >>>>> Xen-based automotive stack on an Azure instance where SVM (vmentry and
> >>>>> vmexit) is available, but an IOMMU is not present. All virtual machines
> >>>>> are configured as PVH.
> >>>> Hmm. Then adjustments need making, for commentary and macro to be correct
> >>>> on x86. First and foremost none of what is there is true for PV.
> >>> As is_domain_direct_mapped() returns always false for x86, so
> >>> domain_use_host_layout macro will return incorrect value for non-hardware
> >>> domains (dom0?). And as PV domains are not auto_translated domains so are
> >>> always direct-mapped, so technically is_domain_direct_mapped() (or
> >>> domain_use_host_layout()) should return true in such case.
> >> Hmm? PV domains aren't direct-mapped. Direct-map was introduced by Arm for
> >> some special purpose (absence of IOMMU iirc).
> > 
> > I made such conclusion because of the comments in the code mentioned below:
> >   - https://elixir.bootlin.com/xen/v4.21.0/source/tools/libs/guest/xg_dom_x86.c#L1880
> >   - https://elixir.bootlin.com/xen/v4.21.0/source/xen/include/public/features.h#L107
> > 
> > Also, in the comment where it is introduced (d66bf122c0a "xen: introduce XENFEAT_direct_mapped and XENFEAT_not_direct_mapped")
> > is mentioned that:
> >    XENFEAT_direct_mapped is always set for not auto-translated guests.
> 
> Hmm, this you're right with, and XENVER_get_features handling indeed has
> 
>             if ( !paging_mode_translate(d) || is_domain_direct_mapped(d) )
>                 fi.submap |= (1U << XENFEAT_direct_mapped);
> 
> Which now I have a vague recollection of not having been happy with back at
> the time. Based solely on the GFN == MFN statement this may be correct, but
> "GFN" is a questionable term for PV in the first place. See how e.g.
> common/memory.c resorts to using GPFN and GMFN, in line with commentary in
> public/memory.h.
> 
> What the above demonstrates quite well though is that there's no direct
> relationship between XENFEAT_direct_mapped and is_domain_direct_mapped().

Let's start from the easy case: domain_use_host_layout.

domain_use_host_layout is meant to indicate whether the domain memory
map (e.g. the address of the interrupt controller, the start of RAM,
etc.) matches the host memory map or not.

It is implemented as:

#define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
                                   is_hardware_domain(d))

Because on ARM there are two cases:
1) the hardware domain always uses the host layout
2) non-hardware domains only use the host layout when directly mapped
(more on this later)


I think this can be generalized and made arch-neutral with the caveat
that it should return False for PV guests as Jan mentioned. After all
the virtual interrupt controller in a PV domain doesn't start at the
same guest physical address of the real interrupt controller. The
comment can be improved, but let's get to it after we talk about
is_domain_direct_mapped.


is_domain_direct_mapped is meant to indicate that a domain's memory is
allocated 1:1 such that GFN == MFN. is_domain_direct_mapped is easily
applicable as-is to PVH and HVM guests where there are two stages of
translation.

What about PV guests? One could take the stance that given that there
are no real GFN space, then GFN is always the same as MFN. But this is
more philosophical than practical.

Practically, is_domain_direct_mapped() triggers a different code path in
xen/common/memory.c:populate_physmap for contiguous 1:1 memory
allocations, which is probably undesirable for PV guests.

Practically, there is a related flag exposed to Linux,
XENFEAT_direct_mapped. For HVM/PVH guests it makes sense for it to be one
and the same as is_domain_direct_mapped(). This flag is used by Linux to
know whether it can use swiotlb-xen or not. Specifically, swiotlb-xen is
only usable when XENFEAT_direct_mapped is enabled for ARM guests, and the
same principle could apply to HVM/PVH guests too. What about PV guests?
They also make use of swiotlb-xen, and XENFEAT_direct_mapped is set to
True for PV guests today.


In conclusion, is_domain_direct_mapped() was born for autotranslated
guests and is meant to trigger large contiguous memory allocations in Xen
and permit the usage of swiotlb-xen in Linux. For PV guests, while we
want swiotlb-xen and the XENFEAT_direct_mapped flag is already set to
True, we don't want to change the memory allocation scheme.

So I think is_domain_direct_mapped() should be always False on x86:
- PV guests should be always False
- PVH/HVM guests could be True but it is currently unimplemented (AMD
  is working on an implementation)

For compatibility and functionality, XENFEAT_direct_mapped should be
left as is.

The implementation of domain_use_host_layout() can be moved to common
code with a change:


/*
 * Is the auto-translated domain using the host memory layout?
 *
 * domain_use_host_layout() is always False for PV guests.
 *
 * Direct-mapped domains (autotranslated domains with memory allocated
 * contiguously and mapped 1:1 so that GFN == MFN) are always using the
 * host memory layout to avoid address clashes.
 *
 * The hardware domain will use the host layout (regardless of
 * direct-mapped) because some OS may rely on a specific address ranges
 * for the devices. PV Dom0, like any other PV guests, has
 * domain_use_host_layout() returning False.
 */
#define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
                                   (paging_mode_translate(d) && \
                                    is_hardware_domain(d)))
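To make the effect of that predicate concrete, here is a self-contained sketch using simplified stand-ins for the real struct domain fields and predicates (everything below is illustrative, not the actual Xen definitions):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for the real Xen domain state. */
struct domain {
    bool is_hardware;   /* stands in for is_hardware_domain(d)     */
    bool translated;    /* stands in for paging_mode_translate(d)  */
    bool directmap;     /* stands in for the CDF_directmap flag    */
};

#define is_hardware_domain(d)      ((d)->is_hardware)
#define paging_mode_translate(d)   ((d)->translated)
/* Per the conclusion above: never true for non-translated (PV) guests. */
#define is_domain_direct_mapped(d) ((d)->translated && (d)->directmap)

#define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
                                   (paging_mode_translate(d) && \
                                    is_hardware_domain(d)))
```

With these stubs, PV Dom0 (hardware domain, not translated) gets False, while a PVH Dom0 or a direct-mapped translated DomU gets True, matching the intended semantics.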



^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH v1 5/6] xen: move domain_use_host_layout() to common header
  2026-02-28  1:42                 ` Stefano Stabellini
@ 2026-02-28  1:59                   ` Stefano Stabellini
  0 siblings, 0 replies; 39+ messages in thread
From: Stefano Stabellini @ 2026-02-28  1:59 UTC (permalink / raw)
  To: Stefano Stabellini
  Cc: Jan Beulich, Oleksii Kurochko, Romain Caritey, Julien Grall,
	Bertrand Marquis, Michal Orzel, Volodymyr Babchuk, Andrew Cooper,
	Anthony PERARD, Roger Pau Monné, xen-devel

On Fri, 27 Feb 2026, Stefano Stabellini wrote:
> On Wed, 18 Feb 2026, Jan Beulich wrote:
> > On 18.02.2026 15:38, Oleksii Kurochko wrote:
> > > On 2/18/26 2:12 PM, Jan Beulich wrote:
> > >> On 18.02.2026 13:58, Oleksii Kurochko wrote:
> > >>> On 2/17/26 8:34 AM, Jan Beulich wrote:
> > >>>> On 16.02.2026 19:42, Stefano Stabellini wrote:
> > >>>>> On Mon, 16 Feb 2026, Jan Beulich wrote:
> > >>>>>> On 12.02.2026 17:21, Oleksii Kurochko wrote:
> > >>>>>>> domain_use_host_layout() is generic enough to be moved to the
> > >>>>>>> common header xen/domain.h.
> > >>>>>> Maybe, but then something DT-specific, not xen/domain.h. Specifically, ...
> > >>>>>>
> > >>>>>>> --- a/xen/include/xen/domain.h
> > >>>>>>> +++ b/xen/include/xen/domain.h
> > >>>>>>> @@ -62,6 +62,22 @@ void domid_free(domid_t domid);
> > >>>>>>>    #define is_domain_direct_mapped(d) ((d)->cdf & CDF_directmap)
> > >>>>>>>    #define is_domain_using_staticmem(d) ((d)->cdf & CDF_staticmem)
> > >>>>>>>    
> > >>>>>>> +/*
> > >>>>>>> + * Is the domain using the host memory layout?
> > >>>>>>> + *
> > >>>>>>> + * Direct-mapped domain will always have the RAM mapped with GFN == MFN.
> > >>>>>>> + * To avoid any trouble finding space, it is easier to force using the
> > >>>>>>> + * host memory layout.
> > >>>>>>> + *
> > >>>>>>> + * The hardware domain will use the host layout regardless of
> > >>>>>>> + * direct-mapped because some OS may rely on a specific address ranges
> > >>>>>>> + * for the devices.
> > >>>>>>> + */
> > >>>>>>> +#ifndef domain_use_host_layout
> > >>>>>>> +# define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
> > >>>>>>> +                                    is_hardware_domain(d))
> > >>>>>> ... is_domain_direct_mapped() isn't something that I'd like to see further
> > >>>>>> proliferate in common (non-DT) code.
> > >>>>> Hi Jan, we have a requirement for 1:1 mapped Dom0 (I should say hardware
> > >>>>> domain) on x86 as well. In fact, we already have a working prototype,
> > >>>>> although it is not suitable for upstream yet.
> > >>>>>
> > >>>>> In addition to the PSP use case that we discussed a few months ago,
> > >>>>> where the PSP is not behind an IOMMU and therefore exchanged addresses
> > >>>>> must be 1:1 mapped, we also have a new use case. We are running the full
> > >>>>> Xen-based automotive stack on an Azure instance where SVM (vmentry and
> > >>>>> vmexit) is available, but an IOMMU is not present. All virtual machines
> > >>>>> are configured as PVH.
> > >>>> Hmm. Then adjustments need making, for commentary and macro to be correct
> > >>>> on x86. First and foremost none of what is there is true for PV.
> > >>> As is_domain_direct_mapped() returns always false for x86, so
> > >>> domain_use_host_layout macro will return incorrect value for non-hardware
> > >>> domains (dom0?). And as PV domains are not auto_translated domains so are
> > >>> always direct-mapped, so technically is_domain_direct_mapped() (or
> > >>> domain_use_host_layout()) should return true in such case.
> > >> Hmm? PV domains aren't direct-mapped. Direct-map was introduced by Arm for
> > >> some special purpose (absence of IOMMU iirc).
> > > 
> > > I made such conclusion because of the comments in the code mentioned below:
> > >   - https://elixir.bootlin.com/xen/v4.21.0/source/tools/libs/guest/xg_dom_x86.c#L1880
> > >   - https://elixir.bootlin.com/xen/v4.21.0/source/xen/include/public/features.h#L107
> > > 
> > > Also, in the comment where it is introduced (d66bf122c0a "xen: introduce XENFEAT_direct_mapped and XENFEAT_not_direct_mapped")
> > > is mentioned that:
> > >    XENFEAT_direct_mapped is always set for not auto-translated guests.
> > 
> > Hmm, this you're right with, and XENVER_get_features handling indeed has
> > 
> >             if ( !paging_mode_translate(d) || is_domain_direct_mapped(d) )
> >                 fi.submap |= (1U << XENFEAT_direct_mapped);
> > 
> > Which now I have a vague recollection of not having been happy with back at
> > the time. Based solely on the GFN == MFN statement this may be correct, but
> > "GFN" is a questionable term for PV in the first place. See how e.g.
> > common/memory.c resorts to using GPFN and GMFN, in line with commentary in
> > public/memory.h.
> > 
> > What the above demonstrates quite well though is that there's no direct
> > relationship between XENFEAT_direct_mapped and is_domain_direct_mapped().
> 
> Let's start from the easy case: domain_use_host_layout.
> 
> domain_use_host_layout is meant to indicate whether the domain memory
> map (e.g. the address of the interrupt controller, the start of RAM,
> etc.) matches the host memory map or not.
> 
> It is implemented as:
> 
> #define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>                                    is_hardware_domain(d))
> 
> Because on ARM there are two cases:
> 1) hardware domain is always using the host layout
> 2) non-hardware domains only use the host layout when directly mapped
> (more on that later)
> 
> 
> I think this can be generalized and made arch-neutral with the caveat
> that it should return False for PV guests as Jan mentioned. After all
> the virtual interrupt controller in a PV domain doesn't start at the
> same guest physical address of the real interrupt controller. The
> comment can be improved, but let's get to it after we talk about
> is_domain_direct_mapped.
> 
> 
> is_domain_direct_mapped is meant to indicate that a domain's memory is
> allocated 1:1 such that GFN == MFN. is_domain_direct_mapped is easily
> applicable as-is to PVH and HVM guests where there are two stages of
> translation.
> 
> What about PV guests? One could take the stance that given that there
> are no real GFN space, then GFN is always the same as MFN. But this is
> more philosophical than practical.
> 
> Practically, is_domain_direct_mapped() triggers a different code path in
> xen/common/memory.c:populate_physmap for contiguous 1:1 memory
> allocations which is probably undesirable for PV guests.
> 
> Practically, there is a related flag exposed to Linux,
> XENFEAT_direct_mapped. For HVM/PVH guests it makes sense to be one and
> the same as is_domain_direct_mapped(). This flag is used by Linux to know
> whether it can use swiotlb-xen or not. Specifically, swiotlb-xen is only
> usable when XENFEAT_direct_mapped is enabled for ARM guests and the
> principle could apply to HVM/PVH guests too. What about PV guests?
> They also make use of swiotlb-xen and XENFEAT_direct_mapped is set to
> True for PV guests today.
> 
> 
> In conclusion, is_domain_direct_mapped() was born for autotranslated
> guests and is meant to trigger large contiguous memory allocations in
> Xen and permit the usage of swiotlb-xen in Linux. For PV guests, while
> we want swiotlb-xen and the XENFEAT_direct_mapped flag is already set
> to True, we don't want to change the memory allocation scheme.
> 
> So I think is_domain_direct_mapped() should be always False on x86:
> - PV guests should be always False
> - PVH/HVM guests could be True but it is currently unimplemented (AMD
>   is working on an implementation)
> 
> For compatibility and functionality, XENFEAT_direct_mapped should be
> left as is.
> 
> The implementation of domain_use_host_layout() can be moved to common
> code with a change:
> 
> 
> /*
>  * Is the auto-translated domain using the host memory layout?
>  *
>  * domain_use_host_layout() is always False for PV guests.
>  *
>  * Direct-mapped domains (autotranslated domains with memory allocated
>  * contiguously and mapped 1:1 so that GFN == MFN) are always using the
>  * host memory layout to avoid address clashes.
>  *
>  * The hardware domain will use the host layout (regardless of
>  * direct-mapped) because some OS may rely on a specific address ranges
>  * for the devices. PV Dom0, like any other PV guests, has
>  * domain_use_host_layout() returning False.
>  */
> #define domain_use_host_layout(d) (is_domain_direct_mapped(d) || \
>                                    (paging_mode_translate(d) &&  \
>                                     is_hardware_domain(d)))

I'll add one thing. While I think it is clear that XENFEAT_direct_mapped
should remain as is and that is_domain_direct_mapped() should always be
False for PV guests, given that domain_use_host_layout is not currently
used on x86, it is debatable how it should be implemented.

For PVH/HVM, domain_use_host_layout() can easily be aligned with ARM.

For PV DomUs, it will return false and there is no issue.

For PV Dom0, I would argue it should return False because the concept
of "host memory layout" is about the addresses of virtual platform
devices (interrupt controller, UART, etc.) in the guest physical
address space. PV guests don't have virtual devices mapped at specific
guest physical addresses; they use hypercalls. But I can see it could be
argued either way when you take into consideration EFI/ACPI tables.
For now, domain_use_host_layout() is unused on x86, so it doesn't make a
difference.



* Re: [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS
  2026-02-13 13:11       ` Jan Beulich
  2026-02-18 10:39         ` Oleksii Kurochko
@ 2026-03-17 12:49         ` Oleksii Kurochko
  2026-03-19  7:58           ` Jan Beulich
  1 sibling, 1 reply; 39+ messages in thread
From: Oleksii Kurochko @ 2026-03-17 12:49 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel


On 2/13/26 2:11 PM, Jan Beulich wrote:
>>>> +#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>>> +#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)
>>>> +
>>>> +#define GUEST_RAM_BANK_BASES   { GUEST_RAM0_BASE }
>>>> +#define GUEST_RAM_BANK_SIZES   { GUEST_RAM0_SIZE }

(cut)

> If all you want are 2Gb guests, why would such guests be 64-bit? And with
> (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
> even a 32-bit hypervisor would suffice?

Btw, shouldn't we look at the VPN width?

My understanding is that we should take GUEST_RAM0_BASE as the sgfn and
map it to a frame allocated by alloc_domheap_pages(), and then repeat
this process until GUEST_RAM0_SIZE is fully mapped.

In this case, for RV32, the VPN (which is the GFN in the current context)
is 32 bits wide, as RV32 supports only Sv32, which gives 2^32 - 1, i.e.
almost 4GB.

~ Oleksii




* Re: [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS
  2026-03-17 12:49         ` Oleksii Kurochko
@ 2026-03-19  7:58           ` Jan Beulich
  2026-03-20  9:58             ` Oleksii Kurochko
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2026-03-19  7:58 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel

On 17.03.2026 13:49, Oleksii Kurochko wrote:
> 
> On 2/13/26 2:11 PM, Jan Beulich wrote:
>>>>> +#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>>>> +#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)
>>>>> +
>>>>> +#define GUEST_RAM_BANK_BASES   { GUEST_RAM0_BASE }
>>>>> +#define GUEST_RAM_BANK_SIZES   { GUEST_RAM0_SIZE }
> 
> (cut)
> 
>> If all you want are 2Gb guests, why would such guests be 64-bit? And with
>> (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
>> even a 32-bit hypervisor would suffice?
> 
> Btw, shouldn't we look at VPN width?
> 
> My understanding is that we should take GUEST_RAM0_BASE as sgfn address
> and then map it to mfn's page (allocated by alloc_domheap_pages())? And then
> repeat this process until we won't map GUEST_RAM0_SIZE.
> 
> In this case for RV32 VPN (which is GFN in the current context) is 32-bit
> wide as RV32 supports only Sv32, what is 2^32 - 1, what is almost 4gb.

??? (IOW - I fear I'm confused enough by the question that I don't know how
to respond.)

Jan



* Re: [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS
  2026-03-19  7:58           ` Jan Beulich
@ 2026-03-20  9:58             ` Oleksii Kurochko
  2026-03-20 13:19               ` Jan Beulich
  0 siblings, 1 reply; 39+ messages in thread
From: Oleksii Kurochko @ 2026-03-20  9:58 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel


On 3/19/26 8:58 AM, Jan Beulich wrote:
> On 17.03.2026 13:49, Oleksii Kurochko wrote:
>> On 2/13/26 2:11 PM, Jan Beulich wrote:
>>>>>> +#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>>>>> +#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)
>>>>>> +
>>>>>> +#define GUEST_RAM_BANK_BASES   { GUEST_RAM0_BASE }
>>>>>> +#define GUEST_RAM_BANK_SIZES   { GUEST_RAM0_SIZE }
>> (cut)
>>
>>> If all you want are 2Gb guests, why would such guests be 64-bit? And with
>>> (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
>>> even a 32-bit hypervisor would suffice?
>> Btw, shouldn't we look at VPN width?
>>
>> My understanding is that we should take GUEST_RAM0_BASE as sgfn address
>> and then map it to mfn's page (allocated by alloc_domheap_pages())? And then
>> repeat this process until we won't map GUEST_RAM0_SIZE.
>>
>> In this case for RV32 VPN (which is GFN in the current context) is 32-bit
>> wide as RV32 supports only Sv32, what is 2^32 - 1, what is almost 4gb.
> ??? (IOW - I fear I'm confused enough by the question that I don't know how
> to respond.)

You mentioned above that:
   "... And with (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide) ..."

I wanted to clarify why you used PPN here in the context of the
GUEST_RAM0_BASE definition (maybe I just misinterpreted your original
message). GUEST_RAM0_BASE is the address at which the guest believes RAM
starts in its physical address space, i.e. it is a GPA, which is then
translated to an MPA.

 From the MMU's perspective, the GPA looks like:
   VPN[1] | VPN[0] | page_offset   (in Sv32x4 mode)

In Sv32x4, the GPA is 34 bits wide (or 22 bits wide in terms of GFNs), and
the MPA is also 34 bits wide (or 22 bits wide in terms of the PPN).

The distinction is not significant in Sv32x4, since PPN width equals VPN width, but
in other modes VPN < PPN (in terms of bit width).
So when we want to run a guest in Sv39x4 mode and want to give the guest the full
Sv39x4 address space, setting GUEST_RAM0_SIZE to the maximum possible value for
Sv39x4, shouldn't we look at the VPN width rather than the PPN width?
In other words, GUEST_RAM0_SIZE should be (2^41 - 1) rather than (2^56 - 1)
for Sv39x4.

~ Oleksii




* Re: [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS
  2026-03-20  9:58             ` Oleksii Kurochko
@ 2026-03-20 13:19               ` Jan Beulich
  2026-03-20 14:30                 ` Oleksii Kurochko
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2026-03-20 13:19 UTC (permalink / raw)
  To: Oleksii Kurochko
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel

On 20.03.2026 10:58, Oleksii Kurochko wrote:
> On 3/19/26 8:58 AM, Jan Beulich wrote:
>> On 17.03.2026 13:49, Oleksii Kurochko wrote:
>>> On 2/13/26 2:11 PM, Jan Beulich wrote:
>>>>>>> +#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>>>>>> +#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)
>>>>>>> +
>>>>>>> +#define GUEST_RAM_BANK_BASES   { GUEST_RAM0_BASE }
>>>>>>> +#define GUEST_RAM_BANK_SIZES   { GUEST_RAM0_SIZE }
>>> (cut)
>>>
>>>> If all you want are 2Gb guests, why would such guests be 64-bit? And with
>>>> (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
>>>> even a 32-bit hypervisor would suffice?
>>> Btw, shouldn't we look at VPN width?
>>>
>>> My understanding is that we should take GUEST_RAM0_BASE as sgfn address
>>> and then map it to mfn's page (allocated by alloc_domheap_pages())? And then
>>> repeat this process until we won't map GUEST_RAM0_SIZE.
>>>
>>> In this case for RV32 VPN (which is GFN in the current context) is 32-bit
>>> wide as RV32 supports only Sv32, what is 2^32 - 1, what is almost 4gb.
>> ??? (IOW - I fear I'm confused enough by the question that I don't know how
>> to respond.)
> 
> You mentioned above that:
>    "... And with (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide) ..."
> 
> I wanted to clarify why you use PPN here in the context of GUEST_RAM0_BASE definition.
> (and maybe I just misinterpreted incorrectly your original message)
> GUEST_RAM0_BASE is the address at which the guest believes RAM starts in its physical
> address space, i.e. it is a GPA, which is then translated to an MPA.
> 
>  From the MMU's perspective, the GPA looks like:
>    VPN[1] | VPN[0] | page_offset   (in Sv32x4 mode)
> 
> In Sv32x4, the GPA is 34 bits wide (or 22 bits wide in terms of GFNs), and the MPA is
> also 32 bits wide (or 22 bits wide in terms of PPN).

You mentioning Sv32x4 may point at part of the problem: For the guest physical
memory layout (and hence size), paging and hence virtual addresses don't matter
at all. What matters is what the guest can put in the page table entries it
writes. Addresses there are represented as PPNs, aren't they? Hence my use of
that acronym.

> The distinction is not significant in Sv32x4, since PPN width equals VPN width, but
> in other modes VPN < PPN (in terms of bit width).
> So when we want to run a guest in Sv39x4 mode and want to give the guest the full
> Sv39x4 address space, setting GUEST_RAM0_SIZE to the maximum possible value for
> Sv39x4, shouldn't we look at the VPN width rather than the PPN width?

No, why? The guest can arrange to map more than 2^39 bytes. Not all at the same
time, sure, but by suitable switching page tables (or merely entries) around.

Jan

> In other words, GUEST_RAM0_SIZE should be (2^41 - 1) rather than (2^56 - 1)
> for Sv39x4.
> 
> ~ Oleksii
> 




* Re: [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS
  2026-03-20 13:19               ` Jan Beulich
@ 2026-03-20 14:30                 ` Oleksii Kurochko
  0 siblings, 0 replies; 39+ messages in thread
From: Oleksii Kurochko @ 2026-03-20 14:30 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Romain Caritey, Alistair Francis, Connor Davis, Andrew Cooper,
	Anthony PERARD, Michal Orzel, Julien Grall, Roger Pau Monné,
	Stefano Stabellini, xen-devel



On 3/20/26 2:19 PM, Jan Beulich wrote:
> On 20.03.2026 10:58, Oleksii Kurochko wrote:
>> On 3/19/26 8:58 AM, Jan Beulich wrote:
>>> On 17.03.2026 13:49, Oleksii Kurochko wrote:
>>>> On 2/13/26 2:11 PM, Jan Beulich wrote:
>>>>>>>> +#define GUEST_RAM0_BASE   xen_mk_ullong(0x80000000) /* 2GB of low RAM @ 2GB */
>>>>>>>> +#define GUEST_RAM0_SIZE   xen_mk_ullong(0x80000000)
>>>>>>>> +
>>>>>>>> +#define GUEST_RAM_BANK_BASES   { GUEST_RAM0_BASE }
>>>>>>>> +#define GUEST_RAM_BANK_SIZES   { GUEST_RAM0_SIZE }
>>>> (cut)
>>>>
>>>>> If all you want are 2Gb guests, why would such guests be 64-bit? And with
>>>>> (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide), perhaps
>>>>> even a 32-bit hypervisor would suffice?
>>>> Btw, shouldn't we look at VPN width?
>>>>
>>>> My understanding is that we should take GUEST_RAM0_BASE as sgfn address
>>>> and then map it to mfn's page (allocated by alloc_domheap_pages())? And then
>>>> repeat this process until we won't map GUEST_RAM0_SIZE.
>>>>
>>>> In this case for RV32 VPN (which is GFN in the current context) is 32-bit
>>>> wide as RV32 supports only Sv32, what is 2^32 - 1, what is almost 4gb.
>>> ??? (IOW - I fear I'm confused enough by the question that I don't know how
>>> to respond.)
>>
>> You mentioned above that:
>>     "... And with (iirc) RV32 permitting more than 4Gb (via PPN being 22 bits wide) ..."
>>
>> I wanted to clarify why you use PPN here in the context of GUEST_RAM0_BASE definition.
>> (and maybe I just misinterpreted incorrectly your original message)
>> GUEST_RAM0_BASE is the address at which the guest believes RAM starts in its physical
>> address space, i.e. it is a GPA, which is then translated to an MPA.
>>
>>   From the MMU's perspective, the GPA looks like:
>>     VPN[1] | VPN[0] | page_offset   (in Sv32x4 mode)
>>
>> In Sv32x4, the GPA is 34 bits wide (or 22 bits wide in terms of GFNs), and the MPA is
>> also 32 bits wide (or 22 bits wide in terms of PPN).
> 
> You mentioning Sv32x4 may point at part of the problem: For the guest physical
> memory layout (and hence size), paging and hence virtual addresses don't matter
> at all. What matters is what the guest can put in the page table entries it
> writes. Addresses there are represented as PPNs, aren't they? Hence my use of
> that acronym.

That's what I came to after I wrote and sent the e-mail. Now you've
confirmed it.

> 
>> The distinction is not significant in Sv32x4, since PPN width equals VPN width, but
>> in other modes VPN < PPN (in terms of bit width).
>> So when we want to run a guest in Sv39x4 mode and want to give the guest the full
>> Sv39x4 address space, setting GUEST_RAM0_SIZE to the maximum possible value for
>> Sv39x4, shouldn't we look at the VPN width rather than the PPN width?
> 
> No, why? The guest can arrange to map more than 2^39 bytes. Not all at the same
> time, sure, but by suitable switching page tables (or merely entries) around.
> 
Good point. The right limit is therefore the PPN width, which reflects
the actual physical addressing capability.

Thanks a lot.

~ Oleksii




end of thread, other threads:[~2026-03-20 14:31 UTC | newest]

Thread overview: 39+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-12 16:21 [PATCH v1 0/6] RISCV: enable DOMAIN_BUILD_HELPERS Oleksii Kurochko
2026-02-12 16:21 ` [PATCH v1 1/6] xen/riscv: implement get_page_from_gfn() Oleksii Kurochko
2026-02-16 12:38   ` Jan Beulich
2026-02-16 12:41     ` Jan Beulich
2026-02-17  9:01     ` Oleksii Kurochko
2026-02-17  9:10       ` Jan Beulich
2026-02-17  9:58         ` Oleksii Kurochko
2026-02-17 10:40           ` Jan Beulich
2026-02-12 16:21 ` [PATCH v1 2/6] xen/riscv: implement copy_to_guest_phys() Oleksii Kurochko
2026-02-16 14:57   ` Jan Beulich
2026-02-17 10:25     ` Oleksii Kurochko
2026-02-17 10:42       ` Jan Beulich
2026-02-12 16:21 ` [PATCH v1 3/6] xen/riscv: add zImage kernel loading support Oleksii Kurochko
2026-02-16 16:31   ` Jan Beulich
2026-02-17 11:58     ` Oleksii Kurochko
2026-02-17 13:02       ` Jan Beulich
2026-02-17 15:28         ` Oleksii Kurochko
2026-02-12 16:21 ` [PATCH v1 4/6] xen: move declaration of fw_unreserved_regions() to common header Oleksii Kurochko
2026-02-12 16:21 ` [PATCH v1 5/6] xen: move domain_use_host_layout() " Oleksii Kurochko
2026-02-16 16:36   ` Jan Beulich
2026-02-16 18:42     ` Stefano Stabellini
2026-02-17  7:34       ` Jan Beulich
2026-02-18 12:58         ` Oleksii Kurochko
2026-02-18 13:12           ` Jan Beulich
2026-02-18 14:38             ` Oleksii Kurochko
2026-02-18 14:50               ` Jan Beulich
2026-02-28  1:42                 ` Stefano Stabellini
2026-02-28  1:59                   ` Stefano Stabellini
2026-02-12 16:21 ` [PATCH v1 6/6] xen/riscv: enable DOMAIN_BUILD_HELPERS Oleksii Kurochko
2026-02-12 16:39   ` Jan Beulich
2026-02-13 12:54     ` Oleksii Kurochko
2026-02-13 13:11       ` Jan Beulich
2026-02-18 10:39         ` Oleksii Kurochko
2026-02-18 10:45           ` Jan Beulich
2026-03-17 12:49         ` Oleksii Kurochko
2026-03-19  7:58           ` Jan Beulich
2026-03-20  9:58             ` Oleksii Kurochko
2026-03-20 13:19               ` Jan Beulich
2026-03-20 14:30                 ` Oleksii Kurochko
